Faculty publications

Abstract:

In recent years, much attention has been paid to the role of classical special functions of a real or complex variable in mathematical physics, especially in boundary value problems (BVPs). In the present paper, we propose a higher-dimensional analogue of the generalized Bessel polynomials within Clifford analysis via a special set of monogenic polynomials. We give the definition and derive a number of important properties of the generalized monogenic Bessel polynomials (GMBPs), which are defined by a generating exponential function and are shown to satisfy an analogue of Rodrigues' formula. As a consequence, we establish an expansion of particular monogenic functions in terms of GMBPs and show that the underlying basic series is everywhere effective. We further prove that these polynomials satisfy a second-order homogeneous differential equation

Abstract:

We consider twisted graphs, that is, topological graphs that are weakly isomorphic to subgraphs of the complete twisted graph T_n. We determine the exact minimum number of crossings of edges among the set of twisted graphs with n vertices and m edges; state a version of the crossing lemma for twisted graphs; and conclude that the mid-range crossing constant for twisted graphs is 1/6. Let e_k(n) be the maximum number of edges over all twisted graphs with n vertices and local crossing number at most k. We give lower and upper bounds for e_k(n) and settle its exact value for k ∈ {0, 1, 2, 3, 6, 10}. We conjecture that for every t ≥ 1, e_C(t,2)(n) = (t+1)n − C(t+2,2) for n ≥ t+1, where C(a,2) = a(a−1)/2 denotes a binomial coefficient

Abstract:

This study reviews the literature on diversity published over the last 50 years. Research findings from this period reveal that it is impossible to assume a pure and simple relationship between diversity and performance without considering a series of variables that affect this relationship. In this study, emphasis has been placed on the analysis of results arrived at through empirical investigation of the relation between the most studied dimensions of diversity and performance. The results presented are part of a more extensive research project

Abstract:

In order to analyze the influence of substituent groups, both electron-donating and electron-withdrawing, and of the number of π-electrons on the corrosion-inhibiting properties of organic molecules, a theoretical quantum chemical study in vacuo and in the presence of water, using the Polarizable Continuum Model (PCM), was carried out for four different molecules bearing a similar chemical framework structure: 2-mercaptoimidazole (2MI), 2-mercaptobenzimidazole (2MBI), 2-mercapto-5-methylbenzimidazole (2M5MBI), and 2-mercapto-5-nitrobenzimidazole (2M5NBI). From an electrochemical study conducted previously by our group (R. Álvarez-Bustamante, G. Negrón-Silva, M. Abreu-Quijano, H. Herrera-Hernández, M. Romero-Romo, A. Cuán, M. Palomar-Pardavé. Electrochim. Acta, 54, (2009) 539), it was found that the corrosion inhibition efficiency, IE, order followed by the molecules tested was 2MI > 2MBI > 2M5MBI > 2M5NBI. Thus 2MI turned out to be the best inhibitor. This fact strongly suggests that, contrary to a hitherto generally suggested notion, an efficient corrosion-inhibiting molecule need neither be large nor possess an extensive number of π-electrons. In this work, from a theoretical study, a correlation was found between EHOMO, hardness (η), electron charge transfer (ΔN), electrophilicity (W), back-donation (ΔEBack-donation) and the inhibition efficiency, IE. Both the EHOMO values and the standard Gibbs free energy estimated for all the molecules (based on the calculated equilibrium constant) were negative, indicating that the complete chemical processes in which the inhibitors are involved occur spontaneously

Abstract:

The run sum chart is an effective two-sided chart that can be used to monitor for process changes. It is known to be more sensitive than the Shewhart chart with runs rules, and its performance improves as the number of regions increases. However, as the number of regions increases the resulting chart has more parameters to be defined and its design becomes more involved. In this article, we introduce a one-parameter run sum chart. This chart accumulates scores equal to the subgroup means and signals when the cumulative sum exceeds a limit value. A fast initial response feature is proposed and its run length distribution function is found by a set of recursive relations. We compare this chart with other charts suggested in the literature and find it competitive with the CUSUM, the FIR CUSUM, and the combined Shewhart FIR CUSUM schemes

Abstract:

Processes with a low fraction of nonconforming units are known as high-yield processes. These processes produce a small number of nonconforming parts per million. Traditional methods for monitoring the fraction of nonconforming units such as the binomial and geometric control charts with probability limits are not effective. In order to properly monitor these processes, we propose new two-sided geometric-based control charts. In this article we show how to design, analyze, and evaluate their performance. We conclude that these new charts outperform other geometric charts suggested in the literature

Abstract:

To improve the performance of control charts, the conditional decision procedure (CDP) incorporates a number of previous observations into the chart's decision rule. It is expected that charts with this runs rule are more sensitive to shifts in the process parameter. To signal an out-of-control condition more quickly some charts use a headstart feature. They are referred to as charts with fast initial response (FIR). The CDP chart can also be used with FIR. In this article we analyze and compare the performance of geometric CDP charts with and without FIR. To do so, we model the CDP charts with a Markov chain and find closed-form ARL expressions. We find the conditional decision procedure useful when the fraction p of nonconforming units deteriorates. However, the CDP chart is not very effective for signaling decreases in p
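
Several of the charts described in these abstracts obtain ARLs by modeling the runs rule as an absorbing Markov chain. The sketch below illustrates the generic mechanics under simplifying assumptions (a hypothetical two-in-a-row rule with known per-point exceedance probability p; it is not the geometric CDP chart itself): the ARL is the expected time to absorption, computed from the transient submatrix Q as the first entry of (I − Q)⁻¹1.

```python
# A minimal sketch (not the paper's exact chart): computing the ARL of a
# runs-rule control chart by modeling it as an absorbing Markov chain.
# The hypothetical rule signals when 2 consecutive points fall beyond a
# control limit; p is the per-point probability of exceeding the limit.
import numpy as np

def arl_two_in_a_row(p):
    # Transient states: 0 = last point inside the limit,
    #                   1 = last point beyond the limit (no signal yet).
    Q = np.array([[1 - p, p],
                  [1 - p, 0.0]])                    # sub-stochastic transition matrix
    N = np.linalg.solve(np.eye(2) - Q, np.ones(2))  # expected steps to absorption
    return N[0]                                     # chart starts in state 0

p = 0.0027                                          # e.g., a 3-sigma limit under normality
print(arl_two_in_a_row(p))                          # closed form: (1 + p) / p**2
print((1 + p) / p**2)
```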

Abstract:

A fast initial response (FIR) feature for the run sum R chart is proposed and its ARL performance estimated by a Markov chain representation. It is shown that this chart is more sensitive than several R charts with runs rules proposed by different authors. We conclude that the run sum R chart is simple to use and a very effective tool for monitoring increases and decreases in process dispersion

Abstract:

The standard S chart signals an out-of-control condition when one point exceeds a control limit. It can be augmented with runs rules to improve its performance in detecting assignable causes. A commonly used rule signals when k consecutive points exceed a control limit. This rule can be used alone or to supplement the standard chart. In this article we derive ARL expressions for charts with the k-of-k runs rule. We show how to design S charts with this runs rule, compare their ARL performance, and make a control chart recommendation when it is important to monitor for both increases and decreases in process dispersion
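
As a concrete illustration of the k-of-k rule discussed above: when signaling requires k consecutive points beyond a limit and each point exceeds the limit independently with probability p, the expected run length has the classical closed form (1 − p^k)/((1 − p)p^k). The sketch below (an illustration of that quantity, not the article's ARL derivation for the S chart) checks the formula by simulation.

```python
# Expected run length of a k-of-k runs rule: closed form plus Monte Carlo check.
import random

def arl_k_of_k(p, k):
    # expected number of points until k consecutive exceedances
    return (1 - p**k) / ((1 - p) * p**k)

def simulate(p, k, reps=5000):
    total = 0
    for _ in range(reps):
        run, t = 0, 0
        while run < k:
            t += 1
            run = run + 1 if random.random() < p else 0
        total += t
    return total / reps

p, k = 0.05, 2
print(arl_k_of_k(p, k))   # exact expectation: 420.0 for these values
print(simulate(p, k))     # Monte Carlo estimate, close to the exact value
```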

Abstract:

We analyze the performance of traditional R charts and introduce modifications for monitoring both increases and decreases in the process dispersion. We show that the use of equal-tail probability limits and the use of some runs rules do not represent a significant improvement for monitoring increases and decreases in the variability of the process. We propose to use the R chart with a central line equal to the median of the distribution of the range. We also suggest supplementing this chart with a runs rule that signals when nine consecutive points lie on the same side of the median line. We find that such modifications lead to R charts with improved performance for monitoring the process dispersion

Abstract:

To increase the performance for detecting small shifts, control charts are used with runs rules. The Western Electric Handbook (1956) suggests runs rules to be used with the Shewhart X chart. In this article, we review the performance of two sets of runs rules. The rules of one set signal if k successive points fall beyond a limit. The rules of the other set signal if k out of k+1 consecutive points fall beyond a different limit. We suggest runs rules from these sets. They are intended to be used with a modified Shewhart X chart, a chart with 3.3σ limits. We find that for small shifts all suggested charts have better performance than the Shewhart X chart. For large shifts they have comparable performance

Abstract:

Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. In such cases the use of traditional control charts may not be effective. In this article, we estimate the performance of control charts derived under the assumption of normality (normal charts) but used with a much broader range of distributions. We consider monitoring the dispersion of processes that follow the exponential power family of distributions (a family of distributions which includes the normal as a special case). We have found that if a normal CUSUM chart is used with several of these processes the rate of false alarms might be quite different from the rate that results when a normal process is monitored. A normal chart might also not be sensitive enough to detect changes in such processes. CUSUM charts suitable for monitoring this family of processes are derived to show how much sensitivity is recovered when the correct chart is used
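
A minimal simulation sketch of the phenomenon described above, under stated assumptions: observations are drawn from the exponential power family f(x) ∝ exp(−(|x|/α)^β) standardized to unit variance (β = 2 is normal, smaller β gives heavier tails), and the in-control ARL of a dispersion CUSUM on squared observations is compared across distributions. The reference value k and limit h are illustrative choices, not the paper's calibrated design.

```python
# Sampling from the exponential power family and estimating the in-control ARL
# of a dispersion CUSUM designed under normality (illustrative k and h values).
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

def exp_power(beta, size):
    # unit-variance exponential power sample: X = sign * alpha * G**(1/beta),
    # with G ~ Gamma(1/beta, 1); beta = 2 recovers the standard normal.
    alpha = (gamma(1.0 / beta) / gamma(3.0 / beta)) ** 0.5
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    s = rng.choice([-1.0, 1.0], size=size)
    return s * alpha * g ** (1.0 / beta)

def arl_dispersion_cusum(beta, k=2.0, h=8.0, reps=500):
    runs = []
    for _ in range(reps):
        c, t = 0.0, 0
        while True:
            t += 1
            x = exp_power(beta, 1)[0]
            c = max(0.0, c + x * x - k)   # one-sided CUSUM on squared observations
            if c > h:
                break
        runs.append(t)
    return float(np.mean(runs))

print(arl_dispersion_cusum(beta=2.0))  # in-control ARL under normality
print(arl_dispersion_cusum(beta=1.0))  # heavier tails: shorter ARL, more false alarms
```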

Abstract:

In this paper we analyze several control charts suitable for monitoring process dispersion when subgrouping is not possible or not desirable. We compare the performances of a moving range chart, a cumulative sum (CUSUM) chart based on moving ranges, a CUSUM chart based on an approximate normalizing transformation, a self-starting CUSUM chart, a change-point CUSUM chart, and an exponentially weighted moving average chart based on the subgroup variance. The average run length performances of these charts are also estimated and compared

Abstract:

We consider several control charts for monitoring normal processes for changes in dispersion. We present comparisons of the average run length performances of these charts. We demonstrate that a CUSUM chart based on the likelihood ratio test for the change point problem for normal variances has an ARL performance that is superior to other procedures. Graphs are given to aid in designing this control chart

Abstract:

It is a common practice to monitor the fraction p of non-conforming units to detect whether the quality of a process improves or deteriorates. Users commonly assume that the number of non-conforming units in a subgroup is approximately normal, since large subgroup sizes are considered. If p is small this approximation might fail even for large subgroup sizes. If, in addition, both upper and lower limits are used, the performance of the chart in terms of fast detection may be poor. This means that the chart might not quickly detect the presence of special causes. In this paper the performance of several charts for monitoring increases and decreases in p is analyzed based on their Run Length (RL) distribution. It is shown that replacing the lower control limit by a simple runs rule can result in an increase in the overall chart performance. The concept of RL-unbiased performance is introduced. It is found that many commonly used p charts and other charts proposed in the literature have RL-biased performance. For this reason new control limits that yield an exactly (or nearly) RL-unbiased chart are proposed

Abstract:

When monitoring a process it is important to quickly detect increases and decreases in its variability. In addition to preventing any increase in the variability of the process and any deterioration in the quality of the output, it is also important to search for special causes that may result in a smaller process dispersion. Considering this, users should always try to monitor for both increases and decreases in the variability. The process variability is commonly monitored by means of a Shewhart range chart. For small subgroup sizes this control chart has a lower control limit equal to zero. To help monitor for both increases and decreases in variability, Shewhart charts with probability limits or runs rules can be used. CUSUM and EWMA charts based on the range or a function of the subgroup variance can also be used. In this paper a CUSUM chart based on the subgroup range is proposed. Its performance is compared with that of other charts proposed in the literature. It is found that for small subgroup sizes, it has an excellent performance and it thus represents a powerful alternative to currently utilized strategies.

Abstract:

Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. The use of control charts in such cases may not be effective if the rate of false alarms is high or if the control chart is not sensitive in detecting changes in the process. In this paper, the ARL performance of a CUSUM chart for dispersion is analyzed under a variety of non-normal process models. We will consider processes that follow the exponential power family of distributions, a symmetric class of distributions which includes the normal distribution as a special case

Resumen:

In this paper we study Mexican firms' choice between syndicated loans and bond issuance for long-term financing. We restrict our attention to those firms that issue securities in capital markets. Our findings suggest that firm size, collateral quality, and credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not constant along their range of values. In particular, we find evidence that firms' credit quality has a non-monotonic effect on the choice

Abstract:

We study Mexican firms' long-term financing choice between syndicated loans and public debt issuance. We restrict our attention to those firms that issue securities in capital markets. Our findings suggest that the firm's size, the quality of its collateral, and its credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not constant along their range of values. In particular, we find evidence that the credit quality of firms has a non-monotonic effect on the choice

Resumen:

Purpose. This article examines the factors that affect the probability that a firm goes public, using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014

Abstract:

Purpose. The purpose of this paper is to examine the factors that affect the likelihood of being public using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014

Resumen:

This article analyzes housing demand in Mexico through spending on housing services and the user cost of residential capital of each representative household by income percentile. The permanent income hypothesis is considered as a function of sociodemographic characteristics and the educational attainment of the household head. We also obtain the elasticities with respect to income, wealth, age of the household head, household size and number of employed members, as well as the semi-elasticity of the user cost of residential capital. Housing expenditure is inelastic, although it is more sensitive to current income than to permanent income. We also show that this market has a regressive structure, and a sensitivity analysis is performed in order to measure the impact on housing expenditure of certain variations in the long-run user cost of residential capital

Abstract:

This article analyzes the demand for housing in Mexico through spending on housing services and the user cost of owner-occupied housing for each representative household by income percentile. The permanent income hypothesis, as a function of the sociodemographic characteristics and the educational attainment of the household head, is included in the model. We obtain the elasticities with respect to income, wealth, age of the household head, household size and number of employed members, as well as the semi-elasticity of the user cost of residential capital. Expenditure on housing is inelastic, although it is more sensitive to current income than to permanent income. We show that this market structure is regressive; therefore, a sensitivity analysis is performed in order to measure the impact on housing expenditure of certain variations in the long-run user cost of owner-occupied housing

Abstract:

This paper proposes a theoretical model that offers a rationale for the formation of lender syndicates. We argue that the ex-ante process of information acquisition may affect the strategies used to create syndicates. For large loans, the restrictions on lending impose a natural reason for syndication. We study medium-sized loans instead, where there is some room for competition since each financial institution has the ability to take the loan in full by itself. In this case, syndication would be the optimal choice only if their screening costs are similar. Otherwise, lenders would be compelled to compete, since a lower screening cost can create a comparative advantage in interest rates

Abstract:

We consider a model of bargaining by concessions where agents can terminate negotiations by accepting the settlement of an arbitrator. The impact of pragmatic arbitrators—that enforce concessions that precede their appointment—is compared with that of arbitrators that act on principle—ignoring prior concessions. We show that while the impact of arbitration always depends on how costly that intervention is relative to direct negotiation, the range of scenarios for which it has an impact, and the precise effect of such impact, does change depending on the behavior—pragmatic or on principle—of the arbitrator. Moreover, the requirement of mutual consent to appoint the arbitrator matters only when he is pragmatic. Efficiency and equilibrium are not aligned, since agents sometimes reach negotiated agreements when an arbitrated settlement is more efficient and vice versa. Which system of arbitration performs best depends on the arbitration and negotiation costs, and each can be optimal for plausible environments

Abstract:

This paper analyzes a War of Attrition where players enjoy private information about their outside opportunities. The main message is that uncertainty about the possibility that the opponent opts out increases the equilibrium probability of concession

Abstract:

In many modern production systems the human operator faces problems when interacting with the production system through the control system. One reason for this is that control systems are mainly designed with respect to production, without taking into account how an operator is supposed to interact with them. This article presents a control system whose goal is to increase the flexibility and reusability of production equipment and program modules. Apart from this, the control system is also suitable for human operator interaction. To make it easier for an operator to interact with the control system, the operator activities vis-a-vis the control system have been divided into so-called control levels. One of the six predefined control levels is described in more detail to illustrate how production can be manipulated with the help of a control system at this level. The communication with the control system is accomplished with the help of a graphical tool that interacts with a relational database. The tool has been implemented in Java to make it platform-independent. Some examples of the tool and how it can be used are provided

Abstract:

Modern control systems often exhibit problems in switches between automatic and manual system control. One reason for this is the structure of the control system, which is usually not designed for this type of action. This article presents a method for splitting the control system into different control levels. By switching between these control levels, the operator can increase or decrease the number of manual control activities he wishes to perform while still enjoying the support of the control system. The structural advantages of the control levels are demonstrated for two types of operator activity: (1) control flow tracing and (2) control flow alteration. These two types of operator activity can be used in such situations as locating an error, introducing a new machine, changing the ordering of products or optimizing the production flow

Abstract:

Partly due to the introduction of computers and intelligent machines in modern manufacturing, the role of the operator has changed with time. More and more work tasks have been automated, reducing the need for human interaction. One reason for this is the decrease in the relative cost of computers and machinery compared to the cost of having operators. Even though this statement may be true in industrialized countries, it is not evident that it is valid in developing countries. However, a goal that is valid for both industrialized and developing countries is to obtain balanced automation systems. A balanced automation system is characterized by "the correct mix of automated activities and the human activities". The way of reaching this goal, however, might differ depending on the place of the manufacturing installation. Aspects such as time, money, safety, flexibility and quality govern the steps to take in order to reach a balanced automation system. In this paper, six steps of automation are defined that identify areas of work activities in a modern manufacturing system that might be performed by either an automatic system or a human. By combining these steps of automation into what are called levels of automation, a mix of automatic and manual activities is obtained. Through the analysis of these levels of automation, with respect to machine costs and product quality, we demonstrate what the lowest possible automation level should be when striving for balanced automation systems in developing countries. The bottom line of the discussion is that product supervision should not be left to human operators solely, but rather be performed automatically by the system

Abstract:

In the matching with contracts literature, three well-known conditions (from stronger to weaker)–substitutes, unilateral substitutes (US), and bilateral substitutes (BS)–have proven to be critical. This paper aims to deepen our understanding of them by separately axiomatizing the gap between BS and the other two. We first introduce a new “doctor separability” condition (DS) and show that BS, DS, and irrelevance of rejected contracts (IRC) are equivalent to US and IRC. Due to Hatfield and Kojima (2010) and Aygün and Sönmez (2012), we know that US, “Pareto separability” (PS), and IRC are the same as substitutes and IRC. This, along with our result, implies that BS, DS, PS, and IRC are equivalent to substitutes and IRC. All of these results are given without IRC whenever hospitals have preferences

Abstract:

The paper analyzes the role of labor market segmentation and relative wage rigidity in the transmission process of disinflation policies in an open economy facing imperfect capital markets. Wages are flexible in the nontradables sector, and based on efficiency factors in the tradables sector. With perfect labor mobility, a permanent reduction in the devaluation rate leads in the long run to a real appreciation, a lower ratio of output of tradables to nontradables, an increase in real wages measured in terms of tradables, and a fall in the product wage in the nontradables sector. Under imperfect labor mobility, unemployment temporarily rises.

Abstract:

In this paper we study the adjacency matrix of some infinite graphs, which we call the shift operator on the Lp space of the graph. In particular, we establish norm estimates, we find the norm for some cases, we decide the triviality of the kernel of some infinite trees, and we find the eigenvalues of certain infinite graphs obtained by attaching an infinite tail to some finite graphs
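
A small numerical sketch of the last construction (an illustration, not the paper's spectral analysis): truncate the infinite tail at length n and compute the eigenvalues of the finite adjacency matrix. Eigenvalues lying outside [-2, 2], the part of the spectrum contributed by the tail, stabilize as n grows. For a triangle with an attached tail, matching a decaying geometric solution along the tail gives the eigenvalue √5 ≈ 2.236, which the truncations approach.

```python
# Approximating eigenvalues of an infinite graph (triangle plus infinite tail)
# by truncating the tail and diagonalizing the finite adjacency matrix.
import numpy as np

def triangle_with_tail(n):
    # vertices 0, 1, 2 form a triangle; vertices 2, 3, ..., 2+n form a path (tail)
    m = 3 + n
    A = np.zeros((m, m))
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        A[i, j] = A[j, i] = 1.0
    for i in range(2, m - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

for n in [10, 100, 1000]:
    ev = np.linalg.eigvalsh(triangle_with_tail(n))
    print(n, ev[ev > 2.0])   # stabilizes near sqrt(5) ~ 2.23607 as n grows
```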

Resumen:

Mexico is a country immersed in an accelerated process of population aging. Older adults are exposed to multiple risks, among them family violence, generated not only by their partners but also by other family members. The objective of this article is to analyze the characteristics and main factors associated with family violence against older adult women (excluding intimate partner violence), by age group (60-69, or 70 years and older) and subtype of violence (emotional, economic, physical and sexual). A secondary analysis of the 2016 National Survey on the Dynamics of Household Relationships (ENDIREH) was carried out. The final sample consisted of 18,416 women aged 60 or older, representing a total of 7,043,622 women in that age group. A descriptive analysis was performed and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence. Women of more advanced age had a higher prevalence of family violence

Abstract:

Mexico as a country is immersed in a process of accelerated population aging. Older people are exposed to multiple risks, including domestic violence, not only from their partners but also from other family members. The objective of this article is to analyze the characteristics and main factors associated with family violence against older adult women in 2016, excluding intimate partner violence, by age group (60-69 and 70 years old or more) and subtype of violence (emotional, economic, physical and sexual). A secondary analysis of the 2016 National Survey on the Dynamics of Household Relationships (ENDIREH in Spanish) was carried out. The final sample consisted of 18,416 women aged 60 or older, which represented a total of 7,043,622 women in that age group. A descriptive analysis was carried out and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence

Abstract:

The fact that shocks in early life can have long-term consequences is well established in the literature. This paper examines the effects of extreme precipitation on cognitive and health outcomes and shows that impacts can be detected as early as 2 years of age. Our analyses indicate that negative conditions (i.e., extreme precipitation) experienced during the early stages of life affect children's physical, cognitive and behavioral development measured between 2 and 6 years of age. Affected children exhibit lower cognitive development (measured through language, working and long-term memory, and visual-spatial thinking) on the order of 0.15 to 0.19 SDs. Lower height and weight impacts are also identified. Changes in food consumption and diet composition appear to be key drivers behind these impacts. Partial evidence of mitigation from the delivery of government programs is found, suggesting that if not addressed promptly and with targeted policies, cognitive functioning delays may not be easily recovered

Abstract:

A number of studies document gender differentials in agricultural productivity. However, they are limited to region and crop-specific estimates of the mean gender gap. This article improves on previous work in three ways. First, data representative at the national level and for a wide variety of crops are exploited. Second, decomposition methods, traditionally used in the analysis of wage gender gaps, are employed. Third, heterogeneous effects by women's marital status and along the productivity distribution are analyzed. Drawing on data from the 2011–2012 Ethiopian Rural Socioeconomic Survey, we find an overall 23.4 percentage point productivity differential in favor of men, of which 13.5 percentage points (57%) remain unexplained after accounting for gender differences in land manager characteristics, land attributes, and access to resources. The magnitude of the unexplained fraction is large relative to prior estimates in the literature. A more detailed analysis suggests that differences in the returns to extension services, land certification, land extension, and product diversification may contribute to the unexplained fraction. Moreover, the productivity gap is mostly driven by non-married female managers, particularly divorced women; married female managers do not display a disadvantage. Finally, overall and unexplained gender differentials are more pronounced at mid-levels of productivity

Abstract:

Public transportation is a basic everyday activity. Costs imposed by violence might have far-reaching consequences. We conduct a survey and exploit the discontinuity in the hours of operation of a program that reserves subway cars exclusively for women in Mexico City. The program seems to be successful at reducing sexual harassment toward women by 2.9 percentage points. However, it produces unintended consequences by increasing nonsexual aggression incidents (e.g., insults, shoving) among men by 15.3 percentage points. Both sexual and nonsexual violence seem to be costly; however, our results do not imply that costs of the program outweigh its benefits

Abstract:

We measure the effect of a large nationwide tax reform on sugar-added drinks and caloric-dense food introduced in Mexico in 2014. Using scanner data containing weekly purchases of 47,973 barcodes by 8,130 households and an RD design, we find that calories purchased from taxed drinks and taxed food decreased respectively by 2.7% and 3%. However, this was compensated by increases from untaxed categories, such that total calories purchased did not change. We find increases in cholesterol (12.6%), sodium (5.8%), saturated fat (3.1%), carbohydrates (2%), and proteins (3.8%)

Abstract:

The frequency and intensity of extreme temperature events are likely to increase with climate change. Using a detailed dataset containing information on the universe of loans extended by commercial banks to private firms in Mexico, we examine the relationship between extreme temperatures and credit performance. We find that unusually hot days increase delinquency rates, primarily affecting the agricultural sector, but also non-agricultural industries that rely heavily on local demand. Our results are consistent with general equilibrium effects originated in agriculture that expand to other sectors in agricultural regions. Additionally, following a temperature shock, affected firms face increased challenges in accessing credit, pay higher interest rates, and provide more collateral, indicating a tightening of credit during financial distress

Abstract:

In this work we construct a numerical scheme based on finite differences to approximate the free boundary of an American call option. Points of the free boundary are calculated by approximating the solution of the Black-Scholes partial differential equation with finite differences on domains that are parallelograms for each time step. Numerical results are reported
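
A simplified sketch of the same task under stated assumptions (it is not the authors' parallelogram-domain scheme): an explicit finite-difference solve of the Black-Scholes PDE with an early-exercise projection, tracking the free boundary of an American call. A continuous dividend yield q > 0 is assumed, since without dividends the American call is never exercised early; all parameter values are illustrative.

```python
# Explicit finite differences for an American call with dividend yield q:
# march backward in time, project onto the payoff, and read off the boundary.
import numpy as np

K, r, q, sigma, T = 100.0, 0.05, 0.04, 0.2, 1.0
M = 200                                   # space steps
S_max = 4 * K
dS = S_max / M
S = np.linspace(0.0, S_max, M + 1)
dt = 0.9 * dS**2 / (sigma**2 * S_max**2)  # explicit-scheme stability bound
N = int(T / dt) + 1
dt = T / N

payoff = np.maximum(S - K, 0.0)
V = payoff.copy()                         # value at expiry (tau = 0)
boundary = []
for n in range(N):                        # tau = time to expiry = n * dt
    Vn = V.copy()
    i = np.arange(1, M)
    delta = (Vn[i + 1] - Vn[i - 1]) / (2 * dS)
    gamma_ = (Vn[i + 1] - 2 * Vn[i] + Vn[i - 1]) / dS**2
    V[i] = Vn[i] + dt * (0.5 * sigma**2 * S[i]**2 * gamma_
                         + (r - q) * S[i] * delta - r * Vn[i])
    V[0] = 0.0
    V[M] = S_max - K                      # deep in-the-money boundary value
    V = np.maximum(V, payoff)             # early-exercise projection
    exercised = np.where((V <= payoff + 1e-8) & (S > K))[0]
    boundary.append(S[exercised[0]] if exercised.size else np.nan)

print("free boundary at the valuation date (tau = T):", boundary[-1])
```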

Abstract:

We present higher-order quadrature rules with end corrections for general Newton–Cotes quadrature rules. The construction is based on the Euler–Maclaurin formula for the trapezoidal rule. We present examples with 6 well-known Newton–Cotes quadrature rules. We analyze modified end-corrected quadrature rules, which consist of a simple modification of the Newton–Cotes quadratures with end corrections. Numerical tests and stability estimates show the superiority of the corrected rules based on the trapezoidal and the midpoint rules
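
A minimal sketch of end corrections via the Euler–Maclaurin formula (the construction above covers general Newton–Cotes rules; this only illustrates the trapezoidal case). Subtracting the h² and h⁴ boundary terms raises the trapezoidal rule from order 2 to order 6 for smooth integrands.

```python
# End-corrected trapezoidal rule from the Euler-Maclaurin formula:
# T = I + h^2/12 [f'(b)-f'(a)] - h^4/720 [f'''(b)-f'''(a)] + O(h^6).
import numpy as np

def corrected_trapezoid(f, df, d3f, a, b, n):
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    T = h * (np.sum(f(x)) - 0.5 * (f(x[0]) + f(x[-1])))   # plain trapezoidal rule
    return T - h**2 / 12 * (df(b) - df(a)) + h**4 / 720 * (d3f(b) - d3f(a))

f = np.exp                      # f = f' = f''' for the exponential
I = np.e - 1.0                  # exact integral of exp on [0, 1]
for n in [8, 16, 32]:
    err = abs(corrected_trapezoid(f, f, f, 0.0, 1.0, n) - I)
    print(n, err)               # error shrinks ~64x when n doubles (order 6)
```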

Abstract:

The constructions of the quadratures are based on the method of central corrections described in [4]. The quadratures consist of the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. Integrals of the above type appear in scattering calculations; we test the performance of the quadrature rules with an example of this kind

Abstract:

In this report we construct corrected trapezoidal quadrature rules up to order 40 to evaluate 2-dimensional integrals of the form ∫D v(x, y) log(√(x² + y²)) dx dy, where the domain D is a square containing the point of singularity (0, 0) and v is a C∞ function of compact support contained in D. The procedure we use is a modification of the method constructed in [1]. These quadratures are particularly useful in acoustic scattering calculations with large wave numbers. We describe how to extend the procedure to calculate other 2-dimensional integrals with different singularities

Abstract:

In this paper we construct an algorithm to approximate the solution of the initial value problem y'(t) = f(t,y) with y(t0) = y0. The method is implicit and combines the classical Simpson’s rule with the Simpson’s 3/8 rule to yield an unconditionally A-stable method of order 4

Abstract:

In this paper, we propose an anisotropic adaptive refinement algorithm based on the finite element methods for the numerical solution of partial differential equations. In 2-D, for a given triangular grid and finite element approximating space V, we obtain information on location and direction of refinement by estimating the reduction of the error if a single degree of freedom is added to V. For our model problem the algorithm fits highly stretched triangles along an interior layer, reducing the number of degrees of freedom that a standard h-type isotropic refinement algorithm would use

Abstract:

In this paper we construct an algorithm that generates a sequence of continuous functions that approximate a given real-valued function f of two variables that has jump discontinuities along a closed curve. The algorithm generates a sequence of triangulations of the domain of f. The triangulations include triangles with high aspect ratio along the curve where f has jumps. The sequence of functions generated by the algorithm is obtained by interpolating f on the triangulations using continuous piecewise polynomial functions. The approximation error of this algorithm is O(1/N²) when the triangulation contains N triangles and when the error is measured in the L1 norm. Algorithms that adaptively generate triangulations by local regular refinement produce approximation errors of size O(1/N), even if higher-order polynomial interpolation is used

Abstract:

We construct correction coefficients for high-order trapezoidal quadrature rules to evaluate three-dimensional singular integrals of the form J(v) = ∫D v(x, y, z)/√(x² + y² + z²) dx dy dz, where the domain D is a cube containing the point of singularity (0, 0, 0) and v is a C∞ function defined on ℝ³. The procedure employed here is a generalization to 3-D of the method of central corrections for logarithmic singularities [1] in one dimension, and [2] in two dimensions. As in one and two dimensions, the correction coefficients for high-order trapezoidal rules for J(v) are independent of the number of sampling points used to discretize the cube D. When v is compactly supported in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules provide an efficient, stable and accurate way of approximating J(v). We demonstrate the performance of these quadratures of orders up to 17 for highly oscillatory functions v. These types of integrals appear in scattering calculations in 3-D

Abstract:

We present a high-order, fast, iterative solver for the direct scattering calculation for the Helmholtz equation in two dimensions. Our algorithm solves the scattering problem formulated as the Lippmann-Schwinger integral equation for compactly supported, smoothly vanishing scatterers. There are two main components to this algorithm. First, the integral equation is discretized with quadratures based on high-order corrected trapezoidal rules for the logarithmic singularity present in the kernel of the integral equation. Second, on the uniform mesh required for the trapezoidal rule we rewrite the discretized integral operator as a composition of two linear operators: a discrete convolution followed by a diagonal multiplication; therefore, the application of these operators to an arbitrary vector, required by an iterative method for the solution of the discretized linear system, will cost O(N² log N) for an N-by-N mesh, with the help of the FFT. We demonstrate the performance of the algorithm for scatterers of complex structure and at large wave numbers. For the numerical implementation, GMRES iterations are used, and corrected trapezoidal rules up to order 20 are tested
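
A generic sketch of the matrix-free step this solver relies on: applying a discrete convolution to a 2-D grid function with zero-padded FFTs in O(N² log N), so an iterative method (e.g., GMRES) never forms the dense matrix. The kernel below is an arbitrary stand-in, not the corrected Lippmann-Schwinger quadrature weights.

```python
# Matrix-free application of a 2-D discrete convolution via zero-padded FFTs.
import numpy as np

def apply_convolution(kernel, u):
    # kernel: (2N-1, 2N-1) samples of k[i-j]; u: (N, N) grid function
    n = u.shape[0]
    m = 2 * n - 1
    K = np.fft.fft2(kernel, s=(m, m))
    U = np.fft.fft2(u, s=(m, m))
    full = np.real(np.fft.ifft2(K * U))            # linear convolution, padded
    return full[n - 1:2 * n - 1, n - 1:2 * n - 1]  # central N x N block

n = 64
idx = np.arange(-(n - 1), n)
X, Y = np.meshgrid(idx, idx, indexing="ij")
kernel = 1.0 / (1.0 + X**2 + Y**2)        # stand-in convolution kernel k[i-j]
u = np.random.default_rng(1).standard_normal((n, n))

v = apply_convolution(kernel, u)          # O(N^2 log N) instead of O(N^4)
# direct O(N^2) check at a single grid point (i0, j0):
i0, j0 = 3, 5
direct = sum(kernel[i0 - i + n - 1, j0 - j + n - 1] * u[i, j]
             for i in range(n) for j in range(n))
print(np.isclose(v[i0, j0], direct))
```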

Abstract:

In this report, we construct correction coefficients to obtain high-order trapezoidal quadrature rules to evaluate two-dimensional integrals with a logarithmic singularity of the form J(v) = ∫D v(x, y) ln(√(x² + y²)) dx dy, where the domain D is a square containing the point of singularity (0, 0) and v is a C∞ function defined on the whole plane ℝ². The procedure we use is a generalization to 2-D of the method of central corrections for logarithmic singularities described in [1]. As in 1-D, the correction coefficients are independent of the number of sampling points used to discretize the square D. When v has compact support contained in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules give an efficient, stable, and accurate way of approximating J(v). We provide the correction coefficients to obtain corrected trapezoidal quadrature rules up to order 20

Abstract:

This paper addresses the problem of the optimal design of batch plants with imprecise demands in product amounts. The design of such plants necessarily involves the way equipment may be utilized, which means that plant scheduling and production must form an integral part of the design problem. This work relies on a previous study, which proposed an alternative treatment of the imprecision (demands) by introducing fuzzy concepts, embedded in a multi-objective Genetic Algorithm (GA) that simultaneously takes into account the maximization of the net present value (NPV) and two other performance criteria, i.e. the production delay/advance and a flexibility criterion. The results showed that an additional interpretation step might be necessary to help managers choose among the non-dominated solutions provided by the GA. The analytic hierarchy process (AHP) is a strategy commonly used in Operations Research for solving this kind of multicriteria decision problem, allowing managers' subjective judgments to be captured. The major aim of this study is thus to propose a software tool integrating AHP theory for the analysis of the GA Pareto-optimal solutions, as an alternative decision-support tool for solving the batch plant design problem

Abstract:

A workflow is a set of steps or tasks that model the execution of a process, e.g., protein annotation, invoice generation and composition of astronomical images. Workflow applications commonly require large computational resources. Hence, distributed computing approaches (such as Grid and Cloud computing) emerge as a feasible solution to execute them. Two important factors for executing workflows in distributed computing platforms are (1) workflow scheduling and (2) resource allocation. As a consequence, there is a myriad of workflow scheduling algorithms that map workflow tasks to distributed resources subject to task dependencies, time and budget constraints. In this paper, we present a taxonomy of workflow scheduling algorithms, which categorizes the algorithms into (1) best-effort algorithms (including heuristics, metaheuristics, and approximation algorithms) and (2) quality-of-service algorithms (including budget-constrained, deadline-constrained and algorithms simultaneously constrained by deadline and budget). In addition, a workflow engine simulator was developed to quantitatively compare the performance of scheduling algorithms
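
To make the "best-effort heuristics" category concrete, here is a toy list scheduler for a workflow DAG on homogeneous resources (all names and numbers are illustrative; this is not one of the surveyed algorithms or the simulator mentioned above): tasks become ready when their predecessors finish, and each ready task is greedily placed on the resource giving the earliest start time.

```python
# Toy best-effort workflow scheduler: greedy placement of ready DAG tasks.
def greedy_schedule(deps, runtime, n_resources):
    # deps: {task: [predecessors]}; runtime: {task: cost in abstract time units}
    finish = {}                                    # task -> finish time
    free_at = [0.0] * n_resources                  # next free time per resource
    remaining = set(deps)
    while remaining:
        # tasks whose predecessors have all been scheduled
        ready = [t for t in remaining if all(p in finish for p in deps[t])]
        t = min(ready, key=lambda t: runtime[t])   # simple priority: shortest first
        est = max([finish[p] for p in deps[t]], default=0.0)
        r = min(range(n_resources), key=lambda r: max(free_at[r], est))
        start = max(free_at[r], est)
        finish[t] = start + runtime[t]
        free_at[r] = finish[t]
        remaining.remove(t)
        print(f"task {t} on resource {r}: {start:.1f} -> {finish[t]:.1f}")
    return max(finish.values())                    # makespan

deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
runtime = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
print("makespan:", greedy_schedule(deps, runtime, n_resources=2))
```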

Resumen:

Nowadays, sustainability-related risks and opportunities arise from an entity's dependence on the resources it needs to operate, and also from the impact it has on those resources, which can have several accounting impacts on the determination of financial information. Thus, both the information contained in an entity's financial statements and the sustainability disclosures related to financial information are essential data for a user assessing the value of an entity. These and other aspects are addressed in a simple, reader-friendly way in Impactos contables de acuerdo con las NIF para el cierre de estados financieros 2022, a work useful for the preparation and presentation of financial information for fiscal year 2022. This book is therefore indispensable for independent accountants and accountants in companies of any size, and it is also aimed at, and of great help to, accounting firm staff, teachers, and students at both undergraduate and graduate levels and, of course, preparers of financial information, business owners and the general public

Abstract:

We study the behavior of a decision maker who prefers alternative x to alternative y in menu A if the utility of x exceeds that of y by at least a threshold associated with y and A. Hence the decision maker's preferences are given by menu-dependent interval orders. In every menu, her choice set comprises the undominated alternatives according to this preference. We axiomatize this broad model when thresholds are monotone, i.e., at least as large in larger menus. We also obtain novel characterizations in two special cases that have appeared in the literature: the maximization of a fixed interval order where the thresholds depend on the alternative and not on the menu, and the maximization of monotone semiorders where the thresholds are independent of the alternatives but monotonic in menus

Abstract:

Given a scattered space X = (X, τ) and an ordinal λ, we define a topology τ+λ in such a way that τ+0 = τ and, when X is an ordinal with the initial segment topology, the resulting sequence {τ+λ : λ ∈ Ord} coincides with the family of topologies {I_λ : λ ∈ Ord} used by Icard, Joosten, and the second author to provide semantics for polymodal provability logics. We prove that given any scattered space X of large-enough rank and any ordinal λ > 0, GL is strongly complete for τ+λ. The special case where X = ω^ω + 1 and λ = 1 yields a strengthening of a theorem of Abashidze and Blass

Abstract:

We introduce verification logic, a variant of Artemov's logic of proofs with new terms of the form ¡φ! satisfying the axiom schema φ → ¡φ!:φ. The intention is for ¡φ! to denote a proof of φ in Peano Arithmetic, whenever such a proof exists. By a suitable restriction of the domain of ¡·!, we obtain the verification logic VS5, which realizes the axioms of Lewis' system S5. Our main result is that VS5 is sound and complete for its arithmetical interpretation

Abstract:

This note examines evidence of non-fundamentalness in the rate of variation of annual per capita capital stock for OECD countries in the period 1955–2020. Leeper et al. (2013) proposed a theoretical model in which, due to agents performing fiscal foresight, this economic series could exhibit a non-fundamental behavior (in particular, a non-invertible moving average component), which has important implications for modeling and forecasting. Using the methodology proposed in Velasco and Lobato (2018), which delivers consistent estimators of the autoregressive and moving average parameters without imposing fundamentalness assumptions, we empirically examine whether the capital data are better represented with an invertible or a non-invertible moving average model. We find strong evidence in favor of the non-invertible representation since for the countries that present significant innovation asymmetry, the selected model is predominantly non-invertible

Abstract:

Shelf life experiments have as an outcome a matrix of zeroes and ones that represents the acceptance or non-acceptance by customers when presented with samples of the product under evaluation in a random fashion within a designed experiment. This kind of response is called a Bernoulli response due to the dichotomous nature (0, 1) of its values. It is not rare to find inconsistent sequences of responses, that is, when a customer rejects a less aged sample and does not reject an older sample; in other words, we find a zero before a one. This is due to the human factor present in the experiment. In the presence of this kind of inconsistency, some conventions have been adopted in the literature in order to estimate the shelf life distribution using methods and software from the reliability field, which require numerical responses. In this work we propose a method that does not require coding the original responses into numerical values. We use a more reliable coding by using the Bernoulli response directly and a Bayesian approach. The resulting method is based on solid Bayesian theory and proven computer programs. We show by means of an example and simulation studies that the new methodology clearly beats the methodology proposed by Hough. We also provide the R software necessary for the implementation
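
A minimal grid-posterior sketch of the idea of using the Bernoulli responses directly (a stand-in model, not the authors' method): each panelist either rejects (1) or accepts (0) a sample aged t days, P(reject by t) is taken to be a Weibull CDF, and the posterior over its parameters is computed on a grid under flat priors. The data values are made up for illustration.

```python
# Grid posterior for a Weibull shelf-life model with Bernoulli responses.
import numpy as np

t = np.array([7, 7, 14, 14, 21, 21, 28, 28, 35, 35], dtype=float)  # sample ages
y = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])                       # 1 = rejected

shapes = np.linspace(0.5, 8.0, 120)
scales = np.linspace(5.0, 80.0, 150)
logpost = np.zeros((shapes.size, scales.size))
for i, k in enumerate(shapes):
    for j, lam in enumerate(scales):
        p = 1.0 - np.exp(-(t / lam) ** k)      # Weibull CDF = P(reject by age t)
        p = np.clip(p, 1e-12, 1 - 1e-12)
        logpost[i, j] = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

post = np.exp(logpost - logpost.max())
post /= post.sum()                             # normalized posterior on the grid
# posterior mean of the median shelf life, lam * log(2)**(1/k)
med = scales[None, :] * np.log(2.0) ** (1.0 / shapes[:, None])
print("posterior mean of the median shelf life:", np.sum(post * med))
```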

Abstract:

Definitive Screening Designs (DSD) are a class of experimental designs that have the possibility to estimate linear, quadratic and interaction effects with relatively little experimental effort. The linear or main effects are completely independent of two-factor interactions and quadratic effects. The two-factor interactions are not completely confounded with other two-factor interactions, and quadratic effects are estimable. The number of experimental runs is twice the number of factors of interest plus one. Several approaches have been proposed to analyze the results of these experimental plans; some of these approaches take into account the structure of the design, others do not. The first author of this paper proposed a Bayesian sequential procedure that takes into account the structure of the design; this procedure considers normal and non-normal responses. The creators of the DSD originally performed a forward stepwise regression programmed in JMP, also used the minimization of a bias-corrected version of Akaike's information criterion, and later proposed a frequentist procedure that considers the structure of the DSD. Both the frequentist and Bayesian procedures, when the number of experimental runs is twice the number of factors of interest plus one, use as an initial step fitting a model with only main effects and then check the significance of these effects to proceed. In this paper we present a modification of the Bayesian procedure that incorporates Bayesian factor identification, an approach that computes, for each factor, the posterior probability that it is active, including the possibility that it is present in linear, quadratic or two-factor interaction terms. This is a more comprehensive approach than just testing the significance of an effect

Resumen:

A statistical package with a graphical interface for Bayesian sequential analysis of definitive screening designs

Abstract:

Definitive Screening Designs are a class of experimental designs that under factor sparsity have the potential to estimate linear, quadratic and interaction effects with little experimental effort. BAYESDEF is a package that performs a five-step strategy to analyze this kind of experiment, making use of tools coming from the Bayesian approach. It also includes the least absolute shrinkage and selection operator (lasso) as a check (Aguirre VM. (2016))

Abstract:

With the advent of widespread computing and the availability of open source programs to perform many different programming tasks, nowadays there is a trend in statistics to program tailor-made applications for non-statistical customers in various areas. This is an alternative to having a large statistical package with many functions, many of which are never used. In this article, we present CONS, an R package dedicated to Consonance Analysis. Consonance Analysis is a useful numerical and graphical exploratory approach for evaluating the consistency of the measurements and the panel of people involved in sensory evaluation. It makes use of several univariate and multivariate techniques, either graphical or analytical, particularly Principal Components Analysis. The package is implemented with a graphical user interface in order to make it user-friendly
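
A rough numerical sketch of the idea behind consonance analysis (an illustration, not the CONS package's API): rows are evaluated samples, columns are panelists, and the share of variance captured by the first principal component serves as a crude index of how consistently the panel scores the samples. The data are simulated for illustration.

```python
# Variance share of the first principal component as a crude consonance index.
import numpy as np

rng = np.random.default_rng(7)
true_profile = rng.normal(size=12)                  # "real" sample differences
# 12 samples x 8 panelists who all roughly agree with the true profile
panel = true_profile[:, None] + 0.3 * rng.normal(size=(12, 8))

def consonance_index(scores):
    X = scores - scores.mean(axis=0)                # center each panelist's scores
    s = np.linalg.svd(X, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)               # variance share of first PC

print("consistent panel:", consonance_index(panel))               # close to 1
print("random panel   :", consonance_index(rng.normal(size=(12, 8))))
```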

Abstract:

Definitive screening designs (DSDs) are a class of experimental designs that allow the estimation of linear, quadratic, and interaction effects with little experimental effort if there is effect sparsity. The number of experimental runs is twice the number of factors of interest plus one. Many industrial experiments involve nonnormal responses. Generalized linear models (GLMs) are a useful alternative for analyzing this kind of data. The analysis of GLMs is based on asymptotic theory, something very debatable, for example, in the case of the DSD with only 13 experimental runs. So far, analysis of DSDs has considered a normal response. In this work, we show a five-step strategy that makes use of tools coming from the Bayesian approach to analyze this kind of experiment when the response is nonnormal. We consider the case of binomial, gamma, and Poisson responses without having to resort to asymptotic approximations. We use posterior odds that effects are active and posterior probability intervals for the effects, and use them to evaluate the significance of the effects. We also combine the results of the Bayesian procedure with the lasso estimation procedure to enhance the scope of the method

Abstract:

It is not uncommon to deal with very small experiments in practice. For example, if the experiment is conducted on the production process, it is likely that only a very few experimental runs will be allowed. If testing involves the destruction of expensive experimental units, we might only have very small fractions as experimental plans. In this paper, we consider the analysis of very small factorial experiments with only four or eight experimental runs. In addition, the methods presented here could easily be applied to larger experiments. A Daniel plot of the effects to judge significance may be useless for this type of situation. Instead, we use different tools based on the Bayesian approach to judge significance. The first tool is the computation of the posterior probability that each effect is significant. The second tool is what is referred to in Bayesian analysis as the posterior distribution of each effect. Combining these tools with the Daniel plot gives us more elements to judge the significance of an effect. Because, in practice, the response may not necessarily be normally distributed, we extend our approach to the generalized linear model setup. By simulation, we show not only that in the case of discrete responses and very small experiments the usual large-sample approach for modeling generalized linear models may produce very biased and variable estimators, but also that the Bayesian approach provides very sensible results

Abstract:

Inference for quantile regression parameters presents two problems. First, it is computationally costly because estimation requires optimising a non-differentiable objective function, which is a formidable numerical task, especially with many observations and regressors. Second, it is controversial because standard asymptotic inference requires the choice of smoothing parameters, and different choices may lead to different conclusions. Bootstrap methods solve the latter problem at the price of enlarging the former. We give a theoretical justification for a new inference method consisting of the construction of asymptotic pivots based on a small number of bootstrap replications. The procedure still avoids smoothing and reduces the computational cost of usual bootstrap methods. We show its usefulness to draw inferences on linear or non-linear functions of the parameters of quantile regression models

Abstract:

Repeatability and reproducibility (R&R) studies can be used to pinpoint the parts of a measurement system that might need improvement. Simulation shows that in many R&R studies there is practically no difference between using five or 10 parts

Abstract:

The existing methods for analyzing unreplicated fractional factorial experiments that do not contemplate the possibility of outliers in the data perform poorly at detecting the active effects when that contingency becomes a reality. There are some methods to detect active effects under this experimental setup that consider outliers. We propose a new procedure based on robust regression methods to estimate the effects that allows for outliers. We perform a simulation study to compare its behavior relative to existing methods and find that the new method has very competitive or even better power. The relative power improves as the contamination and size of outliers increase, when the number of active effects is up to four

Abstract:

The paper presents the asymptotic theory of the efficient method of moments when the model of interest is not correctly specified. The paper assumes a sequence of independent and identically distributed observations and a global misspecification. It is found that the limiting distribution of the estimator is still asymptotically normal, but it suffers a strong impact in the covariance matrix. A consistent estimator of this covariance matrix is provided. The large-sample distribution of the estimated moment function is also obtained. These results are used to discuss the situation when the moment conditions hold but the model is misspecified. It is also shown that the overidentifying restrictions test has asymptotic power one whenever the limit moment function is different from zero. It is also proved that the bootstrap distributions converge almost surely to the previously mentioned distributions, and hence they could be used as an alternative to draw inferences under misspecification. Interestingly, it is also shown that the bootstrap can be reliably applied even if the number of bootstrap replications is very small

Abstract:

It is well known that outliers or faulty observations affect the analysis of unreplicated factorial experiments. This work proposes a method that combines the rank transformation of the observations, the Daniel plot and a formal statistical testing procedure to assess the significance of the effects. It is shown, by means of previous theoretical results cited in the literature, examples and a Monte Carlo study, that the approach is helpful in the presence of outlying observations. The simulation study includes an ample set of alternative procedures that have been published in the literature to detect significant effects in unreplicated experiments. The Monte Carlo study also gives evidence that using the rank transformation as proposed provides two advantages: it keeps control of the experimentwise error rate and improves the relative power to detect active factors in the presence of outlying observations

Abstract:

Most of the inferential results are based on the assumption that the user has a "random" sample, by which it is usually understood that the observations are a realization from a set of independent identically distributed random variables. However, most of the time this is not true, mainly for two reasons: first, the data are not obtained by means of a probabilistic sampling scheme from the population; the data are just gathered as they become available or, in the best of cases, using some kind of control variables and quota sampling. Second, even if a probabilistic scheme is used, the sample design is complex in the sense that it is not simple random sampling with replacement, but instead involves some sort of stratification or clustering or a combination of both. For an excellent discussion of the kind of considerations that should be made in the first situation, see Hahn and Meeker (1993) and a related comment in Aguirre (1994). For the second problem there is a book on the topic by Skinner et al. (1989). In this paper we consider the problem of evaluating the effect of sampling complexity on Pearson's Chi-square and other alternative tests for goodness of fit for proportions. Work on this problem can be found in Shuster and Downing (1976), Rao and Scott (1974), Fellegi (1980), Holt et al. (1980), Rao and Scott (1981), and Thomas and Rao (1987). Out of this work several adjustments to Pearson's test have emerged, namely: Wald-type tests, an average eigenvalue correction and a Satterthwaite-type correction. There is a more recent and general resampling approach given in Sitter (1992), but it was not pursued in this study

Abstract:

Sometimes data analysis using the usual parametric techniques produces misleading results due to violations of the underlying assumptions, such as outliers or non-constant variances. In particular, this can happen in unreplicated factorial or fractional factorial experiments. To help in this situation alternative analyses have been proposed. For example, Box and Meyer give a Bayesian analysis allowing for possibly faulty observations in unreplicated factorials, and the well-known Box-Cox transformation can be used when there is a change in dispersion. This paper presents an analysis based on the rank transformation that deals with the above problems. The analysis is simple to use and can be implemented with a general purpose statistical computer package. The procedure is illustrated with examples from the literature. A theoretical justification is outlined at the end of the paper

Abstract:

The article considers the problem of choosing between two (possibly) nonlinear models that have been fitted to the same data using M-estimation methods. An asymptotically normally distributed test statistic is proposed, and its finite-sample behavior is examined using a Monte Carlo study. We found that the presence of a competitive model either in the null or the alternative hypothesis affects the distributional properties of the tests, and that when the data contain outlying observations the new procedure has significantly higher power than the rest of the tests

Abstract:

Fuller (1976), Anderson (1971), and Hannan (1970) introduce infinite moving average models as the limit in quadratic mean of a sequence of partial sums, and Fuller (1976) shows that if the addends are assumed independent then the limit also holds almost surely. This note shows that without the assumption of independence, the limit holds with probability one. Moreover, the proofs given here are easier to teach
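In the notation standard for these models (the square-summability condition below is the usual assumption, not a detail taken from the note), the object in question is

```latex
X_t \;=\; \sum_{j=0}^{\infty} \psi_j \,\varepsilon_{t-j}
\;:=\; \lim_{n \to \infty} \sum_{j=0}^{n} \psi_j \,\varepsilon_{t-j},
\qquad \sum_{j=0}^{\infty} \psi_j^{2} < \infty,
```

where the classical treatments take the limit in quadratic mean; the note's contribution is that, under its conditions, the partial sums converge with probability one.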

Abstract:

A test for the problem of choosing between several nonnested nonlinear regression models simultaneously is presented. The test does not require an explicit specification of a parametric family of distributions for the error term and has a closed form

Abstract:

The asymptotic distribution of the generalized Cox test for choosing between two multivariate, nonlinear regression models in implicit form is derived. The data are assumed to be generated by a model that need not be either the null or the non-null model. As the data-generating model is not subjected to a Pitman drift, the analysis is global, not local, and provides a fairly complete qualitative description of the power characteristics of the generalized Cox test. Some investigations of these characteristics are included. A new test statistic is introduced that does not require an explicit specification of the error distribution of the null model. The idea is to replace an analytical computation of the expectation of the Cox difference with a bootstrap estimate. The null distribution of this new test is derived

Abstract:

A great deal of research has investigated how various aspects of ethnic identity influence consumer behavior, yet this literature is fragmented. The objective of this article was to present an integrative theoretical model of how individuals are motivated to think and act in a manner consistent with their salient ethnic identities. The model emerges from a review of social science and consumer research about US Hispanics, but researchers could apply it in its general form and/or adapt it to other populations. Our model extends Oyserman's (Journal of Consumer Psychology, 19, 250) identity-based motivation (IBM) model by differentiating between two types of antecedents of ethnic identity salience: longitudinal cultural processes and situational activation by contextual cues, each with different implications for the availability and accessibility of ethnic cultural knowledge. We provide new insights by introducing three ethnic identity motives that are unique to ethnic (nonmajority) cultural groups: belonging, distinctiveness, and defense. These three motives are in constant tension with one another and guide longitudinal processes like acculturation, and ultimately influence consumers' procedural readiness and action readiness. Our integrative framework organizes and offers insights into the current body of Hispanic consumer research, and highlights gaps in the literature that present opportunities for future research

Abstract:

In many Solvency and Basel loss data sets, there are thresholds or deductibles that limit the scope of the analysis. On the other hand, the Birnbaum-Saunders model has received great attention during the last two decades and can be used as a loss distribution. In this paper, we propose a solution to the problem of deductibles using a truncated version of the Birnbaum-Saunders distribution. The probability density function, cumulative distribution function, and moments of this distribution are obtained. In addition, properties regularly used in the insurance industry, such as multiplication by a constant (inflation effect) and reciprocal transformation, are discussed. Furthermore, a study of the behavior of the risk rate and of risk measures is carried out. Moreover, estimation aspects are also considered in this work. Finally, an application based on real loss data from a commercial bank is conducted
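A hedged sketch of the construction: writing F for the Birnbaum-Saunders distribution function (shape α, scale β) and d for the deductible, generic left-truncation gives

```latex
F(x) \;=\; \Phi\!\left[\tfrac{1}{\alpha}\!\left(\sqrt{x/\beta}-\sqrt{\beta/x}\right)\right],
\quad x > 0,
\qquad
f_{d}(x) \;=\; \frac{f(x)}{1-F(d)}, \quad x > d,
```

where f is the corresponding density and f_d the truncated density; the paper's exact parameterization and derived results may differ from this generic form.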

Abstract:

This paper proposes two new estimators for determining the number of factors (r) in static approximate factor models. We exploit the well-known fact that the r largest eigenvalues of the variance matrix of N response variables grow unboundedly as N increases, while the other eigenvalues remain bounded. The new estimators are obtained simply by maximizing the ratio of two adjacent eigenvalues. Our simulation results provide promising evidence for the two estimators
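A minimal numerical sketch of the eigenvalue-ratio idea (the function, the cap rmax, and the toy factor model below are illustrative assumptions, not the authors' exact estimator or data):

```python
import numpy as np

def eigenvalue_ratio_r(X, rmax=8):
    """Estimate the number of factors by maximizing the ratio of
    adjacent eigenvalues of the sample covariance of X (T x N).
    A sketch of the idea described in the abstract."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / T                            # N x N sample covariance
    eig = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, descending
    ratios = eig[:rmax] / eig[1:rmax + 1]        # lambda_k / lambda_{k+1}
    return int(np.argmax(ratios)) + 1            # k with the largest ratio

# Toy check: 3 common factors plus idiosyncratic noise.
rng = np.random.default_rng(0)
F, L = rng.normal(size=(500, 3)), rng.normal(size=(100, 3))
X = F @ L.T + rng.normal(scale=0.5, size=(500, 100))
print(eigenvalue_ratio_r(X))   # typically prints 3
```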

Abstract:

We study a modification of the Luce rule for stochastic choice which admits the possibility of zero probabilities. In any given menu, the decision maker uses the Luce rule on a consideration set, potentially a strict subset of the menu. Without imposing any structure on how the consideration sets are formed, we characterize the resulting behavior using a single axiom. Our result offers insight into special cases where consideration sets are formed under various restrictions
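As a sketch of the behavior being characterized: a Luce rule applied on a consideration set assigns the usual Luce probabilities within that set and probability zero outside it (the weights and the consideration map below are hypothetical primitives, not objects from the paper):

```python
from typing import Dict, FrozenSet

def luce_with_consideration(menu: FrozenSet[str],
                            consider: FrozenSet[str],
                            w: Dict[str, float]) -> Dict[str, float]:
    """Luce choice probabilities restricted to a consideration set."""
    C = menu & consider                       # items actually considered
    total = sum(w[x] for x in C)
    # items outside the consideration set get probability zero
    return {x: (w[x] / total if x in C else 0.0) for x in menu}

menu = frozenset({"a", "b", "c"})
probs = luce_with_consideration(menu, frozenset({"a", "b"}),
                                {"a": 2.0, "b": 1.0, "c": 5.0})
print(probs)   # a: 2/3, b: 1/3, c: 0.0 despite its high weight
```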

Abstract:

Purpose: This paper summarizes the findings of a research project aimed at benchmarking the environmental sustainability practices of the top 500 Mexican companies. Design/methodology/approach: The paper surveyed the firms with regard to various aspects of their adoption of environmental sustainability practices, including who or what prompted adoption, future adoption plans, decision-making responsibility, and internal/external challenges. The survey also explored how the adoption of environmental sustainability practices relates to the competitiveness of these firms. Findings: The results suggest that Mexican companies are very active in the various areas of business where environmental sustainability is relevant. Not surprisingly, however, the Mexican companies are seen to be at an early stage of development along the sustainability “learning curve”. Research limitations/implications: The sample consisted of 103 self-selected firms representing the six primary business sectors in the Mexican economy. Because the manufacturing sector is significantly overrepresented in the sample and because of its importance in addressing issues of environmental sustainability, when appropriate, specific results for this sector are reported and contrasted with the overall sample. Practical implications: The vast majority of these firms see adopting environmental sustainability practices as being profitable and think this will be even more important in the future. Originality/value: Improving the environmental performance of business firms through the adoption of sustainability practices is compatible with competitiveness and improved financial performance. In Mexico, one might expect that the same would be true, but only anecdotal evidence was heretofore available

Abstract:

We study the consumption-portfolio allocation problem in continuous time when asset prices follow Lévy processes and the investor is concerned about potential model misspecification. We derive optimal consumption and portfolio policies that are robust to uncertainty about the hard-to-estimate drift rate, jump intensity and jump size parameters. We also provide a semi-closed form formula for the detection-error probability and compare various portfolio holding strategies, including robust and non-robust policies. Our quantitative analysis shows that ignoring uncertainty leads to significant wealth loss for the investor

Abstract:

We exploit the manifold increase in homicides in 2008–11 in Mexico resulting from its war on organized drug traffickers to estimate the effect of drug-related homicides on housing prices. We use an unusually rich data set that provides national coverage of housing prices and homicides and exploits within-municipality variations. We find that the impact of violence on housing prices is borne entirely by the poor sectors of the population. An increase in homicides equivalent to 1 standard deviation leads to a 3 percent decrease in the price of low-income housing

Abstract:

This paper examines foreign direct investment (FDI) in the Hungarian economy in the period of post-Communist transition since 1989. Hungary took a quite aggressive approach in welcoming foreign investment during this period and as a result had the highest per capita FDI in the region as of 2001. We discuss the impact of FDI in terms of strategic intent, i.e., market serving and resource seeking FDI. The effect of these two kinds of FDI is contrasted by examining the impact of resource seeking FDI in manufacturing sectors and market serving FDI in service industries. In the case of transition economies, we argue that due to the strategic intent, resource seeking FDI can imply a short-term impact on economic development whereas market serving FDI strategically implies a long-term presence with increased benefits for the economic development of a transition economy. We focus on market serving FDI in the Hungarian banking sector, which has brought improved service and products to multinational and Hungarian firms. This has been accompanied by the introduction of innovative financial products to the Hungarian consumer, in particular consumer credit including mortgage financing. However, the latter remains an underserved segment with much growth potential. For public policy in Hungary and other transition economies, we conclude that policymakers should consider the strategic intent of FDI in order to maximize its benefits in their economies

Abstract:

We propose a general framework for extracting rotation invariant features from images for the tasks of image analysis and classification. Our framework is inspired by the form of the Zernike set of orthogonal functions. It provides a way to use a set of one-dimensional functions to form an orthogonal set over the unit disk by non-linearly scaling their domain and then associating with them an exponential term. When the images are projected into the subspace created with the proposed framework, rotations in the image affect only the exponential term, while the values of the orthogonal functions serve as rotation invariant features. We exemplify our framework using the Haar wavelet functions to extract features from several thousand images of symbols. We then use the features in an OCR experiment to demonstrate the robustness of the method
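A sketch of why such a construction is rotation invariant (the notation here is assumed, not taken from the paper): with radial profiles R_n built from the one-dimensional functions and an angular exponential term, the basis and projection coefficients are

```latex
V_{n,m}(r,\theta) \;=\; R_n(r)\, e^{\,i m \theta},
\qquad
A_{n,m} \;=\; \int_{0}^{2\pi}\!\!\int_{0}^{1} I(r,\theta)\, R_n(r)\, e^{-i m \theta}\, r\, dr\, d\theta,
```

and rotating the image by an angle φ multiplies A_{n,m} by e^{-imφ}, so the moduli |A_{n,m}| are unchanged and can serve as the rotation invariant features.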

Abstract:

In this paper we explore the use of orthogonal functions as generators of representative, compact descriptors of image content. In Image Analysis and Pattern Recognition such descriptors are referred to as image features, and there are some useful properties they should possess such as rotation invariance and the capacity to identify different instances of one class of images. We exemplify our algorithmic methodology using the family of Daubechies wavelets, since they form an orthogonal function set. We benchmark the quality of the image features generated by doing a comparative OCR experiment with three different sets of image features. Our algorithm can use a wide variety of orthogonal functions to generate rotation invariant features, thus providing the flexibility to identify sets of image features that are best suited for the recognition of different classes of images

Abstract:

The COVID-19 pandemic triggered a huge, sudden uptake in work from home, as individuals and organizations responded to contagion fears and government restrictions on commercial and social activities (Adams-Prassl et al. 2020; Bartik et al. 2020; Barrero et al. 2020; De Fraja et al. 2021). Over time, it has become evident that the big shift to work from home will endure after the pandemic ends (Barrero et al. 2021). No other episode in modern history involves such a pronounced and widespread shift in working arrangements in such a compressed time frame. The Industrial Revolution and the later shift away from factory jobs brought greater changes in skill requirements and business operations, but they unfolded over many decades. These facts prompt some questions: What explains the pandemic's role as catalyst for a lasting uptake in work from home (WFH)? When looking across countries and regions, have differences in pandemic severity and the stringency of government lockdowns had lasting effects on WFH levels? What does a large, lasting shift to remote work portend for workers? Finally, how might the big shift to remote work affect the pace of innovation and the fortunes of cities?

Abstract:

The pandemic triggered a large, lasting shift to work from home (WFH). To study this shift, we survey full-time workers who finished primary school in twenty-seven countries as of mid-2021 and early 2022. Our cross-country comparisons control for age, gender, education, and industry and treat the United States mean as the baseline. We find, first, that WFH averages 1.5 days per week in our sample, ranging widely across countries. Second, employers plan an average of 0.7 WFH days per week after the pandemic, but workers want 1.7 days. Third, employees value the option to WFH two to three days per week at 5 percent of pay, on average, with higher valuations for women, people with children, and those with longer commutes. Fourth, most employees were favorably surprised by their WFH productivity during the pandemic. Fifth, looking across individuals, employer plans for WFH levels after the pandemic rise strongly with WFH productivity surprises during the pandemic. Sixth, looking across countries, planned WFH levels rise with the cumulative stringency of government-mandated lockdowns during the pandemic. We draw on these results to explain the big shift to WFH and to consider some implications for workers, organizations, cities, and the pace of innovation

Abstract:

This paper presents evidence of large learning losses and partial recovery in Guanajuato, Mexico, during and after the school closures related to the COVID-19 pandemic. Learning losses were estimated using administrative data from enrollment records and by comparing the results of a census-based standardized test administered to approximately 20,000 5th and 6th graders in: (a) March 2020 (a few weeks before schools closed); (b) November 2021 (2 months after schools reopened); and (c) June 2023 (21 months after schools reopened and over three years after the pandemic started). On average, students performed 0.2 to 0.3 standard deviations lower in Spanish and math after schools reopened, equivalent to 0.66 to 0.87 years of schooling in Spanish and 0.87 to 1.05 years of schooling in math. By June 2023, students had made up for about 60% of the learning loss that built up during school closures but still scored 0.08–0.11 standard deviations below their pre-pandemic levels (equivalent to 0.23–0.36 years of schooling)

Abstract:

When analyzing catastrophic risk, traditional measures for evaluating risk, such as the probable maximum loss (PML), value at risk (VaR), tail-VaR, and others, can become practically impossible to obtain analytically in certain types of insurance, such as earthquake, and certain types of reinsurance arrangements, especially non-proportional arrangements with reinstatements. Given the available information, it can be very difficult for an insurer to measure its risk exposure. The transfer of risk in this type of insurance is usually done through reinsurance schemes combining diverse types of contracts that can greatly reduce the extreme tail of the cedant's loss distribution. This effect can be assessed mathematically. The PML is defined in terms of a very extreme quantile. Also, under standard operating conditions, insurers use several "layers" of non-proportional reinsurance that may or may not be combined with some type of proportional reinsurance. The resulting reinsurance structures are then very complicated to analyze, and evaluating their mitigation or transfer effects analytically may be infeasible, so it may be necessary to use alternative approaches, such as Monte Carlo simulation methods. This is what we do in this paper in order to measure the effect of a complex reinsurance treaty on the risk profile of an insurance company. We compute the pure risk premium and the PML, as well as a host of other results: impact on the insured portfolio, risk transfer effect of reinsurance programs, proportion of times reinsurance is exhausted, percentage of years it was necessary to use the contractual reinstatements, etc. Since the estimators of quantiles are known to be biased, we explore the alternative of using an Extreme Value approach to complement the analysis
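A stylized Monte Carlo sketch of one piece of such an exercise: an excess-of-loss layer with reinstatements applied, for brevity, to the annual aggregate loss (the frequency/severity model, the layer terms, and the omission of reinstatement premiums are all illustrative assumptions, not the treaty studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def xl_recovery(loss, retention, limit, reinstatements=1):
    """Recovery from one excess-of-loss layer with a finite number of
    reinstatements: total capacity is (1 + reinstatements) * limit."""
    layer = np.clip(loss - retention, 0.0, None)
    return np.minimum(layer, (1 + reinstatements) * limit)

# Toy annual aggregate losses: Poisson event counts, heavy-tailed severities.
years = 20_000
gross = np.array([
    np.sum(rng.pareto(2.5, rng.poisson(3)) * 10.0) for _ in range(years)
])
net = gross - xl_recovery(gross, retention=30.0, limit=20.0)

# Pure premium as the mean, PML as an extreme quantile.
print("pure premium  gross:", gross.mean().round(2), " net:", net.mean().round(2))
print("PML 99.5%     gross:", np.quantile(gross, 0.995).round(1),
      " net:", np.quantile(net, 0.995).round(1))
```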

Abstract:

Estimation of adequate reserves for outstanding claims is one of the main activities of actuaries in property/casualty insurance and a major topic in actuarial science. The need to estimate future claims has led to the development of many loss reserving techniques. There are two important problems that must be dealt with in the process of estimating reserves for outstanding claims: one is to determine an appropriate model for the claims process, and the other is to assess the degree of correlation among claim payments in different calendar and origin years. We approach both problems here. On the one hand we use a gamma distribution to model the claims process and, in addition, we allow the claims to be correlated. We follow a Bayesian approach for making inference with vague prior distributions. The methodology is illustrated with a real data set and compared with other standard methods

Abstract:

Consider a random sample X1, X2, ..., Xn from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded, and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is made from a Bayesian perspective by using a Markov chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refer to the sugar content (°BRIX level) of orange juice produced in different countries
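The paper's MCMC scheme is not spelled out in the abstract; as a hedged illustration of why the posterior is awkward when only (n, mean, range) are recorded, and how one can still sample from it, here is a crude approximate-rejection (ABC-style) sketch, not the authors' algorithm, with all tolerances and proposal ranges being arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Recorded summaries (hypothetical data): size, sample mean, sample range.
n, xbar_obs, range_obs = 12, 11.2, 3.1

draws = []
while len(draws) < 500:
    mu = rng.normal(xbar_obs, 2.0)     # diffuse proposal for the mean
    sigma = rng.uniform(0.1, 5.0)      # flat proposal for the std. deviation
    x = rng.normal(mu, sigma, n)       # simulate a sample of size n
    # keep (mu, sigma) when the simulated summaries match the recorded
    # ones; the tolerances trade accuracy for speed (crude, slow, for
    # illustration only)
    if abs(x.mean() - xbar_obs) < 0.1 and abs(np.ptp(x) - range_obs) < 0.1:
        draws.append((mu, sigma))

mu_post, sigma_post = np.array(draws).T
print(mu_post.mean(), sigma_post.mean())   # approximate posterior means
```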

Abstract:

This paper is concerned with the situation that occurs in claims reserving when there are negative values in the development triangle of incremental claim amounts. Typically these negative values will be the result of salvage recoveries, payments from third parties, total or partial cancellation of outstanding claims due to initial overestimation of the loss or to a possible favorable jury decision in favor of the insurer, rejection by the insurer, or just plain errors. Some of the traditional methods of claims reserving, such as the chain-ladder technique, may produce estimates of the reserves even when there are negative values. However, many methods can break down in the presence of enough (in number and/or size) negative incremental claims if certain constraints are not met. Historically the chain-ladder method has been used as a gold standard (benchmark) because of its widespread use and ease of application. A method that improves on the gold standard is one that can handle situations where there are many negative incremental claims and/or some of these are large. This paper presents a Bayesian model to consider negative incremental values, based on a three-parameter log-normal distribution. The model presented here allows the actuary to provide point estimates and measures of dispersion, as well as the complete distribution for outstanding claims from which the reserves can be derived. It is concluded that the method has a clear advantage over other existing methods. A Markov chain Monte Carlo simulation is applied using the package WinBUGS

Abstract:

The BMOM (Bayesian method of moments) is particularly useful for obtaining post-data moments and densities for parameters and future observations when the form of the likelihood function is unknown and thus a traditional Bayesian approach cannot be used. Also, even when the form of the likelihood is assumed known, in time series problems it is sometimes difficult to formulate an appropriate prior density. Here, we show how the BMOM approach can be used in two nontraditional problems. The first one is conditional forecasting in regression and time series autoregressive models. Specifically, it is shown that when forecasting disaggregated data (say quarterly data) given aggregate constraints (say in terms of annual data) it is possible to apply a Bayesian approach to derive conditional forecasts in the multiple regression model. The types of constraints (conditioning) usually considered are that the sum, or the average, of the forecasts equals a given value. This kind of condition can be applied to forecasting quarterly values whose sum must equal a given annual value. Analogous results are obtained for AR(p) models. The second problem we analyse is the issue of aggregation and disaggregation of data in relation to predictive precision and modelling. Predictive densities are derived for future aggregate values by means of the BMOM based on a model for disaggregated data. They are then compared with those derived from aggregated data

Resumen:

El problema de estimar el valor acumulado de una variable positiva y continua para la cual se ha observado una acumulación parcial, y generalmente con sólo un reducido número de observaciones (dos años), se puede llevar a cabo aprovechando la existencia de estacionalidad estable (de un periodo a otro). Por ejemplo, la cantidad por pronosticar puede ser el total de un periodo (año) y el cual debe hacerse en cuanto se obtiene información sólo para algunos subperiodos (meses) dados. Estas condiciones se presentan de manera natural en el pronóstico de las ventas estacionales de algunos productos ‘de temporada’, tales como juguetes; en el comportamiento de los inventarios de bienes cuya demanda varía estacionalmente, como los combustibles; o en algunos tipos de depósitos bancarios, entre otros. En este trabajo se analiza el problema en el contexto de muestreo por conglomerados. Se propone un estimador de razón para el total que se quiere pronosticar, bajo el supuesto de estacionalidad estable. Se presenta un estimador puntual y uno para la varianza del total. El método funciona bien cuando no es factible aplicar metodología estándar debido al reducido número de observaciones. Se incluyen algunos ejemplos reales, así como aplicaciones a datos publicados con anterioridad. Se hacen comparaciones con otros métodos

Abstract:

The problem of estimating the accumulated value of a positive and continuous variable for which some partially accumulated data has been observed, and usually with only a small number of observations (two years), can be approached taking advantage of the existence of stable seasonality (from one period to another). For example, the quantity to be predicted may be the total for a period (year), and the prediction needs to be made as soon as partial information becomes available for given subperiods (months). These conditions appear in a natural way in the prediction of seasonal sales of style goods, such as toys; in the behavior of inventories of goods where demand varies seasonally, such as fuels; or in banking deposits, among many other examples. In this paper, the problem is addressed within a cluster sampling framework. A ratio estimator is proposed for the total value to be forecast under the assumption of stable seasonality. Estimators are obtained for both the point forecast and its variance. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Some real examples are included, as well as applications to some previously published data. Comparisons are made with other procedures
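A minimal numerical sketch of the stable-seasonality ratio idea (the monthly figures are made up, and the paper's cluster-sampling point and variance estimators are not reproduced here):

```python
import numpy as np

# Hypothetical monthly sales for two complete past years (stable seasonality).
past = np.array([
    [12, 10,  9, 11, 14, 18, 22, 25, 19, 15, 13, 20],
    [13, 11, 10, 12, 15, 20, 24, 27, 21, 16, 14, 22],
], dtype=float)

# Current year observed only through June (first 6 subperiods).
current_partial = np.array([14, 12, 10, 13, 16, 21], dtype=float)

# Ratio estimator: scale the observed partial total by the historical
# ratio of full-period totals to same-subperiod partial totals.
k = len(current_partial)
ratio = past.sum() / past[:, :k].sum()
total_hat = ratio * current_partial.sum()
print(round(total_hat, 1))   # forecast of the current full-year total
```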

Abstract:

We present a Bayesian solution to forecasting a time series when few observations are available. The quantity to predict is the accumulated value of a positive, continuous variable when partially accumulated data are observed. These conditions appear naturally in predicting sales of style goods and coupon redemption. A simple model describes the relation between partial and total values, assuming stable seasonality. Exact analytic results are obtained for point forecasts and the posterior predictive distribution. Noninformative priors allow automatic implementation. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Examples are provided

Resumen:

Este artículo evalúa algunos aspectos del Proyecto Piloto de Nutrición, Alimentación y Salud (PNAS). Describe brevemente el Proyecto y presenta las características de la población beneficiaria, luego profundiza en el problema de la pobreza y a partir de un índice se evalúa la selección de las comunidades beneficiadas por el Proyecto. Posteriormente se describe la metodología usada en el análisis costo-efectividad y se da el procedimiento para el cálculo de los cocientes del efecto que tuvo el PNAS específicamente en el gasto en alimentos. Por último, se presentan las conclusiones que, entre otros aspectos, arrojan que el efecto del PNAS en el gasto en alimentos de las familias indujo un incremento del gasto de 7.3% en la zona ixtlera y de 4.3% en la zona otomí-mazahua, con un costo de 29.9 nuevos pesos (de 1991) y de 40.9 para cada una de las zonas, respectivamente

Abstract:

An evaluation is made of some aspects of the Proyecto Piloto de Nutrición, Alimentación y Salud (PNAS), a pilot program for nutrition, food and health of the Mexican Government. We give a brief description of the Project and the characteristics of the target population. We then describe and use the FGT Index to determine whether the communities included in the Project were correctly chosen. We describe the method of cost-effectiveness analysis used in this article. The procedure for specifying cost-effectiveness ratios is presented next, and its application to measure the impact of PNAS on food expenditures is carried out. Finally we present empirical results showing, among other things, that PNAS increased the food expenditures of the participating households by 7.3% in the Ixtlera Zone and by 4.3% in the Otomí-Mazahua Zone, at a cost of N$29.9 (1991 pesos) and N$40.9 for each zone, respectively

Resumen:

Con frecuencia las instituciones financieras internacionales y los gobiernos locales se ven implicados en la implantación de programas de desarrollo. Existe amplia evidencia de que los mejores resultados se obtienen cuando la comunidad se compromete en la operación de los programas, es decir cuando existe participación comunitaria. La evidencia es principalmente cualitativa, pues no hay métodos para medir cuantitativamente esta participación. En este artículo se propone un procedimiento para generar un índice agregado de participación comunitaria. Está orientado de manera específica a medir el grado de participación comunitaria en la construcción de obras de beneficio colectivo. Para estimar los parámetros del modelo que se propone es necesario hacer algunos supuestos, debido a las limitaciones en la información. Se aplica el método a datos de comunidades que participaron en un proyecto piloto de nutrición-alimentación y salud que se llevó a cabo en México

Abstract:

There is ample evidence that the best results are obtained in development programs when the target community gets involved in their implementation and/or operation. The evidence is mostly qualitative, however, since there are no methods for measuring this participation quantitatively. In this paper we present a procedure for generating an aggregate index of community participation based on productivity. It is specifically aimed at measuring community participation in the construction of works for collective benefit. Because there are limitations on the information available, additional assumptions must be made to estimate parameters. The method is applied to data from communities in Mexico participating in a national nutrition, food and health program

Abstract:

A Bayesian approach is used to derive constrained and unconstrained forecasts in an autoregressive time series model. Both are obtained by formulating an AR(p) model in such a way that it is possible to compute numerically the predictive distribution for any number of forecasts. The types of constraints considered are that a linear combination of the forecasts equals a given value. This kind of restriction is applied to forecasting quarterly values whose sum must be equal to a given annual value. Constrained forecasts are generated by conditioning on the predictive distribution of unconstrained forecasts. The procedures are applied to the Quarterly GNP of Mexico, to a simulated series from an AR(4) process and to the Quarterly Unemployment Rate for the United States
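For intuition, a standard identity rather than the paper's numerical procedure: if the unconstrained forecasts were exactly jointly normal, Y ~ N(μ, Σ), then conditioning on a linear constraint c⊤Y = a gives

```latex
Y \mid c^{\top} Y = a \;\sim\;
N\!\Big(\mu + \Sigma c \,(c^{\top} \Sigma c)^{-1}\,(a - c^{\top}\mu),\;
\Sigma - \Sigma c \,(c^{\top} \Sigma c)^{-1}\, c^{\top} \Sigma\Big),
```

with c a vector of ones when the constraint is that quarterly forecasts add up to a given annual value. The paper computes the corresponding predictive distribution numerically rather than assuming exact normality.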

Abstract:

The problem of temporal disaggregation of time series is analyzed by means of Bayesian methods. The disaggregated values are obtained through a posterior distribution derived by using a diffuse prior on the parameters. Further analysis is carried out assuming alternative conjugate priors. The means of the different posterior distributions are shown to be equivalent to some sampling theory results. Bayesian prediction intervals are obtained. Forecasts for future disaggregated values are derived assuming a conjugate prior for the future aggregated value

Abstract:

A formulation of the problem of detecting outliers as an empirical Bayes problem is studied. In so doing we encounter a non-standard empirical Bayes problem for which the notion of average risk asymptotic optimality (a.r.a.o.) of procedures is defined. Some general theorems giving sufficient conditions for a.r.a.o. procedures are developed. These general results are then used in various formulations of the outlier problem for underlying normal distributions to give a.r.a.o. empirical Bayes procedures. Rates of convergence results are also given using the methods of Johns and Van Ryzin

Resumen:

El texto examina cuáles son las características y rasgos del habla tanto de las mujeres como de los hombres; hace una valoración a partir de algunas de sus causas y concluye con una invitación a hacernos conscientes de la forma de expresarnos

Abstract:

This article examines the distinctive characteristics and features of how both women and men speak. Based on this analysis, the author makes an assessment and then invites the reader to become aware of their own manner of speaking

Resumen:

Inés Arredondo perteneció a la llamada Generación de Medio Siglo, particularmente, al grupo de intelectuales y artistas que fundaron e impulsaron las actividades culturales de la Casa del Lago durante los años sesenta. El artículo es una semblanza que da cuenta tanto de los hechos más importantes que marcaron la vida y la trayectoria intelectual de Inés Arredondo, como de los rasgos particulares (estéticos) que definen la obra narrativa de esta escritora excepcional

Abstract:

Inés Arredondo belonged to the so-called Mid-Century Generation, namely a group of intellectuals and artists who established and promoted Casa del Lago’s cultural activities in the sixties. This article is a profile that gives an account of the most important events that marked Inés Arredondo’s life and intellectual journey, as well as the particular esthetic features that define the narrative work of this exceptional writer

Abstract:

Informality is a structural trait in emerging economies affecting the behavior of labor markets, financial access and economy-wide productivity. This paper develops a simple general equilibrium closed economy model with nominal rigidities, labor and financial frictions to analyze the transmission of shocks and of monetary policy. In the model, the informal sector provides a flexible margin of adjustment to the labor market at the cost of a lower productivity. In addition, only formal sector firms have access to financing, which is instrumental in their production process. In a quantitative version of the model calibrated to Mexican data, we find that informality: (i) dampens the impact of demand and financial shocks, as well as of technology shocks specific to the formal sector, on wages and inflation, but (ii) heightens the inflationary impact of aggregate technology shocks. The presence of an informal sector also increases the sacrifice ratio of monetary policy actions. From a Central Bank perspective, the results imply that informality mitigates inflation volatility for most types of shocks but makes monetary policy less effective

Resumen:

En el presente trabajo, estudiamos los espacios de Brown, que son conexos y no completamente de Hausdorff. Utilizando progresiones aritméticas, construimos una base BG para una topología TG de N, y mostramos que (N, TG), llamado el espacio de Golomb, es de Brown. También probamos que hay elementos de BG que son de Brown, mientras que otros están totalmente separados. Escribimos algunas consecuencias de este resultado. Por ejemplo, (N, TG) no es conexo en pequeño en ninguno de sus puntos. Esto generaliza un resultado probado por Kirch en 1969. También damos una prueba más simple de un resultado presentado por Szczuka en 2010

Abstract:

In the present paper we study Brown spaces, which are connected and not completely Hausdorff. Using arithmetic progressions, we construct a base BG for a topology TG on N, and show that (N, TG), called the Golomb space, is a Brown space. We also show that some elements of BG are Brown spaces, while others are totally separated. We write some consequences of this result. For example, the space (N, TG) is not connected "im kleinen" at any of its points. This generalizes a result proved by Kirch in 1969. We also present a simpler proof of a result given by Szczuka in 2010

Abstract:

The book synthesizes the experience gained in Computers and Programming courses taught by the author over three years in the Computer Science degree program (Licenciatura de Informática) of the Facultad de Ciencias at the Universidad Autónoma de Barcelona

Resumen:

En años recientes se ha incrementado el interés en el desarrollo de nuevos materiales, en este caso compositos, ya que estos materiales más avanzados pueden realizar mejor su trabajo que los materiales convencionales (K. Morsi, A. Esawi, 2006). En el presente trabajo se analiza el efecto de la adición de nanotubos de carbono incorporando nanopartículas de plata para aumentar tanto sus propiedades eléctricas como mecánicas. Se realizaron aleaciones de aluminio con nanotubos de carbono utilizando molienda de baja energía, con una velocidad de 140 rpm y durante un periodo de 24 horas de molienda; partiendo de aluminio al 98%, se realizó una aleación con 0.35 de nanotubos de carbono. La molienda se realizó para obtener una buena homogenización, ya que la distribución afecta al comportamiento de las propiedades (Amirhossein Javadi, 2013), además de la reducción de partícula. Finalmente se incorporaron los nanotubos de carbono adicionando nanopartículas de plata mediante la reducción con borohidruro de sodio por medio de la punta ultrasónica. Las aleaciones obtenidas fueron caracterizadas por microscopía electrónica de barrido (MEB) y análisis de difracción de rayos X; se realizaron pruebas de dureza y finalmente pruebas de conductividad eléctrica

Abstract:

In recent years, interest has increased in the development of new materials, in this case composites, as these more advanced materials can perform their work better than conventional materials (K. Morsi, A. Esawi, 2006). In the present work we analyze the effect of the addition of carbon nanotubes incorporating silver nanoparticles to increase both the electrical and mechanical properties. Aluminum alloys with carbon nanotubes were prepared using low energy milling at a speed of 140 rpm over a period of 24 hours; starting from 98% aluminum, an alloy with 0.35 carbon nanotubes was made. The milling was carried out to obtain good homogenization, since the distribution affects the behavior of the properties (Amirhossein Javadi, 2013), as well as to reduce particle size. Finally, the carbon nanotubes were incorporated by adding silver nanoparticles obtained by reduction with sodium borohydride by means of an ultrasonic tip. The alloys obtained were characterized by scanning electron microscopy (SEM) and X-ray diffraction analysis; hardness tests were performed and electrical conductivity tests were finally carried out

Abstract:

In this study, high temperature reactions of Fe–Cr alloys at 500 and 600 °C were investigated using an atmosphere of N2–O2 8 vol% with 220 vppm HCl, 360 vppm H2O and 200 vppm SO2; moreover, the following aggressive salts were placed at the inlet: KCl and ZnCl2. The salts were placed at the inlet to promote corrosion and increase the chemical reaction, and were applied to the alloys via discontinuous exposures. The corrosion products were characterized using thermo-gravimetric analysis, scanning electron microscopy and X-ray diffraction. The species identified in the corrosion products were: Cr2O3, Cr2O, (Fe0.6Cr0.4)2O3, K2CrO4, (Cr, Fe)2O3, Fe–Cr, KCl, ZnCl2, FeOOH, σ-FeCrMo and Fe2O3. The presence of Mo, Al and Si was not significant and there was no evidence of chemical reaction of these elements. The most active elements were the Fe and Cr in the metal base. The presence of Cr was beneficial against corrosion; this element decelerated the corrosion process due to the formation of protective oxide scales over the exposed surfaces at 500 °C, an effect even more notable at 600 °C, as observed in the thermo-gravimetric analysis of mass loss. The steel with the best performance was alloy Fe9Cr3AlSi3Mo, due to the effect of the protective oxides even in the presence of the aggressive salts

Abstract:

Cognitive appraisal theory predicts that emotions affect participation decisions around risky collective action. However, little existing research has attempted to parse out the mechanisms by which this process occurs. We build a global game of regime change and discuss the effects that fear may have on participation through pessimism about the state of the world, other players' willingness to participate, and risk aversion. We test the behavioral effects of fear in this game by conducting 32 sessions of an experiment in two labs where participants are randomly assigned to an emotion induction procedure. In some rounds of the game, potential mechanisms are shut down to identify their contribution to the overall effect of fear. Our results show that in this context, fear does not affect willingness to participate. This finding highlights the importance of context, including integral versus incidental emotions and the size of the stakes, in shaping the effect of emotions on behavior

Abstract:

Multilayer perceptron networks have been designed to solve supervised learning problems in which there is a set of known labeled training feature vectors. The resulting model allows us to infer adequate labels for unknown input vectors. Traditionally, the optimal model is the one that minimizes the error between the known labels and those inferred by the model. The training process results in the weights that achieve the most adequate labels. Training implies a search process which is usually determined by the descent gradient of the error. In this work, we propose to replace the known labels by a set of labels induced by a validity index. The validity index represents a measure of the adequateness of the model relative only to intrinsic structures and relationships of the set of feature vectors and not to previously known labels. Since, in general, there is no guarantee of the differentiability of such an index, we resort to heuristic optimization techniques. Our proposal results in an unsupervised learning approach for multilayer perceptron networks that allows us to infer the best model relative to labels derived from such a validity index, which uncovers the hidden relationships of an unlabeled dataset

Abstract:

Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method
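In generic form (the notation here is ours, not the paper's), the maximum entropy principle selects, among distributions p over cluster assignments, the one solving

```latex
\max_{p}\; H(p) \;=\; -\sum_{i} p_i \log p_i
\qquad \text{subject to} \qquad
\sum_{i} p_i = 1, \qquad \mathbb{E}_p\,[g_j] = c_j, \quad j = 1, \dots, m,
```

where the constraints g_j encode the prior information that elements of a cluster be statistically similar; the genetic algorithm then searches this constrained space of assignments.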

Abstract:

One of the basic endeavors in Pattern Recognition, and particularly in Data Mining, is the process of determining which unlabeled objects in a set share interesting properties. This implies a singular process of classification usually denoted as "clustering", where the objects are grouped into k subsets (clusters) in accordance with an appropriate measure of likelihood. Clustering can be considered the most important unsupervised learning problem. The more traditional clustering methods are based on the minimization of a similarity criterion based on a metric or distance. This fact imposes important constraints on the geometry of the clusters found. Since each element in a cluster lies within a radial distance relative to a given center, the shape of the covering or hull of a cluster is hyper-spherical (convex), which sometimes does not encompass adequately the elements that belong to it. For this reason we propose to solve the clustering problem through the optimization of Shannon's entropy. The optimization of this criterion represents a hard combinatorial problem which disallows the use of traditional optimization techniques, and thus the use of a very efficient optimization technique is necessary. We consider that Genetic Algorithms are a good alternative. We show that our method allows us to obtain successful results for problems where the clusters have complex spatial arrangements. Such a method obtains clusters with non-convex hulls that adequately encompass their elements. We statistically show that our method displays the best performance that can be achieved under the assumption of normal distribution of the elements of the clusters. We also show that it is a good alternative when this assumption is not met

Abstract:

This paper proposes a novel distributed controller that solves the leader-follower and the leaderless consensus problems in the task space for networks composed of robots that can be kinematically and dynamically different (heterogeneous). In the leader-follower scenario, the controller ensures that all the robots in the network asymptotically reach a given leader pose (position and orientation), provided that at least one follower robot has access to the leader pose. In the leaderless problem, the robots asymptotically reach an agreement pose. The proposed controller is robust to variable time-delays in the communication channel and does not rely on velocity measurements. The controller is dynamic: it cancels out the gravity effects and incorporates a term proportional to the error between the robot position and the controller virtual position. The controller dynamics consist of a simple proportional scheme plus damping injection through a second-order (virtual) system. The proposed approach employs the singularity-free unit quaternions to represent the orientation of the end-effectors, and the network is represented by an undirected and connected interconnection graph. The application to the control of bilateral teleoperators is described as a special case of the leaderless consensus solution. The paper presents numerical simulations with a network composed of four 6-Degrees-of-Freedom (DoF) and one 7-DoF robot manipulators. Moreover, we also report some experiments with a 6-DoF industrial robot and two 3-DoF haptic devices

Abstract:

This article examines the effects of committee specialization and district characteristics on speech participation by topic and congressional forum. It argues that committee specialization should increase speech participation during legislative debates, while district characteristics should affect the likelihood of speech participation in non-lawmaking forums. To examine these expectations, we analyze over 100,000 speeches delivered in the Chilean Chamber of Deputies between 1990 and 2018. To carry out our topic classification task, we utilize the recently developed state-of-the-art multilingual Transformer model XLM-RoBERTa. Consistent with informational theories, we find that committee specialization is a significant predictor of speech participation in legislative debates. In addition, consistent with theories purporting that legislative speech serves as a vehicle for the electoral connection, we find that district characteristics have a significant effect on speech participation in non-lawmaking forums

Abstract:

According to conventional wisdom, closed-list proportional representation (CLPR) electoral systems create incentives for legislators to favor the party line over their voters' positions. However, electoral incentives may induce party leaders to tolerate shirking by some legislators, even under CLPR. This study argues that in considering whose deviations from the party line should be tolerated, party leaders exploit differences in voters' relative electoral influence resulting from malapportionment. We expect defections in roll call votes to be more likely among legislators elected from overrepresented districts than among those from other districts. We empirically test this claim using data on Argentine legislators' voting records and a unique dataset of estimates of voters' and legislators' placements in a common ideological space. Our findings suggest that even under electoral rules known for promoting unified parties, we should expect strategic defections to please voters, which can be advantageous for the party's electoral fortunes

Abstract:

This article examines speech participation under different parliamentary rules: open forums dedicated to bill debates, and closed forums reserved for non-lawmaking speeches. It discusses how electoral incentives influence speechmaking by promoting divergent party norms within those forums. Our empirical analysis focuses on the Chilean Chamber of Deputies. The findings lend support to the view that, in forums dedicated to non-lawmaking speeches, participation is greater among more institutionally disadvantaged members (backbenchers, women, and members from more distant districts), while in those that are dedicated to lawmaking debates, participation is greater among more senior members and members of the opposition

Abstract:

We present a novel approach to disentangle the effects of ideology, partisanship, and constituency pressures on roll-call voting. First, we place voters and legislators on a common ideological space. Next, we use roll-call data to identify the partisan influence on legislators’ behavior. Finally, we use a structural equation model to account for these separate effects on legislative voting. We rely on public opinion data and a survey of Argentine legislators conducted in 2007–08. Our findings indicate that partisanship is the most important determinant of legislative voting, leaving little room for personal ideological position to affect legislators’ behavior

Abstract:

Legislators in presidential countries use a variety of mechanisms to advance their electoral careers and connect with relevant constituents. The most frequently studied activities are bill initiation, co-sponsoring, and legislative speeches. In this paper, the authors examine legislators' information requests (i.e. parliamentary questions) to the government, which have been studied in some parliamentary countries but remain largely unscrutinised in presidential countries. The authors focus on the case of Chile - where strong and cohesive national parties coexist with electoral incentives that emphasise the personal vote - to examine the links between party responsiveness and legislators' efforts to connect with their electoral constituencies. Making use of a new database of parliamentary questions and a comprehensive sample of geographical references, the authors examine how legislators use this mechanism to forge connections with voters, and find that targeted activities tend to increase as a function of electoral insecurity and progressive ambition

Abstract:

Using insights from two of the major proponents of the hermeneutical approach, Paul Ricoeur and Hannah Arendt, who both recognized the ethicopolitical importance of narrative and acknowledged some of the dangers associated with it, I will flesh out the worry that "narrativity" in political theory has been overly attentive to storytelling and not heedful enough of story listening. More specifically, even if, as Ricoeur says, "narrative intelligence" is crucial for self-understanding, that does not mean, as he invites us to, that we should always seek to develop a "narrative identity" or become, as he says, "the narrator of our own life story." I offer that, perhaps inadvertently, such an injunction might turn out to be detrimental to the "art of listening." This, however, must also be cultivated if we want to do justice to our narrative character and expect narrative to have the political role that both Ricoeur and Arendt envisaged. Thus, although there certainly is a "redemptive power" in narrative, when the latter is understood primarily as the act of narration or as the telling of stories, there is a danger to it as well. Such a danger, I think, intensifies at a time like ours, when, as some scholars have noted, "communicative abundance" or the "ceaseless production of redundancy" in traditional and social media has often led to the impoverishment of the public conversation

Abstract:

In this paper, I take George Lakoff and Mark Johnson's thesis that metaphors shape our reality to approach the judicial imagery of the new criminal justice system in Mexico (in effect since 2016). Based on twenty-nine in-depth interviews with judges and other members of the judiciary, I study what I call the "dirty minds" metaphor, showing its presence in everyday judicial practice and analyzing both its cognitive basis as well as its effects on how criminal judges understand their job. I argue that such a metaphor, together with the "fear of contamination" it raises as a result, is misleading and goes to the detriment of the judicial virtues that should populate the new system. The conclusions I offer are relevant beyond the national context, inter alia, because they concern a far-reaching paradigm of judgment

Abstract:

Recent efforts to theorize the role of emotions in political life have stressed the importance of sympathy, and have often recurred to Adam Smith to articulate their claims. In the early twentieth century, Max Scheler disputed the salutary character of sympathy, dismissing it as an ultimately perverse foundation for human association. Unlike later critics of sympathy as a political principle, Scheler rejected it for being ill equipped to salvage what, in his opinion, should be the proper basis of morality, namely, moral value. Even if Scheler's objections against Smith's project prove to be ultimately mistaken, he had important reasons to call into question its moral purchase in his own time. Where the most dangerous idol is not self-love but illusory self-knowledge, the virtue of self-command will not suffice. Where identification with others threatens the social bond more deeply than faction, “standing alone” in moral matters proves a more urgent task

Abstract:

Images of chemical molecules can be produced, manipulated, simulated and analyzed using sophisticated chemical software. However, in the process of publishing such images in the scientific literature, all their chemical significance is lost. Although images of chemical molecules can be easily analyzed by the human expert, they cannot be fed back into chemical software and lose much of their potential use. We have developed a system that can automatically reconstruct the chemical information associated with the images of chemical molecules, thus rendering them computer readable. We have benchmarked our system against a commercially available product and have also tested it using chemical databases of several thousand images, with very encouraging results

Abstract:

We present an algorithm that automatically segments and classifies the brain structures in a set of magnetic resonance (MR) brain images using expert information contained in a small subset of the image set. The algorithm is intended to do the segmentation and classification tasks mimicking the way a human expert would reason. The algorithm uses a knowledge base taken from a small subset of semiautomatically classified images that is combined with a set of fuzzy indexes that capture the experience and expectation a human expert uses during recognition tasks. The fuzzy indexes are tissue specific and spatial specific, in order to consider the biological variations in the tissues and the acquisition inhomogeneities through the image set. The brain structures are segmented and classified one at a time. For each brain structure the algorithm needs one semiautomatically classified image and makes one pass through the image set. The algorithm uses low-level image processing techniques on a pixel basis for the segmentations, then validates or corrects the segmentations, and makes the final classification decision using higher level criteria measured by the set of fuzzy indexes. We use single-echo MR images because of their high volumetric resolution; but even though we are working with only one image per brain slice, we have multiple sources of information on each pixel: absolute and relative positions in the image, gray level value, statistics of the pixel and its three-dimensional neighborhood and relation to its counterpart pixels in adjacent images. We have validated our algorithm for ease of use and precision both with clinical experts and with measurable error indexes over a Brainweb simulated MR set

Abstract:

We present an attractive methodology for the compression of facial gestures that can be used to drive interaction in real time applications. Using the eigenface method we build compact representation spaces for a variety of facial gestures. These compact spaces are the so-called eigenspaces. We do real time tracking and segmentation of facial features from video images and then use the eigenspaces to find compact descriptors of the segmented features. We use the system for an avatar videoconference application where we achieve real time interactivity with very limited bandwidth requirements. The system can also be used as a hands-free man-machine interface
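A minimal sketch of the eigenspace construction behind such descriptors (the dimensions, variable names, and random stand-in images are assumptions; the paper's tracking and segmentation pipeline is not reproduced):

```python
import numpy as np

def build_eigenspace(F, k):
    """F: (num_images, num_pixels) matrix of training gesture images.
    Returns the mean image and the top-k principal directions
    (the 'eigenfaces' of the gesture set)."""
    mean = F.mean(axis=0)
    U, s, Vt = np.linalg.svd(F - mean, full_matrices=False)
    return mean, Vt[:k]                     # k x num_pixels basis

def descriptor(image, mean, basis):
    """Compact descriptor: coordinates of the image in the eigenspace."""
    return basis @ (image.ravel() - mean)

# Toy usage: 200 frames of 32x32 pixels compressed to 12 coefficients.
rng = np.random.default_rng(3)
F = rng.normal(size=(200, 32 * 32))
mean, basis = build_eigenspace(F, k=12)
code = descriptor(F[0], mean, basis)        # 12 numbers instead of 1024
print(code.shape)
```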

Abstract:

We use interactive virtual environments for cognitive behavioral therapy. Working together with child therapists and psychologists, our computer graphics group developed 5 interactive simulators for the treatment of fears and behavior disorders. The simulators run in real time on P4 PCs with graphic accelerators, but also work online using streaming techniques and Web VR engines. The construction of the simulators starts with ideas and situations proposed by the psychologists; these ideas are then developed by graphic designers and finally implemented in 3D virtual worlds by our group. We present the methodology we follow to turn the psychologists' ideas and then the graphic designers' sketches into fully interactive simulators. Our methodology starts with a graphic modeler to build the geometry of the virtual worlds; the models are then exported to a dedicated OpenGL VR engine that can interface with any VR peripheral. Alternatively, the models can be exported to a Web VR engine. The simulators are cost-efficient since they require little more than the PC and the graphics card. We have found that both the therapists and the children who use the simulators find this technology very attractive

Abstract:

We study the motion in the negatively curved symmetric two- and three-center problem on the Poincaré upper half-plane model for a surface of constant negative curvature κ, which without loss of generality we assume to be κ = -1. Using this model, we first derive the equations of motion for the 2- and 3-center problems. We prove that for the 2-center problem there exists a unique equilibrium point, and we study the dynamics around it. For the motion restricted to the invariant y-axis, we prove that it is a center, but for the general two-center problem it is unstable. For the 3-center problem, we show the nonexistence of equilibrium points. We study two particular integrable cases: first, when the motion of the free particle is restricted to the y-axis, and second, when all particles are along the same geodesic. We classify the singularities of the problem and introduce a local and a global regularization of all of them. We show some numerical simulations for each situation

Abstract:

We consider the curved 4-body problems on spheres and hyperbolic spheres. After obtaining a criterion for the existence of quadrilateral configurations on the equator of the sphere, we study two restricted 4-body problems, one in which two masses are negligible and another in which only one mass is negligible. In the former, we prove the existence of square-like relative equilibria, whereas in the latter we discuss the existence of kite-shaped relative equilibria

Abstract:

In this paper, we study the hypercyclic composition operators on weighted Banach spaces of functions defined on discrete metric spaces. We show that the only such composition operators are those acting on the "little" spaces. We characterize the bounded composition operators on the little spaces, as well as provide various necessary conditions for hypercyclicity

Abstract:

Demand response (DR) programs and local markets (LM) are two suitable technologies to mitigate the high penetration of distributed energy resources (DER), which has kept increasing rapidly even during the current pandemic. The aim is to improve operation by incorporating such mechanisms into the energy resource management problem while mitigating the present issues with Smart Grid (SG) technologies and optimization techniques. This paper presents an efficient intraday energy resource management starting from the day-ahead time horizon, which considers load uncertainty and implements both DR programs and LM trading to reduce the operating costs of three load aggregators in an SG. A random perturbation was used to generate the intraday scenarios from the day-ahead time horizon. A recent evolutionary algorithm, HyDE-DF, is used to achieve optimization. Results show that the aggregators can manage consumption and generation resources, including DR and power balance compensation, through an implemented LM

Abstract:

Demand response programs, energy storage systems, electric vehicles, and local electricity markets are appropriate solutions to offset the uncertainty associated with the high penetration of distributed energy resources. The aim is to enhance efficiency by adding such technologies to the energy resource management problem while also addressing current concerns using smart grid technologies and optimization methodologies. This paper presents an efficient intraday energy resource management starting from the day-ahead time horizon, which considers the uncertainty associated with load consumption, renewable generation, electric vehicles, electricity market prices, and the existence of extreme events in a 13-bus distribution network with high integration of renewables and electric vehicles. A risk analysis is implemented through conditional value-at-risk to address these extreme events. In the intraday model, we assume that an extreme event will occur in order to analyze the outcome of the developed solution. We analyze the solution's impact starting from the day-ahead stage, considering different risk-aversion levels. Multiple metaheuristics optimize the day-ahead problem, and the best-performing algorithm is used for the intraday problem. Results show that HyDE gives the best day-ahead solution compared to the other algorithms, achieving a reduction of around 37% in the cost of the worst scenarios. For the intraday model, considering risk aversion also reduces the impact of the extreme scenarios

Abstract:

The central configurations given by an equilateral triangle and a regular tetrahedron with equal masses at the vertices and a body at the barycenter have been widely studied in [9] and [14] due to the bifurcation phenomena occurring when the central mass has a determined value m*. We propose a variation of this problem, setting the central mass to the critical value m* and letting a mass at a vertex be the bifurcation parameter. In both cases, 2D and 3D, we verify the existence of bifurcation; that is, for the same set of masses we determine two new central configurations. The computation of the bifurcations, as well as their pictures, has been performed considering homogeneous force laws with exponent a < −1

Abstract:

The leaderless and the leader-follower consensus are the most basic synchronization behaviors for multiagent systems. For networks of Euler-Lagrange (EL) agents, different controllers have been proposed to achieve consensus, requiring in all cases either the cancellation or the estimation of the gravity forces. While in the first case it is shown that a simple Proportional plus damping (P+d) scheme with exact gravity cancellation can achieve consensus, in the latter case it is necessary to estimate not just the gravity forces but the parameters of the whole dynamics. This requires the computation of a complicated regressor matrix that grows in complexity as the degrees of freedom of the EL agents increase. To simplify the controller implementation, we propose in this paper a simple P+d scheme with only adaptive gravity compensation. In particular, two adaptive controllers are reported that solve both consensus problems by only estimating the gravitational term of the agents, and hence without requiring the complete regressor matrix. The first controller is a simple P+d scheme that does not require the agents to exchange velocity information but requires centralized information. The second controller is a Proportional-Derivative plus damping (PD+d) scheme that is fully decentralized but requires exchanges of velocity information between the agents. Simulation results demonstrate the performance of the proposed controllers

Abstract:

Medical image segmentation is one of the most productive research areas in medical image processing. The goal of most new image segmentation algorithms is to achieve higher segmentation accuracy than existing algorithms. But the issue of quantitative, reproducible validation of segmentation results remains wide open, as do the questions: What is segmentation accuracy? And what segmentation accuracy can a segmentation algorithm achieve? The creation of a validation framework is relevant and necessary for consistent and realistic comparisons of existing, new and future segmentation algorithms. An important component of a reproducible and quantitative validation framework for segmentation algorithms is a composite index that measures segmentation performance at a variety of levels. In this paper we present a prototype composite index that includes the measurement of seven metrics on segmented image sets. We explain how the composite index is a more complete and robust representation of algorithmic performance than currently used indices that rate segmentation results using a single metric. Our proposed index can be read as an averaged global metric or as a series of algorithmic ratings that allow the user to compare how an algorithm performs under many categories
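
To illustrate how a composite index differs from a single-metric rating, here is a hedged Python sketch: the paper combines seven metrics, while this toy version averages four common ones (Dice, Jaccard, sensitivity, specificity); the function names and choice of metrics are ours, not the paper's.

    import numpy as np

    def segmentation_metrics(pred, truth):
        """Four standard overlap metrics on binary masks.
        Assumes non-degenerate masks (no empty denominators)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth)
        return {
            "dice": 2 * tp / (2 * tp + fp + fn),
            "jaccard": tp / (tp + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        }

    def composite_index(pred, truth):
        """Averaged global rating plus the per-metric breakdown."""
        m = segmentation_metrics(pred, truth)
        return float(np.mean(list(m.values()))), m

The per-metric breakdown is what lets a user see, for instance, that an algorithm scores well globally while underperforming on sensitivity alone.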

Abstract:

How is the size of the informal sector affected when the distribution of social expenditures across formal and informal workers changes? How is it affected when the tax rate changes along with the generosity of these transfers? In our search model, taxes are levied on formal-sector workers as a proportion of their wage. Transfers, in contrast, are lump-sum and are received by both formal and informal workers. This implies that high-wage formal workers subsidize low-wage formal workers as well as informal workers. We calibrate the model to Mexico and perform counterfactuals. We find that the size of the informal sector is quite inelastic to changes in taxes and transfers. This is due to the presence of search frictions and to the cross-subsidy in our model: for low-wage formal jobs, a tax increase is roughly offset by an increase in benefits, leaving the unemployed approximately indifferent. Our results are consistent with the empirical evidence on the recent introduction of the “Seguro Popular” healthcare program

Abstract:

We calibrate the cost of sovereign defaults using a continuous time model, where government default decisions may trigger a change in the regime of a stochastic TFP process. We calibrate the model to a sample of European countries from 2009 to 2012. By comparing the estimated drift in default with that in no-default, we find that TFP falls in the range of 3.70–5.88%. The model is consistent with observed falls in GDP growth rates and subsequent recoveries and illustrates why fiscal multipliers are small during sovereign debt crises

Abstract:

Employment to population ratios differ markedly across Organization for Economic Cooperation and Development (OECD) countries, especially for people aged over 55 years. In addition, social security features differ markedly across the OECD, particularly with respect to features such as generosity, entitlement ages, and implicit taxes on social security benefits. This study postulates that differences in social security features explain many differences in employment to population ratios at older ages. This conjecture is assessed quantitatively with a life cycle general equilibrium model of retirement. At ages 60-64 years, the correlation between the simulations of this study's model and observed data is 0.67. Generosity and implicit taxes are key features to explain the cross-country variation, whereas entitlement age is not

Abstract:

The consequences of increases in the scale of tax and transfer programs are assessed in the context of a model with idiosyncratic productivity shocks and incomplete markets. The effects are contrasted with those obtained in a stand-in household model featuring no idiosyncratic shocks and complete markets. The main finding is that the impact on hours remains very large, but the welfare consequences are very different. The analysis also suggests that tax and transfer policies have large effects on average labor productivity via selection effects on employment

Abstract:

Atmospheric pollution has negative effects on people's health and lives. Outdoor pollution has been extensively studied, but people spend a large portion of their time indoors. Our research focuses on indoor pollution forecasting using deep learning techniques coupled with the large processing capabilities of cloud computing. This paper also shares an open-source implementation of the code for modeling time series from different data sources. We believe that further research can build on the outcomes of our work
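
As a rough illustration of this kind of forecasting setup, the following Python sketch trains a small LSTM on sliding windows of a single pollutant series, assuming TensorFlow/Keras; the window length, layer sizes, and synthetic data are illustrative assumptions, not the paper's configuration.

    import numpy as np
    import tensorflow as tf

    def make_windows(series, window):
        """Turn a 1-D series into (samples, window, 1) inputs and targets."""
        X = np.stack([series[i:i + window]
                      for i in range(len(series) - window)])
        return X[..., None], series[window:]

    # Synthetic stand-in for an indoor pollutant time series.
    series = (np.sin(np.linspace(0, 40, 2000))
              + 0.1 * np.random.randn(2000)).astype("float32")
    X, y = make_windows(series, window=24)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(24, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=64, verbose=0)
    next_value = model.predict(X[-1:])   # one-step-ahead forecast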

Abstract:

Sergio Verdugo's provocative Foreword challenges us to think about whether the concepts we inherited from classical constitutionalism are still useful for understanding our current reality. Verdugo rejects any attempt to defend what he calls "the conventional approach to constituent power." The objective of this article is to contradict Verdugo's assertion that the conventional approach rests on an incorrect notion of the people as a unified body, or as a social consensus. The article argues, instead, for the plausibility of defending the popular notion of constituent power by anchoring it in a historical and dynamic concept of democratic legitimacy. It concludes that, although legitimizing deviations from the established channels for political transformation entails risks, we must assume them for the sake of the emancipatory potential of constituent power

Resumen:

El libro La Corte Enrique Santiago Petracchi II que se reseña muestra la impronta de Enrique Santiago Petracchi durante su segundo período a cargo de la presidencia de la Corte Suprema de Justicia de la Nación Argentina, ocurrida entre los años 2003 y 2006. La Corte se estudia tanto desde el punto de vista interno, con sus reformas en busca de transparencia y legitimidad, como desde el punto de vista externo, contextual y de relación con el resto de los poderes del Estado y la sociedad civil. El recuento incluye una mención de los hitos jurisprudenciales de dicho período y explica cómo la figura de Petracchi es central para comprender este momento de la Corte

Abstract:

The book "The Court Enrique Santiago Petracchi II" that is reviewed shows the imprint of Chief Justice Enrique Santiago Petracchi at the Argentinean Supreme Court of Justice during his second term that occurred between 2003 and 2006. The Court is studied both from the internal point of view, with its reforms in search of transparency and legitimacy, as well as from the external, contextual point of view and relationship with the rest of the State branches and civil society. The account includes a mention of the leading cases of that period and explains how the figure of Petracchi is central to understanding this moment of the Court

Resumen:

El artículo plantea algunas inquietudes sobre el análisis que Gargarella hace de la sentencia de la Corte Interamericana de Derechos Humanos en el caso Gelman vs. Uruguay. Aceptando la idea principal de que las decisiones pueden gradarse según su legitimidad democrática, cuestiona que la Ley de Caducidad uruguaya haya sido legítima en un grado significativo y por tanto haya debido respetarse en su validez por la Corte Interamericana. Ello por cuanto la decisión no fue tomada por todas las personas posiblemente afectadas por la misma. Este principio normativo de inclusión es fundamental para la teoría de la democracia de Gargarella, sin embargo, no fue considerado en su análisis del caso. El artículo explora las consecuencias de tomarse en serio el principio de inclusión y los problemas prácticos que este apareja, en especial respecto de la constitución del demos. Finalmente, propone una alternativa de solución mediante la justicia constitucional/convencional, de acuerdo con un entendimiento procedimental de la misma basado en la participación, según lo pensó J. H. Ely

Abstract:

The article raises some questions about Gargarella’s analysis of the Inter-American Court of Human Rights’ ruling in Gelman v. Uruguay. It accepts the main idea regarding the possibility of grading the democratic legitimacy of decisions, but it questions whether the Uruguayan amnesty had a high degree of legitimacy. This is because the decision was not made by all possibly affected people. This normative principle of inclusion is fundamental to Gargarella’s theory of democracy; however, it was not considered in his analysis of the case. The article explores the consequences of taking the principle of inclusion seriously and the practical problems that it involves, especially regarding the constitution of the demos. Finally, it proposes an alternative solution through a procedural understanding of judicial review based on participation, as proposed by J.H. Ely

Resumen:

Este artículo reflexionará críticamente sobre la jurisprudencia de la Suprema Corte de Justicia de la Nación mexicana en torno al principio constitucional de paridad de género. Para hacerlo, se apoyará en una reconstrucción teórica de tres diferentes fundamentos en los que se puede basar la paridad según se entienda la representación política de las mujeres y el principio de igualdad. Estas posturas se identificarán como “de la igualdad formal”, “de la defensa de las cuotas” y “de la paridad”, resaltando cómo en las sentencias mexicanas se mezclan los argumentos de las dos últimas posturas de forma desordenada e incoherente. Finalmente, el artículo propiciará una interpretación del principio de paridad de género como dinámico y por tanto abierto a aportaciones progresivas del sujeto político feminista, que evite los riesgos del esencialismo como el reforzar el sistema binario de sexos. Un principio que, en este momento histórico, requiere medidas de acción afirmativa correctivas, provisionales y estratégicas para alcanzar la igualdad sustantiva de género en la representación política

Abstract:

This article will critically reflect on the jurisprudence of the Mexican Supreme Court of Justice regarding the constitutional principle of gender parity. To do so, it will rely on a theoretical reconstruction of three different foundations of the principle, based on how the political representation of women and the principle of equality are understood. I will call these positions "of formal equality," "of the defense of quotas," and "of parity," highlighting how the arguments of the last two positions are mixed in the Mexican judgments in a disorderly and incoherent way. Finally, the article will promote an interpretation of the principle of gender parity as dynamic and therefore open to progressive contributions from the feminist political subject, one that avoids the risks of essentialism, such as reinforcing the binary system of sexes. This principle requires, at this historical moment, corrective, provisional, and strategic affirmative action measures to achieve substantive gender equality in political representation

Abstract:

The article analyzes the possibility that we are facing a constitutional transformation beyond the constitutional text. Within the framework of the so-called "Cuarta Transformación" announced for the country after the last elections, it offers a diagnostic reflection on the role played by the Supreme Court of Justice.

Resumen:

El presente trabajo tiene por objeto analizar el funcionamiento de la rigidez constitucional en México como garantía de la supremacía constitucional. Para ello comenzaré con un estudio sobre la idea de rigidez y la distinguiré del concepto de supremacía. Posteriormente utilizaré dichas categorías para analizar el sistema mexicano y cuestionar su eficacia, es decir, la adecuación entre el medio (rigidez) y el fin (supremacía). Por último haré un par de propuestas de modificación del mecanismo de reforma constitucional en vistas a hacerlo más democrático, más deliberativo y con ello más eficaz para garantizar la supremacía constitucional

Abstract:

This paper analyzes how constitutional rigidity works in Mexico and its consequences for constitutional supremacy. It starts with a conceptual distinction between rigidity and supremacy. Subsequently, those categories are used to analyze the Mexican system and to question the capability of the amendment process to guarantee constitutional supremacy. Finally, the paper makes some proposals to change the Mexican constitutional amendment process in order to make it more democratic, deliberative and effective in guaranteeing constitutional supremacy

Resumen:

El constitucionalismo popular como corriente constitucional contemporánea plantea una revisión crítica a la historia del constitucionalismo norteamericano, reivindicando el papel del pueblo en la interpretación constitucional. A la vez, presenta un contenido normativo anti-elitista que trasciende sus orígenes y que pone en cuestión la idea de que los jueces deban tener la última palabra en las controversias sobre derechos fundamentales. Esta faceta tiene su correlato en los diseños institucionales propuestos, propios de un constitucionalismo débil y centrados en la participación popular democrática

Abstract:

Popular constitutionalism is a contemporary constitutional theory that takes a critical view of the U.S. constitutional narrative's focus on judicial supremacy. Instead, popular constitutionalism regards the people as the main actor and defends an anti-elitist understanding of constitutional law. From the institutional perspective, popular constitutionalism proposes a weak model of constitutionalism and a strong participatory democracy

Resumen:

El trabajo tiene como objetivo analizar la sentencia dictada por la Primera Sala de la Suprema Corte de Justicia mexicana en el amparo en revisión 152/2013 en la que se declaró la inconstitucionalidad de la exclusión de los homosexuales del régimen matrimonial en el Estado de Oaxaca. Esta sentencia refleja un cambio importante en la forma de entender el interés legítimo tratándose de la impugnación de normas autoaplicativas (es decir, de normas que causan perjuicio sin que medie acto de aplicación), dando paso a la justiciabilidad de los mensajes estigmatizantes. En el caso, esta forma más amplia de entender el interés legítimo está basada en la percepción de que el derecho discrimina a través de los mensajes que transmite; situación que la Suprema Corte considera puede combatir a través de sus sentencias de amparo. Asimismo, se plantean algunos retos e inquietudes que suscita la sentencia a la luz del activismo judicial que puede conllevar

Abstract:

This paper focuses on the amparo en revisión 152/2013 issued by the First Chamber of the Supreme Court of Mexico. From now on, the Supreme Court is able to judge the stigmatizing messages of the law. Furthermore, the amparo en revisión 152/2013 develops a broader conception of discrimination and a more activist role for the Supreme Court. Finally, I express some thoughts about the issues that this judgment could pose to the Supreme Court

Abstract:

We elicit subjective probability distributions from business executives about their own-firm outcomes at a one-year look-ahead horizon. In terms of question design, our key innovation is to let survey respondents freely select support points and probabilities in five-point distributions over future sales growth, employment, and investment. In terms of data collection, we develop and field a new monthly panel Survey of Business Uncertainty. The SBU began in 2014 and now covers about 1,750 firms drawn from all 50 states, every major nonfarm industry, and a range of firm sizes. We find three key results. First, firm-level growth expectations are highly predictive of realized growth rates. Second, firm-level subjective uncertainty predicts the magnitudes of future forecast errors and future forecast revisions. Third, subjective uncertainty rises with the firm’s absolute growth rate in the previous year and with the magnitude of recent revisions to its expected growth rate. We aggregate over firm-level forecast distributions to construct monthly indices of business expectations (first moment) and uncertainty (second moment) for the U.S. private sector

Abstract:

We consider several economic uncertainty indicators for the US and UK before and during the COVID-19 pandemic: implied stock market volatility, newspaper-based policy uncertainty, Twitter chatter about economic uncertainty, subjective uncertainty about business growth, forecaster disagreement about future GDP growth, and a model-based measure of macro uncertainty. Four results emerge. First, all indicators show huge uncertainty jumps in reaction to the pandemic and its economic fallout. Indeed, most indicators reach their highest values on record. Second, peak amplitudes differ greatly, from a 35% rise for the model-based measure of US economic uncertainty (relative to January 2020) to a 20-fold rise in forecaster disagreement about UK growth. Third, time paths also differ: implied volatility rose rapidly from late February, peaked in mid-March, and fell back by late March as stock prices began to recover. In contrast, broader measures of uncertainty peaked later and then plateaued, as job losses mounted, highlighting differences between Wall Street and Main Street uncertainty measures. Fourth, in Cholesky-identified VAR models fit to monthly US data, a COVID-size uncertainty shock foreshadows peak drops in industrial production of 12-19%

Abstract:

This paper proposes a tree-based incremental-learning model to estimate house pricing using publicly available information on geography, city characteristics, transportation, and real estate for sale. Previous machine-learning models capture the marginal effects of property characteristics and location on prices using big datasets for training. In contrast, our scenario is constrained to small batches of data that become available on a daily basis; therefore, our model learns from daily city data, employing incremental learning to provide accurate price estimations each day. Our results show that property prices are highly influenced by the city characteristics and its connectivity, and that incremental models adapt efficiently to the nature of the house-pricing estimation task
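
A hedged sketch of the daily incremental-learning loop: here we use the open-source river library's Hoeffding tree regressor as a stand-in for the paper's tree-based incremental model; the feature names, prices, and batches are invented for illustration.

    from river import metrics, tree

    model = tree.HoeffdingTreeRegressor()
    mae = metrics.MAE()

    # Each day brings a small batch of listings; the model updates one
    # observation at a time instead of retraining on a large dataset.
    daily_batches = [
        [({"rooms": 3, "dist_center_km": 2.1, "transit_stops": 4}, 1850000.0)],
        [({"rooms": 2, "dist_center_km": 7.5, "transit_stops": 1}, 920000.0)],
    ]
    for batch in daily_batches:
        for x, y in batch:
            mae.update(y, model.predict_one(x))  # test-then-train evaluation
            model.learn_one(x, y)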

Abstract:

Purpose: This research analyzes national identity representations held by Generation Z youth living in the United States-Mexico-Canada Agreement (USMCA) countries. In addition, it aims to identify the information on these issues that they are exposed to through social media. Methods: A qualitative approach carried out through in-depth interviews was selected for the study. The objective is to reconstruct social meaning and the social representation system. The constant comparative method was used for the information analysis, backed by the NVivo program. Findings: National identity perceptions of the adolescents interviewed are positive in terms of their own groups, very favorable regarding Canadians, and unfavorable vis-à-vis Americans. Furthermore, the interviewees agreed that social media have influenced their desire to travel or migrate, and if considering migrating, they have also provided advice as to which country they might go to. On another point, Mexicans are quite familiar with the Treaty; Americans are split between those who know something about it and those who have no information whatsoever; whereas Canadians know nothing about it. This reflects a possible way to improve information generated and spread by social media. Practical implications: The results could improve our understanding of how young people interpret the information circulating in social media and what representations are constructed about national identities. We believe this research can be replicated in other countries. Social implications: We might consider that the representations Generation Z has about the national identities of these three countries and what it means to migrate could have an impact on the democratic life of each nation and, in turn, on the relationship among the three USMCA partners

Abstract:

In journalism, innovation can be achieved by integrating various factors of change. This article reports the results of an investigation carried out in 2020; the study sample comprised journalists who participated as social researchers in a longitudinal study focused on citizens' perceptions of the Mexican electoral process in 2018. Journalistic innovation was promoted by the development of a novel methodology that combined existing journalistic resources with qualitative social research methodologies. This combination provided depth to the journalistic coverage; optimized reporting, editing, design, and publication processes; improved access to more complete and diverse information in real time; and enhanced the capabilities of journalists. The latter, through differential behaviors, transformed their way of thinking about and valuing the profession by reconceptualizing and re-evaluating the journalistic practices in which they were involved, resulting in an improvement in journalistic quality

Resumen:

Este artículo tiene como objetivo describir las etapas de transformación de la identidad social de los habitantes de la región de Los Altos de Jalisco, México, a partir de la Guerra Cristera y hasta la década de los años 90. El proceso se ha desarrollado en cuatro fases: Oposición (1926-1929), Ajuste (1930-1970), Reforzamiento (1970-1990) y Cambio (1990- ). Este análisis se realiza desde la teoría de la mediación social y presenta un avance de la investigación realizada para la tesis de doctorado Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, dirigida por Manuel Martín Serrano

Abstract:

This article aims to describe the stages of transformation of social identity in Los Altos de Jalisco, Mexico, from the Cristero War until the 1990s. The process unfolded in four phases: Opposition (1926-1929), Adjustment (1930-1970), Reinforcement (1970-1990) and Change (1990- ). The analysis draws on the theory of social mediation and presents an advance of the research carried out for the doctoral thesis Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, supervised by Manuel Martín Serrano

Abstract:

Creole identities in Mexico have been little studied. One of these identities took shape in Los Altos de Jalisco over the course of four centuries and remained almost intact, even after the Cristero War that shook the region in 1926. Alteño identity is fundamentally grounded in the community's perception of its Hispanic origin, as well as in a Catholicism deeply rooted and reinforced by the Cristeros. The family, a sacred institution for the Alteños, thus guards and preserves both the Hispanic and the Catholic heritage. Marriage founds the family and gives the social organization support, order and stability. In courtship, the concept of virginity is central to the shaping of identity: women must preserve their purity so that the Alteño family can continue to uphold its values

Resumen:

¿Cómo se ven a sí mismos y a los otros los habitantes de una región de México? ¿Cómo se articulan las percepciones sobre las creencias del grupo y sus acciones? En este artículo se presenta un modelo para conceptualizar cómo está estructurada la identidad social. A partir de un estudio de caso, la investigación toma como centro los elementos de la identidad, a los que se definen como unidades de significación contenidas en una creencia o enunciado. Posteriormente se detalla el proceso de análisis a través del cual es posible entender su articulación y organización. A través de dos Mundos y dos Dimensiones que conforman la identidad social así como tres ejes que la configuran, el Espacio, el Tiempo y la Relación, este documento analiza el caso de los habitantes de Los Altos, Jalisco, México, con la encomienda de dotar de una primera herramienta para entender quiénes y cómo son, a decir de sí mismos, los habitantes de esta región

Abstract:

How do the individuals from a particular Mexican region see themselves and the rest of the inhabitants of the area? How are the perceptions about the group’s beliefs and their actions articulated? This article presents a model aimed at conceptualizing how social identity is structured. Based on a case study, the research focuses on the elements of identity, defined as units of signification within a belief or statement. The paper also details the process of analysis through which it is possible to understand their articulation and organization. By means of the two Worlds and two Dimensions that make up social identity, as well as the three axes that shape it (Space, Time and Relationship), the article analyzes the case of the inhabitants of Los Altos, Jalisco, Mexico, in order to provide a first tool that helps understand who the inhabitants of this region are, and what they are like, according to themselves

Abstract:

We propose an algorithm for creating line graphs from binary images. The algorithm consists of a vectorizer followed by a line detector that can handle a large variety of binary images and is tolerant to noise. The proposed algorithm can accurately extract higher-level geometry from the images, lending itself well to automatic image recognition tasks. Our algorithm revisits the technique of image polygonization, proposing a very robust variant based on subpixel resolution and the construction of directed paths along the center of the border pixels, where each pixel can correspond to multiple nodes along one path. The algorithm has been used in the areas of chemical structure and musical score recognition and is available for testing at www.docnition.com. Extensive testing of the algorithm against commercial and noncommercial methods has been conducted with favorable results
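
For readers unfamiliar with polygonization, the following Python sketch shows a standard contour-based approximation using OpenCV; it is a rough analogue of the vectorization step only (the paper's variant works at subpixel resolution and allows multiple nodes per pixel), and the synthetic image is our assumption.

    import cv2
    import numpy as np

    # Synthetic binary line drawing with white foreground.
    img = np.zeros((100, 100), np.uint8)
    cv2.line(img, (10, 10), (80, 60), 255, 2)

    contours, _ = cv2.findContours(img, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Douglas-Peucker simplification of each border path.
    polylines = [cv2.approxPolyDP(c, 1.5, False) for c in contours]
    # Each polyline approximates a stroke; a line detector can then merge
    # nearly collinear segments into the edges of the output line graph.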

Resumen:

La relación entre la profesionalidad del periodismo y las normas éticas forman un vínculo importante para una actividad que se enfrenta a nuevos desafíos, que provienen tanto de la reconfiguración de las industrias mediáticas como de la crisis de identidad de la propia profesión. En este marco, se suma la revalorización de saber contar historias con contenidos informativos desde el empleo de herramientas y recursos literarios. Así, el periodismo narrativo potencia un periodismo comprometido, crítico y transformador que, en entornos transmedia, se enfrenta a nuevas preguntas: ¿qué implicaciones éticas tiene la participación activa de las audiencias, la generación de comunidades virtuales, el uso del lenguaje claro, la conjunción de diversos medios de comunicación y el manejo de datos e historias personales? En esta comunicación se presenta un análisis de esta problemática desde la Teoría de la Otredad, como esencia de una ética actual y solidaria que permite el entendimiento y aceptación del ser humano. Como línea central se describen los principales retos que se han detectado a partir de análisis de los principios del periodismo transmedia

Abstract:

The relationship between the professionalism of journalism and ethical standards is an important link for an activity that is facing new challenges, stemming both from the reconfiguration of media industries and from the identity crisis of the profession itself. Added to this context is the renewed appreciation of the ability to tell stories with informative content using literary tools and resources. Thus, narrative journalism enhances a committed, critical and transformative journalism that, in transmedia environments, faces new questions: what are the ethical implications of the active participation of audiences, the generation of virtual communities, the use of clear language, the combination of different media, and the handling of data and personal stories? This paper presents an analysis of this problem from the perspective of the Theory of Otherness, as the essence of a current and supportive ethics that allows the understanding and acceptance of the human being. As a central line, it describes the main challenges that have been detected from the analysis of the principles of transmedia journalism

Abstract:

Business cycles in emerging economies display very volatile consumption and strongly countercyclical trade balance. We show that aggregate consumption in these economies is not more volatile than output once durables are accounted for. Then, we present and estimate a real business cycles model for a small open economy that accounts for this empirical observation. Our results show that the role of permanent shocks to aggregate productivity in explaining cyclical fluctuations in emerging economies is considerably lower than previously documented. Moreover, we find that financial frictions are crucial to explain some key business cycle properties of these economies

Abstract:

We decompose traditional betas into semibetas based on the signed covariation between the returns of individual stocks in an international market and the returns of three risk factors: local, global, and foreign exchange. Using high-frequency data, we empirically assess stock return co-movements with these three risk factors and find novel relationships between these factors and future returns. Our analysis shows that only semibetas derived from negative risk factor and stock return downturns command significant risk premia. Global downside risk is negatively priced in the international market and local downside risk is positively priced
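
For concreteness, here is one common formalization of semibetas from the realized-beta literature, written for a single factor; the notation is ours and need not match the paper's estimator exactly. With high-frequency stock returns r_t and factor returns f_t over a period, and x^+ = max(x, 0), x^- = min(x, 0):

    \beta = \frac{\sum_t r_t f_t}{\sum_t f_t^2}
          = \beta^{++} + \beta^{--} - \beta^{+-} - \beta^{-+},
    \qquad
    \beta^{++} = \frac{\sum_t r_t^{+} f_t^{+}}{\sum_t f_t^2}, \quad
    \beta^{--} = \frac{\sum_t r_t^{-} f_t^{-}}{\sum_t f_t^2}, \quad
    \beta^{+-} = \frac{-\sum_t r_t^{+} f_t^{-}}{\sum_t f_t^2}, \quad
    \beta^{-+} = \frac{-\sum_t r_t^{-} f_t^{+}}{\sum_t f_t^2}

In this notation, the premia the paper documents attach to the components built from joint downturns of the stock and the factor (the \beta^{--}-type terms), computed separately for the local, global, and foreign exchange factors.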

Abstract:

We use intraday data to compute weekly realized variance, skewness, and kurtosis for equity returns and study the realized moments' time-series and cross-sectional properties. We investigate if this week's realized moments are informative for the cross-section of next week's stock returns. We find a very strong negative relationship between realized skewness and next week's stock returns. A trading strategy that buys stocks in the lowest realized skewness decile and sells stocks in the highest realized skewness decile generates an average weekly return of 19 basis points with a t-statistic of 3.70. Our results on realized skewness are robust across a wide variety of implementations, sample periods, portfolio weightings, and firm characteristics, and are not captured by the Fama-French and Carhart factors. We find some evidence that the relationship between realized kurtosis and next week's stock returns is positive, but the evidence is not always robust and statistically significant. We do not find a strong relationship between realized volatility and next week's stock returns
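
The weekly realized moments have standard intraday estimators; the Python sketch below states them explicitly (a sketch of the usual formulas in this literature, not necessarily the paper's exact code), and the simulated returns are our assumption.

    import numpy as np

    def realized_moments(r):
        """r: intraday log returns within one week."""
        n = len(r)
        rv = np.sum(r**2)                             # realized variance
        rskew = np.sqrt(n) * np.sum(r**3) / rv**1.5   # realized skewness
        rkurt = n * np.sum(r**4) / rv**2              # realized kurtosis
        return rv, rskew, rkurt

    # The trading strategy then sorts stocks each week into deciles by
    # realized skewness, buying the lowest and selling the highest decile.
    rng = np.random.default_rng(0)
    week = rng.standard_normal(390) * 1e-3            # 390 one-minute returns
    print(realized_moments(week))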

Resumen:

En este artículo se propone que el humor constituye una forma de comunicación intrapersonal particularmente apta para la (auto) educación filosófica que se encuentra en el corazón de la práctica de la filosofía. Se explican los resultados epistemológicos y éticos de un uso sistemático de la risa autorreferencial. Se defienden los beneficios de una cosmovisión basada en el reconocimiento del ridículo humano, Homo risibilis, comparándolo con otros enfoques de la condición humana

Abstract:

This article presents humor as enacting an intra-personal communication particularly apt for the philosophic (self-)education that lies at the heart of the practice of philosophy. It explains the epistemological and ethical outcomes of a systematic use of self-referential laughter. It argues for the benefits of a worldview predicated on acknowledging human ridiculousness, Homo risibilis, comparing it with other approaches to the human predicament

Abstract:

This paper examines the effects of noncontributory pension programs at the federal and state levels on Mexican households' saving patterns using micro data from the Mexican Income and Expenditure Survey. We find that the federal program curtails saving among households whose oldest member is either 18-54 or 65-69 years old, possibly through anticipation effects, a decrease in the longevity risk faced by households, and a redistribution of income between households of different generations. Specifically, these households appear to be reallocating income away from saving into human capital investments, like education and health. Generally, state programs have no significant effects on household saving, and neither does the combination of federal and state programs. Finally, with a few exceptions, noncontributory pensions have no significant impact on the saving of households with members 70 years of age or older (individuals eligible for those pensions), plausibly because of their dissaving stage in the lifecycle

Abstract:

This paper empirically investigates the determinants of the Internet and cellular phone penetration levels in a cross-country setting. It offers a framework to explain differences in the use of information and communication technologies in terms of differences in the institutional environment and the resulting investment climate. Using three measures of the quality of the investment climate, Internet access is shown to depend strongly on the country’s institutional setting because fixed-line Internet investment is characterized by a high risk of state expropriation, given its considerable asset specificity. Mobile phone networks, on the other hand, are built on less site-specific, re-deployable modules, which make this technology less dependent on institutional characteristics. It is speculated that the existence of telecommunications technology that is less sensitive to the parameters of the institutional environment and, in particular, to poor investment protection provides an opportunity for better understanding of the constraints and prospects for economic development

Abstract:

We consider a restricted three-body problem on surfaces of constant curvature. As in the classical Newtonian case, the collision singularities occur when the position of the particle with infinitesimal mass coincides with the position of one of the primaries. We prove that the singularities due to collision can be regularized locally (each one separately) and globally (both at the same time) through the construction of Levi-Civita and Birkhoff type transformations, respectively. As an application, we study some general properties of the Hill’s regions and we present some ejection-collision orbits for the symmetrical problem

Abstract:

We consider a symmetric restricted three-body problem on surfaces Mk2 of constant Gaussian curvature k ≠ 0, which can be reduced to the cases k = ±1. This problem consists in the analysis of the dynamics of an infinitesimal mass particle attracted by two primaries of identical masses describing elliptic relative equilibria of the two-body problem on Mk2, i.e., the primaries move on opposite sides of the same parallel of radius a. The Hamiltonian formulation of this problem is pointed out in intrinsic coordinates. The goal of this paper is to describe analytically important aspects of the global dynamics in both cases k = ±1 and to determine the main differences from the classical Newtonian circular restricted three-body problem. In this sense, we describe the number of equilibria and their linear stability depending on the bifurcation parameter corresponding to the radial parameter a. After that, we prove the existence of families of periodic orbits and KAM 2-tori related to these orbits

Abstract:

We classify and analyze the orbits of the Kepler problem on surfaces of constant curvature (both positive and negative, S2 and H2, respectively) as functions of the angular momentum and the energy. Hill's regions are characterized and the problem of time-collision is studied. We also regularize the problem in Cartesian and intrinsic coordinates, depending on the constant angular momentum, and we describe the orbits of the regularized vector field. The phase portraits both for S2 and H2 are pointed out

Abstract:

We consider a setup in which a principal must decide whether or not to legalize a socially undesirable activity. The law is enforced by a monitor who may be bribed to conceal evidence of the offense and who may also engage in extortionary practices. The principal may legalize the activity even if it is a very harmful one. The principal may also declare the activity illegal knowing that the monitor will abuse the law to extract bribes out of innocent people. Our model offers a novel rationale for legalizing possession and consumption of drugs while continuing to prosecute drug dealers

Abstract:

We study a channel through which inflation can have effects on the real economy. Using job creation and destruction data from U.S. manufacturing establishments from 1973-1988, we show that both jobs created by new establishments and jobs destroyed by dying establishments are negatively correlated with inflation. These results are robust to controls for the real-business cycle and monetary policy. Over a longer time frame, data on business failures confirm our results obtained from job creation and destruction data. We discuss how interaction of inflation with financial-markets, nominal-wage rigidities, and imperfect competition could explain the empirical evidence

Abstract:

We study how discount window policy affects the frequency of banking crises, the level of investment, and the scope for indeterminacy of equilibrium. Previous work has shown that providing costless liquidity through a discount window has mixed effects in terms of these criteria: It prevents episodes of high liquidity demand from causing crises but can lead to indeterminacy of stationary equilibrium and to inefficiently low levels of investment. We show how offering discount window loans at an above-market interest rate can be unambiguously beneficial. Such a policy generates a unique stationary equilibrium. Banking crises occur with positive probability in this equilibrium and the level of investment is suboptimal, but a proper combination of discount window and monetary policies can make the welfare effects of these inefficiencies arbitrarily small. The near-optimal policies can be viewed as approximately implementing the Friedman rule

Abstract:

We investigate the dependence of the dynamic behavior of an endogenous growth model on the degree of returns to scale. We focus on a simple (but representative) growth model with publicly funded inventive activity. We show that constant returns to reproducible factors (the leading case in the endogenous growth literature) is a bifurcation point, and that it has the characteristics of a transcritical bifurcation. The bifurcation involves the boundary of the state space, making it difficult to formally verify this classification. For a special case, we provide a transformation that allows formal classification by existing methods. We discuss the new methods that would be needed for formal verification of transcriticality in a broader class of models

Abstract:

We evaluate the desirability of having an elastic currency generated by a lender of last resort that prints money and lends it to banks in distress. When banks cannot borrow, the economy has a unique equilibrium that is not Pareto optimal. The introduction of unlimited borrowing at a zero nominal interest rate generates a steady state equilibrium that is Pareto optimal. However, this policy is destabilizing in the sense that it also introduces a continuum of nonoptimal inflationary equilibria. We explore two alternate policies aimed at eliminating such monetary instability while preserving the steady-state benefits of an elastic currency. If the lender of last resort imposes an upper bound on borrowing that is low enough, no inflationary equilibria can arise. For some (but not all) economies, the unique equilibrium under this policy is Pareto optimal. If the lender of last resort instead charges a zero real interest rate, no inflationary equilibria can arise. The unique equilibrium in this case is always Pareto optimal

Abstract:

We consider the nature of the relationship between the real exchange rate and capital formation. We present a model of a small open economy that produces and consumes two goods, one tradable and one not. Domestic residents can borrow and lend abroad, and costly state verification (CSV) is a source of frictions in domestic credit markets. The real exchange rate matters for capital accumulation because it affects the potential for investors to provide internal finance, which mitigates the CSV problem. We demonstrate that the real exchange rate must monotonically approach its steady state level. However, capital accumulation need not be monotonic and real exchange rate appreciation can be associated with either a rising or a falling capital stock. The relationship between world financial market conditions and the real exchange rate is also investigated

Abstract:

In the Mexican elections, the quick count consists in selecting a random sample of polling stations to forecast the election results. Its main challenge is that the estimation is done with incomplete samples, where the missingness is not at random. We present one of the statistical models used in the quick count of the gubernatorial elections of 2021. The model is a negative binomial regression with a hierarchical structure. The prior distributions are thoroughly tested for consistency. We also present a fitting procedure with an adjustment for bias, capable of running in less than 5 minutes. The model yields probability intervals with approximately 95% coverage, even with certain patterns of biased samples observed in previous elections. Furthermore, the robustness of the negative binomial distribution translates into robustness in the model, which fits both large and small candidates well and provides an additional layer of protection when there are database errors
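
A schematic of the model class described, in our notation (the exact likelihood, link, and hierarchy in the paper may differ): with y_{ij} the votes for candidate j at polling station i, n_i the station's voter list size, x_i station covariates, and s(i) the station's stratum,

    y_{ij} \sim \mathrm{NegBin}(\mu_{ij},\, r_j), \qquad
    \log \mu_{ij} = \log n_i + x_i^{\top}\beta_j + u_{s(i),j}, \qquad
    u_{s,j} \sim \mathcal{N}(0, \sigma_j^2)

Estimation uses only the polling stations that have arrived, which is why the prior choices and the bias adjustment matter when the missingness is not at random.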

Abstract:

Quick counts based on probabilistic samples are powerful methods for monitoring election processes. However, the complete designed samples are rarely collected in time to publish the results promptly. Hence, the results are announced using partial samples, which have biases associated with the arrival pattern of the information. In this paper, we present a Bayesian hierarchical model to produce estimates for the Mexican gubernatorial elections. The model considers the polling stations poststratified by demographic, geographic, and other covariates. As a result, it provides a principled means of controlling for biases associated with such covariates. We compare methods through simulation exercises and apply our proposal in the July 2018 elections for governor in certain states. Our studies find the proposal to be more robust than the classical ratio estimator and other estimators that have been used for this purpose

Abstract:

Despite the rapid change in cellular technologies, Mobile Network Operators (MNOs) keep a high percentage of their deployed infrastructure using Global System for Mobile communications (GSM) technologies. With about 3.5 billion subscribers, GSM remains the de facto standard for cellular communications. However, the security criteria envisioned 30 years ago, when the standard was designed, are no longer sufficient to ensure the security and privacy of the users. Furthermore, even with the newest fourth generation (4G) cellular technologies starting to be deployed, these networks could never achieve strong security guarantees because the MNOs keep backwards compatibility given the huge number of GSM subscribers. In this paper, we present and describe the tools and necessary steps to perform an active attack against a GSM-compatible network, by exploiting the GSM protocol's lack of mutual authentication between the subscribers and the network. The attack consists of a so-called man-in-the-middle attack implementation. Using Software Defined Radio (SDR), open-source libraries and open-source hardware, we set up a fake GSM base station to impersonate the network and therefore eavesdrop on any communications that are being routed through it and extract information from its victims. Finally, we point out some implications of the protocol vulnerabilities and why they cannot be mitigated in the short term, since 4G deployments will take a long time to entirely replace the current GSM infrastructure

Abstract:

It is shown in the paper that the problem of speed observation for mechanical systems that are partially linearisable via coordinate changes admits a very simple and robust (exponentially stable) solution with a Luenberger-like observer. This result should be contrasted with the very complicated observers based on immersion and invariance reported in the literature. A second contribution of the paper is to compare, via realistic simulations and highly detailed experiments, the performance of the proposed observer with well-known high-gain and sliding-mode observers; in particular, to show that, due to their high sensitivity to noise, which is unavoidable in mechanical systems applications, the performance of the two latter designs is well below par

Abstract:

We formulate a p-median facility location model with a queuing approximation to determine the optimal locations of a given number of dispensing sites (Points of Dispensing, PODs) from a predetermined set of possible locations and the optimal allocation of staff to the selected locations. Specific to an anthrax attack, dispensing operations should be completed in 48 hours to cover all exposed and possibly exposed people. A nonlinear integer programming model is developed; it formulates the problem of determining the optimal locations of facilities with appropriate facility deployment strategies, including the number of servers with different skills to be allocated to each open facility. The objective of the mathematical model is to minimize the average transportation and waiting times of individuals to receive the required service. The mathematical model approximates waiting time performance measures with a queuing formula, and these waiting times at PODs are incorporated into the p-median facility location model. A genetic algorithm is developed to solve this problem. Our computational results show that appropriate locations of these facilities can significantly decrease the average time for individuals to receive services. Consideration of demographics and allocation of the staff decreases waiting times in PODs and increases the throughput of PODs. When the number of PODs to open is high, the right staffing at each facility decreases the average waiting times significantly. The results presented in this paper can help public health decision makers make better planning and resource allocation decisions based on the demographic needs of the affected population
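
The queuing approximation embedded in the objective can be illustrated with the classical M/M/c (Erlang C) mean-wait formula; this Python sketch and its parameter values are illustrative assumptions, not the paper's exact approximation.

    import math

    def erlang_c(c, lam, mu):
        """Probability that an arrival must wait in an M/M/c queue."""
        a = lam / mu                       # offered load
        if lam >= c * mu:
            return 1.0                     # unstable regime: everyone waits
        s = sum(a**k / math.factorial(k) for k in range(c))
        top = a**c / math.factorial(c) * (c * mu / (c * mu - lam))
        return top / (s + top)

    def mean_wait(c, lam, mu):
        """Expected queue wait W_q, to be added to travel time."""
        return erlang_c(c, lam, mu) / (c * mu - lam)

    # E.g., a POD staffed with 20 servers, 55 arrivals/hour, and a service
    # rate of 3 people/hour per server:
    print(mean_wait(20, 55.0, 3.0))        # hours of expected waiting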

Abstract:

Robust statistical data modelling under potential model mis-specification often requires leaving the parametric world for the nonparametric. In the latter, parameters are infinite dimensional objects such as functions, probability distributions or infinite vectors. In the Bayesian nonparametric approach, prior distributions are designed for these parameters, which provide a handle to manage the complexity of nonparametric models in practice. However, most modern Bayesian nonparametric models seem often out of reach to practitioners, as inference algorithms need careful design to deal with the infinite number of parameters. The aim of this work is to facilitate the journey by providing computational tools for Bayesian nonparametric inference. The article describes a set of functions available in the R package BNPdensity in order to carry out density estimation with an infinite mixture model, including all types of censored data. The package provides access to a large class of such models based on normalised random measures, which represent a generalisation of the popular Dirichlet process mixture. One striking advantage of this generalisation is that it offers much more robust priors on the number of clusters than the Dirichlet. Another crucial advantage is the complete flexibility in specifying the prior for the scale and location parameters of the clusters, because conjugacy is not required. Inference is performed using a theoretically grounded approximate sampling methodology known as the Ferguson & Klass algorithm. The package also offers several goodness-of-fit diagnostics such as QQ plots, including a cross-validation criterion, the conditional predictive ordinate. The proposed methodology is illustrated on a classical ecological risk assessment method called the species sensitivity distribution problem, showcasing the benefits of the Bayesian nonparametric framework

Abstract:

This paper focuses on model selection, specification and estimation of a global asset return model within an asset allocation and asset and liability management framework. The development departs from a single currency capital market model with four state variables: stock index, short and long term interest rates and currency exchange rates. The model is then extended to the major currency areas, United States, United Kingdom, European Union and Japan, and to include a US economic model containing GDP, inflation, wages and government borrowing requirements affecting the US capital market variables. In addition, we develop variables representing emerging market stock and bond indices. In the largest extension we treat a four-currency capital markets model and US, UK, EU and Japan macroeconomic variables. The system models are estimated with seemingly unrelated regression estimation (SURE) and generalised autoregressive conditional heteroscedasticity (GARCH) techniques. Simulation, impulse response and forecasting performance are discussed in order to analyse the dynamics of the models developed

Abstract:

Gender stereotypes, the assumptions concerning appropriate social roles for men and women, permeate the labor market. Analyzing information from over 2.5 million job advertisements on three different employment search websites in Mexico, exploiting approximately 235,000 that are explicitly gender-targeted, we find evidence that advertisements seeking "communal" characteristics, stereotypically associated with women, specify lower salaries than those seeking "agentic" characteristics, stereotypically associated with men. Given the use of gender-targeted advertisements in Mexico, we use a random forest algorithm to predict whether non-targeted ads are in fact directed toward men or women, based on the language they use. We find that the non-targeted ads for which we predict gender show larger salary gaps (8–35 percent) than explicitly gender-targeted ads (0–13 percent). If women are segregated into occupations deemed appropriate for their gender, this pay gap between jobs requiring communal versus agentic characteristics translates into a gender pay gap in the labor market
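
A minimal sketch of the prediction step, assuming invented placeholder ad texts and labels (this is not the authors' pipeline), could look as follows in Python with scikit-learn:

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Train on explicitly gender-targeted ads, then infer the implied gender
# of non-targeted ads from their wording. All strings are hypothetical.
targeted_text = ["vendedora responsable y amable", "chofer con fuerza fisica"]
targeted_label = ["f", "m"]             # gender stated in the targeted ad

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=500, random_state=0))
model.fit(targeted_text, targeted_label)

non_targeted_text = ["se busca asistente proactivo con experiencia"]
predicted_gender = model.predict(non_targeted_text)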

Abstract:

This paper analyses the contract between an entrepreneur and an investor, using a non-zero sum game in which the entrepreneur is interested in company survival and the investor in maximizing expected net present value. Theoretical results are given and the model's usefulness is exemplified using simulations. We have observed that both the entrepreneur and the investor are better off under a contract which involves repayments and a share of the start-up company. We have also observed that the entrepreneur will choose riskier actions as the repayments become harder to meet, up to a level where the company is no longer able to survive

Abstract:

We consider the problem of managing inventory and production capacity in a start-up manufacturing firm with the objective of maximising the probability of the firm surviving as well as the more common objective of maximising profit. Using Markov decision process models, we characterise and compare the form of optimal policies under the two objectives. This analysis shows the importance of coordination in the management of inventory and production capacity. The analysis also reveals that a start-up firm seeking to maximise its chance of survival will often choose to keep production capacity significantly below the profit-maximising level for a considerable time. This insight helps us to explain the seemingly cautious policies adopted by a real start-up manufacturing firm
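
A minimal way to formalise the survival objective (our notation, not necessarily the paper's) is a finite-horizon Bellman recursion over a capital/inventory state x, with ruin absorbing:

V_T(x) = 1 \text{ for solvent } x, \qquad V_t(x) = \max_{a \in A(x)} \sum_{x'} p(x' \mid x, a)\, V_{t+1}(x'), \qquad V_t(\mathrm{ruin}) = 0,

so that V_t(x) is the maximal probability of remaining solvent through the horizon; the profit objective replaces these terminal and stage values with expected rewards.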

Abstract:

Start-up companies are considered an important factor in the success of a nation’s economy. We are interested in the decisions for long-term survival of these firms when they have considerable cash restrictions. In this paper we analyse several inventory control models to manage inventory purchasing and return policies. The Markov decision models are formulated for both established companies that look at maximising average profit and start-up companies that look at maximising their long-term survival probability. We contrast both objectives, and present properties of the policies and the survival probabilities. We find that start-up companies may need to be riskier if the return price is very low, but there is a period where a start-up firm becomes more cautious than an established company and there is a point, as it accumulates capital, where it starts behaving as an established firm. We compare the various models and give conditions under which their policies are equivalent

Abstract:

A recent cross-cultural study suggests employees may be classified, based on their scores on a measure of work ethic, into three profiles labeled as "live to work," "work to live," and "work as a necessary evil." The present study assesses whether these profiles were stable before and after an extended lockdown that forced employees to work from home for 2 years because of the COVID-19 pandemic. To assess our core research question, we conducted a longitudinal study with employees of a company in the financial sector, collecting data in two waves: February 2020 (n = 692) and June 2022 (n = 598). Tests of profile similarity indicated a robust structural and configural equivalence of the profiles before and after the lockdown. As expected, the prolonged pandemic-based lockdown had a significant effect on the proportion of individuals in each profile. Implications for leading and managing in a post-pandemic workforce are presented and discussed

Abstract:

This study examines the impact of a specific training intervention on both individual- and unit-level outcomes. We sought to examine the extent to which a training intervention incorporating key elements of error management training: (1) positively impacted sales-specific self-efficacy beliefs of trainees; and (2) positively impacted unit-level sales growth over time. Results of an 11-week longitudinal field experiment across 19 stores in a national bakery chain indicated that the sales self-efficacy of trainees significantly increased between the levels they had 2 weeks before the intervention started and 4 weeks after it was initiated. Results based on a repeated measures ANOVA also indicated significantly higher sales performance in the intervention group compared with a non-intervention control group. We also sought to address the extent to which individual-level effects may be linked to the organizational level, providing evidence on whether changes in individual self-efficacy were associated with unit-level sales performance. Results confirmed this multi-level effect, as evidenced by a moderate significant correlation between the average self-efficacy of the staff of each store and its sales performance across the weeks the intervention was in effect. The study contributes to the existing literature by providing direct evidence of the impact of an HRD intervention at multiple organizational levels

Abstract:

Despite the acceptance of work ethic as an important individual difference, little research has examined the extent to which work ethic may reflect shared environmental or socio-economic factors. This research addresses this concern by examining the influence of geographic proximity on the work ethic experienced by 254 employees from Mexico, working in 11 different cities in the Northern, Central and Southern regions of the country. Using a sequence of complementary analyses to assess the main source of variance on seven dimensions of work ethic, our results indicate that work ethic is most appropriately considered at the individual level

Abstract:

This paper explores the relationship between individual work values and unethical decision-making and actual behavior at work through two complementary studies. Specifically, we use a robust and comprehensive model of individual work values to predict unethical decision-making in a sample of working professionals and accounting students enrolled in ethics courses, and IT employees working in sales and customer service. Study 1 demonstrates that young professionals who rate power as a relatively important value (i.e., those reporting high levels of the self-enhancement value) are more likely to violate professional conduct guidelines despite receiving training regarding ethical professional principles. Study 2, which examines a group of employees from an IT firm, demonstrates that those rating power as an important value are more likely to engage in non-work-related computing (i.e., cyberloafing) even when they are aware of a monitoring software that tracks their computer usage and an explicit policy prohibiting the use of these computers for personal reasons

Abstract:

This panel study, conducted in a large Venezuelan organization, took advantage of a serendipitous opportunity to examine the organizational commitment profiles of employees before and after a series of dramatic, and unexpected, political events directed specifically at the organization. Two waves of organizational commitment data were collected, 6 months apart, from a sample of 152 employees. No evidence was found that employees' continuance commitment to the organization was altered by the events described here. Interestingly, however, both affective and normative commitment increased significantly during the period of the study. Further, employees' commitment profiles at Wave 2 were more differentiated than they were at Wave 1

Abstract:

This study, based in a manufacturing plant in Venezuela, examines the relationship between perceived task characteristics, psychological empowerment and commitment, using a questionnaire survey of 313 employees. The objective of the study was to assess the effects of an organizational intervention at the plant aimed at increasing productivity by providing performance feedback on key aspects of its daily operations. It was hypothesized that perceived characteristics of the task environment, such as task meaningfulness and task feedback, will enhance psychological empowerment, which in turn will have a positive impact on employee commitment. Test of a structural model revealed that the relationship of task meaningfulness and task feedback with affective commitment was partially mediated by the empowerment dimensions of perceived control and goal internalization. The results highlight the role of goal internalization as a key mediating mechanism between job characteristics and affective commitment. The study also validates a Spanish-language version of the psychological empowerment scale by Menon (2001)

Resumen:

A pesar de la extensa validación transcultural del modelo de compromiso organizacional de Meyer y Allen (1991), han surgido ciertas dudas respecto a la independencia de los componentes afectivo y normativo y, también, sobre la unidimensionalidad de este último. Este estudio analiza la estabilidad de la estructura del modelo y examina el comportamiento de la escala normativa, empleando 100 muestras, de 250 sujetos cada una, extraídas aleatoriamente de una base de datos de 4.689 empleados. Los resultados muestran cierta estabilidad del modelo, y apoyan parcialmente a la corriente que propone el desdoblamiento del componente normativo en dos subdimensiones: el deber moral y el sentimiento de deuda moral

Abstract:

Although there has been extensive cross-cultural validation of Meyer and Allen’s (1991) model of organizational commitment, some doubts have emerged concerning both the independence of the affective and normative components, and the unidimensionality of the latter. This study focuses on analyzing the stability of the model’s structure, and on examining the behaviour of the normative scale. For this purpose, we employed 100 samples of 250 subjects each, extracted randomly from a database of 4,689 employees. The results show a certain stability of the model, and partially support research work suggesting the unfolding of the normative component into two subdimensions: one related to a moral duty, and the other to a sense of indebtedness

Abstract:

In recent years there has been an increasing interest among researchers and practitioners to analyze what makes a firm attractive in the eyes of university students, and if individual differences such as personality traits have an impact on this general affect towards a particular organization. The main goal of the present research is to demonstrate that a recently conceptualized narrow trait of personality named dispositional resistance to change (RTC), that is, the inherent tendency of individuals to avoid and oppose changes (Oreg, 2003), can predict organizational attraction of university students to firms that are perceived as innovative or conservative. Three complementary studies were carried out using a total sample of 443 college students from Mexico. In addition to validating the hypotheses, our findings suggest that as the formation of the images of organizations in students’ minds is done through social cognitions, simple stimuli such as physical artifacts, when used in an isolated manner, do not have a significant impact on organizational attraction

Abstract:

The Work Values Scale EVAT (based on its initials in Spanish: Escala de Valores hacia el Trabajo) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals, reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian. Our results suggest that the original Spanish version of the EVAT scale and the new Portuguese and Italian versions are equivalent

Abstract:

The authors examined the validity of the Spanish-language version of the dispositional resistance to change (RTC) scale. First, the structural validity of the new questionnaire was evaluated using a nested sequence of confirmatory factor analyses. Second, the external validity of the questionnaire was assessed, using the four higher-order values of Schwartz's theory and the four dimensions of the RTC scale: routine seeking, emotional reaction, short-term focus and cognitive rigidity. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed both the construct structure and the external validity of the questionnaire

Abstract:

The authors examined the convergent validity of the four dimensions of the Resistance to Change scale (RTC): routine seeking, emotional reaction, short-term focus and cognitive rigidity and the four higher-order values of the Schwartz’s theory, using a nested sequence of confirmatory factor analyses. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed the external validity of the questionnaire

Resumen:

Este estudio analiza el impacto de la diversidad de valores entre los integrantes de los equipos sobre un conjunto de variables de proceso, así como sobre dos tareas con diferentes demandas de interacción social. En particular, se analiza el efecto de la diversidad de valores sobre el conflicto en la tarea y en las relaciones, la cohesión y la autoeficacia grupal. Utilizando un simulador de trabajo en equipo y una muestra de 22 equipos de entre cinco y siete individuos, se comprobó que la diversidad de valores en un equipo influye de forma directa sobre las variables de proceso y sobre el desempeño en la tarea que demanda baja interacción social; en la tarea de alta interacción social, la relación entre la diversidad de valores y el desempeño se ve mediada por las variables de proceso. Se proponen algunas acciones que permitirían poner en práctica los resultados de esta investigación en el contexto organizacional

Abstract:

This study investigates the impact of value diversity among team members on team process and performance criteria on two tasks of differing social interaction demands. Specifically, the criteria of interest included task conflict, relationship conflict, cohesion, and team efficacy, and task performance on two tasks demanding different levels of social interaction. Utilizing a teamwork simulator and a sample comprised of 22 teams of five to seven individuals, it was demonstrated that value diversity directly impacts both task performance and process criteria on the task demanding low social interaction. Meanwhile, in the task requiring high social interaction, value diversity related to task performance via the mediating effects of team processes. Some specific actions are proposed in order to apply the results of this research in the daily context of organizations

Abstract:

The Work Values Scale EVAT (based on its initials in Spanish) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz Theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals (Arciniega & González, 2006, 2005; González & Arciniega, 2005), reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian, using a new technique of measurement equivalence: confirmatory multidimensional scaling (CMDS). Our results suggest that CMDS is a serviceable technique for assessing measurement equivalence, but requires improvements to provide precise fit indices

Abstract:

We examine the impact of team member value and personality diversity on team processes and performance. The research is divided into two studies. First, we examine the impact of personality and value diversity on team performance, relationship and task conflict, cohesion, and team self-efficacy. Second, we evaluate the effect of team members’ values diversity on team performance in two different types of tasks, one cognitive, and the other complex. In general, our results suggest that higher levels of diversity with respect to values were associated with lower levels of team process variables. Also, as expected we found that the influence of team values diversity is higher on a cognitive task than on a complex one

Abstract:

Considering the propositions of Simon (1990, 1993) and Korsgaard and collaborators (1997) that an individual who assigns priority to values related to altruism tends to pay less attention to evaluating personal costs and benefits when processing social information, as well as the basic premise of job satisfaction, which establishes that this attitude is centered on a cognitive process of evaluating how specific conditions or outcomes in a job fulfill the needs and values of a person, we proposed that individuals who score higher on values associated with altruism will reveal higher scores on all specific facets of job satisfaction than those who score lower. A sample of 3,201 Mexican employees, living in 11 cities and working for 30 different companies belonging to the same holding, was used in this study. The results of the research clearly support the central hypothesis

Abstract:

Some reviews have shown how different attitudes, demographic and organizational variables generate organizational commitment. Few studies have reported how work values and organizational factors create organizational commitment. This investigation is an attempt to explore the influence that both sets of variables have on organizational commitment. Using the four higher-order values proposed by Schwartz (1992) to operationalize the construct of work values, we evaluated the influence of these work values on the development of organizational commitment, in comparison with four facets of work satisfaction and four organizational factors: empowerment, knowledge of organizational goals, and training and communication practices. A sample of 982 employees from eight companies of Northeastern Mexico was used in this study. Our findings suggest that work values occupy a less important place in the development of organizational commitment when compared to organizational factors, such as the perceived knowledge of the goals of the organization, or attitudes such as satisfaction with security and opportunities for development

Abstract:

In this paper we propose the use of new iterative methods to solve symmetric linear complementarity problems (SLCP) that arise in the computation of dry frictional contacts in Multi-Rigid-Body Dynamics. Specifically, we explore the two-stage iterative algorithm developed by Morales, Nocedal and Smelyanskiy [1]. The underlying idea of that method is to combine projected Gauss-Seidel iterations with subspace minimization steps. Gauss-Seidel iterations are aimed at obtaining a high quality estimation of the active set. Subspace minimization steps focus on the accurate computation of the inactive components of the solution. Overall, the new method is able to compute fast and accurate solutions of severely ill-conditioned LCPs. We compare the performance of a modification of the iterative method of Morales et al. with Lemke’s algorithm on robotic object grasping problems.
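
A sketch of the first stage only: projected Gauss-Seidel sweeps for the LCP w = Mz + q, z >= 0, w >= 0, z^T w = 0, assuming M has a positive diagonal (for instance symmetric positive definite); the subspace-minimization stage on the estimated inactive set is omitted.

import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    # Cyclically minimise over each z[i] >= 0 with the other components
    # fixed; components at zero after the sweeps estimate the active set.
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding z[i]
            z[i] = max(0.0, -r / M[i, i])          # projection onto z >= 0
    return z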

Abstract:

This paper investigates corporate social (and environmental) responsibility (CSR) disclosure practices in Mexico. By analysing a sample of Mexican companies in 2010, it utilises a detailed manual content analysis and identifies corporate-governance-related determinants of CSR disclosure. The study shows a general association between the governance variables and both the content and the semantic properties of CSR information published by Mexican companies. Although an increased international influence on CSR disclosure is noted, the study reveals the symbolic role of CSR committees and the negative influence of foreign ownership on community disclosure, suggesting that improvements in business engagement with stakeholders are needed for CSR to be instrumental in business conduct

Abstract:

Effective policy-making requires that voters avoid electing malfeasant politicians. However, informing voters of incumbent malfeasance in corrupt contexts may not reduce incumbent support. As our simple learning model shows, electoral sanctioning is limited where voters already believed incumbents to be malfeasant, while information's effect on turnout is non-monotonic in the magnitude of reported malfeasance. We conducted a field experiment in Mexico that informed voters about malfeasant mayoral spending before municipal elections, to test whether these Bayesian predictions apply in a developing context where many voters are poorly informed. Consistent with voter learning, the intervention increased incumbent vote share where voters possessed unfavorable prior beliefs and when audit reports caused voters to favorably update their posterior beliefs about the incumbent's malfeasance. Furthermore, we find that low and, especially, high malfeasance revelations increased turnout, while less surprising information reduced turnout. These results suggest that improved governance requires both greater transparency and higher citizen expectations

Abstract:

A Vector Autoregressive Model of the Mexican economy was employed to empirically find the transmission channels of price formation. The structural changes affecting the behavior of the inflation rate during 1970-1987 motivated the analysis of the changing influences of the explanatory variables within three different subperiods, namely: 1970-1976, 1978-1982 and 1983-1987. A main finding is that, among the variables considered, public prices were the most important in explaining the variability of inflation, irrespective of the subperiod under study. Another finding is that inflationary inertia played a different role in each subperiod
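
In standard notation, the estimated system is a VAR(p),

y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t,

where y_t stacks the inflation rate and its explanatory variables; forecast-error variance decompositions of such a system are what support statements about which variables explain the variability of inflation.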

Abstract:

The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, the results regarding the mechanism of degradation are not completely understood yet. PLA is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; in this research, the extrusion and injection molding processes were used to produce PLA samples for accelerated destructive testing. The methodology employed consisted of carrying out material testing under the guidelines of several ASTM standards; this research hypothesized that UV light, humidity, and temperature exposure have a statistically significant effect on the PLA degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for cases in which the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. A package in the R software that allows the user to perform a non-parametric multivariate analysis when necessary was used. This paper presents a study to determine if there is a significant difference in the degradation rate after 2000 h of accelerated degradation of a biopolymer, using multivariate and non-parametric analyses of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymer

Abstract:

Dried red chile peppers [Capsicum annuum (L.)] are an important agricultural product grown throughout the Southwestern United States and are extensively used in food and for commercial applications. Given the high, broad demand for chile, attention to the methods of harvesting, storage, transport, and packaging is critical for profitability. Currently, chile should be stored no more than 24 to 36 hours at ambient temperatures from the time of harvest due to the potential for natural fermentation to destroy the crop. The amount of usable versus destroyed chile under ambient conditions is determined by several variables, including the harvesting method (hand-picked, mechanized), the time of harvest relative to the optimal harvesting point (season), and weather variations (moisture). In this work, a stochastic simulation-based model is presented to forecast optimal harvesting scenarios, capable of supporting farmers and chile processors in better planning and managing planting and growth acceleration programs. The tool developed allows the economic feasibility of storage/stabilization systems, advanced mechanical harvesters, and other future advances to be analyzed in terms of the resulting increase in chile yield. We used the described simulation as an analysis tool to obtain the expected coverage and estimates of the mean and quantiles

Abstract:

While the degradation of Polylactic Acid (PLA) has been studied for several years, results regarding the mechanism for determining degradation are not completely understood. Through accelerated degradation testing, data can be extrapolated and modeled to test parameters such as temperature, voltage, time, and humidity. Accelerated lifetime testing is used as an alternative to experimentation under normal conditions. The methodology to create this model consisted of fabricating a series of ASTM specimens using extrusion and injection molding. These specimens were tested through accelerated degradation; tensile and flexural testing were conducted at different points in time. Nonparametric inference tests for multivariate data are presented. The results indicate that the effect of the independent variable or treatment effect (time) is highly significant. This research intends to provide a better understanding of biopolymer degradation. The findings indicated that the proposed statistical models can be used as a tool for characterization of the material regarding the durability of the biopolymer as an engineering material. Having multiple models, one for each individual accelerating variable, allows deciding which parameter is critical in the characterization of the material

Abstract:

The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, results regarding the mechanism of degradation are not completely understood yet. It would be advantageous to predict and model the degradation rate of PLA in terms of performance. High strength and thermoplasticity allow PLA to be used to manufacture a great variety of products. This material is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; extrusion and injection molding were used to produce PLA samples for accelerated destructive testing in this research. The methodology employed consists of carrying out material testing under the guidelines of several ASTM standards, and this research hypothesizes that UV light, humidity, and temperature exposure have a statistically significant effect on the degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for cases in which the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. Ellis et al. created a package in the R software that allows the user to perform a non-parametric multivariate analysis when necessary. This paper presents a study to determine if there is a significant difference in the degradation process of a biopolymer using multivariate and non-parametric analysis of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymers

Abstract:

Formal models of animal sensorimotor behavior can provide effective methods for generating robotic intelligence. In this article we describe how schema-theoretic models of the praying mantis derived from behavioral and neuroscientific data can be implemented on a hexapod robot equipped with a real time color vision system. This implementation incorporates a wide range of behaviors, including obstacle avoidance, prey acquisition, predator avoidance, mating, and chantlitaxia behaviors, and can provide guidance to neuroscientists, ethologists, and roboticists alike. The goals of this study are threefold: to provide an understanding and means by which fielded robotic systems are not competing with other agents that are more effective at their designated task; to permit them to be successful competitors within the ecological system, capable of displacing less efficient agents; and to ensure that they are ecologically sensitive, so that agent–environment dynamics are well modeled and as predictable as possible whenever new robotic technology is introduced.

Resumen:

El objetivo principal de este trabajo es entender la contribución del guion a la película Río Escondido en la creación de una obra colectiva, polifónica, de convergencia y alejada de una visión clásica de la autoría. Para ello, proponemos un análisis documental de la historia y otro propiamente del guion como vehículo semiótico que posibilita el camino de un lenguaje escrito a uno visual. Este último análisis se centra en la relevancia del discurso literario del guion que se omite en la película y que puede suponer una discordia entre el guionista y el director. Hallar estos espacios de quiebre posibilita un acercamiento a nuevas formas de ver el cine mexicano, alejadas de la industria y los clichés, y más cercanas al ámbito literario

Abstract:

The aim of this work is to establish the significance of the script of the film Río Escondido. In this film the role of the author is distinctive, since the script is a collective, polyphonic, and convergent work. Historical documents are analyzed in connection with the story told by the script, understood as a semiotic vehicle that enables the transfer from text to screen. The analysis focuses on the relevance of the script's literary discourse, which is suppressed in the film and may imply a conflict between the screenwriter and the director. This approach concludes with the importance of literary analysis for Mexican cinema, away from commercialization and established clichés

Resumo:

O objetivo principal deste trabalho é entender a contribuição do roteiro do filme Río Escondido na criação de uma obra coletiva, polifônica, de convergência e longe de uma visão clássica da autoria. Assim, propomos uma análise documental da história e outra propriamente do roteiro como veículo semiótico que possibilita o percurso de uma linguagem escrita a uma visual. Esta última análise foca na relevância do discurso literário do roteiro que é omitido no filme e que pode levar a um desentendimento entre o roteirista e o diretor. Encontrar esses espaços de ruptura permite uma abordagem de novas formas de ver o cinema mexicano, longe da indústria e dos clichês, e mais perto do campo literário

Resumen:

El propósito de este artículo es analizar, desde el punto de vista de la historia, la historiografía y el análisis literario, la novela Serpa Pinto. Pueblos en la tormenta (1943), de Giuseppe Garretto. Esta novela, publicada por primera vez en castellano a pesar de la nacionalidad italiana de su autor, pasó desapercibida para la crítica. Revisamos tanto el contexto histórico de las referencias de la obra como de la publicación de la primera edición; situamos la obra como parte de la historiografía literaria del exilio de refugiados europeos en Latinoamérica; analizamos las características literarias de la novela, así como las estrategias de rememoración narrativa que emplea el autor

Abstract:

The purpose of this article is to analyze the novel Serpa Pinto. Pueblos en la tormenta (1943) by Giuseppe Garretto, from the point of view of history, historiography and literary analysis. Published for the first time in Spanish despite the Italian nationality of the author, this novel went unnoticed by critics. We aim to study the historical context of the work references and the publication of the first edition, in order to situate the work as part of the literary historiography of the exile of European refugees in Latin America. Moreover, we analyze the literary characteristics of the novel, as well as the narrative remembrance strategies used by the author

Resumen:

Alfonso Cravioto formó parte de las organizaciones intelectuales y políticas más destacadas de principios del siglo XX. A lo largo de su vida combinó la actividad política y diplomática con la creación literaria, poesía y ensayo, principalmente. En este artículo nos proponemos destacar su contribución a la educación en México, desde una perspectiva que enlaza su experiencia vital con la sensibilidad que lo caracterizó y le ganó el reconocimiento y afecto de sus contemporáneos, al tiempo que anticipó y participó en los primeros años de la Secretaría de Educación Pública

Abstract:

Alfonso Cravioto was a member of the most prominent intellectual and political organizations of the early 20th century. Throughout his life he combined political and diplomatic activity with literary creation, mainly poetry and essays. In this article we intend to highlight his contribution to education in Mexico, from a perspective that links his life experience with the sensitivity that characterized him and earned him the recognition and affection of his contemporaries, as he anticipated and participated in the first years of the Mexican Secretariat of Public Education

Resumen:

En la actualidad, una edición crítica completa no sólo debe ir acompañada de una crítica textual rigurosa, sino también de una genética que considere todos los testimonios. En este sentido, el presente trabajo analiza algunas de las problemáticas que se podrían producir si elaborásemos una edición crítica de todos los cuentos de Mauricio Magdaleno, dos de las más importantes serían las siguientes: primera, el hecho de que la mayor parte de los cuentos, tal y como los conocemos hoy, tuviera versiones previas publicadas en fuentes periódicas, lo cual nos obligaría a una investigación más exhaustiva y a una constitución del texto a partir de la colación de variantes de testimonios hemerográficos; segunda, la decisión de criterios que permitan superar la frontera genérica entre un relato costumbrista escrito a partir de una anécdota personal y un cuento susceptible de ser incluido en esta propuesta. Así, ambas problemáticas tienen como raíz común que la labor escritural más estable en la que se desempeñó Magdaleno fuera la periodística, ya que con ésta pudo compatibilizar otras actividades que, sobre todo, le servían para un propósito de sustento material. Una edición crítica de los cuentos de Mauricio Magdaleno otorgaría no sólo la seguridad de poder hacer una buena lectura del texto final, sino que también pondría de relieve el proceso creador del autor, lo cual ayudaría a superar la crítica anacrónica a la que se le ha sometido. Por último, y con base en el estado actual de la cuestión que se expone en este trabajo, se presenta una tabla con las versiones de los cuentos encontradas hasta ahora y otra con una propuesta sustentada de un posible índice de los cuentos que integrarían la edición crítica

Abstract:

Currently, a complete critical edition must not only be accompanied by rigorous textual criticism but also by a genetic edition that considers all the testimonies. In this sense, this work analyzes some of the problems that could occur if we were to prepare a critical edition of the entirety of Mauricio Magdaleno’s short stories. Two of the most important would be the following: first, the fact that most of the stories, as we know them today, had previous versions published in periodical sources, which would oblige us to make an exhaustive investigation and to constitute the text from the collation of variants across hemerographic testimonies; second, the decision on criteria that would allow overcoming the generic border between a costumbrista sketch written from a personal anecdote and a short story suitable for inclusion in our edition. Both problems share a common root: the most stable writing career Magdaleno pursued was journalism, which he made compatible with other activities that served, above all, as material support. A critical edition of Mauricio Magdaleno’s short stories would not only ensure a sound reading of the final text, but would also highlight the author’s creative process, helping to overcome the anachronistic criticism to which he has been subjected. Finally, based on the current state of the question as set out in this work, we present a table with the versions of the short stories found so far and another with a supported proposal for a possible index of the short stories that would make up the critical edition

Resumen:

El objetivo principal de este trabajo es aproximarse a una conceptualización de la poesía de argumento o versos de argumentar que se produce en el son jarocho, especialmente en la región de Los Tuxtlas (México). Además, se analizarán sus características poéticas y la forma en que se desarrolla dentro del ritual festivo. Para ello, el estudio se apoyará en el análisis de algunas de estas poesías contenidas en cuadernos de poetas, así como en testimonios orales de estos, ya recogido en libros o en entrevistas, es decir, se analizará la poesía en relación con su contexto. En el dialogismo y en la tópica de esta poesía se encuentran dos elementos fundamentales para entender las dinámicas de tradicionalización e innovación que se producen en estos rituales festivos músico-poéticos, a pesar del componente creativo —improvisado a veces— que depende de los verseros

Abstract:

The main objective of this work is to approach a conceptualization of the poetry of argument or verses of arguing that occurs in the son jarocho, especially in the region of Los Tuxtlas (Mexico). In addition, we will analyze its poetic characteristics and the way it develops within the festive ritual. For this, we will do an analysis of some of these poems contained in poets’ notebooks, as well as of oral testimonies of these poets, whether collected in books or interviews; that is, we will analyze the poetry in relation to its context. In the dialogism and in the topics of this poetry we find two fundamental elements to understand the dynamics of traditionalization and innovation that take place in these poetic-musical festive rituals, despite the creative component —sometimes improvised— that depends on the verseros

Resumo:

O objetivo principal deste trabalho é abordar uma conceituação da poesia de argumento ou versos de argumentação que se produz no son jarocho, especialmente na região de Los Tuxtlas. Além disso, serão analisadas as suas características poéticas e a forma como se desenvolve dentro do ritual festivo. Para tanto, será desenvolvida uma análise tanto de alguns desses poemas contidos em cadernos de poetas, quanto dos depoimentos orais destes, sejam eles coletados em livros ou entrevistas; ou seja, se analisará a poesia em relação ao seu contexto. Encontramos no dialogismo e na temática desta poesia dois elementos fundamentais para compreender a dinâmica de tradicionalização e inovação que se realiza nestes rituais poético-musicais festivos, apesar da componente criativa —por vezes improvisada— que depende dos verseros

Resumen:

La tradición del huapango arribeño que se lleva a cabo en la Sierra Gorda de Querétaro y Guanajuato, y en la Zona Media de San Luis Potosí, se revela en la celebración de rituales festivos de carácter músico-poético, tanto civiles como religiosos, en donde la glosa conjuga la copla y la décima. La tradición se rige bajo la influencia de una serie de normas de carácter consuetudinario y oral, entre las cuales destacan el «reglamento» y el «compromiso». El presente artículo indaga en torno a la naturaleza jurídica, lingüística y literaria de dicha normatividad; su interacción con otras normas de carácter externo a la fiesta; su influencia, tanto en la performance o ritual festivo -especialmente en la topada-, como en la creación poética. A partir de fuentes etnográficas (entrevistas y grabaciones de fiestas) y bibliográficas, el objetivo es dilucidar el papel que juega dicha normatividad en la conservación y transformación de la tradición

Abstract:

The tradition of the huapango arribeño that is performed in the Sierra Gorda of Querétaro and Guanajuato, and in the Zona Media of San Luis Potosí, is reflected in the celebration of festive rituals of a musical-poetic nature, both civil and religious, in which the glosa combines the copla and the décima. The tradition is governed by a series of norms of a customary and oral nature, among which the «reglamento» (rules) and the «compromiso» (commitment) stand out. This article investigates the legal, linguistic and literary nature of these norms; their interaction with other norms external to the festival; and their influence both on the performance or festive ritual -especially the topada- and on poetic creation. Using ethnographic (interviews and recordings of festivals) and bibliographic sources, the aim is to elucidate the role these norms play in the conservation and transformation of the tradition

Resumen:

Desde el punto de vista estético, los escritores Mauricio Magdaleno y Salvador Novo formaron parte de dos corrientes literarias extremas. Las trayectorias de ambos autores gozan de sorprendentes paralelismos tanto biográficos como en relación con el cultivo de diferentes disciplinas literarias. El debate que se suscitó en México en los años veinte y treinta en torno a la identidad y a la nacionalidad provocó un enfrentamiento feroz entre ambos, que, sin embargo, no impidió un progresivo acercamiento propiciado por la participación política. El artículo muestra el difícil equilibrio entre la toma de posicionamientos ideológicos, la responsabilidad generacional ante la construcción de un Estado moderno y la inquietud artística. Un poema inédito de Salvador Novo a Mauricio Magdaleno y una imagen de ambos velando el cuerpo del padre Ángel María Garibay K. demuestran la cercanía que se profesaron al final de sus vidas

Abstract:

From an aesthetic point of view, the writers Mauricio Magdaleno and Salvador Novo belonged to two opposed literary trends. The careers of both authors show striking parallels, both biographical and in their cultivation of different literary disciplines. The debate that arose in Mexico in the twenties and thirties around identity and nationality caused a fierce confrontation between them, which, however, did not prevent a progressive rapprochement favored by political participation. The article shows the difficult balance between taking ideological positions, the generational responsibility for building a modern State, and artistic concerns. An unpublished poem from Salvador Novo to Mauricio Magdaleno and an image of the two keeping vigil over the body of Father Ángel María Garibay K. demonstrate the closeness they professed at the end of their lives

Abstract:

This article examines the ability of recently developed statistical learning procedures, such as random forests or support vector machines, to forecast the first two moments of stock market daily returns. These tools have the advantage of flexible nonlinear regression functions even in the presence of many potential predictors. We consider two cases: where the agent's information set only includes the past of the return series, and where this set includes past values of relevant economic series, such as interest rates, commodity prices or exchange rates. Even though these procedures seem to be of little use for predicting returns, there is real potential for some of them, especially support vector machines, to improve on the out-of-sample forecasting ability of the standard GARCH(1,1) model for squared returns. The researcher has to be cautious about the number of predictors employed and the specific implementation of the procedures, since using many predictors with the default settings of standard computing packages leads to overfitted models and larger standard errors
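
As an illustration of the shape of this exercise (an invented minimal setup, not the paper's design), one can forecast squared returns from their own lags with a support vector regression:

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(1000)    # placeholder return series
sq = returns ** 2                             # squared returns as volatility proxy

lags = 5                                      # deliberately few predictors
X = np.column_stack([sq[i:len(sq) - lags + i] for i in range(lags)])
y = sq[lags:]

split = 800                                   # hold out the end of the sample
svr = SVR(kernel="rbf", C=1.0, epsilon=1e-5).fit(X[:split], y[:split])
forecast = svr.predict(X[split:])             # out-of-sample forecasts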

Abstract:

Participating in regular physical activity (PA) can help people maintain a healthy weight, and it reduces their risks of developing cardiovascular diseases and diabetes. Unfortunately, PA declines during early adolescence, particularly in minority populations. This paper explores design requirements for mobile PA-based games to motivate Hispanic teenagers to exercise. We found that some personality traits are significantly correlated to preference for specific motivational phrases and that personality affects game preference. Our qualitative analysis shows that different body weights affect beliefs about PA and games. Design requirements identified from this study include multi-player capabilities, socializing, appropriate challenge level, and variety

Abstract:

To achieve accurate tracking control of robot manipulators, many schemes have been proposed. Some common approaches are based on robust and adaptive control techniques, with velocity observers employed when needed. Robust techniques have the advantage of requiring little prior information about the robot model parameters/structure or disturbances, while tracking can still be achieved, for instance, by using sliding mode control. In contrast, adaptive techniques guarantee trajectory tracking under the assumption that the robot model structure is perfectly known and linear in the unknown parameters, and that joint velocities are available. In this letter, some experiments are carried out to find out whether combining a robust and an adaptive controller may increase the performance of the system, as long as the adaptive term can be treated as a perturbation by the robust controller. The results are compared with an adaptive robust control law, showing that the proposed combined scheme performs better than the separate algorithms working on their own and than the comparison law

Abstract:

To achieve accurate tracking control of robot manipulators many schemes have been proposed. A common approach is based on adaptive control techniques, which guarantee trajectory tracking under the assumption that the robot model structure is perfectly known and linear in the unknown parameters, while joint velocities are available. Although tracking errors tend to zero, parameter errors do not unless a persistent excitation condition is fulfilled. There are few works dealing with velocity observation in conjunction with adaptive laws. In this note, an adaptive control/observer scheme is proposed for tracking the position of robot manipulators. It is shown that tracking and observation errors are ultimately bounded, with the characteristic that when a persistent excitation condition is met, they, as well as the parameter errors, tend to zero. Simulation results are in good agreement with the developed theory
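
For reference, a standard adaptive tracking law of the kind this literature builds on (the Slotine-Li form; the note's scheme additionally replaces joint velocities with observer estimates) is

\tau = Y(q, \dot q, \dot q_r, \ddot q_r)\,\hat\theta - K_d\, s, \qquad s = \dot{\tilde q} + \Lambda \tilde q, \qquad \dot{\hat\theta} = -\Gamma\, Y^{\top} s,

with tracking error \tilde q = q - q_d, reference velocity \dot q_r = \dot q_d - \Lambda \tilde q, regressor Y of the linear-in-parameters dynamics, and positive definite gains K_d, \Lambda, \Gamma.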

Abstract:

This study shows that morbidity has a mediating effect between intimate partner violence against women and labor productivity in terms of absenteeism and presenteeism. Partial least squares structural equation modeling (PLS-SEM) was used on a nationwide representative sample of 357 female owners of micro-firms in Peru. The resulting data reveal that morbidity is a mediating variable between intimate partner violence against women and absenteeism (β=0.213; p<.001), as well as between intimate partner violence against women and presenteeism (β=0.336; p<.001). This finding allows us to understand how such intimate partner violence against women negatively affects workplace productivity in the context of a micro-enterprise, a key element in many economies across the world
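
In conventional mediation notation (not the paper's exact PLS-SEM specification), the tested structure is

M = aX + e_1, \qquad Y = c'X + bM + e_2,

with X the intimate partner violence measure, M morbidity, Y absenteeism or presenteeism, and the indirect (mediated) effect given by the product ab.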

Abstract:

The purpose of this paper is to determine the prevalence of economic violence against women, specifically in formal sector micro-firms managed by women in Peru, a key Latin American emerging market. Additionally, the authors have identified the demographic characteristics of the micro-firms, financing and credit associated with women who suffer economic violence. Design/methodology/approach. In this study, a structured questionnaire was administered to a representative sample nationwide (357 female micro-entrepreneurs). Findings. The authors found that 22.2 percent of female micro-entrepreneurs have been affected by economic violence at some point in their lives, while at the same time 25 percent of respondents have been forced by their partner to obtain credit against their will. Lower education level, living with one’s partner, having children, business location in the home, lower income, not having access to credit, not applying credit to working capital needs, late payments and being forced to obtain credit against one’s will were all factors associated with economic violence. Furthermore, the results showed a significant correlation between suffering economic violence and being a victim of other types of violence (including psychological, physical or sexual); the highest correlation was with serious physical violence (r=0.523, p<0.01)

Abstract:

The Fourier method approach to the Neumann problem for the Laplacian operator in the case of a solid torus contrasts in many respects with the much more straightforward situation of a ball in 3-space. Although the Dirichlet-to-Neumann map can be readily expressed in terms of series expansions with toroidal harmonics, we show that the resulting equations contain undetermined parameters which cannot be calculated algebraically. A method for rapidly computing numerical solutions of the Neumann problem is presented with numerical illustrations. The results for interior and exterior domains combine to provide a solution for the Neumann problem for the case of a shell between two tori
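
For reference, the interior Neumann boundary value problem in question is the standard one,

\Delta u = 0 \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = g \ \text{on } \partial\Omega, \qquad \int_{\partial\Omega} g \, \mathrm{d}S = 0,

where \Omega is the solid torus; the compatibility condition on g is necessary, and the solution is unique up to an additive constant.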

Abstract:

The paper addresses the issues raised by the simultaneity between the supply function and the domestic and foreign demand for exportables, analysing the microeconomic foundations of the simultaneous price and output decisions of a firm which operates in the exportables sector of an open economy facing a domestic and a foreign demand for its output. A specific characteristic of the model is that it allows for the possibility of price discrimination, which is suggested by the observed divergences in the behaviour of domestic and export prices. The framework developed is used to investigate the recent behaviour of prices and output in two industries of the German manufacturing sector

Abstract:

Measuring says that for every sequence (Cδ)δ<ω1 with each Cδ being a closed subset of δ there is a club C ⊆ ω1 such that for every δ ∈ C, a tail of C ∩ δ is either contained in or disjoint from Cδ. We answer a question of Justin Moore by building a forcing extension satisfying measuring together with 2^ℵ0 > ℵ2. The construction works over any model of ZFC + CH and can be described as a finite support forcing iteration with systems of countable structures as side conditions and with symmetry constraints imposed on its initial segments. One interesting feature of this iteration is that it adds dominating functions f : ω1 → ω1 mod. countable at each stage

Abstract:

We separate various weak forms of Club Guessing at ω1 in the presence of 2^ℵ0 large, Martin's Axiom, and related forcing axioms. We also answer a question of Abraham and Cummings concerning the consistency of the failure of a certain polychromatic Ramsey statement together with the continuum large. All these models are generic extensions via finite support iterations with symmetric systems of structures as side conditions, possibly enhanced with ω-sequences of predicates, and in which the iterands are taken from a relatively small class of forcing notions. We also prove that the natural forcing for adding a large symmetric system of structures (the first member in all our iterations) adds ℵ1-many reals but preserves CH

Resumen:

La estructura de À la recherche du temps perdu se explica a partir de la experiencia psíquica del tiempo del protagonista de la novela, según la concepción proustiana de la temporalidad a la luz del pensamiento de San Agustín, Henri Bergson y Edmund Husserl

Abstract:

The structure of À la recherche du temps perdu is explained through the psychic experience of time of the protagonist of the novel, according to the Proustian conception of temporality in light of the thought of Saint Augustine, Henri Bergson and Edmund Husserl

Resumen:

La actividad onírica ocupa un lugar preponderante en la antropología filosófica de María Zambrano. Para la pensadora andaluza, soñar es una facultad cognitiva de la que depende, en último análisis, el desarrollo anímico y la salud emocional de la persona humana. Esta nota muestra la actualidad del pensamiento zambraniano sobre los sueños a la luz de algunas tesis de la neurociencia contemporánea

Abstract:

Oneiric activity occupies a preponderant place in the philosophical anthropology of María Zambrano. For the Andalusian thinker, dreaming is a cognitive faculty on which, in the last analysis, the psychic development and emotional health of the human person depend. This note shows the current relevance of Zambrano's thought on dreams in the light of some theses of contemporary neuroscience

Resumen:

Utilizando como motivo conductor la idea de que la biblioteca de un académico refleja su ethos y su cosmovisión, este texto epidíctico celebra la trayectoria intelectual de Nora Pasternac, profesora del Departamento Académico de Lenguas del ITAM

Abstract:

Using as a driving motif the idea that an academic’s library reflects his ethos and worldview, this epideictic text celebrates the intellectual trajectory of Nora Pasternac, professor in the Academic Department of Languages at ITAM

Resumen:

En el texto se comentan algunos pasajes de tres novelas y un cuento de Ignacio Padilla a la luz de la Monadología, de G. W. Leibniz, y del Manuscrito encontrado en Zaragoza, de Jan Potocki, con el propósito de mostrar el uso de la construcción en abismo como procedimiento narrativo en la obra de este escritor mexicano

Abstract:

The text discusses some passages of three novels and a story by Ignacio Padilla in light of Monadologie by G.W. Leibniz and the Manuscript Found in Saragossa by Jan Potocki, with the purpose of showing the use of mise en abyme as a narrative technique in the work of this Mexican writer

Resumen:

Utilizando como principio explicativo la noción de distancia fenomenológica, en el ensayo se exponen las formas de relación entre lengua fuente y lengua meta, y entre texto fuente y texto meta, en la teoría de la traducción de Walter Benjamin

Abstract:

Using the notion of phenomenological distance as an explanatory principle, we will explore in this article the relationship between source language and target language and between source text and target text in Walter Benjamin's translation theory

Resumen:

En la antropología filosófica de María Zambrano, dormir y despertar no son meros actos empíricos del vivir cotidiano, sino operaciones egológicas formales involucradas en el proceso de autoconstitución del sujeto. En este artículo se describen poniendo en diálogo el libro zambraniano Los sueños y el tiempo con la noción de ipseidad, tal como la entienden Paul Ricoeur, Edmund Husserl, Hans Blumenberg y Michel Henry

Abstract:

In María Zambrano's philosophical anthropology, sleeping and awakening are not mere empirical acts of daily life, but formal egological operations involved in the self-constitution of the subject. In this article, we discuss Zambrano's book Los sueños y el tiempo (Dreams and Time) in dialogue with the notion of selfhood as understood by Paul Ricoeur, Edmund Husserl, Hans Blumenberg, and Michel Henry

Abstract:

The homotopy classification problem for complete intersections is settled when the complex dimension is larger than the total degree

Abstract:

A rigidity theorem is proved for principal Eschenburg spaces of positive sectional curvature. It is shown that for a very large class of such spaces the homotopy type determines the diffeomorphism type

Abstract:

We address the problem of parallelizability and stable parallelizability of a family of manifolds that are obtained as quotients of circle actions on complex Stiefel manifolds. We settle the question in all cases but one, and obtain in the remaining case a partial result

Abstract:

The cohomology algebra mod p of the complex projective Stiefel manifolds is determined for all primes p. When p = 2 we also determine the action of the Steenrod algebra and apply this to the problem of existence of trivial subbundles of multiples of the canonical line bundle over a lens space with 2-torsion, obtaining optimal results in many cases

Abstract:

The machinery of M. Kreck and S. Stolz is used to obtain a homeomorphism and diffeomorphism classification of a family of Eschenburg spaces. In contrast with the family of Wallach spaces studied by Kreck and Stolz, we obtain abundant examples of homeomorphic but not diffeomorphic Eschenburg spaces. The problem of stable parallelizability of Eschenburg spaces is discussed in an appendix

Abstract:

In this paper, we introduce the notion of a linked domain and prove that a non-manipulable social choice function defined on such a domain must be dictatorial. This result not only generalizes the Gibbard-Satterthwaite Theorem but also demonstrates that the equivalence between dictatorship and non-manipulability is far more robust than suggested by that theorem. We provide an application of this result in a particular model of voting. We also provide a necessary condition for a domain to be dictatorial and use it to characterize dictatorial domains in the cases where the number of alternatives is three

Abstract:

We study entry and bidding patterns in sealed bid and open auctions. Using data from the U.S. Forest Service timber auctions, we document a set of systematic effects: sealed bid auctions attract more small bidders, shift the allocation toward these bidders, and can also generate higher revenue. A private value auction model with endogenous participation can account for these qualitative effects of auction format. We estimate the model's parameters and show that it can explain the quantitative effects as well. We then use the model to assess bidder competitiveness, which has important consequences for auction design

Abstract:

The role of domestic courts in the application of international law is one of the most vividly debated issues in contemporary international legal doctrine. However, the methodology of interpretation of international norms used by these courts remains underexplored. In particular, the application of the Vienna rules of treaty interpretation by domestic courts has not been sufficiently assessed so far. Three case studies (from the US Supreme Court, the Mexican Supreme Court, and the European Court of Justice) show the diversity of approaches in this respect. In the light of these case studies, the article explores the inevitable tensions between two opposite, yet equally legitimate, normative expectations: the desirability of a common, predictable methodology versus the need for flexibility in adapting international norms to a plurality of domestic environments

Abstract:

Christensen, Baumann, Ruggles, and Sadtler (2006) proposed that organizations addressing social problems may use catalytic innovation as a strategy to create social change. These innovations aim to create scalable, sustainable, and systems-changing solutions. This empirical study examines: (a) whether catalytic innovation applies to Mexican social entrepreneurship; (b) whether those who adopt Christensen et al.’s (2006) strategy generate more social impact; and (c) whether they demonstrate economic success. We performed a survey of 219 Mexican social entrepreneurs and found that catalytic innovation does occur within social entrepreneurship, and that those social entrepreneurs who use catalytic innovations not only maximize their social impact but also maximize their profits, and that they do so with diminishing returns to scale

Abstract:

We study pattern formation in a 2D reaction-diffusion (RD) subcellular model characterizing the effect of a spatial gradient of a plant hormone distribution on a family of G-proteins associated with root hair (RH) initiation in the plant cell Arabidopsis thaliana. The activation of these G-proteins, known as the Rho of Plants (ROPs), by the plant hormone auxin is known to promote certain protuberances on RH cells, which are crucial for both anchorage and the uptake of nutrients from the soil. Our mathematical model for the activation of ROPs by the auxin gradient is an extension of the model of Payne and Grierson [PLoS ONE, 4 (2009), e8337] and consists of a two-component Schnakenberg-type RD system with spatially heterogeneous coefficients on a 2D domain. The nonlinear kinetics in this RD system model the nonlinear interactions between the active and inactive forms of ROPs. By using a singular perturbation analysis to study 2D localized spatial patterns of active ROPs, it is shown that the spatial variations in the nonlinear reaction kinetics, due to the auxin gradient, lead to a slow spatial alignment of the localized regions of active ROPs along the longitudinal midline of the plant cell. Numerical bifurcation analysis together with time-dependent numerical simulations of the RD system are used to illustrate both 2D localized patterns in the model and the spatial alignment of localized structures
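
For orientation, the canonical Schnakenberg kinetics that this model class extends can be sketched as below; the paper's actual system makes the kinetic coefficients spatially dependent through the auxin gradient, so the constants here are placeholders rather than the authors' equations.

```latex
% Canonical Schnakenberg-type RD skeleton (illustrative; in the paper the
% coefficients vary in space through the auxin gradient).
\begin{aligned}
u_t &= D_u \Delta u + a - u + u^2 v, \\
v_t &= D_v \Delta v + b - u^2 v, \qquad (x,y) \in \Omega \subset \mathbb{R}^2,
\end{aligned}
```

with u and v playing the roles of the two interacting species (here, the active and inactive forms of the ROPs).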

Abstract:

We aimed to make a theoretical contribution to the happy-productive worker thesis by expanding the study to cases where this thesis does not fit. We hypothesized and corroborated the existence of four relations between job satisfaction and innovative performance: (a) unhappy-unproductive, (b) unhappy-productive, (c) happy-unproductive, and (d) happy-productive. We also aimed to contribute to the happy-productive worker thesis by studying some conditions that influence and differentiate among the four patterns. Hypotheses were tested in a sample of 513 young employees representative of Spain. Cluster analysis and discriminant analysis were performed. We identified the four patterns. Almost 15% of the employees exhibited a pattern largely ignored by previous studies (the unhappy-productive pattern). As hypothesized, to promote well-being and performance among young employees, it is necessary to fulfill the psychological contract, encourage initiative, and promote job self-efficacy. We also confirmed that over-qualification characterizes the unhappy-productive pattern, but we failed to confirm that high job self-efficacy characterizes the happy-productive pattern. The results show the relevance of personal and organizational factors in studying the well-being-performance link in young employees

Abstract:

Conventional wisdom suggests that promising free information to an agent would crowd out costly information acquisition. We theoretically demonstrate that this intuition only holds as a knife-edge case in which priors are symmetric. Indeed, when priors are asymmetric, a promise of free information in the future induces agents to increase information acquisition. In the lab, we test whether such crowding out occurs for both symmetric and asymmetric priors. Our results are qualitatively in line with the predictions: When priors are asymmetric, the promise of future free information induces subjects to acquire more costly information

Abstract:

Region-of-Interest (ROI) tomography aims at reconstructing a region of interest C inside a body using only x-ray projections intersecting C, and it is useful for reducing overall radiation exposure when only a small specific region of a body needs to be examined. We consider x-ray acquisition from sources located on a smooth curve Γ in R^3 verifying the classical Tuy condition. In this generic situation, the non-truncated cone-beam transform of smooth density functions f admits an explicit inverse Z, as originally shown by Grangeat. However, Z cannot directly reconstruct f from ROI-truncated projections. To deal with the ROI tomography problem, we introduce a novel reconstruction approach. For densities f in L∞(B), where B is a bounded ball in R^3, our method iterates an operator U combining ROI-truncated projections, inversion by the operator Z, and appropriate regularization operators. Assuming only knowledge of projections corresponding to a spherical ROI C ⊂ B, given ɛ > 0, we prove that if C is sufficiently large our iterative reconstruction algorithm converges at exponential speed to an ɛ-accurate approximation of f in L∞. The accuracy depends on the regularity of f quantified by its Sobolev norm in W^5(B). Our result guarantees the existence of a critical ROI radius ensuring the convergence of our ROI reconstruction algorithm to an ɛ-accurate approximation of f. We have numerically verified these theoretical results using simulated acquisition of ROI-truncated cone-beam projection data for multiple acquisition geometries. Numerical experiments indicate that the critical ROI radius is fairly small with respect to the support region B
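
One schematic way to read the iteration and its exponential convergence claim, with purely illustrative constants C and ρ (the operator U and the norms are those described above):

```latex
% Illustrative rendering of the fixed-point iteration and its error decay;
% the constants C and rho are assumptions, not taken from the paper.
f_{k+1} = U f_k, \qquad
\lVert f_k - f \rVert_{L^\infty} \;\le\; C \, \rho^{k} \, \lVert f \rVert_{W^5(B)} + \varepsilon,
\qquad 0 < \rho < 1 .
```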

Abstract:

The Free Trade Agreement between the European Union and Mexico (EUMFTA) entered into force in 2000, becoming one of the most important transatlantic trade agreements. The goal of this research is to analyze the results of the agreement for trade between the partner countries a decade on, and to identify its main economic determinants. A gravity model is estimated for a sample of 60 countries over the period 1994-2011. The results indicate that the agreement has given rise to an increase in the bilateral trade flows between these two commercial partners

Abstract:

We study an at-scale natural experiment in which debit cards were given to cash transfer recipients who already had a bank account. Using administrative account data and household surveys, we find that beneficiaries accumulated a savings stock equal to 2% of annual income after two years with the card. The increase in formal savings represents an increase in overall savings, financed by a reduction in current consumption. There are two mechanisms. First, debit cards reduce transaction costs of accessing money. Second, they reduce monitoring costs, which led beneficiaries to check their account balances frequently and build trust in the bank

Abstract:

Transaction costs are a significant barrier to the take-up and use of formal financial services. Account opening fees and minimum balance requirements prevent the poor from opening bank accounts (Dupas and Robinson 2013), and small subsidies can lead to large increases in take-up (Cole, Sampson, and Zia 2011). Indirect transaction costs, such as travel time, are also a barrier: the distance to the nearest bank or mobile money agent is a key predictor of take-up of savings accounts (Dupas et al. forthcoming) and mobile money (Jack and Suri 2014). In turn, increased access to financial services can reduce poverty and increase welfare (Burgess and Pande 2005; Suri and Jack 2016). Digital financial services, such as ATMs, debit cards, mobile money, and digital credit, have the potential to reduce transaction costs. However, existing studies rarely measure indirect transaction costs. We provide evidence on how a specific technology, a debit card, lowers indirect transaction costs by reducing travel distance and foregone activities. We study a natural experiment in which debit cards tied to existing savings accounts were rolled out geographically over time to beneficiaries of the Mexican cash transfer program Oportunidades. Prior to receiving debit cards, beneficiaries received transfers directly into a savings account every two months. After receiving cards, beneficiaries continue to receive their benefits in the savings account, but can access their transfers and savings at any bank's ATM. They can also check their balances at any bank's ATM or use the card to make purchases at point-of-sale terminals. We find that debit cards reduce the median road distance to access the account from 4.8 to 1.3 kilometers (km). As a result, the proportion of beneficiaries who walk to withdraw the transfer payments increases by 59 percent. Furthermore, prior to receiving debit cards, 84 percent of beneficiari

Abstract:

We study a natural experiment in which debit cards are rolled out to beneficiaries of a cash transfer program, who already received transfers directly deposited into a savings account. Using administrative account data and household surveys, we find that before receiving debit cards, few beneficiaries used the accounts to make more than one withdrawal per period, or to save. With cards, beneficiaries increase their number of withdrawals and check their balances frequently; the number of checks decreases over time as their reported trust in the bank and savings increase. Their overall savings rate increases by 3–4 percent of household income

Abstract:

Two career-concerned experts sequentially give advice to a Bayesian decision maker (D). We find that secrecy dominates transparency, yielding superior decisions for D. Secrecy empowers the expert who moves later to be pivotal more often. Further, (i) only secrecy enables the second expert to partially communicate her information and its high precision to D and swing the decision away from the first expert's recommendation; (ii) if experts have high average precision, then the second expert is effective only under secrecy. These results are obtained when experts only recommend decisions. If they also report the quality of advice, a fully revealing equilibrium may exist

Abstract:

In recent years, more and more countries have included different kinds of gender considerations in their trade agreements. Yet many countries have still not signed their very first agreement with a gender equality-related provision. Though most of the agreements negotiated by countries in the Asia-Pacific region have not explicitly accommodated gender concerns, a limited number of trade agreements signed by countries in the region have presented a distinct approach: the nature of provisions, drafting style, location in the agreements, and topic coverage of such provisions contrast with the gender-mainstreaming approach employed by the Americas or other regions. This chapter provides a comprehensive account and assessment of gender-related provisions included in the existing trade agreements negotiated by countries in the Asia-Pacific, explains the extent to which gender concerns are mainstreamed in these agreements, and summarizes the factors that impede such mainstreaming efforts in the region

Abstract:

The most common provisions we find in almost all multilateral, regional, and bilateral trade agreements are the exception clauses that allow countries to protect public morals, to protect human, animal, or plant life and health, and to conserve exhaustible natural resources. If countries can allow trade-restrictive measures that aim to protect these non-economic interests, is it possible to negotiate a specific exception to justify measures that are aimed at protecting women's economic interests as well? Is the removal of barriers that impede women's participation in trade any less important than the conservation of exhaustible natural resources such as sea turtles or dolphins? In that context, this article prepares a case for the inclusion of a specific exception that can allow countries to leverage women's economic empowerment through international trade agreements. This is done after carrying out an objective assessment of whether a respondent could seek protection under the existing public morality exception to justify a measure taken to protect women's economic interests

Abstract:

Mexico has by far the world's highest death rate linked to obesity and other chronic diseases. As a response to the growing pandemic of obesity, Mexico has adopted a new compulsory front-of-pack labeling regulation for pre-packaged foods and nonalcoholic beverages. This article provides an assessment of the regulation's consistency with international trade law and the arguments that might be invoked by either side in a hypothetical trade dispute on this matter

Abstract:

In the past few months, we have witnessed the 'worst deal' in the history of the USA become the 'best deal' in the history of the USA. The negotiation leading to the United States-Mexico-Canada Agreement (USMCA) appeared as an 'asymmetrical exchange' scenario that could have led to an unbalanced outcome for Mexico. However, Mexico stood firm on its positions and negotiated a modernized version of North American Free Trade Agreement. Mexico faced various challenges during this renegotiation, not only because it was required to negotiate with two developed countries but also due to the high level of ambition and demands raised by the new US administration. This paper provides an account of these impediments. More importantly, it analyzes the strategies that Mexico used to overcome the resource constraints it faced amidst the unpredictable political dilemma in the US and at home. In this manner, this paper seeks to provide a blueprint of strategies that other developing countries could employ to overcome their negotiation capacity constraints, especially when they are dealing with developed countries and in uncertain political environments

Abstract:

Health pandemics affect women and men differently, and they can make the existing gender inequalities much worse. COVID-19 is one such pandemic, which can have substantial gendered implications both during and in the post-pandemic world. Its economic and social consequences could deepen the existing gender inequalities and roll back the limited gains made in respect of women empowerment in the past few decades. The impending global recession, multiple trade restrictions, economic lockdown, and social distancing measures can expose vulnerabilities in social, political, and economic systems, which, in turn, could have a profound impact on women’s participation in trade and commerce. The article outlines five main reasons that explain why this health pandemic has put women employees, entrepreneurs, and consumers at the frontline of the struggle. It then explores how free trade agreements can contribute in repairing the harm in the post-pandemic world. In doing so, the author sheds light on various ways in which the existing trade agreements embrace gender equality considerations and how they can be better prepared to help minimize the pandemic-inflicted economic loss to women

Abstract:

The World Trade Organization (WTO) Dispute Settlement System (DSS) is in peril. The Appellate Body (AB) is being held 'hostage' by the very architect and most frequent user of the WTO DSS, the United States of America. This will bring the whole DSS to a standstill, as the inability of the AB to review appeals will have a kill-off effect on the binding value of Panel rulings. If the most celebrated DSS collapses, the members will not be able to enforce their WTO rights. WTO-inconsistent practices and violations would increase and remain unchallenged. Rights without remedies would soon lose their charm, and we might witness a greater and faster drift away from multilateral trade regulation. This is a grave situation. This piece is an academic attempt to analyse and defuse the key points of criticism against the AB. A comprehensive assessment of the reasons behind this criticism could be a starting point for resolving this gridlock. The first part of this Article investigates the reasons and motivations of the US behind these actions, as we cannot address the problems without understanding them in a comprehensive manner. The second part looks at this issue from a systemic angle, as it seeks to address the debate on whether the WTO resembles common or civil law, since most of the criticism directed towards judicial activism and overreach is 'much ado about nothing'. The concluding part of this piece briefly looks at the proposals already made by scholars to resolve this deadlock, and it leaves the readers with a fresh proposal to deliberate upon

Abstract:

In recent years, we have witnessed a sharp increase in the number of free trade agreements (FTAs) with gender-related provisions. The key champions of this evolution include Canada, Chile, New Zealand, Australia, and Uruguay. These countries have proposed a new paradigm, i.e., a paradigm in which FTAs are considered vehicles for achieving the economic empowerment of women. This trend is spreading like wildfire to other parts of the world. More and more countries are expressing their interest in ensuring that their FTAs are gender-responsive and not simply gender-neutral or gender-blind in nature. The momentum is building, and we can expect many more agreements in the future to include stand-alone chapters or exclusive provisions on gender issues. This article is an attempt to tap into this ongoing momentum, as it puts forward a newly designed self-evaluation maturity framework to measure the gender-responsiveness of trade agreements. The proposed framework is to help policy-makers and negotiators to: (1) measure the gender-responsiveness of trade agreements; (2) identify areas where agreements need critical improvements; and (3) receive recommendations to improve the gender fabric of trade agreements that they are negotiating or have already negotiated. This is the first academic intervention presenting this type of gender-responsiveness model for trade agreements

Abstract:

Purpose - The World Trade Organisation grants rights to its members, and the WTO Dispute Settlement Understanding (DSU) provides a rule-oriented consultative and judicial mechanism to protect these rights in cases of WTO-incompatible trade infringements. However, the benefits of DSU participation come at a cost. These costs are acutely formidable for least developed countries (LDCs), which have small market size and trading stakes. No LDC has ever filed a WTO complaint, with the sole exception of the India-Battery dispute filed by Bangladesh against India. This paper aims to look at the experience of how Bangladesh – so far the only LDC member to have filed a formal WTO complaint – persuaded India to withdraw the anti-dumping duties India had imposed on imports of acid batteries from Bangladesh. Design/methodology/approach - The investigation is grounded on practically informed findings gathered through the authors' work experience and several semi-structured interviews and discussions which the authors conducted with government representatives from Bangladesh, government and industry representatives from other developing countries, trade lawyers and officials based in Geneva and Brussels, and civil society organisations. Findings - The discussion provides a sound indication of the participation impediments that LDCs can face at the WTO DSU and the ways in which such challenges can be overcome with the help of resources available at the domestic level. It also exemplifies how domestic laws and practices can respond to international legal instruments and affect the performance of an LDC at an international adjudicatory forum. Originality/value - Except for one book chapter and a working paper, there is no literature available on this matter. This investigation is grounded on practically informed findings gathered with the help of original empirical research conducted by the authors

Abstract:

Mexico has employed special methodologies for price determination and the calculation of dumping margins against Chinese imports in almost all anti-dumping investigations. This chapter attempts to explain and analyze the NME-specific procedures employed by Mexican authorities in anti-dumping proceedings against China. It also clarifies the Mexican standpoint on the controversial issue of how the expiry of Section 15(a)(ii) of China's Accession Protocol to the WTO affects the surviving parts of Section 15 of the Protocol, and whether Mexico has changed its treatment of Chinese imports following the expiry of Section 15(a)(ii) after 12 December 2016

Abstract:

Multiple scholarly works have argued that developing country members of the World Trade Organization (WTO) should enhance their dispute settlement capacity to successfully and cost-effectively navigate the system of the WTO Dispute Settlement Understanding (DSU). It is one thing to be a party to WTO agreements and know the WTO rules, and another to know how to use and take advantage of those agreements and rules in practice. The present investigation conducts a detailed examination of the latter, with a specific focus on critically examining public private partnership (PPP) strategies that can enable developing countries to effectively utilize the provisions of the WTO DSU. To achieve this purpose, the article examines how Brazil, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries in Brazil may prompt other developing countries to determine their individual approach towards PPP for the handling of WTO disputes

Abstract:

The World Trade Organisation Dispute Settlement Understanding (WTO DSU) is a two-tier mechanism. The first tier is international adjudication and the second tier is the domestic handling of trade disputes. Both tiers are interdependent and interconnected. A case that is poorly handled at the domestic level generally stands a relatively lower chance of success at the international level, and hence the future of WTO litigation is partially predetermined by the manner in which it is handled at the domestic level. Moreover, most of the capacity-related challenges faced by developing countries at the WTO DSU are deeply rooted in the domestic context of these countries, and their solutions can best be found at the domestic level. The present empirical investigation explores a domestic solution to the capacity-related challenges faced mainly by developing countries, as it examines the model of public private partnership (PPP). In particular, the article examines how India, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries, along with an analysis of the challenges and potential limitations that such partnerships have faced in India, may prompt other developing countries to review or revise their individual approach towards the future handling of WTO disputes

Abstract:

With the advent of globalization and industrialization, the significance of the WTO DSU as an international institution of trade dispute governance has expanded tremendously, as a landmark achievement of the Uruguay Round negotiations. Given that the 'pendulum' of the DSU appears tilted towards developed economies, much to the disadvantage of the developing world, it becomes imperative to devise a strategy within the existing framework to rebalance it. Since the WTO is recognized as an area of public international law and is extending its roots into the sphere of private international law, the approach of public private partnership, among the other approaches suggested by various experts, can be designed to help developing countries overcome their challenges in using the WTO DSU. This study explores ways in which this partnership can be devised and implemented in the context of developing countries, and analyzes the limits developing countries face in implementing this strategy

Abstract:

Typical responses to the perceived escalation of violent crime throughout most of Latin America are to increase the size and powers of the regular police and, in most cases, to expand the involvement of the armed forces in confronting both common and organized crime. Participation by the armed forces in domestic policing, in turn, has sparked debates in several countries about the serious risks incurred, especially with respect to human rights violations. In Mexico the debate is sharpened by the extensive violence linked to conflicts among drug-trafficking organizations and between these and the government's security forces, in which the Army and Navy have played leading roles. Using World Values Survey and Americas Barometer data, we examine trends in public confidence in the police, justice system, and armed forces in Mexico over 1990-2010. Using Vanderbilt University's 2010 LAPOP survey we compare levels of trust in various social, political, and government actors, locating Mexico in the broader Latin American context. Here we ask: Is public support for using the military as police widespread and generalized across the sample? Or are there patterns of support and opposition with respect to public opinion? Our main findings are that: 1) the armed forces rank at the top regarding trust, and, while trust in other Mexican institutions tended to decline in 2008-2010, trust in the military increased slightly; 2) respondents indicate that the military respects human rights more than the average and substantially more than the police or government generally; 3) public support for the military in fighting crime is strong and distributed evenly across the ideological spectrum and across socio-demographic groups; and 4) patterns of support emerge more clearly with respect to perceptions, attitudes, and performance judgments. By way of conclusion we consider some of the political and policy implications of our findings

Abstract:

We study a class of boundedly rational choice functions which operate as follows. The decision maker uses two criteria in two stages to make a choice. First, she shortlists the top two alternatives, i.e. two finalists, according to one criterion. Next, she chooses the winner in this binary shortlist using the second criterion. The criteria are linear orders that rank the alternatives. Only the winner is observable. We study the behavior exhibited by this choice procedure and provide an axiomatic characterization of it. We leave as an open question the characterization of a generalization to larger shortlists
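
A minimal sketch of this two-stage procedure, assuming strict rankings given as score functions; the names and representation are illustrative, not the authors':

```python
# Illustrative sketch of the two-stage shortlist procedure: shortlist the
# top two alternatives by one criterion, then let a second criterion pick
# the winner of that pair. Score functions stand in for linear orders.

def choose(alternatives, criterion1, criterion2):
    # Stage 1: shortlist the two finalists according to criterion1.
    finalists = sorted(alternatives, key=criterion1, reverse=True)[:2]
    # Stage 2: criterion2 selects the winner within the binary shortlist.
    return max(finalists, key=criterion2)

# Example with three alternatives ranked differently by the two criteria.
rank1 = {"a": 3, "b": 2, "c": 1}.get   # a > b > c
rank2 = {"b": 3, "c": 2, "a": 1}.get   # b > c > a
assert choose(["a", "b", "c"], rank1, rank2) == "b"  # finalists {a, b}
```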

Abstract:

In this study, we analyzed students' understanding of a complex calculus graphing problem. Students were asked to sketch the graph of a function, given its analytic properties (first and second derivatives, limits, and continuity) on specific intervals of the domain. The triad of schema development in the context of APOS theory was utilized to study students' responses. Two dimensions of understanding emerged, one involving properties and the other involving intervals. A student's coordination of the two dimensions is referred to as that student's overall calculus graphing schema. Additionally, a number of conceptual problems were consistently demonstrated by students throughout the study, and these difficulties are discussed in some detail

Abstract:

Array-based comparative genomic hybridization (aCGH) is a high-resolution, high-throughput technique for studying the genetic basis of cancer. The resulting data consist of log fluorescence ratios as a function of the genomic DNA location and provide a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimating the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. We propose a hierarchical Bayesian random segmentation approach for modeling aCGH data that uses information across arrays from a common population to yield segments of shared copy number changes. These changes characterize the underlying population and allow us to compare different population aCGH profiles to assess which regions of the genome have differential alterations. Our method, which we term Bayesian detection of shared aberrations in aCGH (BDSAScgh), is based on a unified Bayesian hierarchical model that allows us to obtain probabilities of alteration states as well as probabilities of differential alterations that correspond to local false discovery rates for both single and multiple groups. We evaluate the operating characteristics of our method via simulations and an application using a lung cancer aCGH data set. This article has supplementary material online

Abstract:

The quadratic and linear cash flow dispersion measures M² and Ñ are two immunization risk measures designed to build immunized bond portfolios. This paper generalizes these two measures by showing that any dispersion measure is an immunization risk measure and, therefore, it sets up a tool to be used in empirical testing. Each new measure is derived from a different set of shocks (changes in the term structure of interest rates) and depends on the corresponding subset of worst shocks. Consequently, a criterion for choosing appropriate immunization risk measures is to take those developed from the most reasonable sets of shocks and the associated subset of worst shocks, and then select those that work best empirically. Adopting this approach, this paper then explores both numerical examples and a short empirical study on the Spanish bond market in the mid-1990s to show that measures between linear and quadratic are the most appropriate and that, among them, the linear measure has the best properties. This confirms previous studies on the US and Canadian markets showing that maturity-constrained, duration-matched portfolios also have good empirical behavior
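
For reference, textbook forms of a quadratic and a linear cash-flow dispersion measure, in notation assumed here rather than taken from the paper (w_t is the present-value weight of the cash flow paid at time t, and H is the investment horizon):

```latex
% Standard dispersion measures around the horizon H (notation assumed).
M^2 = \sum_{t} (t - H)^2 \, w_t,
\qquad
\tilde{N} = \sum_{t} \lvert t - H \rvert \, w_t .
```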

Abstract:

This paper presents a condition equivalent to the existence of a Riskless Shadow Asset that guarantees a minimum return when the asset prices are convex functions of interest rates or other state variables. We apply this lemma to immunize default-free and option-free coupon bonds and reach three main conclusions. First, we give a solution to an old puzzle: why do simple duration matching portfolios work well in empirical studies of immunization even though they are derived in a model inconsistent with equilibrium and shifts on the term structure of interest rates are not parallel, as assumed? Second, we establish a clear distinction between the concepts of immunized and maxmin portfolios. Third, we develop a framework that includes the main results of this literature as special cases. Next, we present a new strategy of immunization that consists in matching duration and minimizing a new linear dispersion measure of immunization risk

Abstract:

Given modal logics L1 and L2, their lexicographic product L1 x L2 is a new logic whose frames are the Cartesian products of an L1-frame and an L2-frame, but with the new accessibility relations reminiscent of a lexicographic ordering. This article considers the lexicographic products of several modal logics with linear temporal logic (LTL) based on "next" and "always in the future". We provide axiomatizations for logics of the form L x LTL and define cover-simple classes of frames; we then prove that, under fairly general conditions, our axiomatizations are sound and complete whenever the class of L-frames is cover-simple. Finally, we prove completeness for several concrete logics of the form L x LTL
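
For intuition, the classical lexicographic recipe that the product's accessibility relations echo reads as follows; the relations actually used on L1 x L2-frames differ in detail:

```latex
% Classical lexicographic combination of two relations (illustrative).
(x_1, x_2)\, R \,(y_1, y_2)
\iff
x_1 \, R_1 \, y_1
\;\text{ or }\;
\bigl( x_1 = y_1 \ \text{and}\ x_2 \, R_2 \, y_2 \bigr).
```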

Abstract:

Classic institutionalism claims that even authoritarian and non-democratic regimes would prefer institutions where all members can make advantageous transactions. Thus, structural reform geared towards preventing and combating corruption should be largely preferred by all actors in any given setting. The puzzle, then, is why governments decide to maintain, or even create, inefficient institutions. A perfect example of this paradox is the establishment of the National Anti-corruption System (SNA) in Mexico. This is a watchdog institution, created to fight corruption, which is itself often portrayed as highly corrupt and inefficient. The limited scope of anti-corruption reforms in the country is explained by the institutional setting in which these reforms take place, where political behaviour is strongly shaped by embedded institutions that privilege centralized decision-making. Mexican reformers have historically favoured those reforms that increase their gains and power, and delayed or boycotted those that negatively affect them. Since anti-corruption reforms adversely affected rent extraction and diminished the power of a set of political actors, the bureaucrats who benefited from the existing institutional setting embraced limited reforms or even boycotted them. Thus, to understand failed reforms it is necessary to understand the deep-rooted political institutions that shape the behaviour of political actors. This analysis is relevant for other modern democracies, where powerful bureaucratic minorities are often able to block changes that would be costly to their interests, even if the changes would increase net gains for the country as a whole

Abstract:

In this paper we study the problem of Hamiltonization of nonholonomic systems from a geometric point of view. We use gauge transformations by 2-forms (in the sense of Ševera and Weinstein in Progr Theoret Phys Suppl 144:145–154 2001) to construct different almost Poisson structures describing the same nonholonomic system. In the presence of symmetries, we observe that these almost Poisson structures, although gauge related, may have fundamentally different properties after reduction, and that brackets that Hamiltonize the problem may be found within this family. We illustrate this framework with the example of rigid bodies with generalized rolling constraints, including the Chaplygin sphere rolling problem. We also see through these examples how twisted Poisson brackets appear naturally in nonholonomic mechanics

Abstract:

We propose a novel method for reliably inducing stress in drivers for the purpose of generating real-world participant data for machine learning, using both scripted in-vehicle stressor events and unscripted on-road stressors such as pedestrians and construction zones. On-road drives took place in a vehicle outfitted with an experimental display that led drivers to believe they had prematurely run out of charge on an isolated road. We describe the elicitation method, course design, instrumentation, data collection procedure, and post hoc labeling of unplanned road events to illustrate how rich data about a variety of stress-related events can be elicited from study participants on-road. We validate this method with data including psychophysiological measurements, video, voice, and GPS data from (N=20) participants. Results from algorithmic psychophysiological stress analysis were validated using participant self-reports. Results of the stress elicitation analysis show that our method elicited a stress state in 89% of participants

Abstract:

Do economic incentives explain forced displacement during conflict? This paper examines this question in Colombia, which has had one of the world's most acute situations of internal displacement associated with conflict. Using data on the price of bananas along with data on historical levels of production, I find that price increases generate more forced displacement in municipalities more suitable to produce this good. However, I also show that this effect is concentrated in the period in which paramilitary power and operations reached an all-time peak. Additional evidence shows that land concentration among the rich has increased substantially in districts that produce these goods. These findings are consistent with extensive qualitative evidence that documents the link between economic interests and local political actors who collude with illegal armed groups to forcibly displace locals and appropriate their land, especially in areas with more informal land tenure systems, like those where bananas are grown more frequently
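
The design described above is consistent with an interaction specification along the following lines; the variable names, fixed effects, and error structure are illustrative assumptions, not the paper's exact equation:

```latex
% Illustrative exposure design: municipality and year fixed effects,
% identification from differential exposure to the banana price.
\text{Displacement}_{mt}
= \beta \left( \text{Price}_{t} \times \text{Suitability}_{m} \right)
+ \gamma_{m} + \delta_{t} + \varepsilon_{mt},
```

so that β is identified from banana-suitable municipalities responding differentially to price movements.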

Abstract:

This study was undertaken to explore pre-service teachers' understanding of injections and surjections. Fifty-four pre-service teachers specialising in the teaching of Mathematics in the Grades 10–12 curriculum participated in the project. The concepts were covered as part of a real analysis course at a South African university. Questionnaires based on an initial genetic decomposition of the concepts of surjective and injective functions were administered to the 54 participants. Their written responses, which were used to identify the mental constructions of these concepts, were analysed using an APOS (action-process-object-schema) framework, and five interviews were carried out. The findings indicated that most participants constructed only Action conceptions of bijection and none demonstrated the construction of an Object conception of this concept. Difficulties in understanding can be related to students' lack of construction of the concepts of functions and sets that are prerequisites to working with bijections
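
As a minimal illustration of the two properties the questionnaires targeted (not code from the study), both can be checked mechanically on finite domains and codomains:

```python
# Toy finite checks of injectivity and surjectivity (illustrative only).

def is_injective(f, domain):
    # Injective: distinct inputs never share an output.
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    # Surjective: every codomain element is the image of some input.
    return {f(x) for x in domain} == set(codomain)

def square(x):
    return x * x

print(is_injective(square, [-2, -1, 0, 1, 2]))              # False: -1 and 1 collide
print(is_surjective(square, [-2, -1, 0, 1, 2], [0, 1, 4]))  # True
```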

Resumen:

El autor intenta deducir una teoría poética del escritor mexicano partiendo de su obra, que divide en tres partes; enseguida, tras un interludio –“la noche obscura del poeta”– trata la función del recuerdo como estímulo literario. Y termina con un apunte hacia el popularismo artístico de Alfonso Reyes

Abstract:

The author attempts to formulate a poetic theory of this Mexican writer based on his work, which he divides into three parts; after an interlude –“the poet’s darkest night”– he studies how remembrance works as a literary stimulus and concludes with a note on Alfonso Reyes’ artistic popularism

Abstract:

The cognitive domains of a communication scheme for learning physics are related to a framework based on epistemology, and the planning of an introductory calculus textbook in classical mechanics is shown as an example of application

Abstract:

Uniform inf-sup conditions are of fundamental importance for the finite element solution of problems in incompressible fluid mechanics, such as the Stokes and Navier–Stokes equations. In this work we prove a uniform inf-sup condition for the lowest-order Taylor–Hood pairs Q2×Q1 and P2×P1 on a family of affine anisotropic meshes. These meshes may contain refined edge and corner patches. We identify necessary hypotheses for edge patches to allow uniform stability and sufficient conditions for corner patches. For the proof, we generalize Verfürth’s trick and recent results by some of the authors. Numerical evidence confirms the theoretical results
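
The uniform condition at issue is the discrete inf-sup (LBB) inequality below; uniformity means the constant β is independent of the mesh family, and here in particular of the anisotropic aspect ratios:

```latex
% Discrete inf-sup (LBB) condition for a velocity-pressure pair V_h x Q_h.
\inf_{0 \neq q_h \in Q_h} \; \sup_{0 \neq v_h \in V_h}
\frac{(q_h, \nabla \cdot v_h)}{\lVert v_h \rVert_{1} \, \lVert q_h \rVert_{0}}
\;\ge\; \beta > 0 .
```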

Abstract:

In this work we present and analyze new inf-sup stable, and stabilised, finite element methods for the Oseen equation on anisotropic quadrilateral meshes. The meshes are formed of closed parallelograms, and the analysis is restricted to two space dimensions. Starting with the lowest-order Q_1^2 × P_0 pair, we first identify the pressure components that make this finite element pair non-inf-sup stable, especially with respect to the aspect ratio. We then propose a way to penalise them, both strongly, by directly removing them from the space, and weakly, by adding a stabilisation term based on jumps of the pressure across selected edges. Concerning the velocity stabilisation, we propose an enhanced grad-div term. Stability and optimal a priori error estimates are given, and the results are confirmed numerically
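
Schematically, the two stabilisation ingredients mentioned above take the generic forms below; the weights γ and τ_E, and their scaling with the anisotropic mesh, are the delicate part and are only placeholders here:

```latex
% Generic grad-div and pressure-jump stabilisation terms (illustrative).
S_{\mathrm{gd}}(u_h, v_h) = \gamma \, (\nabla \cdot u_h, \, \nabla \cdot v_h),
\qquad
S_{\mathrm{j}}(p_h, q_h) = \sum_{E \in \mathcal{E}} \tau_E \int_{E} [\![ p_h ]\!] \, [\![ q_h ]\!] \, \mathrm{d}s .
```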

Abstract:

In his landmark article, Richard Morris (1981) introduced a set of rat experiments intended “to demonstrate that rats can rapidly learn to locate an object that they can never see, hear, or smell provided it remains in a fixed spatial location relative to distal room cues” (p. 239). These experimental studies have greatly impacted our understanding of rat spatial cognition. In this article, we address a spatial cognition model primarily based on hippocampus place cell computation where we extend the prior Barrera–Weitzenfeld model (2008) intended to allow navigation in mazes containing corridors. The current work extends beyond the limitations of corridors to enable navigation in open arenas where a rat may move in any direction at any time. The extended work reproduces Morris’s rat experiments through virtual rats that search for a hidden platform using visual cues in a circular open maze analogous to the Morris water maze experiments. We show results with virtual rats comparing them to Morris’s original studies with rats

Abstract:

The study of behavioral and neurophysiological mechanisms involved in rat spatial cognition provides a basis for the development of computational models and for robotic experimentation on goal-oriented learning tasks. These models and robotic architectures offer neurobiologists and neuroethologists alternative platforms to study, analyze, and predict behaviors based on spatial cognition. In this paper we present a comparative analysis of spatial cognition in rats and robots by contrasting similar goal-oriented tasks in a cyclical maze, where studies in rat spatial cognition are used to develop computational system-level models of the hippocampus and striatum that integrate kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. During training, Hebbian learning and reinforcement learning, in the form of an Actor-Critic architecture, enable the robots to learn the optimal route leading to a goal from a designated fixed location in the maze. During testing, the robots exploit maximum expectations of reward stored within the previously acquired cognitive map to reach the goal from different starting positions. A detailed discussion of comparative experiments in rats and robots is presented, contrasting learning latency while characterizing behavioral procedures during navigation such as errors associated with the selection of a non-optimal route, body rotations, normalized length of the traveled path, and hesitations. Additionally, we present results from evaluating neural activity in rats through detection of the immediate early gene Arc to verify the engagement of the hippocampus and striatum in information processing while solving the cyclical maze task, just as the robots use our corresponding models of those neural structures
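
As a minimal sketch of the reinforcement-learning scheme named above, a tabular Actor-Critic driven by temporal-difference errors can be written as follows; this is entirely illustrative and not the paper's implementation:

```python
import numpy as np

# Tabular Actor-Critic sketch: the critic learns state values from TD
# errors, and the actor nudges action preferences by the same TD error.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
V = np.zeros(n_states)                    # critic: state-value estimates
prefs = np.zeros((n_states, n_actions))   # actor: action preferences
alpha_v, alpha_p, gamma = 0.1, 0.1, 0.95

def policy(s):
    # Softmax over the actor's preferences for state s.
    p = np.exp(prefs[s] - prefs[s].max())
    return rng.choice(n_actions, p=p / p.sum())

def update(s, a, r, s_next):
    td_error = r + gamma * V[s_next] - V[s]   # critic's TD error
    V[s] += alpha_v * td_error                # critic update
    prefs[s, a] += alpha_p * td_error         # actor update
    return td_error

# One illustrative transition: reward 1.0 when moving from state 0 to 1.
print(update(0, policy(0), 1.0, 1))           # TD error is 1.0 initially
```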

Abstract:

Anticipation of the sensory consequences of actions is critical for the predictive control of movement that explains most of our sensory-motor behaviors. Numerous neuroscientific studies in humans provide evidence of anticipatory mechanisms based on internal models. Several robotic implementations of predictive behaviors have been inspired by those biological mechanisms in order to achieve adaptive agents. This paper provides an overview of such neuroscientific and robotic evidence; a high-level architecture of sensory-motor coordination based on anticipatory visual perception and internal models is then introduced; finally, the paper concludes by discussing the relevance of the proposed architecture within the context of current research in humanoid robotics

Abstract:

The study of spatial memory and learning in rats has inspired the development of multiple computational models that have led to novel robotic architectures. Evaluation of computational models and the resulting robotic architectures is usually carried out at the behavioral level by evaluating experimental tasks similar to those performed with rats. While multiple metrics have been defined to evaluate behavioral performance in rats, metrics for robot task evaluation are very limited, mostly to success/failure and time to complete the task. In this paper we present a set of metrics taken from rat spatial memory and learning evaluation to further analyze performance in robots. The proposed set of metrics, learning latency and the ability to navigate the minimal distance to the goal, should offer the robotics community additional tools to assess the performance and validity of models in biologically inspired robotic architectures at the task performance level. We also provide a comparative evaluation using these metrics between similar spatial tasks performed by rat and robot in comparable environments

Abstract:

In this paper we present a comparative behavioral analysis of spatial cognition in rats and robots by contrasting a similar goal-oriented task in a cyclical maze, where a computational system-level model of rat spatial cognition is used that integrates kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. A discussion of experiments in rats and robots is presented, contrasting learning latency while characterizing behavioral procedures such as body rotations during navigation and the selection of routes to the goal

Abstract:

This paper presents a robot architecture with spatial cognition and navigation capabilities that captures some properties of the rat brain structures involved in learning and memory. This architecture relies on the integration of kinesthetic and visual information derived from artificial landmarks, as well as on Hebbian learning, to build a holistic topological-metric spatial representation during exploration, and it employs reinforcement learning by means of an Actor-Critic architecture to enable learning and unlearning of goal locations. From a robotics perspective, this work can be placed in the gap between mapping and map exploitation that currently exists in the SLAM literature. The exploitation of the cognitive map allows the robot to recognize places already visited and to find a target from any given departure location, thus enabling goal-directed navigation. From a biological perspective, this study aims at initiating a contribution to experimental neuroscience by providing the system as a tool to test, with robots, hypotheses concerning the underlying mechanisms of rats' spatial cognition. Results from different experiments with a mobile AIBO robot inspired by classical spatial tasks with rats are described, and a comparative analysis is provided in reference to the reversal task devised by O'Keefe in 1983

Abstract:

A computational model of spatial cognition in rats is used to control an autonomous mobile robot while solving a spatial task within a cyclic maze. In this paper we evaluate the robot's behavior in terms of place recognition in multiple directions and goal-oriented navigation against the results derived from experimenting with laboratory rats solving the same spatial task in a similar maze. We provide a general description of the bio-inspired model, and a comparative behavioral analysis between rats and robot

Abstract:

In this paper we present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe place representation and recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model by training a robot to find the goal in a maze starting from a fixed location, and by testing it to reach the same target from new different starting locations

Abstract:

We present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe target learning and place recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model in different maze configurations by training a robot to find the goal starting from a fixed location, and by testing it to reach the same target from new, different starting locations

Abstract:

In this paper we present a model designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations that are dynamically moved across different environments, to build a topological map, and to return home autonomously. We describe robot experimentation results from our tests in a T-maze, an 8-arm radial maze, and an extended maze
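
A toy version of the kind of topological map such models exploit, with place nodes linked by traversable edges and a breadth-first route query standing in for goal-directed navigation; all names are illustrative:

```python
from collections import deque

# Minimal topological map: places as nodes, traversable links as edges.
class TopologicalMap:
    def __init__(self):
        self.edges = {}  # place -> set of adjacent places

    def connect(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def route(self, start, goal):
        # Shortest place-to-place route by breadth-first search.
        parents, frontier = {start: None}, deque([start])
        while frontier:
            node = frontier.popleft()
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            for nxt in self.edges.get(node, ()):
                if nxt not in parents:
                    parents[nxt] = node
                    frontier.append(nxt)
        return None  # goal not reachable

m = TopologicalMap()
for a, b in [("home", "arm1"), ("arm1", "junction"), ("junction", "reward")]:
    m.connect(a, b)
print(m.route("home", "reward"))  # ['home', 'arm1', 'junction', 'reward']
```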

Abstract:

In this paper we present a model composed of layers of neurons designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations in different mazes and to return home autonomously by building a topological map of the environment. We describe robotic experimentation results from our tests in a T-maze, an 8-arm radial maze, and a 3-T shaped maze

Abstract:

In this paper we present a biologically inspired robotic exploration and navigation model based on the neurophysiology of the rat hippocampus that allows a robot to find goals and return home autonomously by building a topological map of the environment. We present simulation and experimentation results from a T-maze testbed and discuss future research

Abstract:

The time-dependent restricted (n+1)-body problem concerns the study of a massless body (satellite) under the influence of the gravitational field generated by n primary bodies following a periodic solution of the n-body problem. We prove that the satellite has periodic solutions close to the large-amplitude circular orbits of the Kepler problem (comet solutions), and in the case that the primaries are in a relative equilibrium, close to small-amplitude circular orbits near a primary body (moon solutions). The comet and moon solutions are constructed with the application of a Lyapunov–Schmidt reduction to the action functional. In addition, using reversibility techniques, we compute numerically the comet and moon solutions for the case of four primaries following the super-eight choreography
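
For orientation, in normalized units (gravitational constant equal to 1) the satellite described above obeys the nonautonomous Newtonian equation

    \ddot{x}(t) = -\sum_{j=1}^{n} m_j \, \frac{x(t) - u_j(t)}{\lVert x(t) - u_j(t) \rVert^{3}},

where u_j(t) are the given periodic motions of the primaries and m_j their masses; this is the standard formulation, written out here only to fix notation, not a claim about the paper's exact conventions.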

Abstract:

We aim to associate a cytokine profile obtained through data mining with the clinical characteristics of patient subgroups with advanced non-small-cell lung cancer (NSCLC). Our results provide evidence that complex cytokine networks may be used to identify patient subgroups with different prognoses in advanced NSCLC that could serve as potential biomarkers for best treatment choices

Abstract:

Hypersensitivity pneumonitis (HP) is a diffuse inflammatory disease of the lung parenchyma caused by the repeated inhalation of organic particles. Dendritic cells and their precursors play an important role not only as antigen-presenting cells but also as part of a network of immunoregulatory processes. Depending on their lineage and their state of differentiation and activation, dendritic cells can promote an intense T-lymphocyte immune response or induce a state of anergy. Objective: To phenotypically characterize the myeloid (mDC) and plasmacytoid (pDC) dendritic cells present in the bronchoalveolar lavage (BAL) of patients with HP in subacute or chronic stages, and to compare them with those observed in patients with idiopathic pulmonary fibrosis (IPF) and control subjects

Abstract:

Hypersensitivity pneumonitis (HP) is a diffuse inflammatory disease of lung parenchyma resulting from repetitive inhalation of organic particles. Dendritic cells and their precursors play an important role not only as antigen-presenting cells but also as part of an immunoregulatory network. Depending on their lineage and stage of differentiation and activation, dendritic cells can promote a strong T-lymphocyte-mediated immunological response or an anergy state. Objective: To phenotypically characterize myeloid (mDCs) and plasmacytoid (pDCs) dendritic cells recovered in bronchoalveolar lavage from patients with subacute or chronic HP, and to compare the results with those obtained in patients with idiopathic pulmonary fibrosis (IPF) and healthy controls. Methods: BAL cells from 8 patients with subacute HP, 8 with chronic HP, 8 with IPF and 4 healthy subjects were used. The phenotypes of the dendritic cell subpopulations were characterized by means of flow cytometry

Abstract:

The monitoring of patients with dementia who receive comprehensive care in day centers allows formal caregivers to make better decisions and provide better care. For instance, cognitive and physical therapies can be tailored to the current stage of disease progression. In the context of the day centers of the Mexican Federation of Alzheimer, this work aims to design and evaluate Alzaid, a technological platform for assisting formal caregivers in monitoring patients with dementia. Alzaid was devised using a participatory design methodology that consisted of eliciting and validating requirements from 22 and 9 participants, respectively; the requirements were then unified to guide the construction of a high-fidelity prototype evaluated by 14 participants. The participants were formal caregivers, medical staff, and management. This work contributes a high-fidelity prototype of a technological platform for assisting formal caregivers in monitoring patients with dementia, taking into account the restrictions and requirements of four Mexican day centers. In general, the participants perceived the prototype as quite likely to be useful, usable, and relevant to the job of monitoring patients with dementia (p-value < 0.05). By designing and evaluating Alzaid, which unifies the monitoring requirements of four day centers, this work is a first effort towards a standard monitoring process for patients with dementia in the context of the Mexican Federation of Alzheimer

Abstract:

Many Americans continued some forms of social distancing after the pandemic. This phenomenon is stronger among older persons, less educated individuals, and those who interact daily with persons at high risk from infectious diseases. Regression models fit to individual-level data suggest that social distancing lowered labor force participation by 2.4 percentage points in 2022, 1.2 points on an earnings-weighted basis. When combined with simple equilibrium models, our results imply that the social distancing drag on participation reduced US output by $205 billion in 2022, shrank the college wage premium by 2.1 percentage points, and modestly steepened the cross-sectional age-wage profile

Abstract:

This paper studies how biases in managerial beliefs affect managerial decisions, firm performance, and the macroeconomy. Using a new survey of US managers I establish three facts. (1) Managers are not overoptimistic: sales growth forecasts on average do not exceed realizations. (2) Managers are overprecise: they underestimate future sales growth volatility. (3) Managers overextrapolate: their forecasts are too optimistic after positive shocks and too pessimistic after negative shocks. To quantify the implications, I estimate a dynamic general equilibrium model in which managers of heterogeneous firms use a subjective beliefs process to make forward-looking hiring decisions. Overprecision and overextrapolation lead managers to overreact to firm-level shocks and overspend on adjustment costs, destroying 2.1% to 6.8% of the typical firm's value. Pervasive overreaction leads to excess volatility and reallocation, lowering consumer welfare by 0.5% to 2.3% relative to the rational-expectations equilibrium. These findings suggest overreaction could amplify asset-price and business-cycle fluctuations

Abstract:

As knowledge workers have shifted to hybrid, we're not seeing an equivalent drop in demand for office space. New survey data suggests cuts in office space of 1% to 2% on average. There are three trends driving this: 1) Workers are uncomfortable with density, and the only surefire way to reduce density is to cut person days on site without cutting square footage; 2) Most employees want to work from home on Mondays and Fridays, which means the shift to hybrid affords only meager opportunities to economize on office space; and 3) Employers are reshaping office space to become more inviting social spaces that encourage face-to-face collaboration, creativity, and serendipitous interactions

Abstract:

Drawing on data from the firm-level Survey of Business Uncertainty, we present three pieces of evidence that COVID-19 is a persistent reallocation shock. First, rates of excess job and sales reallocation over 24-month periods (looking back 12 months and ahead 12 months) have risen sharply since the pandemic struck, especially for sales. Second, as of December 2020, firm-level forecasts of sales revenue growth over the next year imply a continuation of recent changes, not a reversal. Third, COVID-19 shifted relative employment growth trends in favor of industries with a high capacity for employees to work from home

Abstract:

Employees want to work from home 2.5 days a week on average, according to a monthly survey of 5,000 Americans. Desires to work from home and cut commuting have strengthened as the pandemic has lingered, and many have become increasingly comfortable with remote interactions. The rapid spread of the Delta variant is also undercutting the drive for a full-time return to the office any time soon. Tight labor markets are also a challenge for firms that want a full-time return

Abstract:

About one-fifth of paid workdays will be supplied from home in the post-pandemic economy, and more than one-fourth on an earnings-weighted basis. In view of this projection, we consider some implications of home internet access quality, exploiting data from the new Survey of Working Arrangements and Attitudes. Moving to high-quality, fully reliable home internet service for all Americans ("universal access") would raise earnings-weighted labor productivity by an estimated 1.1% in the coming years. The implied output gains are $160 billion per year, or $4 trillion when capitalized at a 4% rate. Estimated flow output payoffs to universal access are nearly three times as large in economic disasters like the COVID-19 pandemic. Our survey data also say that subjective well-being was higher during the pandemic for people with better home internet service conditional on age, employment status, earnings, working arrangements, and other controls. In short, universal access would raise productivity, and it would promote greater economic and social resilience during future disasters that inhibit travel and in-person interactions
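
The $4 trillion figure is standard perpetuity arithmetic, capitalizing the annual flow at the stated 4% rate:

    \frac{\$160\ \text{billion per year}}{0.04} = \$4{,}000\ \text{billion} = \$4\ \text{trillion}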

Abstract:

We develop several pieces of evidence about the reallocative effects of the COVID-19 shock on impact and over time. First, the shock caused three to four new hires for every ten layoffs from March 1 to mid-May 2020. Second, we project that one-third or more of the layoffs during this period are permanent in the sense that job losers won’t return to their old jobs at their previous employers. Third, firm-level forecasts at a one-year horizon imply rates of expected job and sales reallocation that are two to five times larger from April to June 2020 than before the pandemic. Fourth, full days working from home will triple from 5 percent of all workdays in 2019 to more than 15 percent after the pandemic ends. We also document pandemic-induced job gains at many firms and a sharp rise in cross-firm equity return dispersion in reaction to the pandemic. After developing the evidence, we consider implications for the economic outlook and for policy. Unemployment benefit levels that exceed worker earnings, policies that subsidize employee retention irrespective of the employer’s commercial outlook, and barriers to worker mobility and business formation impede reallocation responses to the COVID-19 shock

Abstract:

Economic uncertainty jumped in reaction to the COVID-19 pandemic, with most indicators reaching their highest values on record. Alongside this rise in uncertainty has been an increase in the downside tail-risk reported by firms. This uncertainty has played three roles: first, amplifying the drop in economic activity early in the pandemic; second, slowing the subsequent recovery; and finally, reducing the impact of policy, as uncertainty tends to make firms more cautious in responding to changes in business conditions. As such, the incredibly high levels of uncertainty are a major impediment to a rapid recovery. We also discuss three other factors exacerbating the situation: the need for massive reallocation as COVID-19 permanently reshapes the economy; the rise in working from home, which is impeding firm hiring; and the ongoing medical uncertainty over the extent and duration of the pandemic. Collectively, these conditions are generating powerful headwinds against a rapid recovery from the COVID-19 recession

Abstract:

The Dirichlet process mixture model and more general mixtures based on discrete random probability measures have been shown to be flexible and accurate models for density estimation and clustering. The goal of this paper is to illustrate the use of normalized random measures as mixing measures in nonparametric hierarchical mixture models and point out how possible computational issues can be successfully addressed. To this end, we first provide a concise and accessible introduction to normalized random measures with independent increments. Then, we explain in detail a particular way of sampling from the posterior using the Ferguson–Klass representation. We develop a thorough comparative analysis for location-scale mixtures that considers a set of alternatives for the mixture kernel and for the nonparametric component. Simulation results indicate that normalized random measure mixtures potentially represent a valid default choice for density estimation problems. As a byproduct of this study an R package to fit these models was produced and is available in the Comprehensive R Archive Network (CRAN)
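
For orientation, a normalized random measure arises by normalizing a completely random measure that the Ferguson–Klass representation expresses through its ranked jumps; schematically,

    \tilde{\mu} = \sum_{k \ge 1} J_k \, \delta_{X_k}, \qquad \tilde{p}(\cdot) = \frac{\tilde{\mu}(\cdot)}{\tilde{\mu}(\mathbb{X})},

where the jumps J_1 >= J_2 >= ... are obtained by inverting the tail of the Lévy intensity at the arrival times of a unit-rate Poisson process, and the atoms X_k are i.i.d. draws from the base measure. Posterior sampling then works with a finite truncation of this sum; the notation here is generic rather than the paper's exact one.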

Abstract:

Objective. To characterize the impact of SARS-CoV-2 infection in workers of a large essential company in the Mexico City Metropolitan Area, using the point prevalence of acute infection, the point prevalence of past infection measured through serum antibodies, and respiratory disease short-term disability claims (RD-STDC). Materials and methods. Four randomized surveys, three during 2020 and one in 2021 after vaccines became available. Outcomes: point prevalence of acute infection through saliva PCR (polymerase chain reaction) testing, point prevalence of past infection through serum antibodies against Covid-19 (S/N antibody levels), RD-STDCs, and prevalence of symptoms during the previous six months. Results. The prevalence of SARS-CoV-2-positive cases was 1.29-4.88% and, on average, a quarter of participants had pre-vaccination antibodies; more than half of the participants with an RD-STDC had antibodies. The odds of having antibodies were 6-7 times higher among those with an RD-STDC. Conclusions. The high levels of antibodies against Covid-19 in the study population reflect that coverage is high among workers in this industry. STDCs are a useful tool for tracking workplace epidemics

Abstract:

Objective. To characterize the impact of SARS-CoV-2 infection in workers from a large essential company in the Greater Mexico City Metropolitan Area using the point prevalence of acute infection, the point prevalence of past infection through serum antibodies, and respiratory disease short-term disability claims (RD-STDC). Materials and methods. Four randomized surveys, three during 2020 before and one after (December 2021) vaccines' availability. Outcomes: point prevalence of acute infection through saliva PCR (polymerase chain reaction) testing, point prevalence of past infection through serum antibodies against Covid-19, RD-STDCs and prevalence of symptoms during the previous six months. Results. The prevalence of SARS-CoV-2 cases was 1.29-4.88%; on average, a quarter of participants were seropositive pre-vaccination, and over half of the participants with an RD-STDC had antibodies. The odds of having antibodies were 6-7 times higher among workers with an RD-STDC. Conclusions. The high antibody levels against Covid-19 in this study population reflect that coverage is high among workers in this industry. STDCs are a useful tool to track workplace epidemics

Abstract:

Chaotic properties of the dynamics of Toeplitz operators on the Hardy–Hilbert space H2(D) are studied. Based on previous results of Shkarin and of Baranov and Lishanskii, a characterization of different versions of chaos, formulated in terms of the coefficients of the symbol, is obtained for the tridiagonal case. In addition, easily computable sufficient conditions that depend on the coefficients are found for the chaotic behavior of certain Toeplitz operators
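
In the tridiagonal case referred to above, the operator is the Toeplitz operator T_phi with a symbol supported on three Fourier modes,

    \varphi(z) = a\,z^{-1} + b + c\,z, \qquad T_\varphi f = P(\varphi f),

where P is the orthogonal projection onto the Hardy space; the chaos criteria are then conditions on the coefficients a, b, c. This notation is generic and only meant to fix the setting.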

Abstract:

This work studies ultra wideband (UWB) communications over multipath residential indoor channels. We study the relationship between the fading margin and the transmitter–receiver separation distance for both line-of-sight and non-line-of-sight scenarios. Impairments such as small-scale fading as well as large-scale fading are considered. Some implications of the results for UWB indoor network design are discussed

Abstract:

We study a repeated game with payoff externalities and observable actions where two players receive information over time about an underlying payoff-relevant state, and strategically coordinate their actions. Players learn about the true state from private signals, as well as the actions of others. They commonly learn the true state (Cripps et al., 2008), but do not coordinate in every equilibrium. We show that there exist stable equilibria in which players can overcome unfavorable signal realizations and eventually coordinate on the correct action, for any discount factor. For high discount factors, we show that in addition players can also achieve efficient payoffs

Abstract:

The question whether a minimum rate of sick pay should be mandated is much debated. We study the effects of this kind of intervention with student subjects in an experimental laboratory setting rich enough to allow for moral hazard, adverse selection, and crowding out of good intentions. Both wages and replacement rates offered by competing employers are reciprocated by workers. However, replacement rates are only reciprocated as long as no minimum level is mandated. Although we observe adverse selection when workers have different exogenous probabilities for being absent from work, this does not lead to a market breakdown. In our experiment, mandating replacement rates actually leads to a higher voluntary provision of replacement rates by employers

Abstract:

Computation of the volume of space required for a robot to execute a sweeping motion from a start to a goal has long been identified as a critical primitive operation in both task and motion planning. However, swept volume computation is particularly challenging for multi-link robots with geometric complexity, e.g., manipulators, due to the non-linear geometry. While earlier work has shown that deep neural networks can approximate the swept volume quantity, a useful parameter in sampling-based planning, general network structures do not lend themselves to outputting geometries. In this paper we train and evaluate a deep neural network that predicts the swept volume geometry from pairs of robot configurations and outputs discretized voxel grids. We perform this training on a variety of robots from 6 to 16 degrees of freedom. We show that most errors in the predicted geometry lie within a distance of 3 voxels from the surface of the true geometry, and that it is possible to adjust the rates of different error types using a heuristic approach. We also show it is possible to train these networks at varying resolutions, with networks trained at up to 4x smaller grid resolution still producing errors that remain close to the boundary of the true swept volume surface
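
As a rough Python (PyTorch) sketch of the kind of model described, the following network maps a pair of joint configurations to per-voxel occupancy logits and is trained with a voxel-wise binary cross-entropy loss; the 7 degrees of freedom, 16^3 grid, layer sizes, and random stand-in data are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    dof, res = 7, 16                          # assumed degrees of freedom and grid resolution

    # MLP from a (start, goal) configuration pair to res**3 occupancy logits
    net = nn.Sequential(
        nn.Linear(2 * dof, 256), nn.ReLU(),
        nn.Linear(256, 512), nn.ReLU(),
        nn.Linear(512, res ** 3),             # one logit per voxel of the swept volume
    )
    loss_fn = nn.BCEWithLogitsLoss()          # voxel-wise occupancy classification

    q_pairs = torch.randn(32, 2 * dof)                      # batch of configuration pairs
    target = torch.randint(0, 2, (32, res ** 3)).float()    # ground-truth voxel grids
    loss = loss_fn(net(q_pairs), target)
    loss.backward()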

Abstract:

This article is devoted to the design of robust position-tracking controllers for a perturbed wheeled mobile robot. We address the final objective of pose-regulation in a predefined time, which means that the robot position and orientation must reach desired final values simultaneously in a user-defined time. To do so, we propose the robust tracking of adequate trajectories for position coordinates, enforcing that the robot's heading evolves tangent to the position trajectory and consequently the robot reaches a desired orientation. The robust tracking is achieved by a proportional-integral action or by a super-twisting sliding mode control. The main contribution of this article is a kinematic control approach for pose-regulation of wheeled mobile robots in which the orientation angle is not directly controlled in the closed-loop, which simplifies the structure of the control system with respect to existing approaches. An offline trajectory planning method based on parabolic and cubic curves is proposed and integrated with robust controllers to achieve good accuracy in the final values of position and orientation. The novelty in the trajectory planning is the generation of a set of candidate trajectories and the selection of one of them that favors the correction of the robot's final orientation. Realistic simulations and experiments using a real robot show the good performance of the proposed scheme even in the presence of strong disturbances
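
As a sketch of the second robust action mentioned above, the super-twisting law drives a scalar sliding variable s (e.g., a position tracking error) to zero despite a bounded matched disturbance. The gains, time step, and disturbance below are illustrative, not the article's tuning.

    import numpy as np

    k1, k2, dt = 1.5, 1.1, 1e-3          # illustrative gains and integration step
    s, v = 1.0, 0.0                      # sliding variable and integral state

    for k in range(5000):
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v    # super-twisting control
        v += -k2 * np.sign(s) * dt                    # discontinuous integral term
        d = 0.3 * np.sin(2 * np.pi * k * dt)          # bounded matched disturbance
        s += (u + d) * dt                             # error dynamics  ds/dt = u + d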

Abstract:

Two robust kinematic controllers for position trajectory tracking of a perturbed wheeled mobile robot are presented. We address a final objective of fixed-time pose-regulation, which means that the robot position and orientation must reach desired final values simultaneously in a user-defined time. To achieve that, we propose the robust tracking of adequate trajectories for position, which drives the robot to get a desired final orientation due to the nonholonomic motion constraint. Hence, the main contribution of the paper is a complete strategy to define adequate reference trajectories as well as robust controllers to track them in order to enforce the pose-regulation of a wheeled mobile robot in a desired time. Realistic simulations show the good performance of the proposed scheme even in the presence of strong disturbances

Abstract:

Correlations of measures of percentages of white coat color, five measures of production and two measures of reproduction were obtained from 4293 first-lactation Holsteins from eight Florida dairy farms. Percentages of white coat color were analyzed as recorded and transformed by an extension of Box-Cox procedures. Statistical analyses were by derivative-free restricted maximum likelihood (DFREML) with an animal model. Phenotypic and genetic correlations of white percentage (not transformed) were with milk yield, 0.047 and 0.097; fat yield, 0.002 and 0.004; fat percentage, -0.047 and -0.090; protein yield, 0.024 and 0.048; protein percentage, -0.070 and -0.116; days open, -0.012 and -0.065; and calving interval, -0.007 and -0.029. Changes in magnitude of correlations were very small for all variables except days open. Genetic and phenotypic correlations of transformed values with days open were -0.027 and -0.140. Modest positive correlated responses would be expected for white coat color percentage following direct selection for milk, fat, and protein yields, but selection for fat and protein percentages, days open, or calving interval would lead to small decreases

Abstract:

As part of a project to identify all maximal centralising monoids on a four-element set, we determine all centralising monoids witnessed by unary or by idempotent binary operations on a four-element set. Moreover, we show that every centralising monoid on a set with at least four elements witnessed by the Mal’cev operation of a Boolean group operation is always a maximal centralising monoid, i.e., a co-atom below the full transformation monoid. On the other hand, we also prove that centralising monoids witnessed by certain types of permutations or retractive operations can never be maximal

Abstract:

Motivated by reconstruction results by Rubin, we introduce a new reconstruction notion for permutation groups, transformation monoids and clones, called automatic action compatibility, which entails automatic homeomorphicity. We further give a characterization of automatic homeomorphicity for transformation monoids on arbitrary carriers with a dense group of invertibles having automatic homeomorphicity. We then show how to lift automatic action compatibility from groups to monoids and from monoids to clones under fairly weak assumptions. We finally employ these theorems to get automatic action compatibility results for monoids and clones over several well-known countable structures, including the strictly ordered rationals, the directed and undirected version of the random graph, the random tournament and bipartite graph, the generic strictly ordered set, and the directed and undirected versions of the universal homogeneous Henson graphs

Abstract:

We investigate the standard context, denoted by K(Ln), of the lattice Ln of partitions of a positive integer n under the dominance order. Motivated by the discrete dynamical model to study integer partitions by Latapy and Duong Phan and by the characterization of the supremum- and infimum-irreducible partitions of n by Brylawski, we show how to construct the join-irreducible elements of Ln+1 from those of Ln. We employ this construction to count the number of join-irreducible elements of Ln, and confirm that the number of objects (and attributes) of K(Ln) has order Θ(n^2). We also discuss the embeddability of K(Ln) into K(Ln+1), with special emphasis on n=9

Abstract:

We consider finitary relations (also known as crosses) that are definable via finite disjunctions of unary relations, i.e. subsets, taken from a fixed finite parameter set Γ. We prove that whenever Γ contains at least one non-empty relation distinct from the full carrier set, there is a countably infinite number of polymorphism clones determined by relations that are disjunctively definable from Γ. Finally, we extend our result to finitely related polymorphism clones and countably infinite sets Γ. These results address an open problem raised in Creignou, N., et al. Theory Comput. Syst. 42(2), 239–255 (2008), which is connected to the complexity analysis of the satisfiability problem of certain multiple-valued logics studied in Hähnle, R. Proc. 31st ISMVL 2001, 137–146 (2001)

Abstract:

C-clones are polymorphism sets of so-called clausal relations, a special type of relations on a finite domain, which first appeared in connection with constraint satisfaction problems in work by Creignou et al. from 2008. We completely describe the relationship regarding set inclusion between maximal C-clones and maximal clones. As a main result we obtain that for every maximal C-clone there exists exactly one maximal clone in which it is contained. A precise description of this unique maximal clone, as well as a corresponding completeness criterion for C-clones is given

Abstract:

We show how to reconstruct the topology on the monoid of endomorphisms of the rational numbers under the strict or reflexive order relation, and the polymorphism clone of the rational numbers under the reflexive relation. In addition we show how automatic homeomorphicity results can be lifted to polymorphism clones generated by monoids

Abstract:

Our goal is to model the joint distribution of a series of 4 × 2 × 2 × 2 contingency tables for which some of the data are partially collapsed (i.e., aggregated in as few as two dimensions). More specifically, the joint distribution of four clinical characteristics in breast cancer patients is estimated. These characteristics include estrogen receptor status (positive/negative), nodal involvement (positive/negative), HER2-neu expression (positive/negative), and stage of disease (I, II, III, IV). The joint distribution of the first three characteristics is estimated conditional on stage of disease, and we propose a dynamic model for the conditional probabilities that lets them evolve as the stage of disease progresses. The dynamic model is based on a series of Dirichlet distributions whose parameters are related by a Markov prior structure (called a dynamic Dirichlet prior). This model makes use of information across disease stage (known as "borrowing strength") and provides a way of estimating the distribution of patients with particular tumor characteristics. In addition, since some of the data sources are aggregated, a data augmentation technique is proposed to carry out a meta-analysis of the different datasets
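
Schematically, the Markov prior structure ties the stage-specific probability vectors p_t over the 2 × 2 × 2 cells together as

    p_t \mid p_{t-1} \sim \text{Dirichlet}(c\,p_{t-1}),

so that E[p_t | p_{t-1}] = p_{t-1} and a scalar c > 0 controls how smoothly the conditional cell probabilities evolve across disease stages. The symbols are generic and only illustrate the construction, not the authors' exact parameterization.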

Abstract:

We introduce the logics GLPΛ, a generalization of Japaridze’s polymodal provability logic GLPω where Λ is any linearly ordered set representing a hierarchy of provability operators of increasing strength. We shall provide a reduction of these logics to GLPω yielding among other things a finitary proof of the normal form theorem for the variable-free fragment of GLPΛ and the decidability of GLPΛ for recursive orderings Λ. Further, we give a restricted axiomatization of the variable-free fragment of GLPΛ

Abstract:

Family businesses account for the majority of organizations in Mexico and participate in a wide variety of industries. They come in all sizes, including some of the largest firms in the country. On the Mexican Stock Exchange (BMV), 70% of the companies that issue shares are family businesses (Belausteguigoitia, 2012). Studies comparing the returns of family and non-family firms have been carried out in several countries (Villalonga and Amit, 2006). A comparative study of this kind, showing how family organizations perform, was still missing in Mexico. This work offers additional value in that it compares returns during years of crisis and of stability. Preliminary results indicate that during years of relative financial instability (due to the 2008 crisis), family organizations show significantly higher returns than non-family ones, whereas in years of stability the returns tend to be similar. In addition to presenting detailed results for this period, we put forward some hypotheses that explain why family organizations perform better in times of crisis

Abstract:

Family businesses represent most organizations in Mexico and participate in a wide variety of industries. They come in all sizes, even among the largest in the country. On the Mexican Stock Exchange (BMV), 70% of the companies that issue shares are family businesses (Belausteguigoitia, 2012). Research has been conducted in various countries comparing returns between family and non-family businesses (Villalonga and Amit, 2006). A comparative study of this nature, which would allow us to assess the performance of family organizations, was lacking in Mexico. This work offers additional value as it compares returns during years of crisis and stability. Preliminary results indicate that during years of relative financial instability (due to the 2008 crisis), family organizations show significantly higher returns than non-family ones, while in years of stability the returns tend to be similar. In addition to presenting detailed results for this period, some hypotheses are put forward to explain why family organizations perform better in times of crisis

Abstract:

This article analyzes the relationship between knowledge sharing and unethical pro-organizational behavior (UPB), as well as the potential amplifying effect of two factors: employees' resistance to change and the perception of the organization's political climate

Abstract:

This paper aims to investigate the relationship of knowledge sharing with unethical pro-organizational behavior (UPB) and the potential augmenting effects of two factors: employees' dispositional resistance to change and perceptions of organizational politics

Abstract:

Family businesses are organizations with a strong emotional charge. For this reason, the family dimension, which exerts a great influence on the business, must be properly channeled within the company so that its impact is positive. Below I present some practical ideas that can help prevent conflicts. As one might expect, these ideas, besides helping to reduce the potential for conflict, can improve the running of family organizations

Abstract:

Price-based climate change policy instruments, such as carbon taxes or cap-and-trade systems, are known for their potential to generate desirable results such as reducing the cost of meeting environmental targets. Nonetheless, carbon pricing policies face important economic and political hurdles. Powerful stakeholders tend to obstruct such policies or dilute their impacts. Additionally, costs are borne by those who implement the policies or comply with them, while benefits accrue to all, creating incentives to free ride. Finally, costs must be paid in the present, while benefits only materialize over time. This chapter analyses the political economy of the introduction of a carbon tax in Mexico in 2013 with the objective of learning from that process in order to facilitate the eventual implementation of an effective cap-and-trade system in Mexico. Many of the lessons in Mexico are likely to be applicable elsewhere. As countries struggle to meet the goals of international environmental agreements, it is of utmost importance that we understand the conditions under which it is feasible to implement policies that reduce carbon emissions

Abstract:

The Global International Waters Assessment (GIWA) was created to help develop a priority setting mechanism for actions in international waters. Apart from assessing the severity of environmental problems in ecosystems, the GIWA's task is to analyze potential policy actions that could solve or mitigate these problems. Given the complex nature of the problems, understanding their root causes is essential to develop effective solutions. The GIWA provides a framework to analyze these causes, which is based on identifying the factors that shape human behavior in relation to the use (direct or indirect) of aquatic resources. Two sets of factors are analyzed. The first one consists of social coordination mechanisms (institutions). Faults in these mechanisms lead to wasteful use of resources. The second consists of factors that do not cause wasteful use of resources per se (poverty, trade, demographic growth, technology), but expose and magnify the faults of the first group of factors. The picture that comes out is that diagnosing simple generic causes, e.g. poverty or trade, without analyzing the case specific ways in which the root causes act and interact to degrade the environment, will likely ignore important links that may put the effectiveness of the recommended policies at risk. A summary of the causal chain analysis for the Colorado River Delta is provided as an example

Abstract:

In this article, a distributed control system for robotic applications is presented. First, the robot control theory based on task functions is outlined; then a model for robotic applications is proposed. This philosophy is more general than the classical hierarchical structure and allows smart sensor feedback at the servo level. Next, a software and hardware architecture is proposed to implement such control algorithms. As all the system activity depends on communication between tasks, it is necessary to design a real-time communication system, a topic addressed in this paper

Abstract:

In this paper, a multi-loop adaptive controller based on the interconnection and damping assignment passivity-based control approach is detailed for output voltage regulation of a hybrid system comprised of a proton exchange membrane fuel cell energy generation system and a hybrid energy storage system consisting of supercapacitors and batteries. The control scheme relies on the design of two control loops, i.e., an outer loop for voltage regulation through current reference generation, and an inner loop for current reference tracking through duty cycle generation. Furthermore, an adaptive law based on immersion and invariance theory is designed to enhance the outer-loop behavior through unknown load approximation. Finally, numerical simulation results show the correct performance of the adaptive multi-loop controller when sudden load changes are induced in the system

Abstract:

In this paper, we study a three-dimensional system of differential equations which is a generalization of the system introduced by Yu and Wang (Eng Technol Appl Sci Res 3:352-358, 2013), a continuation of the study of chaotic attractors [see Yu and Wang (Eng Tech Appl Sci Res 2:209-215, 2012)]. We show that these systems admit a zero-Hopf non-isolated equilibrium point at the origin and prove the existence of a limit cycle emanating from it. We illustrate our results with some numerical simulations

Abstract:

We prove that all non-degenerate relative equilibria of the planar Newtonian n–body problem can be continued to spaces of constant curvature κ, positive or negative, for small enough values of this parameter. We also compute the extension of some classical relative equilibria to curved spaces using numerical continuation. In particular, we extend Lagrange's triangle configuration with different masses to both positive and negative curvature spaces

Abstract:

We study a particular (1+2n)-body problem consisting of a massive body and 2n equal small masses. Since this problem is related to Maxwell's ring solutions, we call the massive body the planet and the other 2n masses the satellites. Our goal is to obtain doubly-symmetric orbits in this problem. By studying the reversing symmetries of the equations of motion, we reduce the set of possible initial conditions that lead to such orbits, and compute the 1-parameter families of time-reversible invariant tori. The initial conditions of the orbits were determined as solutions of a boundary value problem with one free parameter; along the way we find analytically and explicitly a new involution, which to our knowledge is a new result. The numerical solutions of the boundary value problem were obtained using pseudo-arclength continuation. For the numerical analysis we used a satellite-to-planet mass ratio of 3.5x10^-4, and the computations were done for n=2,3,4,5,6. We show numerically that the families we obtained approach the Maxwell solutions as n increases, and we give a simple argument for why this should happen in this configuration

Abstract:

By using analytical and numerical tools we show the existence of families of quasiperiodic orbits (also called relative periodic orbits) emanating from a kite configuration in the planar four-body problem with three equal masses. Relative equilibria are periodic solutions where all particles rotate uniformly around the center of mass in the inertial frame; that is, the system behaves as a rigid body, and in rotating coordinates these solutions are in general quasiperiodic. We introduce a new coordinate system which measures (in the planar four-body problem) how far an arbitrary configuration is from a kite configuration. Using these coordinates and the Lyapunov center theorem, we obtain families of quasiperiodic orbits, and by using symmetry arguments, we obtain periodic ones, all of them emanating from a kite configuration

Abstract:

In recent research on financial crises, large exogenous shocks to total factor productivity (TFP) are used as the driving force accounting for large output falls. TFP fell 3% after the Korean 1997 financial crisis. We find evidence that the large fall in TFP is mostly due to a sectoral reallocation of labor from the more productive manufacturing and construction sectors to the less productive wholesale trade sector, the public sector and agriculture. We construct a two-sector model that accounts for the labor reallocation. The model has a consumption sector and an investment sector. Firms face sector-specific working capital constraints, which we calibrate with data from financial statements. The rise in interest rates makes inputs more costly. The model accounts for 42% of the TFP fall. The model also accounts for 53% of the fall in GDP. It is broadly consistent with the post-crisis behavior of the Korean economy
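
To fix ideas, a working capital constraint of the kind calibrated here makes firms pay a sector-specific fraction psi_j of their input bill in advance at the gross interest rate R, so the effective unit cost of an input with price w_j becomes

    w_j \left( 1 + \psi_j \, (R - 1) \right),

which is one standard way a rise in interest rates makes inputs more costly and shifts activity across sectors; the notation is illustrative rather than the paper's exact specification.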

Abstract:

Using variation in firms' exposure to their CEOs resulting from hospitalization, we estimate the effect of CEOs on firm policies, holding firm-CEO matches constant. We document three main findings. First, CEOs have a significant effect on profitability and investment. Second, CEO effects are larger for younger CEOs, in growing and family-controlled firms, and in human-capital-intensive industries. Third, CEOs are unique: the hospitalization of other senior executives does not have similar effects on performance. Overall, our findings demonstrate that CEOs are a key driver of firm performance, which suggests that CEO contingency plans are valuable

Abstract:

This paper uses a unique dataset from Denmark to investigate the impact of family characteristics in corporate decision making and the consequences of these decisions on firm performance. We focus on the decision to appoint either a family or external chief executive officer (CEO). The paper uses variation in CEO succession decisions that result from the gender of a departing CEO's firstborn child. This is a plausible instrumental variable (IV), as male first-child firms are more likely to pass on control to a family CEO than are female first-child firms, but the gender of the first child is unlikely to affect firms' outcomes. We find that family successions have a large negative causal impact on firm performance: operating profitability on assets falls by at least four percentage points around CEO transitions. Our IV estimates are significantly larger than those obtained using ordinary least squares. Furthermore, we show that family-CEO underperformance is particularly large in fast-growing industries, industries with highly skilled labor force, and relatively large firms. Overall, our empirical results demonstrate that professional, nonfamily CEOs provide extremely valuable services to the organizations they head

Abstract:

This paper presents a two-dimensional convex irregular bin packing problem with guillotine cuts. The problem combines the challenges of tackling the complexity of packing irregular pieces, guaranteeing guillotine cuts that are not always orthogonal to the edges of the bin, and allocating pieces to bins that are not necessarily of the same size. This problem is known as a two-dimensional multi bin size bin packing problem with convex irregular pieces and guillotine cuts. Since pieces are separated by means of guillotine cuts, our study is restricted to convex pieces. A beam search algorithm is described, which is successfully applied to both the multi and single bin size instances. The algorithm is competitive with the results reported in the literature for the single bin size problem and provides the first results for the multi bin size problem
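
The abstract does not spell out implementation details, so the following Python skeleton only illustrates the generic beam search pattern it refers to: expand every partial packing, score the children heuristically, and keep the best beta per level. The expand, score, and is_complete callbacks are hypothetical placeholders for the piece-placement and guillotine-cut logic.

    def beam_search(root, expand, score, is_complete, beta=10):
        """Generic beam search keeping the beta best partial solutions per level."""
        beam, best = [root], None
        while beam:
            # all feasible one-step extensions (e.g., next piece placements)
            children = [c for s in beam for c in expand(s)]
            for c in children:
                if is_complete(c) and (best is None or score(c) > score(best)):
                    best = c                 # record the best finished packing
            children.sort(key=score, reverse=True)
            beam = children[:beta]           # prune to the beam width
        return best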

Abstract:

The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting 'uniform' and 'flat' models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change
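
In the neighborhood semantics sketched above, the basic evidence modality can be rendered as

    M, w \models \Box\varphi \quad \text{iff} \quad \exists N \in E(w)\ \forall v \in N : M, v \models \varphi,

read: the agent has a piece of evidence N all of whose worlds satisfy phi. Since distinct evidence sets need not intersect, Box(phi) and Box(not phi) are jointly satisfiable, which is exactly the possibly contradictory evidence the logic is designed to handle; this clause is the usual one for such logics, stated here only for orientation.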

Abstract:

Opportunistic electoral fiscal policy cycle theory suggests that all subnational officials will raise fiscal spending during elections. Ideological partisan fiscal policy cycle theory suggests that only left-leaning governments will raise election year fiscal spending, with right-leaning parties choosing the reverse. This article assesses which of these competing logics applies to debt policy choices. Cross-sectional time-series analysis of yearly loan acquisition across Mexican municipalities (on statistically matched municipal subsamples to balance creditworthiness across left- and right-leaning governments) shows that all parties engage in electoral policy cycles but not in the way originally thought. It also shows that different parties favored different types of loans, although not always according to partisan predictions. Both electoral and partisan logics thus shape debt policy decisions (in contrast to fiscal policy, where these logics are mutually exclusive) because debt policy involves decisions on multiple dimensions, about the total and type of loans

Abstract:

This research focuses on a hyperbolic system that describes bidisperse suspensions, consisting of two types of small particles dispersed in a viscous fluid. The dependence of solutions on the relative position of contact manifolds in the phase space is examined. The wave curve method serves as the basis for the first and second analyses. The former involves the classification of elementary waves that emerge from the origin of the phase space. Analytical solutions to prototypical Riemann problems connecting the origin with any point in the state space are provided. The latter focuses on semi-analytical solutions for Riemann problems connecting any state in the phase space with the maximum packing concentration line, as observed in standard batch sedimentation tests. When the initial condition crosses the first contact manifold, a bifurcation occurs. As the initial condition approaches the second manifold, another structure appears to undergo bifurcation, although it does not represent an actual bifurcation according to the triple shock rule. The study reveals important insights into the behavior of solutions in relation to these contact manifolds. This research sheds light on the existence of emerging quasi-umbilic points within the system, which can potentially lead to new types of bifurcations as crucial elements of the elliptic/hyperbolic boundary in the system of partial differential equations. The implications of these findings and their significance are discussed

Abstract:

This contribution is a condensed version of an extended paper, where a contact manifold emerging in the interior of the phase space of a specific hyperbolic system of two nonlinear conservation laws is examined. The governing equations are modelling bidisperse suspensions, which consist of two types of small particles differing in size and viscosity that are dispersed in a viscous fluid. Based on the calculation of characteristic speeds, the elementary waves with the origin as left Riemann datum and a general right state in the phase space are classified. In particular, the dependence of the solution structure of this Riemann problem on the contact manifold is elaborated

Abstract:

We study the connection between the global liquidity crisis and the severe credit crunch experienced by finance companies (SOFOLES) in Mexico using firm-level data between 2001 and 2011. Our results provide supporting evidence that, as a result of the liquidity shock, SOFOLES faced severely restricted access to their main funding sources (commercial bank loans, loans from other organizations, and public debt markets). After controlling for the potential endogeneity of their funding, we find that the liquidity shock explains 64 percent of SOFOLES' credit contraction during the recent financial crisis (2008-2009). We use our estimates to disentangle supply from demand factors as determinants of the credit contraction. After controlling for the large decline in loan demand during the financial crisis, our findings suggest that supply factors (such as nonperforming loans and lower liquidity buffers) also played a significant role. Finally, we find that financial deregulation implemented in 2006 may have amplified the effects of the global liquidity shock

Abstract:

We develop a two-country, three-sector model to quantify the effects of Korean trade policies for structural change from 1963 through 2000. The model features non-homothetic preferences, Armington trade, proportional import tariffs and export subsidies, and is calibrated to match sectoral value added data on Korean production and trade. Korea's tariff liberalization increased imports and trade, especially agricultural imports, accelerating de-agriculturalization and intensifying industrialization. Korean subsidy liberalization lowered exports and trade, especially industrial exports, attenuating industrialization. Thus, while individually powerful agents for structural change, Korea's tariff and subsidy reforms offset each other. Subsidy reform dominated quantitatively; lower trade, higher agricultural and lower industrial employment shares, and slower industrialization were observed than in a counterfactual economy with no post-1963 policy reform

Abstract:

In this article, we address the problem of adaptive state observation of linear time-varying systems with delayed measurements and unknown parameters. Our new developments extend the results reported in our recent works. The case with known parameters has been studied by many researchers; however, in this article we show that the generalized parameter estimation-based observer design provides a very simple solution for the unknown parameter case. Moreover, when this observer design technique is combined with the dynamic regressor extension and mixing estimation procedure, the estimated state and parameters converge in fixed time under extremely weak excitation assumptions
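
The mixing step of the dynamic regressor extension and mixing (DREM) procedure mentioned above can be summarized as follows: from a linear regression y = phi^T theta one builds an extended square regression, for instance by applying delay or filter operators, and multiplies by the adjugate matrix to decouple the parameters,

    Y = \Phi\,\theta \quad \Longrightarrow \quad \operatorname{adj}(\Phi)\,Y = \det(\Phi)\,\theta, \qquad \mathcal{Y}_i = \Delta\,\theta_i \ \text{ with } \ \Delta := \det(\Phi),

yielding scalar regressions whose estimators converge under excitation conditions on the single signal Delta. This is a generic statement of the procedure, not the article's exact construction.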

Abstract:

We consider an otherwise conventional monetary growth model in which spatial separation and limited communication create a transactions role for currency, and stochastic relocation gives rise to financial intermediaries. In this framework we consider how changes in fiscal and monetary policy, and in reserve requirements, affect inflation, capital formation, and nominal interest rates. There is also considerable scope for multiple equilibria; we show how reserve requirements that never bind along actual equilibrium paths can play an important role in avoiding undesirable equilibria. Finally, we demonstrate that changes in (apparently) nonbinding reserve requirements can have significant, real effects

Abstract:

Efficient modeling of probabilistic dependence is one of the most important areas of research in decision analysis. Several approximations to joint probability distributions have been developed with the intent of capturing the probabilistic dependence among variables. However, the accuracy of these methods across a wide range of settings is unknown. In this paper, we develop a methodology to test the accuracy of several of these approximations

Abstract:

We characterize dominant-strategy incentive compatibility with multidimensional types. A deterministic social choice function is dominant-strategy incentive compatible if and only if it is weakly monotone (W-Mon). The W-Mon requirement is the following: If changing one agent's type (while keeping the types of other agents fixed) changes the outcome under the social choice function, then the resulting difference in utilities of the new and original outcomes evaluated at the new type of this agent must be no less than this difference in utilities evaluated at the original type of this agent
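
In symbols, writing a' = f(t_i', t_-i) and a = f(t_i, t_-i) for the outcomes after and before agent i's type changes, weak monotonicity is exactly the displayed requirement:

    u_i(a', t_i') - u_i(a, t_i') \ \ge\ u_i(a', t_i) - u_i(a, t_i).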

Abstract:

Germany's two-year membership of the UN Security Council ended on 31 December 2020. Having started with big expectations, it was hit by the hard realities of increasingly divisive world politics in the time of a global pandemic. While Germany fell short of its ambitions on the two thematic issues of Women, Peace and Security, and Climate and Security, it can point to some successes, in particular with regard to Sudan. These successes did not, however, increase Germany's prospects of securing a permanent seat on the Council for the foreseeable future. This section provides an overview of Germany's activities on the Council and ends with a brief evaluation

Abstract:

As the only international organization that aspires to be universal both in terms of its membership and in terms of the policy fields in which it intervenes, the United Nations (UN) occupies a unique position in international lawmaking. Focusing on the UN's political and judicial or quasi-judicial organs does not, however, fully capture the organization's lawmaking activities. Instead, much of the UN's impact on international law today can be traced back to its civil servants. In this paper, I argue that international lawmaking today is best understood as processual and fluid, and that in contemporary lawmaking thus understood, the UN Secretariat plays an important role. I then propose two ways in which international civil servants add to international lawmaking: they engage in executive interpretation of broad and elusive norms, and they act as an interface between various actors on different governance levels. This raises novel legitimacy challenges for the international legal order

Abstract:

Germany's two-year membership in the UN Security Council ended on 31 December 2020. Starting with big expectations, it hit the hard realities of increasingly divisive world politics in times of a global pandemic. Nevertheless, Germany can point to some notable successes. Every eight years, Germany applies for a non-permanent membership in the Security Council. The recent term was the sixth time that Germany sat on the Council. It had applied with an ambitious program: strengthening the women, peace and security agenda; putting the link between climate change and security squarely on the Council's agenda; strengthening the humanitarian system; and lastly, revitalizing the issue of disarmament and arms control. It chose those four thematic issues to make its mark, besides the Council's day-to-day business dealing with country situations. Has Germany been successful?

Abstract:

When actors express conflicting views about the validity or scope of norms or rules in relation to other norms or rules in the international sphere, they often do so in the language of international law. This contribution argues that international law's hermeneutic acts as a common language that cuts across spheres of authority and can thus serve as a conflict management tool for interface conflicts. Often, this entails resorting to an international court. While acknowledging that courts cannot provide permanent solutions to the underlying political conflict, I submit that court proceedings are interesting objects of study that promote our understanding of how international legal argument operates as a conflict management device. I distinguish three dimensions of common legal form, using the well-known EC-Hormones case as illustration: a procedural, argumentative, and substantive dimension. While previous scholarship has often focused exclusively on the substantive dimension, I argue that the other two dimensions are equally important. In concluding, I reflect on a possible explanation as to why actors are disposed to resort to international legal argument even if this is unlikely to result in a final solution: there is a specific authority claim attached to international law qua law

Abstract:

The International Court of Justice (ICJ) occupies a special position amongst international courts. With its quasi-universal membership and ability to apply in principle the whole body of international law, it should be well-placed to adjudicate its cases with a holistic view on international law. This article examines whether the ICJ has lived up to this expectation. It analyses the Court's case load in the 21st century through the lens of inter-legality as the current condition of international law. With regard to institutional inter-legality, the authors observe an increase of inter-State proceedings based on largely the same facts that are initiated both before the ICJ and other courts and identify an increasing need to address such parallel proceedings. With regard to substantive inter-legality, the article provides analyses of ICJ cases in the fields of consular relations and human rights, as well as international environmental law. The authors find that the ICJ is rather reluctant in situating norms within their broader normative environment and restrictively applies only those rules its jurisdiction is based on. The Court does not make use of its abilities to adjudicate cases holistically and thus falls short of expectations raised by its own members 20 years ago

Abstract:

Contemporary international law often presents itself as an almost impenetrable thicket of overlapping legal regimes that materialize through multilateral treaties at the global, regional and sub-regional levels, customary law and other regulatory orders. Often, overlaps between different regimes manifest themselves as constellations of norms that appear to be conflictual. Valentin Jeutner's book entitled Irresolvable Norm Conflicts in International Law, based on his doctoral thesis defended at Cambridge University in 2015, is the latest in a recent series of monographs that address norm conflicts in international law. On his own account, his work differs from other studies in that it explores certain constellations of public international law where 'the legal order confronts legal subjects with seemingly impossible expectations', where 'a state of legal superposition' exists (at 6). As indicated in the title, Jeutner is not concerned with norm conflicts in general but, rather, with a specific subset of conflicts: those that are, legally speaking, irresolvable. Jeutner proposes that such irresolvable conflicts ought to be called 'legal dilemmas' and argues that such dilemmas ought to be solved by the sovereign actor facing a dilemma: the state. His book is informed by deontic logic and formulates an abstract theory of the legal dilemma as a novel concept for international law, which the author explicitly introduces as a stipulative definition - a term of art (at 19)

Abstract:

Hart devoted little attention to the rule of adjudication, and so did the specialized literature. The purpose of this paper is to go beyond the scant indications Hart offered on the subject of the rule of adjudication and to detail the function it performs within his conception of law. The method chosen is essentially reconstructive: the aim is not to take inspiration from Hart to elaborate one's own notion of the rule of adjudication, but to highlight the potential, as well as the limits, of this type of secondary rule. To this end, the connections between the rule of adjudication, on the one hand, and coercion and legal interpretation, on the other, are first explored: the objective is to outline the theoretical position of judges, which emerges, in particular, from examining their (different) tasks in relation to doubtful cases and clear cases. This theoretical position is then subjected to criticism; paying particular attention to the problem of the finality and infallibility of judicial decisions, it is shown that Hart regarded the application of law in an excessively declarative way

Abstract:

H.L.A. Hart did not pay much attention to the rule of adjudication, and neither did scholars. This paper aims to go beyond what Hart explicitly says about it and to give an account of its role within his concept of law. The perspective will be reconstructive, since the goal is not to develop an original concept of the rule of adjudication inspired by Hart's theory of law, but rather to shed light on the potential, but also the limits, of this kind of secondary rule. Therefore, the article will first explore the interrelation between the rule of adjudication, on the one hand, and coercion and legal interpretation, on the other: the goal is to outline the theoretical position of judges, which becomes clear when analyzing their (different) tasks in easy and hard cases. Then, this position is put under criticism; by examining, in particular, the well-known problem of the infallibility and finality of judicial decisions, it is shown that Hart considered the judicial application of law in an excessively declarative way

Abstract:

The nested Kingman coalescent describes the ancestral tree of a population undergoing neutral evolution at the level of individuals and at the level of species, simultaneously. We study the speed at which the number of lineages descends from infinity in this hierarchical coalescent process and prove the existence of an early-time phase during which the number of lineages at time t decays as 2γ/(ct²), where c is the ratio of the coalescence rates at the individual and species levels, and the constant γ ≈ 3.45 is derived from a recursive distributional equation for the number of lineages contained within a species at a typical time

Abstract:

In this paper, we study the genealogical structure of a Galton-Watson process with neutral mutations. Namely, we extend in two directions the asymptotic results obtained in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697]. In the critical case, we construct the version of the model in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697], conditioned not to be extinct. We establish a version of the limit theorems in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697], when the reproduction law has infinite variance and is in the domain of attraction of an α-stable distribution, both for the unconditioned process and for the process conditioned to nonextinction. In the latter case, we obtain the convergence (after re-normalization) of the allelic sub-populations towards a tree-indexed CSBP with immigration

Abstract:

We consider the compact space of pairs of nested partitions of N, where by analogy with models used in molecular evolution, we call "gene partition" the finer partition and "species partition" the coarser one. We introduce the class of nondecreasing processes valued in nested partitions, assumed Markovian and with exchangeable semigroup. These processes are said to be simple when each partition only undergoes one coalescence event at a time (though possibly at the same time). Simple nested exchangeable coalescent (SNEC) processes can be seen as the extension of Λ-coalescents to nested partitions. We characterize the law of SNEC processes as follows. In the absence of gene coalescences, species blocks undergo Λ-coalescent type events and in the absence of species coalescences, gene blocks lying in the same species block undergo i.i.d. Λ-coalescents. Simultaneous coalescences of the gene and species partitions are governed by an intensity measure ν_s on (0,1] × M₁([0,1]) providing the frequency of species merging and the law from which the frequencies of genes merging in each coalescing species block are (independently) drawn. As an application, we also study the conditions under which a SNEC process comes down from infinity

Abstract:

Survey research suggests that managers and employees see remote work very differently. Managers are more likely to say it harms productivity, while employees are more likely to say it helps. The difference may be commuting: Employees consider hours not spent commuting in their productivity calculations, while managers don't. The answer is clearer communication and policies, and for many companies the best policy will be managed hybrid with two to three mandatory days in office

Abstract:

Many CEOs are publicly gearing up for yet another return-to-office push. Privately, though, executives expect remote work to keep on growing, according to a new survey. That makes sense: Employees like it, the technology is improving, and - at least for hybrid work - there seems to be no loss of productivity. Despite the headlines, executives expect both hybrid and fully remote work to keep increasing over the next five years

Abstract:

In this paper we provide two significant extensions to the recently developed parameter estimation-based observer design technique for state-affine systems. First, we consider the case when the full state of the system is reconstructed in spite of the presence of unknown, time-varying parameters entering into the system dynamics. Second, we address the problem of reduced-order observers with finite convergence time. For the first problem, we propose a simple gradient-based adaptive observer that converges asymptotically under the assumption of generalised persistent excitation. For the reduced-order observer we invoke the advanced dynamic regressor extension and mixing parameter estimator technique to show that we can achieve finite convergence time under the weak interval excitation assumption. Simulation results that illustrate the performance of the proposed adaptive observers are given. These include an unobservable system, an example reported in the literature and the widely popular, and difficult to control, single-ended primary inductor converter

Abstract:

The problem of estimating constant parameters from a standard vector linear regression equation in the absence of sufficient excitation in the regressor is addressed. The first step to solve the problem consists in transforming this equation into a set of scalar ones using the well-known dynamic regressor extension and mixing technique. Then, a novel procedure to generate new scalar exciting regressors is proposed. The superior performance of a classical gradient estimator using this new regressor, instead of the original one, is illustrated with comprehensive simulations
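
To make the scalarization step concrete, here is a minimal numerical sketch of DREM for a discrete-time regression y(t) = φ(t)ᵀθ, using pure delays as the extension filters; the signals, filter choice and gains are illustrative assumptions, not those of the paper.

    import numpy as np

    # DREM sketch: extend the regression by stacking q delayed copies,
    # then "mix" by the adjugate of the extended regressor matrix, which
    # decouples the q-dimensional regression into q scalar ones.
    theta = np.array([1.5, -0.7])     # true parameters (unknown to the estimator)
    q, steps, lr = 2, 4000, 5.0
    theta_hat = np.zeros(q)
    phis, ys = [], []

    for t in range(steps):
        phi = np.array([np.sin(0.1 * t), 1.0])   # original regressor
        phis.append(phi)
        ys.append(phi @ theta)
        if t < q - 1:
            continue
        Phi = np.array([phis[t - i] for i in range(q)])   # extended regressor, q x q
        Y = np.array([ys[t - i] for i in range(q)])
        adj = np.array([[Phi[1, 1], -Phi[0, 1]],          # adjugate (q = 2 case)
                        [-Phi[1, 0], Phi[0, 0]]])
        Delta = np.linalg.det(Phi)    # the new scalar regressor
        Yi = adj @ Y                  # q decoupled scalar equations Yi = Delta * theta
        theta_hat += lr * Delta * (Yi - Delta * theta_hat)  # scalar gradient updates

    print(theta_hat)                  # approaches [1.5, -0.7]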

Abstract:

Wind turbines are often controlled to harvest the maximum power from the wind, which corresponds to operation at the top of the bell-shaped power coefficient graph. Such a mode of operation may be achieved by implementing an extremum-seeking data-based strategy, which is an invasive technique that requires the injection of harmonic disturbances. Another approach is based on knowledge of the analytic expression of the power coefficient function, information usually unreliably provided by the turbine manufacturer. In this paper we propose a globally exponentially convergent on-line estimator of the parameters entering into the windmill power coefficient function. This corresponds to the solution of an identification problem for a nonlinear, nonlinearly parameterized, underexcited system. To the best of our knowledge, this is the first solution to this challenging, practically important problem

Abstract:

In this paper we address the problem of adaptive state observation of affine-in-the-states time-varying systems with delayed measurements and unknown parameters. We build on the results proposed in [Bobtsov et al. 2021a] and [Bobtsov et al. 2021c]. The case with known parameters has been studied by many researchers (see [Sanz et al. 2019, Bobtsov et al. 2021b] and references therein), where, similarly to the approach adopted here, the system is treated as a linear time-varying system. We show that the parameter estimation-based observer (PEBO) design proposed in [Ortega et al. 2015, 2021] provides a very simple solution for the unknown parameter case. Moreover, when PEBO is combined with the dynamic regressor extension and mixing (DREM) estimation technique [Aranovskiy et al. 2016, Ortega et al. 2019], the estimated state converges in fixed time with extremely weak excitation assumptions

Abstract:

The problem of effective use of Phasor Measurement Units (PMUs) to enhance power systems awareness and security is a topic of key interest. The central question to solve is how to use these new measurements to reconstruct the state of the system. In this article, we provide the first solution to the problem of (globally convergent) state estimation of multimachine power systems equipped with PMUs and described by the fourth-order flux-decay model. This article is a significant extension of our previous result, where this problem was solved for the simpler third-order model, for which it is possible to recover algebraically part of the unknown state. Unfortunately, this property is lost in the more accurate fourth-order model, and we are confronted with the problem of estimating the full state vector. The design of the observer relies on two recent developments proposed by the authors, a parameter estimation based approach to the problem of state estimation and the use of the Dynamic Regressor Extension and Mixing (DREM) technique to estimate these parameters. The use of DREM allows us to overcome the problem of lack of persistent excitation that stymies the application of standard parameter estimation designs. Simulation results illustrate the latter fact and show the improved performance of the proposed observer with respect to a locally stable gradient-descent-based observer

Abstract:

In this article, we propose a PI passivity-based controller, applicable to a large class of switched power converters, that ensures global state regulation to a desired equilibrium point. A solution to this problem requires full state-feedback, which makes it practically unfeasible. To overcome this limitation we construct a state observer that is implementable with measurements that are available in practical applications. The observer reconstructs the state in finite-time, ensuring global convergence of the PI. An adaptive version of the observer, where some parameters of the converter are estimated, is also proposed. The excitation requirement for the observer is very weak and is satisfied in normal operation of the converters. Realistic simulation results illustrate the excellent performance and robustness vis-à-vis noise and parameter uncertainty of the proposed output-feedback PI

Abstract:

In this paper we address the problem of state observation of linear time-varying (LTV) systems with delayed measurements, which has attracted the attention of many researchers (see Sanz et al. (2019) and references therein). We show that the parameter estimation-based observer (PEBO) design proposed in Ortega, Bobtsov, Nikolaev, Schiffer, and Dochain (2021) and Ortega, Bobtsov, Pyrkin, and Aranovskiy (2015) provides a very simple solution to the problem with reduced prior knowledge. Moreover, when PEBO is combined with the dynamic regressor extension and mixing (DREM) estimation technique (Aranovskiy, Bobtsov, Ortega, & Pyrkin, 2017; Ortega, Gerasimov, Barabanov, & Nikiforov, 2019), the estimated state converges in fixed time with extremely weak excitation assumptions

Abstract:

How many dimensions adequately characterize voting on U.S. trade policy? How are these dimensions to be interpreted? This paper seeks those answers in the context of voting on the landmark 1988 Omnibus Trade and Competitiveness Act. The paper takes steps beyond the existing literature. First, using a factor analytic approach, the dimension issue is examined to determine whether subsets of roll call votes on trade policy are correlated. A factor-analytic result allows the use of a limited number of votes for this purpose. Second, a structural model with latent variables is used to find what economic and political factors comprise these dimensions. The study yields two main findings. More than one dimension determines voting in the Senate, with the main dimension driven by economic interest, not ideology. Although two dimensions are required to fully account for House voting, one dimension dominates. That dimension is driven primarily by party. Based on reported evidence, and a growing consensus in the congressional studies literature, this finding is attributed to interest-based leadership that evolves in order to solve collective action problems faced by individual legislators

Abstract:

The World Trade Organization (WTO) has three main functions: (i) it provides a negotiation forum where Members can negotiate new agreements and understandings, (ii) it provides a judicial forum where trade disputes between countries can be settled, and (iii) it acts as an executive forum for the administration and application of the WTO agreements, including capacity-building and training in this respect. Currently, it is only performing its executive function, as the other two functions remain stalled. The authors in this article analyse two challenges that have contributed to paralysing the WTO's legislative and judicial functions. With this assessment, the authors suggest that the 'real elephant in the room', i.e., the root cause behind these challenges, is the avoidable 'consensus-based decision-making'

Abstract:

For a multilateral system to be sustainable, it is important to have several escape clauses which can allow countries to protect their national security concerns. However, when these escape windows are too wide or ambiguous, defining their ambit and scope becomes challenging yet crucial to ensure that they are not open to misuse. The recent Panel Ruling in Russia-Measures Concerning Traffic in Transit is the very first attempt by the WTO to clarify the scope and ambit of National Security Exception. In this paper, we argue that the Panel has employed a combination of an objective and a subjective approach to interpret this exception. This hybrid approach to interpret GATT Article XXI (b) provides a systemic balance between the sovereign rights of the members to invoke the security exception and their right to free and open trade. But has this Ruling opened Pandora's box? In this paper, we address this issue by providing an in-depth analysis of the Panel's decision

Abstract:

This essay is concerned with computation as a tool for the analysis of mathematical models in economics. It is our contention that the use of high-performance computers is likely to play as substantial a role in providing understanding of economic models as it does with regard to models in the physical and biological sciences. The main thrust of our commentary is that numerical simulations of mathematical models are in certain respects like experiments performed in a laboratory, and that this view imposes conditions on the way they are carried out and reported

Abstract:

Our research aims to help children with autism improve their social interactions through discrete messages received on a waist smart band. In this paper, we describe the design and development of an interactive system on a Microsoft Band 2 and present the results of an evaluation with three students, which give us positive evidence that using this form of support can increase the quantity and quality of their social interactions

Abstract:

The Historia Roderici Campidocti is a medieval Hispano-Latin work that recounts the most significant events in the brilliant military career of Rodrigo Díaz de Vivar, El Cid Campeador. In this note, we present some comments on the appearance and function of birds in that work, as well as in other texts on the Cid, in order to establish possible relationships and influences among them based on a shared commonplace

Abstract:

The Historia Roderici Campidocti is a medieval Hispano-Latin work relating the most important events in the military career of Rodrigo Díaz de Vivar, El Cid Campeador. In this article, we comment on the appearance and role of birds in this work and in other texts about the Cid, in order to establish their potential relationships and influences

Abstract:

In this article, we present some new results for the design of PID passivity-based controllers (PBCs) for the regulation of port-Hamiltonian (pH) systems. The main contributions of this article are: (i) new algebraic conditions for the explicit solution of the partial differential equation required in this design; (ii) revealing the deleterious impact of the dissipation obstacle that limits the application of the standard PID-PBC to systems without pervasive dissipation; (iii) the proposal of a new PID-PBC which is generated by two passive outputs, one with relative degree zero and the other with relative degree one. The first output ensures that the PID-PBC is not hindered by the dissipation obstacle, while the relative degree of the second passive output allows the inclusion of a derivative term. Making the procedure more constructive and removing the requirement on the dissipation significantly extends the realm of application of PID-PBC. Moreover, allowing the possibility of adding a derivative term to the control enhances its transient performance

Abstract:

In this Action-Process-Object-Schema (APOS) study, we aim to study the extent to which results from our previous research on student understanding of two-variable functions can be replicated in a different institutional context. We conducted the experience at a university in another country and with a different instructor than in previous studies. The experience consisted of comparing two sections of the same course: one taught through lectures and the other using activities designed with APOS theory and the ACE didactical strategy (Activities, Class discussion, Exercises). We show the results of a comparison of students' performance in both groups and give evidence of the generalizability of our previously obtained research results and the possible replication of didactic aspects across institutions. We found that using the APOS theory didactical approach favors a deeper understanding of two-variable functions. We also found that factors other than the activity sets and teaching strategy affected the replication

Abstract:

In this paper we provide a necessary and sufficient condition for the solvability of inhomogeneous Cauchy-Riemann type systems where the datum consists of continuous C-valued functions and we describe its general solution by embedding the system in an appropriate quaternionic setting

Abstract:

This paper aims to study several boundary value properties, to derive a Poincaré–Bertrand formula and to solve some Dirichlet-type problems for the 2D quaternionic metaharmonic layer potentials

Abstract:

The Cimmino system offers a natural and elegant generalization to the four-dimensional case of the Cauchy–Riemann system of first-order complex partial differential equations. Recently, it has been proved that many facts from holomorphic function theory have their extensions to the Cimmino system theory. In the present work a Poincaré–Bertrand formula related to the Cauchy–Cimmino singular integrals over piecewise Lyapunov surfaces in R^4 is derived with recourse to arguments involving quaternionic analysis. Furthermore, this paper obtains some analogues of the Hilbert formulas on the unit 3-sphere and on the 3-dimensional space for the theory of the Cimmino system

Abstract:

Starting from Sinclair's 1976 work [6] on automatic continuity of linear operators on Banach spaces, we prove that sequences of intertwining continuous linear maps are eventually constant with respect to the separating space of a fixed linear map. Our proof uses a gliding hump argument. We also consider aspects of continuity of linear functions between locally convex spaces and prove that such a linear function T from the locally convex space X to the locally convex space Y is continuous whenever the separating space G(T) is the zero vector in Y and for which X and Y satisfy conditions for a closed graph theorem

Abstract:

We prove an extension of the Pareto optimization criterion to locally complete locally convex vector spaces to guarantee the existence of fixed points of set-valued maps

Abstract:

In this article we discuss the relationship between three types of locally convex spaces: docile spaces, Mackey first countable spaces, and sequentially Mackey first countable spaces. More precisely, we show that docile spaces are sequentially Mackey first countable. We also show the existence of sequentially Mackey first countable spaces that are not Mackey first countable, and we characterize Mackey first countable spaces in terms of normability of certain inductive limits

Abstract:

Bisectors of line segments are quite simple geometrical objects. Despite their simplicity, they have many surprising and useful properties. As metric objects, the shape of bisectors depends upon the metric considered. This article discusses geometric properties of bisectors of line segments in the plane, when the bisectors are taken with respect to the usual p-norms. Although the shape of bisectors changes as their defining p-norm varies, it is shown that the bisectors share exactly three points (or infinitely many points in exceptional cases determined by the orientation of the base line segment)
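
A quick way to see how the bisector changes with the norm is to sample the zero set of the norm difference on a grid; a small sketch, with arbitrary endpoints and tolerance (overlaying the masks for several values of p makes the shared points visible):

    import numpy as np

    a, b = np.array([-1.0, 0.0]), np.array([1.0, 0.5])

    def bisector_mask(p, lim=3.0, n=600, tol=1e-2):
        # Approximate zero set of f(x) = ||x - a||_p - ||x - b||_p
        xs = np.linspace(-lim, lim, n)
        X, Y = np.meshgrid(xs, xs)
        pts = np.stack([X, Y], axis=-1)
        f = (np.linalg.norm(pts - a, ord=p, axis=-1)
             - np.linalg.norm(pts - b, ord=p, axis=-1))
        return np.abs(f) < tol

    for p in (1, 2, 4):
        print(p, bisector_mask(p).sum())   # sampled bisector size per p-norm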

Abstract:

Ekeland's variational principle and the existence of critical points of dynamical systems, also known as multiobjective optimization, have been proved in the setting of locally complete spaces. In this article we prove that these two properties can be deduced one from the other under certain convexity conditions

Abstract:

In this paper we prove Ekeland's variational principle in the setting of locally complete spaces for functions that are lower semicontinuous from above and bounded below. We use this theorem to prove Caristi's fixed point theorem in the same setting and also for lower semicontinuous functions

Abstract:

We define a generalization of Mackey first countability and prove that it is equivalent to being docile. A consequence of the main result is to give a partial affirmative answer to an old question of Mackey regarding arbitrary quotients of Mackey first countable spaces. Some applications of the main result to spaces such as inductive limits are also given

Abstract:

For a partition P of a set C, we prove that if P maximizes the generalized utilitarian social welfare function, then P is a Pareto optimal partition. We also give a condition under which the maximizing partition is unique

Abstract:

More than half of the students in the Latin American and Caribbean region are below PISA level 1, which means that the majority of the students in our region cannot identify information and carry out routine procedures according to direct instructions in explicit situations. There have been some good experiences in each country to reverse this situation, but they are not enough and are not happening in all countries. I will talk about these experiences. In all of them, professional mathematicians need to help teachers gain the necessary knowledge and become more effective instructors who can raise the standard of every student

Abstract:

In this paper we prove an extension of Ekeland's variational principle in the setting of locally complete spaces. We also present an equilibrium version of the Ekeland-type variational principle, a Caristi-Kirk type fixed point theorem for multivalued maps and a Takahashi minimization theorem; we then prove that they are equivalent

Abstract:

In this paper we study the Pareto optimization problem in the framework of locally convex spaces with the restricted assumption that only some related sets are locally complete

Abstract:

We prove an extension of Ekeland’s variational principle to locally complete spaces which uses subadditive, strictly increasing continuous functions as perturbations

Abstract:

We prove that for the inductive limit of sequentially complete spaces, regularity or local completeness implies the Banach Disk Closure Property (BDCP) (an inductive limit enjoys the BDCP if all Banach disks in the steps of the inductive limit are such that their closures, with respect to the inductive limit topology, are Banach disks as well). In particular, we obtain that for an inductive limit of sequentially complete spaces, regularity is equivalent to the BDCP plus an “almost regularity” condition

Abstract:

We study the heredity of local completeness and the strict Mackey convergence property from the locally convex space E to the space of absolutely p-summable sequences on E, lp(E) for 1 ≤ p < ∞

Abstract:

We define the lp,q-summability property and study the relations between the lp,q-summability property, Banach-Mackey spaces and locally complete spaces. We prove that, for c0-quasibarrelled spaces, being Banach-Mackey and being locally complete are equivalent. The last section is devoted to the study of the CS-closed sets introduced by Jameson and Kakol

Abstract:

In this paper we consider the Sobolev-Slobodeckij spaces W^(m,p)(R, E), where E is a strict (LF)-space, m ∈ (0, ∞) \ N and p ∈ [1, ∞). We prove that W^(m,p)(R, E) has the approximation property provided E has it; furthermore, if E is a Banach space with the strict approximation property, then W^(m,p)(R, E) has this property

Abstract:

This is a study of the relationship between the concepts of Mackey, ultrabornological, bornological, barrelled, and infrabarrelled spaces and the concept of fast completeness. An example of a fast complete but not sequentially complete space is presented.

Abstract:

The notion of "praxeology" from the anthropological theory of the didactic (ATD) can be used as a framework to approach what has recently been called the networking of theories in mathematics education. Theories are interpreted as research praxeologies, and different modalities of "dialogues" between research praxeologies are proposed, based on alternatively considering the main features and proposals of one theory from the perspective of the other. To illustrate this networking methodology, we initiate a dialogue between APOS (action-process-object-schema) and the ATD itself. It starts from the theoretical component of both research praxeologies followed by the technological and technical ones. Both dialogue modalities and the resulting insights are illustrated, and the elements of APOS and the ATD that the dialogue can promote and develop are underlined. The results found indicate that a complete dialogue taking into account all components of research praxeologies appears as an unavoidable step in the networking of research praxeologies

Abstract:

We study transaction costs for making deposits within the privatized pension system in Mexico. We analyze an expansion of access channels for additional (voluntary) contributions at 7-Eleven stores, followed by a media campaign providing information on this policy and persuasive messages to save. We estimate a differential 6-9 percent increase in the volume of transactions post-policy in municipalities with 7-Eleven relative to those without. However, due to smaller deposits compared to pre-policy sizes, we find modest effects on the flow of savings. Contribution size was not just smaller for marginal savers, but also decreased significantly for some inframarginal savers
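
The comparison described above has the structure of a difference-in-differences design. A schematic version on simulated data follows; the variable names, data and the planted 7 percent effect are purely illustrative, not the paper's data or estimates.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Difference-in-differences: municipalities with vs. without the new
    # access channel, observed before vs. after the policy change.
    rng = np.random.default_rng(0)
    n = 4000
    df = pd.DataFrame({
        "treat": rng.integers(0, 2, n),   # municipality has the new channel
        "post": rng.integers(0, 2, n),    # observation is after the policy
    })
    df["log_tx"] = (1.0 + 0.2 * df["treat"] + 0.1 * df["post"]
                    + 0.07 * df["treat"] * df["post"]   # planted differential effect
                    + 0.3 * rng.standard_normal(n))
    m = smf.ols("log_tx ~ treat * post", data=df).fit()
    print(m.params["treat:post"])         # recovers roughly 0.07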

Abstract:

This paper proves the large deviation principle for a class of non-degenerate small noise diffusions with discontinuous drift and with state-dependent diffusion matrix. The proof is based on a variational representation for functionals of strong solutions of stochastic differential equations and on weak convergence methods.

Abstract:

We consider the construction of Markov chain approximations for an important class of deterministic control problems. The emphasis is on the construction of schemes that can be easily implemented and which possess a number of highly desirable qualitative properties. The class of problems covered is that for which the control is affine in the dynamics and with quadratic running cost. This class covers a number of interesting areas of application, including problems that arise in large deviations, risk-sensitive and robust control, robust filtering, and certain problems in computer vision. Examples are given as well as a proof of convergence

Abstract:

In this article, we study the conditions for convergence of the recently introduced dynamic regressor extension and mixing (DREM) parameter estimator when the extended regressor is generated using linear time-invariant filters. In particular, we are interested in relating these conditions with the ones required for convergence of the classical gradient (or least squares) estimator, namely the well-known persistent excitation (PE) requirement on the original regressor vector, φ(t) ∈ R^q, with q ∈ N the number of unknown parameters. Moreover, we study the case when only interval excitation (IE) is available, under which DREM, concurrent, and composite learning schemes ensure global convergence, with DREM converging in finite time. Regarding PE, we prove, under some mild technical assumptions, that if φ is PE, then the scalar regressor of DREM, Δ_N ∈ R, is also PE, ensuring exponential convergence. Concerning IE, we prove that if φ is IE, then Δ_N is also IE. All these results are established in the almost sure sense, namely proving that the set of filter parameters for which the claims do not hold is of zero measure. The main technical tool used in our proof is inspired by a study of Luenberger observers for nonautonomous nonlinear systems recently reported in the literature
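
For reference, the two excitation notions discussed above are standard and can be written as follows (q is the number of parameters; T, δ and t_c are constants):

    \text{PE:}\quad \exists\, T,\delta>0 \ \text{ such that }\
    \int_{t}^{t+T}\phi(s)\,\phi(s)^{\top}\,ds \;\ge\; \delta I_q
    \quad \forall\, t\ge 0,

    \text{IE:}\quad \exists\, t_c,\delta>0 \ \text{ such that }\
    \int_{0}^{t_c}\phi(s)\,\phi(s)^{\top}\,ds \;\ge\; \delta I_q.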

Abstract:

This article aims to provide a new problem formulation of path following for mechanical systems without time parameterization or guidance laws, namely, we express the control objective as an orbital stabilization problem. It is shown that it is possible to adapt the immersion and invariance technique to design static state feedback controllers that solve this problem. In particular, we select the target dynamics adopting the recently introduced Mexican sombrero energy assignment method. To demonstrate the effectiveness of the proposed method we apply it to control underactuated marine surface vessels

Abstract:

In this article we propose an energy pumping-and-damping technique to regulate nonholonomic systems described by kinematic models. The controller design follows the widely popular interconnection and damping assignment passivity-based methodology, with the free matrices partially structured. Two asymptotic regulation objectives are considered: drive the state to zero or drive the system's total energy to a desired constant value. In both cases, the control laws are smooth, time-invariant state feedbacks. For the nonholonomic integrator we give an almost global solution for both problems, with the objectives ensured for all system initial conditions starting outside a set that has zero Lebesgue measure and is nowhere dense. For the general case of higher-order nonholonomic systems in chained form, a local convergence result is given. Simulation results comparing the performance of the proposed controller with other existing designs are also provided

Abstract:

Morphometric datasets only convey useful information about variation when measurement landmarks and relevant anatomical axes are clearly defined. We propose that anatomical axes of 3D digital models of bones can be standardized prior to measurement using an algorithm that automatically finds a universal geometric alignment among sampled bones. As a case study, we use teeth of "prosimian" primates. In this sample, equivalent occlusal planes are determined automatically using the R-package auto3dgm. The area of projection into the occlusal plane for each tooth is the measurement of interest. This area is used in computation of a shape metric called relief index (RFI), the natural log of the square root of crown area divided by the square root of occlusal plane projection area. We compare mean and variance parameters of area and RFI values computed from these automatically orientated tooth models with values computed from manually orientated tooth models. According to our results, the manual and automated approaches yield extremely similar mean and variance parameters. The only differences that plausibly modify interpretations of biological meaning slightly favor the automated treatment because a greater proportion of differences among subsamples in the automated treatment are correlated with dietary differences. We conclude that—at least for dental topographic metrics—automated alignment recovers a variance pattern that has meaning similar to previously published datasets based on manual data collection. Therefore, future applications of dental topography can take advantage of automatic alignment to increase objectivity and repeatability
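
Since the relief index stated above is the log of a ratio of square roots, it reduces to half the log of the area ratio; a one-line sketch (the area values are placeholders for measurements taken from the 3D tooth models):

    import numpy as np

    def relief_index(crown_area, occlusal_projection_area):
        # RFI = ln( sqrt(crown area) / sqrt(projected area) )
        #     = 0.5 * ln( crown area / projected area )
        return 0.5 * np.log(crown_area / occlusal_projection_area)

    print(relief_index(14.2, 9.8))   # illustrative values only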

Abstract:

In "Why Democracy Protests Do Not Diffuse," we examine whether or not countries are significantly more likely to experience democracy protests when one or more of their neighbors recently experienced a similar protest. Our goal in so doing was not to attack the existing literature or to present sensational results, but to evaluate the extent to which the existing literature can explain the onset of democracy protests more generally. In addition to numerous studies attributing to diffusion the proliferation of democracy protests in four prominent waves of contention in Europe (1848, 1989, and early 2000s) and in the Middle East and North Africa (2011), there are multiple academic studies, as well as countless articles in the popular press, claiming that democracy protests have diffused outside these well-known regions and periods of contention (e.g., Bratton and van de Walle 1992; Weyland 2009; della Porta 2017). There are also a handful of cross-national statistical analyses that hypothesize that anti-regime contention, which includes but is not limited to democracy protests, diffuses globally (Braithwaite, Braithwaite, and Kucik 2015; Gleditsch and Rivera 2017; Escriba`-Folch, Meseguer, and Wright 2018). Herein, we discuss what we can and cannot conclude from our analysis about the diffusion of democracy protests and join our fellow forum participants in identifying potential areas for future research. Far from closing this debate, we hope our article will stimulate further conversations and analyses about the theoretical and empirical bases of contention, diffusion, and democratization

Abstract:

One of the primary international factors proposed to explain the geographic and temporal clustering of democracy is the diffusion of democracy protests. Democracy protests are thought to diffuse across countries, primarily, through a demonstration effect, whereby protests in one country cause protests in another based on the positive information that they convey about the likelihood of successful protests elsewhere and, secondarily, through the actions of transnational activists. In contrast to this view, we argue that, in general, democracy protests are not likely to diffuse across countries because the motivation for and the outcome of democracy protests result from domestic processes that are unaffected or undermined by the occurrence of democracy protests in other countries. Our statistical analysis supports this argument. Using daily data on the onset of democracy protests around the world between 1989 and 2011, we find that in this period, democracy protests were not significantly more likely to occur in countries when democracy protests had occurred in neighboring countries, either in general or in ways consistent with the expectations of diffusion arguments

Abstract:

In this chapter, we briefly discuss key dynamical features of a generalised Schnakenberg model. This model sheds light on the morphogenesis of the plant root hair initiation process. Our discussion focuses on Arabidopsis thaliana, which is experimentally well characterized and a prime model plant for researchers. Here, relationships between physical attributes and biochemical interactions that occur at a sub-cellular level are revealed. The model consists of a two-component non-homogeneous reaction-diffusion system, which takes into account an on-and-off switching process of small G-protein family members called Rho-of-Plants (ROPs). This interaction, however, is catalysed by the plant hormone auxin, which is known to play a crucial role in many morphogenetic processes in plants. Upon applying semi-strong theory and performing numerical bifurcation analysis, together with time-step simulations, we present results from a thorough analysis of the dynamics of spatially localised structures in 1D and 2D spatial domains. These compelling dynamical features are found to give rise to a wide variety of patterns. Key features of the full analysis are discussed

Abstract:

In this manuscript we discuss one of the contributions that dynamical systems theory makes to the understanding of the growth of a population of individuals with limited resources. In this context, we briefly explore some of the ideas that allow us to understand two mechanisms of vital importance in ecology: competition and the interaction with the resources of the environment. The approach we follow consists of presenting the key results published in [1] by R. Dilao and T. Domingos for the dynamics of a generalization of the Leslie model. We also present the results of some numerical experiments in which a delay in the evolution variable is considered in the dynamics of resource growth, together with a simple competition dynamics among the members of the population

Abstract:

This article describes the essential features that determine chaotic behavior in dynamical systems. To this end, the analysis of the logistic equation is reproduced. The concepts of fractal dimension and self-similarity are also illustrated by means of the Mandelbrot set. Sensitivity to initial conditions is exemplified by the Lorenz system. Finally, the necessary ingredients of the Ruelle–Takens–Newhouse route to chaos are presented in the Barrio–Varea–Aragón–Maini reaction-diffusion system

Abstract:

This chapter presents an introduction to the conditions that spontaneously produce localized structures in reaction-diffusion systems whose kinetics contain quadratic and cubic nonlinearities, in the parameter regions where this phenomenon occurs. A brief sample of examples of patterns in physical and biological systems of a localized nature is presented. Next, the conditions that give rise to the Turing bifurcation and the elementary properties of the mechanism known as homoclinic snaking are explored. The main objective of this chapter is to set out the ingredients that capture the essence of both machineries and that therefore induce the appearance of localized structures in systems that generally produce extended patterns. A generalized Schnakenberg-type system with source and loss terms in one and two spatial dimensions is used as an example

Abstract:

A generalized Schnakenberg reaction-diffusion system with source and loss terms and a spatially dependent coefficient of the nonlinear term is studied both numerically and analytically in two spatial dimensions. The system has been proposed as a model of hair initiation in the epidermal cells of plant roots. Specifically the model captures the kinetics of a small G-protein ROP, which can occur in active and inactive forms, and whose activation is believed to be mediated by a gradient of the plant hormone auxin. Here the model is made more realistic with the inclusion of a transverse coordinate. Localized stripe-like solutions of active ROP occur for high enough total auxin concentration and lie on a complex bifurcation diagram of single- and multipulse solutions. Transverse stability computations, confirmed by numerical simulation show that, apart from a boundary stripe, these one-dimensional (1D) solutions typically undergo a transverse instability into spots. The spots so formed typically drift and undergo secondary instabilities such as spot replication. A novel two-dimensional (2D) numerical continuation analysis is performed that shows that the various stable hybrid spot-like states can coexist. The parameter values studied lead to a natural, singularly perturbed, so-called semistrong interaction regime. This scaling enables an analytical explanation of the initial instability by describing the dispersion relation of a certain nonlocal eigenvalue problem. The analytical results are found to agree favorably with the numerics. Possible biological implications of the results are discussed

Abstract:

A mathematical analysis is undertaken of a Schnakenberg reaction-diffusion system in one dimension with a spatial gradient governing the active reaction. This system has previously been proposed as a model of the initiation of hairs from the root epidermis of Arabidopsis, a key cellular-level morphogenesis problem. This process involves the dynamics of the small G-proteins, Rho-of-Plants (ROPs), which bind to form a single localized patch on the cell membrane, prompting cell wall softening and subsequent hair growth. A numerical bifurcation analysis is presented as two key parameters, involving the cell length and the overall concentration of the auxin catalyst, are varied. The results show hysteretic transitions from a boundary patch to a single interior patch and to multiple patches whose locations are carefully controlled by the auxin gradient. The results are confirmed by an asymptotic analysis using semistrong interaction theory, leading to closed-form expressions for the patch locations and intensities. A close agreement between the numerical bifurcation results and the asymptotic theory is found for biologically realistic parameter values. Insight into the initiation of transition mechanisms is obtained through a linearized stability analysis based on a nonlocal eigenvalue problem. The results provide further explanation of the recent agreement found between the model and biological data for both wild-type and mutant hair cells
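
For readers who want to experiment, a minimal explicit finite-difference integrator for a classical Schnakenberg system is sketched below. The kinetics, parameter values and boundary conditions are generic textbook assumptions; they do not reproduce the paper's auxin-gradient model.

    import numpy as np

    # Classical Schnakenberg kinetics: u_t = eps*u_xx + a - u + u^2 v,
    #                                  v_t = D*v_xx + b - u^2 v,
    # zero-flux boundaries, explicit Euler in time (requires dt < dx^2/(2D)).
    Ldom, N = 50.0, 200
    dx = Ldom / (N - 1)
    dt, steps = 0.002, 100_000
    eps, D, a, b = 0.05, 10.0, 0.1, 0.9

    rng = np.random.default_rng(1)
    u = (a + b) + 0.01 * rng.standard_normal(N)   # perturbed uniform steady state
    v = (b / (a + b) ** 2) * np.ones(N)

    def lap(w):
        out = np.empty_like(w)
        out[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
        out[0] = 2.0 * (w[1] - w[0]) / dx**2      # Neumann (zero-flux) ends
        out[-1] = 2.0 * (w[-2] - w[-1]) / dx**2
        return out

    for _ in range(steps):
        u, v = (u + dt * (eps * lap(u) + a - u + u**2 * v),
                v + dt * (D * lap(v) + b - u**2 * v))
    # u now displays a stationary spatial pattern for these parameter values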

Abstract:

Subcritical Turing bifurcations of reaction-diffusion systems in large domains lead to spontaneous onset of well-developed localized patterns via the homoclinic snaking mechanism. This phenomenon is shown to occur naturally when balancing source and loss effects are included in a typical reaction-diffusion system, leading to a super- to subcritical transition. Implications are discussed for a range of physical problems, arguing that subcriticality leads to naturally robust phase transitions to localized patterns

Abstract:

Transmission switching has proven to be a highly useful post-contingency recovery technique by allowing power system operators increased levels of control through leveraging the topology of the power system. However, transmission switching remains only implemented in limited capacity because of concerns over computational complexity, uncertainty of performance in AC systems, and scalability to real-world, large-scale systems. We propose a heuristic which uses a sophisticated guided undersampling procedure combined with logistic regression to accurately identify transmission switching actions to reduce post-contingency AC power flow violations. The proposed heuristic was tested on real-world, large-scale AC power system data and consistently identified optimal or near optimal transmission switching actions. Because the proposed heuristic is computationally inexpensive, addresses an AC system, and is validated on real-world large-scale data, it directly addresses the aforementioned issues regarding transmission switching implementation
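
The learning step can be pictured with a toy sketch: balance the rare "helpful switching action" class by undersampling, fit a logistic regression, and rank candidates by predicted probability. The synthetic features, labels and plain random undersampling below are illustrative stand-ins for the paper's guided procedure on AC power-flow data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10_000, 8))   # features of candidate switching actions
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(10_000) > 2.5).astype(int)

    pos = np.flatnonzero(y == 1)           # rare class: action reduces violations
    neg = rng.choice(np.flatnonzero(y == 0), size=len(pos), replace=False)
    idx = np.concatenate([pos, neg])       # balanced training subsample

    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    scores = clf.predict_proba(X)[:, 1]    # rank every candidate action
    best = np.argsort(scores)[::-1][:10]   # top-10 switching candidates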

Abstract:

We evaluate whether the market reacts rationally to profit warnings by testing for subsequent abnormal returns. Warnings fall into two classes: those that include a new earnings forecast, and those that offer only the guidance that earnings will be below current expectations. We find significant negative abnormal returns in the first three months following both types of warning. There is also evidence that underreaction is more pronounced when the disclosure is less precise. Abnormal returns are significantly more negative following disclosures that offer only qualitative guidance than when a new earnings forecast is included

Abstract:

In this work we perform a first study of basic invariant sets of the spatial Hill's four-body problem, where we have used both analytical and numerical approaches. This system depends on a mass parameter in such a way that the classical Hill's problem is recovered when m=0. Regarding the numerical work, we perform a numerical continuation, for the Jacobi constant C and several values of the mass parameter m by applying a classical predictor-corrector method, together with a high-order Taylor method considering variable step and order and automatic differentiation techniques, to specific boundary value problems related with the reversing symmetries of the system. The solution of these boundary value problems defines initial conditions of symmetric periodic orbits. Some of the results were obtained departing from periodic orbits within Hill’s three-body problem. The numerical explorations reveal that a second distant disturbing body has a relevant effect on the stability of the orbits and bifurcations among these families. We have also found some new families of periodic orbits that do not exist in the classical Hill's three-body problem; these families have some desirable properties from a practical point of view
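
As background, predictor-corrector continuation in its simplest natural-parameter form looks as follows on a toy algebraic system; the paper's setting (boundary value problems for symmetric periodic orbits, solved with high-order Taylor integration) is substantially richer.

    import numpy as np

    # Continue the solution branch of F(x, c) = 0 as c varies:
    # predictor = previous solution, corrector = Newton iteration.
    def F(x, c):
        return np.array([x[0]**2 + x[1]**2 - c, x[0] - x[1]])

    def J(x):   # Jacobian of F with respect to x
        return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

    x = np.array([1.0, 1.0]) / np.sqrt(2.0)        # known solution at c = 1
    branch = [(1.0, x.copy())]
    for c in np.linspace(1.0, 4.0, 31)[1:]:
        for _ in range(20):                        # Newton corrector
            x = x - np.linalg.solve(J(x), F(x, c))
        branch.append((c, x.copy()))               # x follows sqrt(c/2) * (1, 1)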

Abstract:

The circular restricted four-body problem studies the dynamics of a massless particle under the gravitational force produced by three point masses that follow circular orbits with constant angular velocity, the configuration of these circular orbits forms an equilateral triangle for all time; this problem can be considered as an extension of the celebrated restricted three-body problem. In this work we investigate the orbits which emanate from some equilibrium points. In particular, we focus on the horseshoe shaped orbits (rotating frame), which are well known in the restricted three-body problem. We study some families of symmetric horseshoe orbits and show their relation with a family of the so called Lyapunov orbits

Abstract:

In this work we perform numerical explorations of some families of planar periodic orbits in the Hill approximation of the restricted four-body problem. This approximation is obtained by performing a symplectic scaling which sends the two massive bodies to infinity, by means of expanding the potential as a power series depending on the mass of the smallest primary, and taking the limit as this mass tends to zero. The limiting Hamiltonian depends only on the relative mass of the second smallest primary. The resulting dynamics shares similarities with both the restricted three-body problem and the restricted four-body problem. We focus on certain families of symmetric periodic orbits of the infinitesimal particle, for some values of the mass parameter. We explore the evolution of these families as the Jacobi constant, or, equivalently, the energy, is varied continuously, and provide details on the horizontal and vertical stability of each family

Abstract:

The legal and economic analysis presented here empirically tests the theoretical framework advanced by Kugler, Verdier, and Zenou (2003) and Buscaglia (1997). This paper goes beyond the prior literature by focusing on the empirical assessment of the actual implementation of the institutional deterrence and prevention mechanisms contained in the United Nations’ Convention against Transnational Organized Crime (Palermo Convention). A sample of 107 countries that have already signed and/or ratified the Convention was selected. The paper verifies that the most effective implemented measures against organized crime are mainly founded on four pillars: (i) the introduction of more effective judicial decision-making control systems causing reductions in the frequencies of abuses of procedural and substantive discretion; (ii) higher frequencies of successful judicial convictions based on evidentiary material provided by financial intelligence systems aimed at the systematic confiscation of assets in the hands of criminal groups and under the control of “licit” businesses linked to organized crime; (iii) the attack against high level public sector corruption (that is captured and feudalized by organized crime) and (iv) the operational presence of government and/or non-governmental preventive programs (funded by the private sector and/or governments and/or international organizations) addressing technical assistance to the private sector, educational opportunities, job training programs and/or rehabilitation (health and/or behavioral) of youth linked to organized crime in high-risk areas (with high-crime, high unemployment, and high poverty)

Abstract:

This paper investigates IS flexibility as a way to meet competitive challenges in hypercompetitive industries by enabling firms to implement strategies by adapting their operations processes quickly to fast-paced changes. It presents a framework for IS flexibility as a multidimensional construct. Through a literature review and initial case study analysis, factors to assess flexibility in each dimension are constructed. Findings of an exploratory study conducted to test the framework are reported. Based on the above, it is argued that IS flexibility in hypercompetitive industries must be pursued from a holistic perspective to understand how it may be exploited to achieve a competitive advantage

Abstract:

Guided by the relationship between the breadth-first walk of a rooted tree and its sequence of generation sizes, we are able to include immigration in the Lamperti representation of continuous-state branching processes. We provide a representation of continuous-state branching processes with immigration by solving a random ordinary differential equation driven by a pair of independent Lévy processes. Stability of the solutions is studied and gives, in particular, limit theorems (of a type previously studied by Grimvall, Kawazu and Watanabe and by Li) and a simulation scheme for continuous-state branching processes with immigration. We further apply our stability analysis to extend Pitman’s limit theorem concerning Galton–Watson processes conditioned on total population size to more general offspring laws

Abstract:

In this paper we tackle a two-dimensional cutting problem to optimize the use of raw material in a furniture company. Since the material used to produce pieces of furniture comes from a natural source, the plywood sheets may present defects that affect the total plywood that can be used in a single sheet. The heuristic presented in this research deals with these defects and presents the best way to handle them. It also considers the use of the plywood sheets for the long-term planning of the company, since purchases of raw material are usually made only at certain periods of time and must last for several weeks. Experimental results show how an intelligent cutting plan and selection of the plywood sheets considerably reduce the amount of raw material needed compared with the current operation of the company, and guarantee that the purchased sheets last through the planning period, regardless of the available area for cutting pieces on each plywood sheet

Abstract:

This paper presents a two-dimensional cutting and packing problem that optimizes the use of raw material in a furniture factory. Since the material, plywood sheets, comes from a natural source, it may present defects. These defects affect the amount of raw material available in each sheet, since they cannot always be included in the final pieces. The heuristic presented in this work shows the best way to deal with the defects. We also consider the use of the plywood sheets in long-term planning, given that raw material purchases are normally made periodically and stocks must be sufficient to meet several weeks of demand

Abstract:

In this work, we consider a batching machine that can process several jobs at the same time. Batches have a restricted batch size, and the processing time of a batch is equal to the largest processing time among all jobs within the batch. We solve the bi-objective problem of minimizing the maximum lateness and the number of batches. This objective is relevant as we are interested in meeting due dates and minimizing the cost of handling each batch. Our aim is to find the Pareto-optimal solutions by using an epsilon-constraint method on a new mathematical model that is enhanced with a family of valid inequalities and constraints that avoid symmetric solutions. Additionally, we present a biased random-key genetic algorithm to approximate the optimal Pareto points of larger instances in reasonable time. Experimental results show the efficiency of our methodologies
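
In generic form, the epsilon-constraint scalarization solves a sequence of single-objective problems (the notation below is generic and omits the paper's valid inequalities and symmetry-breaking constraints):

    \min_{x \in X} \; L_{\max}(x)
    \quad \text{subject to} \quad B(x) \,\le\, \varepsilon,

where B(x) is the number of batches used by solution x; sweeping ε over the attainable range of B and recording the optimal values enumerates the Pareto-optimal pairs (L_max, B).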

Abstract:

The cross entropy method was initially developed to estimate rare event probabilities through simulation, and has been adapted successfully to solve combinatorial optimization problems. In this paper we aim to explore the viability of using cross entropy methods for the vehicle routing problem. Published implementations for this problem have only considered a naive route-splitting scheme over a very limited set of instances. In addition to presenting a better route-splitting algorithm, we designed a cluster-first/route-second approach. We provide computational results to evaluate these approaches and discuss their advantages and drawbacks. The innovative method developed in this paper to generate clusters may be applied in other settings. We also suggest improvements to the convergence of the general cross entropy method
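
A generic cross-entropy optimization loop, shown on a toy binary problem rather than a routing encoding, runs as follows; the objective, sample sizes and smoothing constant are illustrative assumptions.

    import numpy as np

    # Cross-entropy method: sample from a parametric distribution, keep the
    # elite fraction, refit the distribution to the elites, repeat.
    rng = np.random.default_rng(0)
    n, pop, elite_frac, smooth = 30, 200, 0.1, 0.7
    target = rng.integers(0, 2, n)                 # unknown optimum of the toy objective
    score = lambda X: -(X != target).sum(axis=1)   # higher is better

    p = np.full(n, 0.5)                            # Bernoulli parameters
    for _ in range(50):
        X = (rng.random((pop, n)) < p).astype(int)            # sample candidates
        elite = X[np.argsort(score(X))[-int(elite_frac * pop):]]
        p = smooth * elite.mean(axis=0) + (1.0 - smooth) * p  # smoothed refit

    print(bool((p.round() == target).all()))       # True: distribution concentrated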

Abstract:

We address the problem of scheduling a single batching machine to minimize the maximum lateness with a constraint restricting the batch size. A solution for this NP-hard problem is defined by a selection of jobs for each batch and an ordering of those batches. As an alternative, we choose to represent a solution as a sequence of jobs. This approach is justified by our development of a dynamic program to find a schedule that minimizes the maximum lateness while preserving the underlying job order. Given this solution representation, we are able to define and evaluate various job-insert and job-swap neighborhood searches. Furthermore, we introduce a new neighborhood, named split–merge, that allows multiple job inserts in a single move. The split–merge neighborhood is of exponential size, but can be searched in polynomial time by dynamic programming. Computational results with an iterated descent algorithm that employs the split–merge neighborhood show that it compares favorably with corresponding iterated descent algorithms based on the job-insert and job-swap neighborhoods
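
A minimal Python sketch of the underlying idea, decoding a fixed job sequence into an optimal batching by dynamic programming, is given below. It keeps Pareto pairs of (completion time, maximum lateness) per prefix, which is one straightforward way to organize such a dynamic program and not necessarily the formulation used in the paper.

    def min_lmax_given_order(jobs, b):
        # jobs: list of (processing_time, due_date) in a fixed sequence; b = max
        # batch size. A later batch's lateness depends on the completion time
        # accumulated so far, so each prefix keeps its Pareto (c, Lmax) pairs.
        n = len(jobs)
        states = [[] for _ in range(n + 1)]
        states[0] = [(0, float('-inf'))]
        for i in range(n):
            states[i].sort()
            frontier = []
            for c, l in states[i]:          # prune dominated (c, l) pairs
                if not frontier or l < frontier[-1][1]:
                    frontier.append((c, l))
            states[i] = frontier
            for j in range(i + 1, min(i + b, n) + 1):
                pmax = max(p for p, _ in jobs[i:j])   # batch = jobs i .. j-1
                dmin = min(d for _, d in jobs[i:j])
                for c, l in states[i]:
                    states[j].append((c + pmax, max(l, c + pmax - dmin)))
        return min(l for _, l in states[n])

    # (processing time, due date) in the order the sequence prescribes.
    print(min_lmax_given_order([(2, 3), (3, 6), (1, 7), (4, 12)], b=2))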

Abstract:

Expert systems are designed to solve complex problems by reasoning with and about specialized knowledge like an expert. The design of concrete is a complex task that requires expert skills and knowledge. Even when given the proportions of the ingredients used, predicting the exact behavior of concrete is not a trivial task, even for experts, because other factors that are hard to control or foresee also exert some influence over the final properties of the material. This paper presents some of our attempts to build a new expert system that can design different types of concrete (hydraulic, bacterial, cellular, lightweight, high-strength, architectural, etc.) for different environments. The system also optimizes the use of additives and cement, which are the most expensive raw materials used in the manufacture of concrete

Abstract:

The arrival of modern brain imaging technologies has provided new opportunities for examining the biological essence of human intelligence as well as the relationship between brain size and cognition. Thanks to these advances, we can now state that the relationship between brain size and intelligence has never been well understood. This view is supported by findings showing that cognition is correlated more with brain tissues than sheer brain size. The complexity of cellular and molecular organization of neural connections actually determines the computational capacity of the brain. In this review article, we determine that while genotypes are responsible for defining the theoretical limits of intelligence, what is primarily responsible for determining whether those limits are reached or exceeded is experience (environmental influence). Therefore, we contend that the gene-environment interplay defines the intelligent quotient of an individual

Abstract:

The ability to create, use and transfer knowledge may allow the creation or improvement of new products or services. But knowledge is often tacit: It lives in the minds of individuals, and therefore, it is difficult to transfer it to another person by means of the written word or verbal expression. This paper addresses this important problem by introducing a methodology, consisting of a four-step process that facilitates tacit to explicit knowledge conversion. The methodology utilizes conceptual modeling, thus enabling understanding and reasoning through visual knowledge representation. This implies the possibility of understanding concepts and ideas, visualized through conceptual models, without using linguistic or algebraic means. The proposed methodology is conducted in a metamodel-based tool environment whose aim is efficient application and ease of use

Abstract:

Purpose- The purpose of this paper is to devise a crowdsourcing methodology for acquiring and exploiting knowledge to profile unscheduled transport networks for the design of efficient routes for public transport trips. Design/methodology/approach- This paper analyzes daily travel itineraries within Mexico City provided by 610 public transport users. In addition, a statistical analysis of quality-of-service parameters of the public transport systems of Mexico City was also conducted. From the statistical analysis, a knowledge base was consolidated to characterize the unscheduled public transport network of Mexico City. Then, by using a heuristic search algorithm for finding routes, public transport users are provided with efficient routes for their trips. Findings- The findings of the paper are as follows. A crowdsourcing methodology can be used to characterize complex and unscheduled transport networks. In addition, the knowledge of the crowds can be used to devise efficient routes for trips (using public transport) within a city. Moreover, the design of routes for trips can be automated by SmartPaths, a mobile application for public transport navigation. Research limitations/implications- The data collected from the public transport users of Mexico City may vary through the year. Originality/value- The significance and novelty is that the present work is the earliest effort to make use of a crowdsourcing approach for profiling unscheduled public transport networks to design efficient routes for public transport trips
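
The routing component of such a system can be pictured with a plain shortest-path search over a graph whose edge weights come from crowdsourced travel times. The sketch below uses Dijkstra's algorithm on a hypothetical fragment of a network; stop names, lines and times are invented, and SmartPaths' actual heuristic is richer.

    import heapq

    def best_route(graph, start, goal):
        # graph: dict node -> list of (neighbor, minutes, line); a transit
        # network assembled, e.g., from crowdsourced itineraries.
        queue = [(0, start, [])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node in seen:
                continue
            seen.add(node)
            path = path + [node]
            if node == goal:
                return cost, path
            for nxt, minutes, line in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + minutes, nxt, path))
        return None

    # Hypothetical fragment of an unscheduled network (stops and average times).
    graph = {
        'Tacubaya':     [('Balderas', 12, 'metro-1'), ('Observatorio', 6, 'pesero-A')],
        'Balderas':     [('Zocalo', 8, 'metro-2')],
        'Observatorio': [('Zocalo', 25, 'pesero-B')],
    }
    print(best_route(graph, 'Tacubaya', 'Zocalo'))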

Abstract:

KAMET II Concept Modeling Language (CML) is a consistent visual language with high usability and flexibility devised to acquire and organize knowledge from different sources in a very intuitive way. Similar recent works that suggest visual tools for supporting knowledge acquisition (KA) processes, like Cmaptools and ICONKAT, are closed environments that cannot be easily translated to more popular frameworks like Protégé. On the other hand, languages for the Semantic Web used for KA, like the Extensible Markup Language (XML), are designed for machine interpretation without considering the users' interaction. KAMET II CML, on the contrary, cares about the input facilities for constructing knowledge models without disregarding their complexity, and it is compatible with commercial methodologies. We describe and demonstrate the advantages of KAMET II CML by proving its consistency and formality using Concept Algebra, a mathematical structure for the formal treatment of concepts and their algebraic relations, operations and associative rules. We make a direct transformation of KAMET II CML diagnosis models to Concept Network (CN) diagrams making use of Concept Algebra. As a result, KAMET II CML models are compatible with regular ontology representations and can be shared and used by other systems without adding complexity

Abstract:

The demand for Knowledge Management in organizations, which are out-performing their peers through above-average growth in intellectual capital and wealth creation, has led to a growing community of IT people who have adopted the idea of building Corporate or Organizational Memory Information Systems (OMIS). Such a system acknowledges the dynamics of organizational environments, wherein the traditional design of information systems does not cope adequately with these organizational aspects. The successful development of such a system requires a careful analysis of the essential factors for providing a cost-effective solution that will be accepted by the employees/users and can be evolved in the future. This paper proposes a nine-layered framework for improving an OMIS implementation plan, in order to support the effort to capture, share and preserve the Organizational Memory (OM). The purpose of this framework is to gain a better understanding of how some factors are critical for the successful application of OMIS, and thereby of how to design suitable OMIS that turn the scattered, diverse knowledge of people into well-documented knowledge assets ready for deposit and reuse to benefit the whole organization

Abstract:

Knowledge acquisition (KA) is considered today a cognitive process that involves both dynamic modeling and knowledge generation activities. We understand that KA should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper presents some of our attempts to develop a knowledge acquisition methodology that mainly builds a bridge between two important fields: knowledge acquisition and knowledge management. KAMET II (Cairó and Guardati, 2012), the evolution of KAMET, represents a modern approach to creating diagnosis-specialized knowledge models and knowledge-based systems (KBS) that are more efficient

Abstract:

The knowledge acquisition (KA) process is not "mining from the expert’s head" and writing rules for building knowledge-based systems (KBS), as it was 20 years ago when KA was often confused with knowledge elicitation activity, and modern engineering tools did not exist. The KA process has definitely changed. Today knowledge acquisition is considered a cognitive process that involves both dynamic modeling and knowledge generation activities. KA should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper presents some of our attempts to build a new knowledge acquisition methodology that brings together and includes all of these ideas. KAMET II, the evolution of KAMET (Cairó, 1998), represents a modern approach to creating diagnosis-specialized knowledge models that can be run by Protégé 2000, the open source ontology editor and knowledge-based framework

Abstract:

Work on electronic negotiation has motivated the development of systems with strategies specifically designed to establish protocols for buying and selling goods on the Web. On the one hand, there are systems where agents interact with users through dialogues and animations, helping them to find products while learning from their preferences to plan future transactions. On the other hand, there are systems that employ knowledge-bases to determine the context of the interactions and to define the boundaries inherently established by the e-Commerce. This paper introduces the idea of developing an agent with both capabilities: negotiation and interaction in an e-Commerce application via virtual reality (with a view to apply it in the Latin-American market, where both the technological gap and an inappropriate approach to motivate electronic transactions are important factors). We address these issues by presenting a negotiation strategy that allows the interaction between an intelligent agent and a human consumer with Latin-American idiosyncrasy and by including a graphical agent to assist the user on a virtual basis. We think this may reduce the impact of the gap created by this new technology
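
The paper's negotiation strategy itself is not reproduced in this listing. As a generic illustration, the sketch below implements the classic time-dependent concession tactic often used by such agents, in which an offer moves from a target price toward a reservation price as the deadline approaches; all prices and the concession exponent are assumptions.

    def offer(t, deadline, reservation, target, beta=0.8):
        # Simplified time-dependent concession tactic: the agent starts at its
        # target price and concedes toward its reservation price by the
        # deadline. beta < 1 concedes late ("boulware"); beta > 1 concedes
        # early ("conceder").
        frac = min(1.0, t / deadline) ** (1.0 / beta)
        return target + frac * (reservation - target)

    # Hypothetical seller agent: target 120, floor 90, ten negotiation rounds.
    for t in range(0, 11, 2):
        print(t, round(offer(t, 10, 90.0, 120.0), 2))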

Abstract:

The human brain is undoubtedly the most impressive, complex, and intricate organ that has evolved over time. It is also probably the least understood, and for that reason, the one that is currently attracting the most attention. In fact, the number of comparative analyses that focus on the evolution of brain size in Homo sapiens and other species has increased dramatically in recent years. In neuroscience, no other issue has generated so much interest and been the topic of so many heated debates as the difference in brain size between socially defined population groups, both its connotations and implications. For over a century, external measures of cognition have been related to intelligence. However, it is still unclear whether these measures actually correspond to cognitive abilities. In summary, this paper must be reviewed with this premise in mind

Abstract:

The knowledge acquisition (KA) process has evolved during the last years. Today KA is considered a cognitive process that involves both dynamic modeling and knowledge generation activities. It should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper shows some of our attempts to build a new knowledge acquisition methodology that collects and includes all of these ideas. KAMET II, the evolution of KAMET [1], represents a modern approach for building diagnosis-specialized knowledge models that can be run by Protégé

Abstract:

A number of research efforts have been devoted to deploying agent technology applications in the field of Agent-Mediated Electronic Commerce. On the one hand, there are applications that simplify electronic transactions such as intelligent search engines and browsers, learning agents, recommender agent systems and agents that share knowledge. Thanks to the development and availability of agent software, e-commerce can use more than only telecommunications and online data processing. On the other hand, there are applications that include negotiation as part of their core activities, such as the information systems field with negotiation support systems; the multi-agent systems field with searching, trading and negotiation agents; and the market design field with electronic auctions. Although negotiation is an important business activity, it has not been studied extensively either in the traditional business or in the e-commerce context. This paper introduces the idea of developing an agent with negotiation capabilities applied to the Latin American market, where both the technological gap and an inappropriate approach to motivating electronic transactions are important factors. We address these issues by presenting a negotiation strategy that allows the interaction between an intelligent agent and consumers with a Latin American idiosyncrasy

Abstract:

Nowadays, Information Technologies (IT) are part of practically every aspect of life and business. The widespread use of IT has led to both positive and negative consequences in the areas where they are applied. One of these areas is the implementation of Knowledge Management Systems (KMS). The application of certain technologies within a KMS pursues the efficient and effective realization of specific functions within the system. However, it is quite common to face deficient or erroneous knowledge creation due to an incorrect utilization of these technologies. An incorrect application can range from a lack of definition of the role of a technology to a perception of IT as itself a solution to knowledge creation problems. The incorrect application of technological solutions to KM issues can lead to deficient creation of knowledge, an inability to update knowledge because of the rapid evolution of technologies, and even to erroneous decision-making. To address the described issue, this paper states the importance of understanding the influence of a KMS in an organization and developing awareness regarding the role of IT both within such an organization and in society. Next, a description of the different functions in KMS and the adequate technologies to correctly implement them is made, taking into account the knowledge structure and capabilities of the enterprise. Finally, the significance of a KMS and a social ecosystem to support the implementation of IT is acknowledged, both to make the best out of an investment in technology and to achieve a consistent knowledge-creating structure

Abstract:

The role of football in society has changed substantially over the past twenty years, becoming to an even larger extent the daily topic of millions of people around the world. Nowadays, attention is drawn both to the needs of fans and, obviously, to the business. The game has evolved greatly in terms of physical performance, while few changes are observed in its tactical aspect. The main goal of professional soccer clubs is always to possess the best players while forgetting about the team, except of course for some rare exceptions. This paper demonstrates that the practice of football, as well as of other sports, should not ignore the basic requirements necessary to build a team, and it also presents ways to address the value of knowledge management and its benefits for all stakeholders. Some of the key points discussed in this paper are the theory of team building, the different roles of group members, the transfer of know-how from generation to generation, and how all of these combined can make a difference

Abstract:

Traditional organizations' lack of Knowledge Leadership has become a major issue. Our proposal is a new type of team manager: the Knowledge Leader, who performs several functions of the Knowledge Champions and extends the notion of CKO into the teamwork context. This paper intends to clarify the relevance and necessity of a Knowledge Leader in every workgroup in organizations. Knowledge Leaders keep Knowledge Management efforts aligned to business strategies, making organizations competent. Without its leaders, a company will turn into a hollow shell

Abstract:

A few years ago, many companies were convinced that they could reach positions of competitive advantage through investments in information technologies (ITs). This was certainly true for a while; however, ITs by themselves are no longer able to provide organizations with such a position. One of the reasons is the tendency for information technologies to become commodities; hence any competitor with enough purchasing power could replicate the technological deployment of the leader, destroying the position of advantage. This work explores a new source of competitive advantage called knowledge management, and its relationship with ITs. The paper also shows that companies should not regard ITs and KM as competitors but as coordinated efforts when trying to reach a position of competitive advantage

Abstract:

The overwhelming world situation has profoundly disturbed researchers, scientists and citizens, making them wonder about the vital element that guarantees quality of life. While authorities and politicians struggle to provide for their citizens, idealists dream about the idea, cliché as it may be, of a more prosperous world for everyone. Information technology and development alone have proved to be necessary, but indeed insufficient. Quite surprisingly, it has transpired that knowledge is the most precious asset every entity has. The importance of knowledge has been confirmed given that it is actually capable of providing an enduring solution to cities' main problems. Following this trend, in the 90s the link between knowledge and cities was born, along with the concept of the Knowledge City (KC). Around this idea, since 2004, Monterrey's government has implemented the program International Knowledge City, a project based upon actions specifically designed to add value to the city through knowledge creation and innovation. Its objective was, and still is, to introduce Monterrey into the age of knowledge whilst transforming its landscapes and people. Hence the objective of this paper: to determine whether or not Monterrey is becoming a KC. As such, this paper follows Ergazaki's methodology explained in A unified methodological approach for the development of knowledge cities, which is applied to Monterrey's knowledge infrastructure, and also incorporates functional concepts derived from Knowledge Management. Furthermore, to complete the analysis, a comparison with the internationally admired KC of Singapore is included. The results indicate that Monterrey is a developing KC, improving by the minute

Abstract:

This paper presents an approach to automatic translation in two steps. The first one is recognition (syntactic) and the last one is interpretation (semantic). In the recognition phase, we use volatile grammars. They represent an innovation to logic-based grammars. For the interpretation phase, we use componential analysis and the laws of integrity. Componential analysis defines the sense of the lexical parts. The rules of integrity, on the other hand, are in charge of refining, improving and optimizing translation. We applied this framework for general analysis of romance languages and for the automatic translation of texts from Spanish into other neo-Latin languages and vice versa

Abstract:

This paper presents a series of conventionalities and tools for the general analysis of romance languages and for the automatic translation of texts from Spanish into other neo-Latin languages and vice versa. The work implies two well-defined phases: recognition (syntactic) and interpretation (semantic). In the recognition phase, we use volatile grammars. They represent an innovation with respect to logic-based grammars. For the interpretation phase, we use componential analysis and the laws of integrity. Componential analysis defines the sense of the lexical parts that constitute the sentence through a collection of semantic features. The rules of integrity, on the other hand, have the goal of refining, improving and optimizing the translation

Abstract:

Different methodologies have been developed to solve various tasks such as classification, design, planning, scheduling and diagnosis. Diagnosis is a task whose desired output is a malfunction of a system. KAMET (Knowledge-Acquisition Methodology) is a knowledge engineering methodology aimed exclusively at diagnosis tasks. In this article KAMET II, the second version of KAMET, is presented with the objective of introducing its most important characteristics as well as its modeling notation, which will subsequently be necessary for the knowledge bases, Problem-Solving Methods (PSMs) and the knowledge model specification. KAMET is a model-based methodology designed to manage knowledge acquisition from multiple knowledge sources. The methodology provides a mechanism by means of which knowledge acquisition is achieved in an incremental fashion and in a cooperative environment. One important feature is the specification used to describe knowledge-based systems independently of their implementation. A four-component architecture is presented to achieve this goal and to allow component separation and consequently component reuse

Abstract:

Problem-solving methods are ready-made software components that can be assembled with domain knowledge bases to create application systems. In this paper, we describe this relationship and how it can be used in a principled manner to construct knowledge systems. We have developed ontologies for two purposes: first, to describe knowledge bases and problem-solving methods as independent components that can be reused in different application systems; and second, to mediate knowledge between the two kinds of components when they are assembled in a specific system. We present our methodology and a set of associated tools that have been created to support developers in building knowledge systems and that have been used to conduct problem-solving method reuse

Abstract:

Information systems with an intelligent or knowledge component are now prevalent and include knowledge-based systems, intelligent agents, and knowledge management systems. These systems are capable of explaining their reasoning or justifying their behavior. Empirical studies, mainly with knowledge-based systems, are reviewed and linked to a theoretical and practical base. The present paper has two main objectives: a) to present a negotiation strategy that allows the interaction between an intelligent agent and a human consumer; this kind of negotiation is adapted to the Latin American market and idiosyncrasy, where an appropriate tool to perform automated negotiations over the Web does not exist; and b) to include animations in order to show an agent that represents an actual person; this incorporation aims to reduce the impact of, and the gap created by, the new technology. The agent presented can find an optimal path to achieve its goal using its mental states and libraries designed for the business roles

Abstract:

The needs of the Knowledge-Acquisition (KA) community demand more effective ways to elicit knowledge in different environments. Methodologies like CommonKADS [8], MIKE [1] and VITAL [6] are able to produce knowledge models using their respective Conceptual Modeling Languages (CML). However, sharing and reuse are nowadays a must-have in knowledge engineering (KE) methodologies and domain-specific KA tools, in order to permit Knowledge-Based System (KBS) developers to work faster with better results, and to give them the chance to produce and utilize reusable Open Knowledge Base Connectivity (OKBC)-constrained models. This paper presents the KAMET II Methodology, the diagnosis-specialized version of KAMET [2,3], as an alternative for creating knowledge-intensive systems while attacking KE-specific risks. We describe here one of the most important characteristics of KAMET II, which is the use of Protégé 2000 for implementing its CML models through ontologies

Abstract:

Knowledge engineering (KE) is not "mining from the expert's head" as it was in the first generation of knowledge-based systems (KBS). Modern KE is considered more a modeling activity. Models are useful due to the incomplete access that the knowledge engineer has to the expert's knowledge, and because not all of the knowledge is necessary to reach the majority of a project's goals. KAMET II, the evolution of KAMET (Knowledge-Acquisition Methodology), is a modern approach for building diagnosis-specialized knowledge models. This new diagnosis methodology pursues the objective of being a complete, robust methodology that leads knowledge engineers and project managers (PM) to build powerful KBS by giving them the appropriate modeling tools and by reducing KE-specific risks. Not only does KAMET II encompass the conceptual modeling tools, but it also presents the adaptation to the implementation tool Protégé 2000 [6] for visual modeling and knowledge-base editing. However, only the methodological part is presented in this paper

Abstract:

In recent years technology has experienced exponential growth, creating bridges between research areas that were independent from each other a few years ago. This paper focuses on an application combining three research areas: virtual environments, intelligent agents and museum web pages. The application consists of a virtual visit to a museum guided by an intelligent agent. The reactive agent implemented responds in real time to the user's requests, such as precise information about the artwork, the authors' biographies and other interesting facts such as the museum's history and regional knowledge of the country where the museum lies. Our agent is capable of providing different layers of data, distinguishing between adults, children and young people as well as between local and foreign users. The agent has some autonomy during the visit and permits the user to make his own choices. The virtual environment allows a semi-interactive visit through the museum's architecture. The user follows a pre-defined museum tour, but is able to perform interactive actions such as zooms, free views of the museum's structure and free views of the artwork, and may advance, go back, or terminate a visit at any time

Abstract:

Software project mistakes represent a loss of millions of dollars to thousands of companies all around the world. The software projects that somehow ran off course share a common problem: risks became unmanageable. There are a certain number of conjectures we can draw from the high failure rate: bad management procedures, an inept manager in charge, managers not assessing risks, poor or inadequate methodologies, etc. Some of them might apply to some cases, or all, or none; it is almost impossible to think in absolute terms when a software project is an ad hoc solution to a given problem. Nevertheless, there is an ongoing effort in the knowledge engineering (KE) community to isolate risk factors and provide remedies for runaway projects; unfortunately, we are not there yet. This work aims to express some general conclusions for risk assessment of software projects, particularly, but not limited to, those involving knowledge acquisition (KA)

Abstract:

At the beginning of the 1980s, the Artificial Intelligence (AI) community showed little interest in research on methodologies for the construction of knowledge-based systems (KBS) and for knowledge acquisition (KA). The main idea was the rapid construction of prototypes with LISP machines, expert system shells, and so on. Over time, the community saw the need for a structured development of KBS projects, and KA was recognized as the critical stage and the bottleneck for the construction of KBS. Concerning KA, many publications have appeared since then. However, very few have focused on formal plans to manage knowledge acquisition from multiple knowledge sources. This paper addresses this important problem. KAMET is a formal plan, based on models, designed to manage knowledge acquisition from multiple knowledge sources. The objective of KAMET is to improve the knowledge acquisition phase and the knowledge modeling process, making them more efficient

Abstract:

Throughout time, Expert Systems (ES) have been applied in different areas of knowledge. In Telecommunications, the application of ESs has not been as outstanding as in other areas; however, some important applications have been seen lately, especially those focused on failure provisioning, monitoring and diagnosis [6]. In this article, we introduce DIFEVS, an ES for the remote diagnosis and correction of failures in satellite communications ground stations. DIFEVS also makes it possible to considerably reduce the time elapsed between the initial moment of a failure and the moment it is solved

Abstract:

Knowledge Acquisition (KA) in the 90s has been recognized as a critical stage in the construction of Knowledge-Based Systems (KBS) and as the bottleneck for their development. Nowadays a lot of material is published on KA, most of which is focused on the difficulties encountered in the process of knowledge elicitation, on the tools for KA, and on the verification and validation of Knowledge Bases (KB). A very limited number of these publications, however, have emphasised the need for formal plans to deal with knowledge elicitation from single and multiple experts. In this paper we propose a methodology based on models to deal with knowledge elicitation from multiple experts. The method provides a solid mechanism to accomplish KA incrementally, by stages, and in a cooperative environment

Abstract:

During the past decades, many methods have been developed for the creation of Knowledge-Based Systems (KBS). For these methods, probabilistic networks have shown to be an important tool to work with probability-measured uncertainty. However, the quality of probabilistic networks depends on correct knowledge acquisition and modeling. KAMET is a model-based methodology designed to manage knowledge acquisition from multiple knowledge sources [1] that leads to a graphical model representing causal relations. Up to now, all inference methods developed for these models are rule-based, and therefore eliminate most of the probabilistic information. We present a way to combine the benefits of Bayesian networks and KAMET, and reduce their problems. To achieve this, we show a transformation that generates directed acyclic graphs, the basic structure of Bayesian networks [2], and conditional probability tables from KAMET models. Thus, inference methods for probabilistic networks may be used in KAMET models
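
To make the flavor of such a transformation concrete, the following Python sketch derives a DAG (a parent map) and rough conditional probability tables from hypothetical KAMET-style causal rules. The rule format, the leak probability and the strongest-rule-wins choice are all assumptions for illustration, not the mapping defined in the paper.

    from itertools import product

    # Hypothetical causal statements: (causes, effect, probability of the
    # effect when all listed causes hold). Invented for illustration only.
    rules = [
        (('low_oil',), 'overheating', 0.8),
        (('overheating', 'old_pump'), 'failure', 0.9),
    ]

    def to_dag_and_cpts(rules, leak=0.05):
        # Returns parents[node] and cpt[node][assignment] = P(node=1 | parents),
        # where each assignment orders the parents alphabetically.
        parents = {}
        for causes, effect, _ in rules:
            parents.setdefault(effect, set()).update(causes)
            for c in causes:
                parents.setdefault(c, set())
        cpt = {}
        for node, pars in parents.items():
            pars = sorted(pars)
            table = {}
            for assign in product((0, 1), repeat=len(pars)):
                active = {p for p, v in zip(pars, assign) if v}
                prob = leak                      # background "leak" probability
                for causes, effect, p in rules:
                    if effect == node and set(causes) <= active:
                        prob = max(prob, p)      # strongest satisfied rule wins
                table[assign] = prob
            cpt[node] = table
        return parents, cpt

    parents, cpt = to_dag_and_cpts(rules)
    print(sorted(parents['failure']), cpt['failure'])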

Abstract:

Aluminum matrix composites (AMCs) reinforced with aluminum diboride (AlB2) particles are obtained through a casting process. A mixture design experiment combined with split-split plot experiment helped to assess the significance of the effects of cold work on precipitation hardening prior to aging. Both cold work and aging allowed higher microhardness of the composite matrix, which is further increased by higher levels of boron and copper. Microstructure analysis showed a good distribution of reinforcements and revealed a grain subdivision pattern due to cold work. Tensile tests helped corroborate the microhardness measurements. Fracture surface analysis showed a predominantly mixed brittle–ductile mode

Abstract:

In this paper, we report the results obtained from comparing how people make sense of health information when receiving it via two different media: an application for a mobile device and a printed pamphlet. The study was motivated by the 2009 outbreak of A(H1N1) influenza and the need to educate the general public using all possible media to disseminate symptoms and treatments in a timely manner. In this study, we investigate the influence of the medium on the sensemaking process when processing health information that has to be comprehended quickly and thoroughly, as it can be life-saving. We propose recommendations based on the obtained results

Resumen:

En marzo de 2021 se reformó la Constitución mexicana para transitar a un sistema de precedentes. Esta enmienda establece que las "razones" de las sentencias de la Suprema Corte serán obligatorias para los tribunales inferiores. Sin embargo, la reforma se enmarca en una arraigada práctica de tesis jurisprudenciales, i. e., enunciados abstractos identificados por la misma Corte al resolver un caso. Además, no hay consenso sobre qué son estas razones y por qué deberían ser vinculantes. El objetivo de este artículo es identificar las posibles concepciones de razones para revelar los distintos roles de la Corte en la creación del derecho judicial. Se utilizan nociones de la ratio decidendi del common law como herramientas de introspección para identificar cuatro modelos de creación del derecho en la práctica mexicana, a saber: legislación judicial, reglas implícitas, justificaciones político-morales, y categorías sociales. Aunque la primera concepción parece ser la dominante, las alternativas amplían el abanico para entender cómo es que la Corte crea derecho dependiendo del contexto interpretativo en que opere

Abstract:

In March 2021, the Mexican Constitution was amended to transition to a system of precedents. This amendment mandates that the "reasons" of Supreme Court rulings will be binding on the lower courts. However, the reform is rooted in a long-standing practice of case-law theses (tesis jurisprudenciales), i.e., abstract statements identified by the Court itself when deciding a case. Moreover, there is no consensus as to what these reasons are and why they should be binding. The objective of this article is to identify the possible notions of reasons in order to explore the Court's different roles in shaping judicial law. Concepts of the common law ratio decidendi are used as tools of insight to identify four models of lawmaking in Mexican practice, namely: judicial legislation, implicit rules, moral-political justifications and social categories. Although the first model seems to prevail, the others offer a broader understanding of how the Court creates law depending on the interpretative context in which it operates

Resumen:

Este artículo busca mostrar que la concepción dominante sobre los órganos constitucionales autónomos (OCA) está equivocada. Estos organismos no son completa ni igualmente independientes del resto de poderes del Estado. Al contrario, se caracterizan por tener una variedad de diseños que les dan diferentes niveles de autonomía. Entender esta variedad de diseños es el primer paso para identificar las causas, reales o percibidas, sobre los defectos del sistema actual de equilibrio institucional, así como las consecuencias que un régimen bien diseñado podría generar. Hasta este momento, la discusión generalmente se ha limitado a si deben existir o no este tipo de organismos. A partir de este trabajo, los debates podrían abordar nuevos cuestionamientos sobre cómo deben diseñarse y cuánta autonomía habría que otorgarles

Abstract:

This article argues that the mainstream conception of autonomous constitutional agencies is mistaken. These agencies are neither completely nor equally independent from the other branches of government. On the contrary, there is a variety of institutional designs that grant them different levels of autonomy. Acknowledging this variety of designs is the first step toward understanding the causes, real or perceived, of the flaws in the current system of institutional balance, as well as the consequences that a well-designed regime could generate. Until today, most of the debate has been limited to the convenience or undesirability of having this kind of agency. This article may be the starting point for discussions on how decision makers should design these agencies and how much autonomy they should be granted

Abstract:

Coherentists fail to distinguish between the individual revision of a conviction and the intersubjective revision of a rule. This paper fills this gap. A conviction is a norm that, according to an individual, ought to be ascribed to a provision. By contrast, a rule is a judicially ascribed norm that controls a case and is protected by the formal principles of competence, certainty, and equality. A revision of a rule is the invalidation or modification of such a judicially ascribed norm, provided that the judge meets the burden of argumentation of the formal principles. Thus, judges can revise their convictions without changing the law

Abstract:

The 'new NAFTA' agreement between Canada, Mexico, and the United States maintained the system for binational panel judicial review of antidumping and countervailing duty determinations of domestic government agencies. In US-Mexico disputes, this hybrid system brings together Spanish and English-speaking lawyers from the civil and the common law to solve legal disputes applying domestic law. These panels raise issues regarding potential bicultural, bilingual, and bijural (mis)understandings in legal reasoning. Do differences in language, legal traditions, and legal cultures limit the effectiveness of inter-systemic dispute resolution? We analyze all of the decisions of NAFTA panels in US-Mexico disputes regarding Mexican antidumping and countervailing duty determinations and the profiles of the corresponding panelists. This case study tests whether one can actually comprehend the 'other'. To what extent can a common law, English-speaking lawyer understand and apply Mexican law, expressed in Spanish and rooted in a distinct legal culture?

Resumen:

Desde el Derecho Comparado, se hace un análisis y una defensa del control de la constitucionalidad de la jurisprudencia. Se sostiene que este tipo de control es razonable, excepcionalmente, si se entiende la jurisprudencia como reglas prima facie protegidas por los principios formales de competencia, igualdad, certeza y jerarquía, no como reglas estrictas sustentadas solo en el principio de jerarquía, como la Suprema Corte mexicana las ha entendido

Abstract:

From the perspective of Comparative Law, the paper argues in favor of constitutional review of precedents. It claims that this type of review is reasonable in exceptional circumstances if precedents are understood as prima facie rules safeguarded by the formal principles of competence, equality, certainty and hierarchy, not as strict rules grounded only in the principle of hierarchy, as the Mexican Supreme Court has understood them

Abstract:

Amid a democratic crisis in Mexico, independent candidacies emerged as a way to broaden political participation. However, in order to register for the 2018 elections, the Instituto Nacional Electoral (INE) deployed an app without considering differences of class, ethnicity/race, language, time, geography and digital skills. These exclusions became visible in the legal challenges surrounding the candidacy of Marichuy, a Nahua indigenous woman who sought registration as an independent candidate for the presidency. In this chapter we analyze this vertical and mono-cultural use of information and communication technologies (ICT), justified by the discourse of "modernity", as an expression of techno-colonialism. We thereby contribute to understanding the formation of a digital sub-citizenship manifested in legal, participatory and political subordination and stigmatization, in which technocratic values are prioritized over the broadening of democratic channels

Resumo:

No meio de uma crise democrática no México, as candidaturas independentes surgem como forma de expandir a participação política. No entanto, para conseguir o registo nas eleições de 2018, o Instituto Nacional Electoral (INE) implementou uma app sem considerar as diferenças de classe étnicas/raciais, linguísticas, temporais, geográficas e de habilidades digitais. Essas exclusões foram visíveis nas impugnações em torno a candidatura de Marichuy, uma indígena nahua que buscou seu registro como independente para a presidência. Neste capítulo analisamos este uso vertical e mono-cultural das tecnologias da informação e comunicação (TIC), justificado no discurso da "modernidade", como uma expressão de tecno-colonialismo. Contribuímos, assim, para compreender a formação de uma sub-cidadania digital que se manifesta na subordinação e estigmatização legal, participativa e política em que os valores tecnocráticos são priorizados em vez de expandir os canais democráticos

Resumen:

En este artículo se discute la derrotabilidad del precedente constitucional desde una perspectiva analítica y normativa. Analíticamente, se sostiene que la derrotabilidad es una propiedad contingente de los precedentes que se manifiesta cuando nuevas normas adscritas reducen o eliminan el campo de aplicación del precedente original. Normativamente, se propone que la derrotabilidad es una colisión entre los principios formales de igualdad y seguridad jurídica y principios de justicia sustantiva. Una norma se derrota cuando circunstancias o fuentes constitucionalmente relevantes ausentes en el precedente, pero presentes en el caso posterior, justifican emitir una nueva norma que funciona como excepción o invalidación de la norma anterior. De manera más práctica, se proponen cuatro técnicas argumentativas que hacen manifiesta la derrotabilidad de los precedentes. Estas técnicas son la distinción de casos, la circunscripción, la inaplicación y la desaplicación del precedente. Así, el artículo busca contribuir al debate teórico y práctico que ha surgido a partir de que el Pleno de la Suprema Corte resolviera la C.T. 299/2013

Abstract:

This article discusses the defeasibility of constitutional precedent from an analytical and a normative perspective. Analytically, it is argued that defeasibility is a contingent property of precedents that manifests itself when newly ascribed norms reduce or eliminate the scope of the original precedent. Normatively, it is proposed that defeasibility is a collision between the formal principles of equality and legal certainty and principles of substantive justice. A rule is defeated when circumstances or constitutionally relevant sources absent in the precedent, but present in the later case, justify issuing a new rule that functions as an exception to or an override of the previous rule. More practically, four argumentative techniques are proposed that make the defeasibility of precedents manifest. These techniques are the distinction of cases, the circumscription, the inapplication and the disapplication of the precedent. Thus, the article seeks to contribute to the theoretical and practical debate that has emerged since the Plenary of the Supreme Court resolved C.T. 299/2013

Abstract:

This article analyses the migration of the common law doctrine of precedent to civil law constitutionalism. Using the case study of Mexico and Colombia, it suggests how this doctrine should be tailored to the civil law context. Historically, the civil law tradition adhered to the doctrine of jurisprudence constante that grants relative persuasiveness to precedents, once they are reiterated. However, the trend is to consider single constitutional precedents as binding. Universalist judges are borrowing common law concepts to interpret precedents joining the global trend while particularists consider such migration a foreign imposition that distorts the civil law theory of sources. This article takes a dialogical approach and occupies a middle ground between universalist and particularist approaches. The doctrine of precedent should be adopted, but it must also be reconfigured considering three distinctive features of the civil law: (a) canonical rationes decidendi; (b) precedent overproduction; and (c) a fragmented judiciary

Resumen:

Este artículo tiene como objetivo informar los resultados de la puesta a prueba de una propuesta didáctica para la enseñanza de la optimización dinámica, en particular del cálculo de variaciones. El diseño de la propuesta se hizo con base en la teoría APOE y se puso a prueba en una institución de enseñanza superior. Los resultados obtenidos del análisis de las respuestas de los estudiantes a un cuestionario y una entrevista ponen de manifiesto que los estudiantes muestran concepciones proceso y, en ocasiones, objeto de los conceptos abstractos de esta disciplina como resultado de su aplicación, aunque se detectaron algunas dificultades que resultaron difíciles de superar para dichos alumnos

Abstract:

The purpose of this paper is to present the results of a research study on a didactical proposal to teach Dynamic Optimization, in particular, the Calculus of Variations. The proposal design was based on APOS theory and was tested at a Mexican private university. Results obtained from the analysis of students' responses to a questionnaire and an interview show that students construct process conceptions, and in some cases object conceptions, of this discipline's abstract concepts. Some difficulties, however, proved hard for these students to overcome

Resumen:

Se presenta un estado del conocimiento sobre los avances que se han producido en el campo de la investigación en educación matemática, con respecto a la enseñanza y aprendizaje del concepto de espacio vectorial. Para organizar la revisión se utilizaron dos preguntas guía: ¿qué obstáculos para el aprendizaje del concepto de espacio vectorial se han identificado? y ¿qué sugerencias didácticas se han hecho para favorecer el aprendizaje con significado del concepto de espacio vectorial? Además de proporcionar respuesta a estas preguntas, el análisis de los resultados obtenidos ofrece una síntesis del conocimiento actual y una perspectiva acerca de posibles áreas para investigaciones futuras relacionadas con la enseñanza y el aprendizaje del concepto de espacio vectorial

Abstract:

A state of the art is presented on the advances that have taken place in the field of mathematics education research in connection to the teaching and learning of the concept of vector space. Two guiding questions were used to organize the review: what learning obstacles related to the concept of vector space have been identified? and what teaching proposals have been made to promote a meaningful learning of the concept of vector space? In addition to providing answers to these questions, the analysis of the obtained results offers a synthesis of current knowledge and a perspective on possible areas for future research related to teaching and learning of the concept of vector space

Abstract:

This paper proposes a research framework for studying the connections --realized and potential--between unstructured data (UD) and cybersecurity and internal controls. In the framework, cybersecurity and internal control goals determine the tasks to be conducted. The task influences the types of UD to be accessed and the types of analysis to be done, which in turn influences the outcomes that can be achieved. Patterns in UD are relevant for cybersecurity and internal control, but UD poses unique challenges for its analysis and management. This paper discusses some of these challenges including veracity, structuralizing, bias, and explainability

Abstract:

This paper proposes a cybersecurity control framework for blockchain ecosystems, drawing from risks identified in the practitioner and academic literature. The framework identifies thirteen risks for blockchain implementations, ten common to other information systems and three risks specific to blockchains: centralization of computing power, transaction malleability, and flawed or malicious smart contracts. It also proposes controls to mitigate the risks identified; some were identified in the literature and some are new. Controls that apply to all types of information systems are adapted to the different components of the blockchain ecosystem

Abstract:

Context. Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims. We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets, taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods. The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results. Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors
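
A schematic reading of the AGA idea, a real-valued population with no encoding in which the best individual reproduces asexually by mutation over a shrinking search range, can be written in a few lines of Python. Population size, contraction rate and the test function below are arbitrary choices rather than the paper's settings.

    import numpy as np

    rng = np.random.default_rng(2)

    def aga(f, bounds, n_pop=40, iters=60, shrink=0.8):
        # Asexual GA in the spirit described above: no encoding, Gaussian
        # mutation around the current best, and a contracting mutation scale.
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
        scale = (hi - lo) / 2.0
        for _ in range(iters):
            best = pop[np.argmax([f(x) for x in pop])]
            pop = best + rng.normal(0.0, scale, size=(n_pop, lo.size))
            pop = np.clip(pop, lo, hi)
            pop[0] = best            # keep the parent (elitism)
            scale *= shrink          # narrow the search around the parent
        return best

    # Multimodal test function where gradient-free search is handy; the
    # global maximum is at the origin.
    f = lambda x: -np.sum(x ** 2) + 2.0 * np.sum(np.cos(3.0 * x))
    print(aga(f, ([-5.0, -5.0], [5.0, 5.0])))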

Abstract:

This paper deals with research that aims to explore the strategies that favor the understanding of the representation of parametric curves in the plane. We report the results of a three-year-long teaching experience with college students, where we explore and explain the difficulties that students face and the strategies they use when working with problems that involve parameterization

Abstract:

This article proposes a more nuanced method to assess the accuracy of preelection polls in competitive multiparty elections. Relying on data from the 2006 and 2012 presidential campaigns in Mexico, we illustrate some shortcomings of commonly used statistics to assess survey bias when applied to multiparty elections. We propose the use of a Kalman filter-based method that uses all available information throughout an electoral campaign to determine the systematic error in the estimates produced for each candidate by all polling firms. We show that clearly distinguishing between sampling and systematic biases is a requirement for a robust evaluation of polling firm performance, and that house effects need not be unidirectional within a firm's estimates or across firms
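
A stripped-down version of the filtering idea, one candidate, a random-walk state and fixed house effects, can be sketched as follows. In the paper the systematic biases are estimated jointly with the filter rather than assumed, so this is only a schematic illustration and all numbers are invented.

    def kalman_poll_track(polls, house_bias, q=0.09, r=4.0):
        # One-candidate, random-walk Kalman filter over campaign days.
        # polls: list of (day, firm, reported share in points), sorted by day.
        # house_bias: assumed systematic offset per firm, in points.
        day0, firm0, y0 = polls[0]
        x, p = y0 - house_bias.get(firm0, 0.0), 25.0
        last_day, track = day0, [x]
        for day, firm, y in polls[1:]:
            p += q * (day - last_day)              # predict: support drifts
            y_adj = y - house_bias.get(firm, 0.0)  # strip the house effect
            k = p / (p + r)                        # r = sampling variance
            x = x + k * (y_adj - x)
            p = (1.0 - k) * p
            last_day = day
            track.append(x)
        return track

    polls = [(1, 'A', 35.0), (4, 'B', 38.5), (7, 'A', 36.0), (9, 'C', 33.5)]
    print(kalman_poll_track(polls, {'B': 2.0, 'C': -1.5}))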

Resumen:

El estudio de la conducta legislativa en la Cámara de Diputados durante el periodo de 1998 a 2006 presenta un problema potencialmente serio: no todos los votos han sido publicados en la Gaceta Parlamentaria. Analizamos la naturaleza de estos datos en aras de explorar la representatividad de la muestra de votos disponibles y las posibles repercusiones de este problema en los análisis legislativos existentes. Para esto, aprovechamos la aparición de una página de Internet que registra la totalidad de los votos de la Cámara de Diputados desde el año 2006 en paralelo con el sistema existente desde 1998. Mediante la exploración de los mecanismos que generan la omisión de votos y la comparación de distintas estimaciones del comportamiento legislativo, concluimos que los votos no publicados merman la precisión de estimadores de uso común pero no introducen ningún tipo de sesgo. A la par, hacemos pública una base de datos para el estudio del Congreso mexicano

Abstract:

This paper examines the nature of the data available for studying legislative behavior in Mexico. In particular, we evaluate a potentially serious problem: only a subset of roll-call votes have been released for the critical transition period of 1998-2006. We test whether this subset is a representative sample of all votes, and thus suitable for study, or whether it is biased in a way that misleads scholarship. Our research strategy takes advantage of a partial overlap between two roll call vote reporting sources by the Chamber of Deputies: the site with partial vote disclosure, created in 1998 and still in place today; and the site with universal vote disclosure since 2006 only. An examination of the data generation and publication mechanisms, comparing different estimations of legislative behavior, reveals that omitted votes reduce the precision of estimates but do not introduce bias. Scholarship of the lower chamber can therefore proceed with data that we make public with the publication of the paper

Abstract:

We propose a generalized 3D shape descriptor for the efficient classification of 3D archaeological artifacts. Our descriptor is based on a multi-view approach of curvature features, consisting of the following steps: pose normalization of 3D models, local curvature descriptor calculation, construction of the 3D shape descriptor using the multi-view approach and curvature maps, and dimensionality reduction by random projections. We generate two descriptors from two different paradigms: a) handcrafted, wherein the descriptor is manually designed for object feature extraction and directly passed on to the classifier; and b) machine learnt, in which the descriptor automatically learns the object features through a pretrained deep neural network model (VGG-16) used for transfer learning before being passed on to the classifier. These descriptors are applied to two different archaeological datasets: 1) a non-public Mexican dataset, represented by a collection of 963 3D archaeological objects from the Templo Mayor Museum in Mexico City, that includes anthropomorphic sculptures, figurines, masks, ceramic vessels, and musical instruments; and 2) the 3D pottery content-based retrieval benchmark dataset, consisting of 411 objects. Once the multi-view descriptors are obtained, we evaluate their effectiveness by using the following object classification schemes: K-nearest neighbor, support vector machine, and structured support vector machine. Our object descriptors' classification results are compared against five popular 3D descriptors in the literature, namely, rotation invariant spherical harmonic, histogram of spherical orientations, signature of histograms of orientations, symmetry descriptor, and reflective symmetry descriptor
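
The dimensionality-reduction step can be illustrated on its own: a Johnson–Lindenstrauss-style random projection approximately preserves pairwise distances, so high-dimensional multi-view descriptors can be compressed before classification. The descriptor dimension below is an assumption; only the collection size (963 objects) comes from the abstract.

    import numpy as np

    rng = np.random.default_rng(3)

    def random_project(descriptors, out_dim=128):
        # Multiply high-dimensional descriptors by a random Gaussian matrix;
        # the 1/sqrt(out_dim) scaling keeps expected norms comparable.
        d = descriptors.shape[1]
        R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(d, out_dim))
        return descriptors @ R

    # Hypothetical stack of curvature-map descriptors (one row per 3D object).
    X = rng.random((963, 4096))
    Z = random_project(X)
    print(Z.shape)   # (963, 128), ready for a k-NN or SVM classifier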

Abstract:

Delta-hedged option returns consistently decrease in the volatility of volatility changes (volatility uncertainty), for both implied and realized volatilities. We provide a thorough investigation of the underlying mechanisms, including model-risk and gambling-preference channels. Uncertainty about both volatilities amplifies the model risk, leading to a higher option premium charged by dealers. Volatility of volatility-increases, rather than that of volatility-decreases, contributes to the effect of implied volatility uncertainty, supporting the gambling-preference channel. We further strengthen this channel by examining the effects of option end-users' net demand and lottery-like features, and by decomposing implied volatility changes into systematic and idiosyncratic components
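
The dependent variable in studies of this kind is typically built from the textbook delta-hedged gain. The following sketch computes it for a daily-rebalanced Black–Scholes hedge on a simulated price path; this reconstructs the standard object, not the paper's empirical design, rates are set to zero, and all parameters are illustrative.

    import numpy as np
    from math import exp, log, sqrt
    from scipy.stats import norm

    def bs_delta(s, k, tau, sigma, r=0.0):
        d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
        return norm.cdf(d1)

    def bs_call(s, k, tau, sigma, r=0.0):
        d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
        d2 = d1 - sigma * sqrt(tau)
        return s * norm.cdf(d1) - k * exp(-r * tau) * norm.cdf(d2)

    def delta_hedged_gain(path, k, sigma, dt, r=0.0):
        # Option P&L minus the accumulated P&L of the daily-rebalanced delta
        # hedge (interest terms dropped since r = 0 here).
        n = len(path) - 1
        gain = max(path[-1] - k, 0.0) - bs_call(path[0], k, n * dt, sigma, r)
        for i in range(n):
            tau = (n - i) * dt
            gain -= bs_delta(path[i], k, tau, sigma, r) * (path[i + 1] - path[i])
        return gain

    rng = np.random.default_rng(4)
    dt, n, sigma = 1.0 / 252, 21, 0.2
    steps = sigma * sqrt(dt) * rng.standard_normal(n) - 0.5 * sigma ** 2 * dt
    path = 100.0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    print(delta_hedged_gain(path, 100.0, sigma, dt))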

Abstract:

In this paper, we explore the interplay of virus contact rate, virus production rates, and initial viral load during early HIV infection. First, we consider an early HIV infection model formulated as a bivariate branching process and provide conditions for its criticality, R0 > 1. Using dimensionless rates, we show that the criticality condition R0 > 1 defines a threshold on the target cell infection rate in terms of the infected cell removal rate and virus production rate. This result has motivated us to introduce two additional models of early HIV infection under the assumption that the virus contact rate is proportional to the target cell infection probability (denoted by V/(V+θ)). Using the second model, we show that the length of the eclipse phase of a newly infected host depends on the target cell infection probability, and the corresponding deterministic equations exhibit bistability. Indeed, occurrence of viral invasion in the deterministic dynamics depends on R0 and the initial viral load V0. If the viral load is small enough, e.g., V0 ≪ θ, then there will be extinction regardless of the value of R0. On the other hand, if the viral load is large enough, e.g., V0 ≫ θ, and R0 > 1, then there will be infection. Of note, V0 ≈ θ corresponds to a threshold regime above which the virus can invade. Finally, we briefly discuss between-cell competition of viral strains using a third model. Our findings may help explain the HIV population bottlenecks during within-host progression and host-to-host transmission
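
The bottleneck effect can be illustrated from the branching-process viewpoint with a crude simulation in which the invasion probability grows with the initial viral load even though the offspring law is fixed. The caricature below is not the paper's bivariate model, and its parameters are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)

    def invasion_prob(v0, p_infect=0.3, burst_mean=4.0, trials=2000, cap=10000):
        # Branching-process caricature of early infection: each free virion
        # independently infects a target cell with probability p_infect, and
        # each infected cell releases a Poisson(burst_mean) burst of virions.
        # Offspring mean R0 = p_infect * burst_mean; even when R0 > 1, a small
        # initial load V0 often goes extinct, which is the bottleneck effect.
        hits = 0
        for _ in range(trials):
            v = v0
            while 0 < v < cap:
                infections = rng.binomial(v, p_infect)
                v = rng.poisson(burst_mean * infections)
            hits += v >= cap
        return hits / trials

    for v0 in (1, 3, 10):
        print(v0, invasion_prob(v0))   # invasion probability grows with V0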

Abstract:

The MoProSoft Integral Tool, or HIM for its name in Spanish, is a Web-designed system to support monitoring of MoProSoft, a software process model defined as part of a strategy to encourage the software industry in Mexico. The HIM-assistant is a system added to the HIM whose main objectives are to provide a guide for the automated use of MoProSoft and to improve the aid provided to HIM users. To reach these objectives, elements from software engineering, along with two areas of artificial intelligence, multiagent systems and case-based reasoning, were applied to develop the HIM-assistant. The work involved the HIM-assistant analysis and design phases using the MESSAGE methodology, as well as the development of the system and the performance of tests. The major importance of the work lies in the integration of different areas to fulfill the objectives, using existing progress instead of developing a totally new solution

Abstract:

The Ministry of Social Development in Mexico is in charge of creating and assigning social programmes targeting specific needs in the population for the improvement of the quality of life. To better target the social programmes, the Ministry aims to find clusters of households with the same needs based on demographic characteristics as well as poverty conditions of the household. The available data consist of continuous, ordinal, and nominal variables, all of which come from a non-i.i.d. complex design survey sample. We propose a Bayesian nonparametric mixture model that jointly models a set of latent variables, as in an underlying variable response approach, associated with the observed mixed scale data and accommodates the different sampling probabilities. The performance of the model is assessed via simulated data. A full analysis of socio-economic conditions of households in the Mexican State of Mexico is presented

Abstract:

The A-RIO AQM mechanism has recently been introduced as a viable component for implementing the AF Per-Hop Behavior in DiffServ architectures. A-RIO has been thoroughly studied and compared against other AQM algorithms in terms of fairness, performance, and configuration complexity. In this paper, we extend these studies by analyzing how A-RIO behaves in the face of some well-known factors that affect TCP performance: RTT delay, packet size, and the presence of unresponsive flows. Our study is based on extensive ns-2 simulations in settings covering under- and over-provisioned networks

Abstract:

Active queue management (AQM) mechanisms manage queue lengths by dropping packets when congestion is building up; end-systems can then react to such losses by reducing their packet rate, hence avoiding severe congestion. They are also very useful for the differentiated forwarding of packets in the DiffServ architecture. Many studies have shown that setting the parameters of an AQM algorithm may prove difficult and error-prone, and that the performance of AQM mechanisms is very sensitive to network conditions. The Adaptive RIO mechanism (A-RIO) [16] addresses both issues. It requires a single parameter, the desired queuing delay, and adjusts its internal dynamics accordingly. A-RIO has been thoroughly evaluated in terms of delay response and network utilization [16], but no study has been conducted to evaluate its behaviour in terms of fairness. By way of ns-2 simulations, this paper examines A-RIO's ability to fairly share the network's resources (bandwidth) between the flows contending for those resources. Using Jain's fairness index as our performance metric, we compare the bandwidth distribution among flows obtained with A-RIO and with RIO
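
For reference, Jain's fairness index for flow throughputs x1, ..., xn is (Σxi)² / (n·Σxi²); it equals 1 when all flows get the same bandwidth and 1/n when a single flow takes everything. A minimal sketch:

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), ranging from
    1/n (one flow gets everything) to 1 (perfectly fair)."""
    n = len(throughputs)
    s = sum(throughputs)
    return (s * s) / (n * sum(x * x for x in throughputs))

print(jain_index([1.0, 1.0, 1.0, 1.0]))  # 1.0: equal shares
print(jain_index([4.0, 0.0, 0.0, 0.0]))  # 0.25 = 1/n: maximally unfair
```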

Abstract:

Following Davie's example of a Banach space failing the approximation property (1973), we show how to construct a Banach space E which is asymptotically Hilbertian and fails the approximation property. Moreover, the space E is shown to be a subspace of a space with an unconditional basis which is "almost" a weak Hilbert space and which can be written as the direct sum of two subspaces all of whose subspaces have the approximation property

Abstract:

The paper studies probability forecasts of inflation and GDP by monetary authorities. Such forecasts can contribute to central bank transparency and reputation building. Principal-agent problems muddle the usual argument for using scoring rules to motivate probability forecasts; their use to evaluate forecasts, however, remains valid. Public comparison of forecasting results with those of a “shadow” committee helps promote reputation building and thus serves the motivational role. The Brier score and its Yates-partition of the Bank of England’s forecasts are compared with those of a group of non-bank experts
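
For readers unfamiliar with the metric: the Brier score of a set of probability forecasts is the mean squared difference between forecast probabilities and realized binary outcomes (lower is better); the Yates-partition then decomposes it into calibration- and resolution-related components. A minimal sketch of the score itself, with made-up numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and realized
    binary outcomes (0 or 1); lower is better."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# Hypothetical forecasts that an event (e.g., inflation above target)
# occurs, and what actually happened.
p = [0.7, 0.2, 0.9, 0.5]
y = [1, 0, 1, 0]
print(brier_score(p, y))  # 0.0975
```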

Abstract:

Studies of strategic sophistication in experimental normal form games commonly assume that subjects' beliefs are consistent with independent choice. This paper examines whether beliefs are consistent with correlated choice. Players play a sequence of 2x2 normal form games with distinct opponents and no feedback. Another set of players, called predictors, report a likelihood ranking over possible outcomes. A substantial proportion of the reported rankings are consistent with the predictors believing that the choices of actions in the 2x2 game are correlated. Predictions seem to be correlated around focal outcomes, and the extent of correlation over action profiles varies systematically between games (i.e., prisoner's dilemma, stag hunt, coordination, and strictly competitive)

Abstract:

This study reports a laboratory experiment wherein subjects play a hawk-dove game. We try to implement a correlated equilibrium with payoffs outside the convex hull of Nash equilibrium payoffs by privately recommending play. We find that subjects are reluctant to follow certain recommendations. We are able to implement this correlated equilibrium, however, when subjects play against robots that always follow recommendations, including in a control treatment in which human subjects receive the robot "earnings." This indicates that the lack of mutual knowledge of conjectures, rather than social preferences, explains subjects' failure to play the suggested correlated equilibrium when facing other human players

Abstract:

This paper presents a model in which a durable goods monopolist sells a product to two buyers. Each buyer is privately informed about his own valuation. Thus all players are imperfectly informed about market demand. We study the monopolist's pricing behavior as players' uncertainty regarding demand vanishes in the limit. In the limit, players are perfectly informed about the downward-sloping demand. We show that in all games belonging to a fixed and open neighborhood of the limit game there exists a generically unique equilibrium outcome that exhibits Coasian dynamics and in which play lasts for at most two periods. A laboratory experiment shows that, consistent with our theory, outcomes in the Certain and Uncertain Demand treatments are the same. Median opening prices in both treatments are roughly at the level predicted and considerably below the monopoly price. Consistent with Coasian dynamics, these prices are lower for higher discount factors. Demand withholding, however, leads to more trading periods than predicted

Resumen:

Se propone un nuevo paradigma educativo para México, a partir de una visión amplia, crítica e innovadora que corresponda a la realidad, necesidades y circunstancias del país

Abstract:

In this article, we propose a new educational paradigm for Mexico based on an encompassing, critical, and innovative vision conforming with the country’s realities, needs, and circumstances

Resumen:

Un recuerdo del gran escritor y personaje, don Ramón del Valle Inclán: su personalidad estrafalaria y consistente; su pertenencia –y amistad– a la Generación del 98, y la creación estética del esperpento, que se refleja en sus obras, particularmente en las más famosas, Luces de Bohemia y Tirano Banderas

Abstract:

This article is dedicated to the memory of the great writer and personage, Don Ramón del Valle Inclán. We pay tribute to his outlandish yet consistent personality, his association and friendship with the Generation of ’98, and the creation of the esperpento aesthetic, reflected in his works, notably the most famous, Luces de Bohemia and Tirano Banderas

Abstract:

Universality is a desirable feature in any system. For decades, elusive measurements of three-phase flows have yielded countless permeability models to describe them. However, the equations governing the solution of water and gas co-injection have a robust structure. This universal structure holds for Riemann problems in green oil reservoirs. In the past, we established it for a large class of three-phase flow models, including convex Corey permeability, Stone I, and Brooks-Corey models. These models share the property that characteristic speeds become equal at a state somewhere in the interior of the saturation triangle. Here we construct a three-phase flow model with unequal characteristic speeds in the interior of the saturation triangle, equality occurring only at a point on the boundary of the saturation triangle. Yet the solution for this model still displays the same universal structure, which favors the two possible embedded two-phase flows of water-oil or gas-oil. We focus on showing this structure under the minimum conditions that a permeability model must meet. This finding is a guide to seeking a purely three-phase flow solution maximizing oil recovery

Abstract:

In 1977, Korchinski presented a new type of shock discontinuity in conservation laws. These singular solutions were coined δ-shocks, since a time-dependent Dirac delta is involved. A naive description is that such a δ-shock is of the overcompressive type: a single shock wave belonging to both families, the four characteristic lines of which impinge on the shock itself. In this work, we open the fan of solutions by studying two-family waves without intermediate constant states but possessing central rarefactions or comprising δ-shocks

Abstract:

We discuss the solution for commonly used models of the flow resulting from the injection of any proportion of three immiscible fluids, such as water, oil, and gas, in a reservoir initially containing oil and residual water. The solutions supported in the universal structure generically belong to two classes, characterized by the location of the injection state in the saturation triangle. Each class of solutions occurs for injection states in one of two regions, separated by a curve of states for most of which the interstitial speeds of water and gas are equal. This is a separatrix curve because on one side water appears at breakthrough, while gas appears for injection states on the other side. In other words, the behavior near breakthrough is flow of oil and of the dominant phase, either water or gas; the non-dominant phase is left behind. Our arguments are rigorous for the class of Corey models with convex relative permeability functions. They also hold for Stone’s interpolation I model [5]. This description of the universal structure of solutions for the injection problems is valid for any values of phase viscosities. The inevitable presence of an umbilic point (or of an elliptic region for the Stone model) seems to be the cause of this universal solution structure. This universal structure was perceived recently in the particular case of quadratic Corey relative permeability models with an injected state consisting of a mixture of water and gas but no oil [5]. However, the results of the present paper are more general in two ways. First, they are valid for a set of permeability functions that is stable under perturbations, the set of convex permeabilities. Second, they are valid for the injection of any proportion of three phases, rather than only the two phases that were the scope of [5]

Abstract:

Flow of three fluids in porous media is governed by a system of two conservation laws. Shock solutions are described by curves in state space, which is the saturation triangle of the fluids. We study a certain bifurcation locus of these curves, which is relevant for certain injection problems. Such a structure arises, for instance, when water and gas are injected in a mature reservoir either to dislodge oil or to sequestrate CO2. The proof takes advantage of a certain wave curve to ensure that the waves in the flow are a rarefaction preceded by a shock, which is in turn preceded by a constant two-phase state (i.e., it lies at the boundary of the saturation triangle). For convex permeability models of Corey type, the analysis reveals further details, such as the number of possible two-phase states that correspond to the above-mentioned shock, whatever the left state of the latter is within the saturation triangle

Abstract:

In number theory, the study of prime numbers is of central importance. Hilbert is known to have believed that this theory would always remain the purest part of mathematics; the turning point came with cryptography and, in turn, the search for ever larger prime numbers. Throughout history, from 1952 to the present day, 32 of the 33 largest recorded prime numbers are the so-called Mersenne primes; only from 1989 to 1992 did the number 391581·2^216193−1 break this rule. Since 1996, all the records have come from the collective GIMPS project. This work proposes a simple proof of the theorem through which we know the Mersenne primes, without sophisticated number-theoretic tools. The proof is accessible and so simple that it allows us to go a little further and generalize the Mersenne primes by exhibiting a large part of the family that had remained hidden

Abstract:

Is ignition or extinction the fate of an exothermic chemical reaction occurring in a bounded region within a heat conductive solid consisting of a porous medium? In the spherical case, the reactor is modeled by a system of reaction-diffusion equations that reduces to a linear heat equation in a shell, coupled at the internal boundary to a nonlinear ODE modeling the reaction region. This ODE can be regarded as a boundary condition. This model allows the complete analysis of the time evolution of the system: there is always a global attractor. We show that, depending on physical parameters, the attractor contains one or three equilibria. The latter case has special physical interest: the two equilibria represent attractors ("extinction" or "ignition") and the third equilibrium is a saddle. The whole system is well approximated by a single ODE, a "reduced" model, justifying the "heat transfer coefficient" approach of chemical engineering
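
The "reduced model" point can be illustrated with the classical Semenov-type energy balance, in which Arrhenius heat generation competes with Newtonian heat loss through a heat transfer coefficient; the equation and numbers below are a textbook caricature assumed for illustration, not the paper's spherical-shell system.

```python
import numpy as np
from scipy.integrate import odeint

# Semenov-type caricature: a single "reduced" energy balance in which
# Arrhenius heat generation competes with Newtonian heat loss through
# a heat transfer coefficient h. Illustrative numbers only.
A, E, Ta, h = 1e5, 3000.0, 300.0, 1.0

def dTdt(T, t):
    return A * np.exp(-E / T) - h * (T - Ta)

t = np.linspace(0.0, 50.0, 500)
for T0 in (400.0, 600.0):    # straddling the saddle equilibrium (~470)
    T = odeint(dTdt, T0, t)[:, 0]
    print(f"T0 = {T0}: T(50) ~ {T[-1]:.0f}")
# T0 = 400 relaxes to the cool branch ("extinction");
# T0 = 600 runs away to the hot branch ("ignition").
```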

Abstract:

This paper provides evidence on the difficulty of expanding access to credit through large institutions. We use detailed observational data and a large-scale countrywide experiment to examine a large bank's experience with a credit card that accounted for approximately 15% of all first-time formal sector borrowing in Mexico in 2010. Borrowers have limited credit histories and high exit risk: a third of all study cards are defaulted on or canceled during the 26-month sample period. We use a large-scale randomized experiment on a representative sample of the bank's marginal borrowers to test whether contract terms affect default. We find that large experimental changes in interest rates and minimum payments do little to mitigate default risk. We also use detailed data on purchases and payments to construct a measure of bank revenue per card and find it is generally low and difficult to predict (using machine learning methods), perhaps explaining the bank's eventual discontinuation of the product. Finally, we show that borrowers generating a favorable credit history are much more likely to switch banks, providing suggestive evidence of a lending externality. Taken together, these facts highlight the difficulty of increasing financial access using large formal sector financial organizations

Abstract:

This paper analyzes the existence and extent of downward nominal wage rigidities in the Mexican labor market using data from the administrative records of the Mexican Social Security Institute (IMSS). This establishment-level, panel dataset allows us to track workers employed with the same firm, observe their wage profiles and calculate the nominal-wage changes they experience over time. Based on the estimated density functions of nominal wage changes, we are able to calculate some standard tests of nominal wage rigidity that have been proposed in the literature. Furthermore, we extend these tests to take into account the presence of minimum wage laws that may affect the distribution of nominal wage changes. The densities and tests calculated using these data are similar to those obtained using administrative data from other countries, and constitute a significant improvement over the measures of nominal wage rigidities obtained from household survey data. We document the importance of minimum wages in the Mexican labor market, as evidenced by the large fraction of minimum wage earners and the indexation of wage changes to the minimum wage increases. We find considerably more nominal wage rigidity than previous estimates obtained for Mexico using data from the National Urban Employment Survey (ENEU) suggest, but lower than that reported for developed countries by other studies that use comparable data

Abstract:

Autonomous agents (AAs) are capable of evaluating their environment from an emotional perspective by implementing computational models of emotions (CMEs) in their architecture. A major challenge for CMEs is to integrate the cognitive information projected from the components included in the AA's architecture. In this chapter, a scheme for modulating emotional stimuli using appraisal dimensions is proposed. In particular, the proposed scheme models the influence of cognition on appraisal dimensions by modifying the limits of the fuzzy membership functions associated with each dimension. The computational scheme is designed to facilitate, through input and output interfaces, the development of CMEs capable of interacting with cognitive components implemented in a given cognitive architecture of AAs. A proof of concept based on real-world data is carried out, providing empirical evidence that the proposed mechanism can properly modulate the emotional process

Abstract:

In this paper we present a mechanism to model the influence of agents’ internal and external factors on the emotional evaluation of stimuli in computational models of emotions. We propose the modification of configurable appraisal dimensions (such as desirability and pleasure) based on influencing factors. As part of the presented mechanism, we introduce influencing models to define the relationship between a given influencing factor and a given set of configurable appraisal dimensions utilized in the emotional evaluation phase. Influencing models translate factors’ influences (on the emotional evaluation) into fuzzy logic adjustments (e.g., a shift in the limits of fuzzy membership functions), which allow biasing the emotional evaluation of stimuli. We implemented a proof-of-concept computational model of emotions based on real-world data about individuals’ emotions. The obtained empirical evidence indicates that the proposed mechanism can properly affect the emotional evaluation of stimuli while preserving the overall behavior of the model of emotions
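
To make the fuzzy-adjustment idea concrete, the sketch below shifts the limits of a triangular membership function by an amount proportional to an influencing factor, biasing how strongly a stimulus activates a "desirability" dimension. The function shape, names, and gain are illustrative assumptions, not the authors' implementation.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership with limits a < b < c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shifted_membership(x, a, b, c, influence, gain=0.1):
    """Shift all three limits by gain * influence (influence in [-1, 1]),
    biasing the emotional evaluation of the stimulus x."""
    d = gain * influence
    return triangular(x, a + d, b + d, c + d)

stimulus = 0.6
print(triangular(stimulus, 0.3, 0.5, 0.7))               # baseline: 0.5
print(shifted_membership(stimulus, 0.3, 0.5, 0.7, 1.0))  # shifted up: 1.0
```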

Abstract:

In this paper we introduce the concept of configurable appraisal dimensions for computational models of emotions of affective agents. Configurable appraisal dimensions are adjusted based on internal and/or external factors of influence on the emotional evaluation of stimuli. We developed influencing models to define the extent to which influencing factors should adjust configurable appraisal dimensions. Influencing models define a relationship between a given influencing factor and a given set of configurable appraisal dimensions. Influencing models translate the influence exerted by internal and external factors on the emotional evaluation into fuzzy logic adjustments, e.g., a shift in the limits of fuzzy membership functions. We designed and implemented a computational model of emotions based on real-world data about emotions to evaluate our proposal. Our empirical evidence suggests that the proposed mechanism properly influences the emotional evaluation of stimuli of affective agents

Abstract:

In this paper, we present a computational model of emotions based on an Integrative Framework designed to model the interaction of cognition and emotion. In particular, we devise mechanisms for assigning an emotional value to events perceived by autonomous agents using a set of appraisal variables. Defined as fuzzy sets, these appraisal variables model the influence of cognition on emotion assessment. We do this by changing the limits of the fuzzy membership functions associated with each appraisal variable. In doing so, we aim to provide agents with a degree of emotional intelligence. We also define a case study involving three agents, two with different personalities (as a cognitive component) and one without a personality, to explore their reactions to the same stimulus, obtaining as a result a different emotion for each agent. We observed that emotions are biased by the interaction of cognitive and affective information, suggesting the elicitation of more precise emotions

Abstract:

This paper presents a model to optimize the inventory balancing of a bike sharing system. It is applied with real Ecobici data from a specific number of stations in the Polanco area of Mexico City to perform static rebalancing (during a defined period of the day). A satisfaction function is defined that takes into account the probability of finding available bicycles and free docks at each station. A weighted function of satisfaction and the total route time for loading and unloading with a single vehicle is optimized by solving a mixed-integer linear programming problem. The results suggest that it is possible to optimize inventory rebalancing in a bike sharing system with minimal resources

Resumen:

A cinco siglos del encuentro entre dos mundos, que dio inicio a un periodo histórico en el cual se gestaron los cimientos del México actual, se analizan varios aspectos del proceso de creación y consolidación de Nueva España, su papel dentro de la monarquía hispánica y la Iglesia católica, así como sus posibles consecuencias en el desarrollo de la cultura mexicana

Abstract:

Five centuries after the encounter between two worlds, which began a historical period in which the foundations of today’s Mexico were developed, several aspects of the process of creation and consolidation of New Spain, its role within the Hispanic monarchy and the Catholic church are analyzed, as well as its possible consequences in the flourishing of Mexican culture

Abstract:

The purpose of this study is to analyze how the value generated from labor (intellectual and manual) and from natural resources (which also contribute value) is generated (gross domestic product), allocated (national income), distributed (disposable income), used (expenditure and saving), and accumulated (wealth); that is, the purpose is to study inequality in the distribution of the value generated in the economy from the standpoint of the objective theory of value, rather than measuring the subjective inequality of well-being (happiness) through consumption (utility). While the issue of capabilities and freedoms (equality of opportunity) is considered important, the broader framework of the need to fulfill human rights is taken as the reference. For this reason, greater importance is given to the urgent measures that must be taken ex ante, since these would hold the solution to the problems of poverty and inequality in the countries of Latin America and the Caribbean. The benefits generated by society must be distributed fairly, and all its members must be granted the full enjoyment of human rights, in order to build a more just world

Resumen:

La mayoría de los investigadores sobre la desigualdad en México han concluido que, aunque la inequidad en los ingresos es muy alta, su tendencia es a la baja. Han llegado a esta conclusión porque han utilizado las cifras oficiales de ingresos de las encuestas de hogares, sin hacer ninguna corrección. Por el contrario, en este estudio se propone un ajuste a los datos de ingresos provenientes de las encuestas de ingresos y gastos de los hogares, basado en las cuentas nacionales. Las cifras ajustadas muestran que la desigualdad es alta y creciente, debido a las políticas públicas implementadas desde mediados de la década de 1980. Por lo tanto, debemos atrevernos a pensar de manera diferente y cambiar el curso económico del país

Abstract:

Most inequality researchers in Mexico have concluded that although income inequality is very high, there is a downward trend in this inequality. They have come to this conclusion because they have used official income figures from household surveys without making any correction. In contrast, this study proposes an adjustment to income data from household income and expenditure surveys, based on national accounts. Adjusted figures show that inequality is high and increasing, due to public policies implemented since the mid-1980s. Therefore, we must dare to think differently and change the economic course of the country

Abstract:

In 2014, the country's total wealth amounted to 76.7 trillion pesos. 37% of it was in the hands of households; the government managed 23%, private companies 19%, public companies 9%, the rest of the world owned 7%, and financial institutions 5%. On average, each household would have, if there were an equitable distribution, 900,000 pesos in physical assets (houses, land, automobiles, and various household goods) and financial assets (money and financial investments), an amount that would be more than enough for people to live comfortably: close to 400,000 pesos per adult, on average. Unfortunately, the distribution is very unequal. Two thirds of the wealth is in the hands of the richest 10% of the country, and the very rich 1% hold more than a third. Hence, the Gini coefficient of wealth is 0.79. The distribution is even more unequal in financial assets: 80% is owned by the richest 10%. In 2015, there were only 211,000 brokerage-house contracts held by Mexicans, with a total investment of 16 trillion pesos, 22% of the national wealth. 11% of the contracts have an investment amount greater than 500 million pesos and account for 79.5% of the total investment. That is, there are 23,000 people (if we assume one contract per person) who hold 80% of the investment in the Mexican Stock Exchange. This is why Mexico appears on the Forbes list, as well as in the reports prepared by the financial institutions in charge of managing wealth funds, which see in the country a market to be served. In the last eleven years, between 2003 and 2014, the country's wealth grew at an average annual rate of 7.9% in real terms, so Mexico doubled its wealth between 2004 and 2014. In contrast, gross domestic product grew by a meager 2.6% per year on average over the same period. [...]

Abstract:

This study proposes a methodology for adjusting the data from household income and expenditure surveys in Mexico with national accounts information, in order to have reliable information for the study of income and wealth inequality, especially among the highest-income sectors. Using a method that allocates income appropriately without affecting poverty measures, it is estimated that inequality in Mexico is significantly higher than previously thought. The share of total current income concentrated in the richest 10% of Mexican families rises from 35% to 62%, raising the Gini coefficient from 0.45 to 0.68. Likewise, the richest 1% of families concentrate 22.8% of total income, with an average income of 625,000 pesos per month. Using the Pareto function, this paper also concludes that the income share of the richest 1% of households rises to 34.2%, with average incomes of 973,000 pesos per month, while the top 0.1% of families (just over 31,000) account for 19% of income, with average earnings of 5,000,000 pesos per month. This study also notes that if only the allocation of primary income is considered, thereby excluding transfers, the richest 10% concentrate 66%, raising the Gini coefficient to 0.73

Resumen:

Se analizan las principales ideas del libro El capital en el siglo xxi de Thomas Piketty: las fuerzas que inciden en la desigualdad de la riqueza y el ingreso; la primera ley del capitalismo y el aumento en la proporción del capital en la economía; la segunda ley del capitalismo y el aumento en la proporción de riqueza respecto al ingreso nacional; la desigualdad en los ingresos del trabajo y del capital; el cambio en la composición de la riqueza y las recomendaciones del autor para salvar la globalización y el sistema de mercado de los problemas sociales y políticos que la inequidad ha producido. Se confrontan estas ideas con México. Se concluye que el libro de Piketty puede ayudar a diseñar un mejor futuro para México, siempre y cuando nos atrevamos a pensar diferente

Abstract:

In this article, we will analyze the main ideas from Thomas Piketty’s book, Capital in the Twenty-First Century. They consist of the following: the elements causing inequality in wealth and income, the first law of capitalism and the increase of the proportion of capital in the economy, the second law of capitalism and the increase in the proportion of wealth with respect to national income, the inequalities in the division of income of labor and capital, and the changing forms of wealth. Moreover, the author gives recommendations to save both globalization and the market economy from the social and political problems brought on by inequality. All these ideas are contrasted with the Mexican economy and it is concluded that Piketty’s contributions are helpful to devise a better future for our country, as long as we dare to think differently

Resumen:

El problema del hambre en México es aún más grave de lo que se piensa. Quien no sufre subnutrición, está mal nutrido; la población de México está famélica u obesa; tan sólo el 14% tiene una nutrición adecuada. Una de las causas más importantes es el cambio en el entorno alimenticio, producto de la apertura comercial de México con Estados Unidos. Como parte del tlcan han llegado a México una gran cantidad de productos alimenticios procesados que han provocado una “epidemia” de diabetes en el país, que ocupa el primer lugar entre las causas de defunciones. La “Cruzada contra el Hambre” es insuficiente: se requiere implantar políticas públicas más contundentes, como un impuesto a la importación de productos alimenticios procesados, si en verdad se desea enfrentar el reto

Abstract:

Hunger in Mexico is an even more serious problem than commonly thought. Those who are not undernourished are malnourished; the Mexican population is either starving or obese, and only 14% of the population has adequate nutrition. One of the main causes is the change in the food environment due to the opening of commercial borders between Mexico and the United States. As part of NAFTA, a great variety of processed food products has arrived, causing a diabetes epidemic; diabetes is now the leading cause of death in our country. Thus, the “Crusade against Hunger” is insufficient, and more forceful public policies are called for, namely, a tax on imported processed food

Resumen:

Se analiza la tesis del proceso de individualización de Ulrich Beck, sociólogo alemán contemporáneo, y se le compara con la realidad empírica de México. Para el autor, la modernidad reflexiva es un reflejo de la política y la tecnología; se trata de una revolución de las consecuencias a partir de tres ámbitos: ingreso, empleo y familia. A diferencia de Beck, se propone una distinción entre individualización, producto del Estado de bienestar, y la que surge de su desmantelamiento

Abstract:

In this article, we analyze the individualization thesis of Ulrich Beck, a contemporary German sociologist, and compare it with Mexico’s empirical reality. He proposes that reflexive modernization is a reflection of politics and technology, a revolution of consequences in three domains: income, employment, and family. In contrast to Beck, we propose a distinction between the individualization that is a product of the Welfare State and that which results from its dismantling

Abstract:

Bike Sharing Systems (BSS) are an integral part of multimodal transport systems in urban areas. These systems have many advantages, such as low cost, environmental friendliness, and flexibility. Nevertheless, the lack of bicycles and the lack of spaces to drop them off discourage people from using the BSS. Thus, we propose a Saturation Index (SI) to identify the number of bicycles at a station over a period of time (seconds, minutes, hours); based on the SI, the supply and demand levels are known for every station. With those levels, the number of bicycles to be moved by truck among the stations is computed by solving the transshipment model. Finally, a route is computed using a heuristic to minimize the travel distance of the truck among the stations. To test our approach, we used the dataset of the ECOBICI system in Mexico City and programmed a computational application based on R and Python. The results show that, during the day, a station may change from supplying bicycles to demanding them; hence, the proposed approach should be run every time the managers want to rebalance the system
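
As a rough illustration of the first step, one plausible reading of the Saturation Index is the fraction of docks occupied at a station over a time window, with thresholds splitting stations into demand and supply sides; the exact definition, thresholds, and station data below are assumptions for illustration only.

```python
def saturation_index(bikes, capacity):
    """One plausible reading of SI: the fraction of docks occupied."""
    return bikes / capacity

def classify_station(si, low=0.25, high=0.75):
    """Hypothetical thresholds splitting stations into demand
    (needs bikes), balanced, and supply (has surplus) groups."""
    if si < low:
        return "demand"
    if si > high:
        return "supply"
    return "balanced"

stations = {"Polanco-1": (2, 20), "Condesa-3": (20, 24), "Centro-7": (10, 20)}
for name, (bikes, cap) in stations.items():
    si = saturation_index(bikes, cap)
    print(f"{name}: SI = {si:.2f} -> {classify_station(si)}")
```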

Abstract:

In this paper, we predict the overall withdrawal of Ecobici bicycles for a given day. The principal problem addressed was the lack of data available to understand certain behavior related to Ecobici demand at some times of the day. However, with the information available on the Ecobici website, we were able to fit a time series model that helps us forecast total withdrawals in the short term; this can be used to estimate overall demand for a given hour of the day, in order to identify the times of day when demand exceeds availability. The principal motivation of our analysis is to predict the demand at each Ecobici station, so this model serves as a benchmark for future analyses
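
A minimal sketch of this kind of short-term forecast, using a generic ARIMA specification from statsmodels; the synthetic hourly series, model order, and horizon are placeholders, not the model actually fitted to the Ecobici data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for hourly total withdrawals; the real input
# would come from the Ecobici open data.
rng = np.random.default_rng(1)
hours = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
cycle = 200 + 80 * np.sin(2 * np.pi * hours.hour / 24)
series = pd.Series(cycle + rng.normal(0, 10, len(hours)), index=hours)

# Placeholder order; in practice it would be chosen via AIC/diagnostics.
fit = ARIMA(series, order=(2, 0, 1)).fit()
print(fit.forecast(steps=6))  # predicted withdrawals, next six hours
```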

Abstract:

In this article, we examine how collective notions of belonging and imagination become a fertile terrain upon which transnational websites can sustain certain social practices across national boundaries that would be otherwise difficult. Drawing on field work carried out in the United States and Mexico, and using transnational imagination as our analytical lens, we observed three phenomena that are closely related to the use of a transnational website by a migrant community. First, the transnational website under study was a place for a collective imaginary rather than just for the circulation of news. Also, through transnational imagination, migrants can make claims about their status in their community of origin. Moreover, the website is instrumental in harmonizing the various views of the homelands’ realities. Finally, the website can inspire us to look beyond dyadic forms of communication

Abstract:

When somebody dies, her or his presence might still linger in others’ lives on Social Networking Sites (SNS). Several factors might influence the way we perceive this digital presence after the death, including personal and religious beliefs. In this work, we present results from a study aimed at examining the differences between the way we perceive an online profile before and after the owner of the profile has died. In the few weeks following the death, the digital presence seems to become more salient, although this salience might be only transient. Our findings highlight the importance of studying this area and raise further questions that need to be addressed in order to better understand this topic

Abstract:

Let τ be a hereditary torsion theory on Mod-R. For a right τ-full R-module M, we establish that [τ, τ ∨ ξ(M)] is a Boolean lattice; we find necessary and sufficient conditions for the interval [τ, τ ∨ ξ(M)] to be atomic, and we give conditions for the atoms to be of some specific type in terms of the internal structure of M. We also prove that there are lattice isomorphisms between the lattice [τ, τ ∨ ξ(M)] and the lattice of τ-pure fully invariant submodules of M, under the additional assumption that M is absolutely τ-pure. With the aid of these results, we obtain a decomposition of a τ-full and absolutely τ-pure R-module M as a direct sum of τ-pure fully invariant submodules N and N′ with different atomic characteristics on the intervals [τ, τ ∨ ξ(N)] and [τ, τ ∨ ξ(N′)], respectively

Resumen:

Objetivo. Describir y analizar el gasto de la Secretaría de Salud asociado con iniciativas de comunicación social de las campañas de prevención de enfermedades transmitidas por vectores (Zika, chikunguña y dengue) y la evaluación de impacto o resultados. Material y métodos. La información se obtuvo de 690 contratos de prestación de servicios de comunicación social (2015-2017), asociados con dos declaraciones de emergencia epidemiológica (EE-2-2015 y EE-1-2016). Resultados. Se concluye una débil evaluación de impacto del gasto público. No existe evidencia suficiente que demuestre la correspondencia del gasto en comunicación social con la efectividad y cumplimiento de las campañas. Conclusiones. Los hallazgos permiten definir recomendaciones para vigilar, transparentar y hacer más eficiente el gasto público. Existe información pública sobre el gasto; sin embargo, es necesario garantizar mecanismos de transparencia, trazabilidad de contratos y evaluación de impacto de las campañas

Abstract:

Objective. To describe and analyze the expenditure of the Ministry of Health associated with social communication campaigns for the prevention of vector-borne diseases (Zika, chikungunya, and dengue). Materials and methods. The information was obtained through the analysis of 690 contracts for the provision of social communication services (2015-2017), linked to two epidemiological emergency declarations (EE-2-2015 and EE-1-2016). Results. The analysis reveals weak evaluation of the impact of public spending: there is not enough evidence to show that social communication spending corresponded to the effectiveness and fulfillment of the campaigns' goals. Conclusions. The findings allow us to define recommendations to monitor public spending. There are platforms that provide information on social communication spending; however, it is necessary to guarantee transparency, accountability, and the governance of those responsible for social communication and official advertising, and to strengthen outcome evaluation mechanisms

Abstract:

The expansion of democracy in the world has been paradoxically accompanied by a decline of political trust. By looking at the trends in political trust in new and stable democracies over the last 20 years, and their possible determinants, we claim that an observable decline in trust reflects the post-honeymoon disillusionment rather than the emergence of a more critical citizenry. However, the first new democracies of the ‘third wave’ show a significant reemergence of political trust after democratic consolidation. Using data from the World Values Survey and the European Values Survey, we develop a multivariate model of political trust. Our findings indicate that political trust is positively related to well-being, social capital, democratic attitudes, political interest, and external efficacy, suggesting that trust responds to government performance. However, political trust is generally hindered by corruption permissiveness, political radicalism and postmaterialism. We identify differences by region and type of society in these relationships, and discuss the methodological problems inherent to the ambiguities in the concept of political trust

Abstract:

In this paper, we present an agent-based simulation (ABS) model of a synthetic biology system that captures substrates and transfers them amongst a group of enzymes until reaching an acceptor enzyme, which generates a substrate-channeling event (SCE). In particular, we analyze the number of simulation cycles required to reach a pre-specified number of SCEs varying the system composition, which is given by the number of enzymes of two types: donor and acceptor. The results show an efficient frontier that generates the desired number of SCEs with the minimum number of cycles and the lowest acceptor:donor ratio for a given density of enzymes in the system. This frontier is characterized by an exponential function to define the system composition that would minimize the number of cycles to generate the desired SCEs. The output of the ABS confirms that compositions obtained by this function are highly efficient

Resumen:

Siguiendo las recomendaciones de la Organización para la Cooperación y Desarrollo Económico, hemos utilizado la información de patentes contenida en la base de datos de consulta PATENTSCOPE como un indicador de innovación tecnológica con el fin de analizar la participación de los inventores mexicanos en las solicitudes de patentes en un periodo de veinte años (1995-2015). Se realizó el análisis tomando en cuenta el género de los inventores para contrastar la participación de hombres y mujeres. Se muestran algunos indicadores tales como participación, contribución y presencia. Los resultados del estudio establecen que las inventoras mexicanas participan en los títulos de patentes con un equipo de inventores pequeño a mediano (de acuerdo al número de inventores enlistados), mientras los inventores mexicanos tienden a hacerlo en solitario. Se establece también que el área tecnológica en la que los inventores mexicanos hombres y mujeres tienen mayor participación es la de química y metalurgia. Los resultados revelan disparidades de género que deberían atenderse mediante políticas públicas para alcanzar las Metas del Milenio y las Metas de Sustentabilidad y Desarrollo establecidas por la ONU, así como promover la equidad de género en las actividades relacionadas con ciencia y tecnología

Abstract:

Following the recommendations of the Organisation for Economic Co-operation and Development, we have used patent data comprised in the PATENTSCOPE database as an indicator of technological innovation in order to analyze Mexican inventors' involvement in patent filing over a 20-year period (1995-2015). The analysis was gender-disaggregated to observe patterns and trends in the participation of both male and female inventors. Indicators such as participation, contribution, and presence are shown. Findings reveal that Mexican female inventors more often apply for patent titles within small to medium-sized teams, while male inventors tend to apply alone. It has also been found that the strongest technological area in which both male and female Mexican inventors apply for patent titles is chemistry and metallurgy, inclusive of all its subareas. The results reveal gender disparities that should be addressed in Mexican public policy to accomplish the United Nations Millennium Goals and the UN Sustainable Development Goals, and to promote gender equity in science and technology related activities

Resumo:

Seguindo as recomendações da Organização para a Cooperação e Desenvolvimento Econômico, temos utilizado a informação de patentes contida na base de dados de consulta PATENTSCOPE como um indicador de inovação tecnológica com a finalidade de analisar a participação dos inventores Mexicanos nas solicitações de patentes em um período de vinte anos (1995-2015). Foi realizada a análise levando em consideração o género dos inventores para contrastar a participação de homens e mulheres. São mostrados alguns indicadores tais como participação, contribuição e presença. Os resultados do estudo estabelecem que as inventoras Mexicanas participam nos títulos de patentes com uma equipe de inventores entre pequena a média (de acordo com o número de inventores listados), por sua vez, os inventores Mexicanos tendem a fazê-lo à sós. Foi estabelecido também que as áreas tecnológicas nas quais os inventores Mexicanos, homens e mulheres, têm maior participação são as de Química e Metalurgia. Os resultados revelam disparidades de género que deveriam ser atendidas mediante políticas públicas para alcançar as Metas do Milênio e as Metas de Sustentabilidade e Desenvolvimento estabelecidas pela ONU, assim como promover a equidade de género nas atividades relacionadas com Ciência e Tecnologia

Abstract:

University teaching in times of the COVID-19 pandemic has been exposed to new challenges and requirements. The fundamental challenge is to show that institutions designed around, and rooted in, a face-to-face model can offer the same educational quality in a digital format. The challenges come, on one side, from students who do not see added value in digital distance education and, on the other, from employers and families who strongly question the price they are paying, in tuition and in human resources, for an education now delivered remotely. Faced with this problem, there are two opposite attitudes: to wait or to embrace. The institutions that have decided to wait believe that this passage is temporary and that the adjustments they must make rest on presenting a mitigation plan, with the hope of returning to face-to-face normality. Other institutions have decided to accept that the changes are more profound and that the pandemic is an opportunity to restructure and rethink their educational model. This paper presents some routes for pursuing this second attitude through a model called Effective Education also Digital (EED)

Resumen:

El trabajo propone distinguir entre el razonamiento práctico moral y el razonamiento jurídico, a partir de dos criterios. El primero es que el razonamiento práctico moral es del dominio exclusivo de la razón práctica y que carece, así, de exigencias de la razón teórica. El segundo criterio es que el razonamiento jurídico posee un cierto grado de lo que se denomina racionalidad de método. El razonamiento práctico moral carece de esta clase de racionalidad. La tesis se argumenta a partir de la distinción entre razón teórica y razón práctica. Por un lado, el razonamiento teórico tiene relación directa con el conocimiento acerca del mundo. Por otro, el razonamiento práctico se ocupa de decidir el curso de acción que se ha de elegir en un caso concreto. Aunque relacionadas, las racionalidades son regidas por diferentes criterios de evaluación. Las diferentes exigencias de razón permiten advertir, también, que ciertos razonamientos jurídicos son pasibles de ciertos sentidos de racionalidad que no son aplicables al razonamiento moral

Abstract:

The article proposes to distinguish moral practical reasoning from legal reasoning on the basis of two criteria. The first criterion is that moral practical reasoning belongs exclusively to the domain of practical reason and thus has no requirements from the domain of theoretical reason. The second criterion is that legal reasoning possesses, to a degree, a type of rationality called method rationality; such rationality cannot be said to be present in moral practical reasoning at all. The thesis is argued from the distinction between theoretical reason and practical reason. On the one hand, theoretical reasoning is directly concerned with knowledge about the world. On the other, practical reasoning is concerned with deciding the course of action to be chosen in a particular situation. Although related, the two rationalities are governed by different evaluation criteria. These differing requirements of reason also reveal that certain legal reasonings admit senses of rationality that are not applicable to moral reasoning

Abstract:

In this paper, we propose a theoretical model of an ITS (Intelligent Tutoring System) capable of improving and updating computer-aided navigation based on Bloom's taxonomy. For this, we use the Bayesian Knowledge Tracing algorithm, performing adaptive control of the navigation among different levels of cognition in online courses. These levels are defined by a taxonomy of educational objectives with a hierarchical order, in terms of the control that some processes have over others, called Marzano's taxonomy, which takes into account the metacognitive system responsible for the creation of goals as well as strategies to fulfill them. The main improvements of this proposal are: 1) an adaptive transition between individual assessment questions determined by levels of cognition; 2) a student model based on the initial responses of a group of learners, which is then adjusted to the ability of each learner; and 3) the promotion of metacognitive skills, such as goal setting and self-monitoring, through the estimation of the attempts required to pass the levels. One level of Marzano's taxonomy was left in the hands of the human teacher, clarifying that a distinction must be made between tasks in which an ITS can be an important aid and those in which it would be more difficult
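
The Bayesian Knowledge Tracing recursion behind the adaptive transitions is standard and compact. The sketch below implements the textbook update with illustrative parameter values for prior mastery p(L0), transition p(T), guess p(G), and slip p(S); the values calibrated in the paper are not given in the abstract.

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
    """One Bayesian Knowledge Tracing step: posterior over mastery
    given the observed response, followed by a learning transition."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_transit

p = 0.3  # illustrative prior mastery p(L0)
for response in (True, True, False, True):
    p = bkt_update(p, response)
    print(f"after {'correct' if response else 'wrong'}: p(mastery) = {p:.3f}")
```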

Resumen:

Este artículo sostiene que la llegada del Antropoceno requiere un cambio en el significado y alcance de la responsabilidad. Con base en Hans Jonas y Bruno Latour, sostengo que la responsabilidad es una característica definitoria de la humanidad que, no obstante, está acechada por su opuesto. Si ser responsable es primariamente ser receptivo a lo Otro, entonces la cultura de 'responsabilidad personal' que prevalece hoy en día es una traición tanto a la humanidad como a la Tierra. Cuando Jonas formuló tales ideas en 1979, el 'sistema tierra' no era ni un campo de estudio científico ni una cuestión de preocupación existencial. Pocos académicos lo tomaron en serio. Sin embargo, desarrollos recientes en el pensamiento científico, legal y ambiental han validado su visión. Para probar esta hipótesis, retomo a Latour, quien fue un cuidadoso lector-y crítico-de Jonas. Ambos pensadores consideraron que la creencia modernista de que solo los humanos son fuentes de reclamos morales válidos es un error que debe ser corregido. A medida que la Tierra hoy 'reacciona' a nuestras intervenciones con fenómenos climáticos extremos y enfermedades zoonóticas, su mensaje resuena en círculos cada vez mayores. El Antropoceno trastoca una era en la que solo algunos humanos tenían permitido hablar. Ahora debemos enseñarnos a escuchar y responder a otros seres vivos y a generaciones futuras. Sostengo que esta capacidad es el corazón de regímenes emergentes de responsabilidad planetaria

Abstract:

This paper argues that the coming of the Anthropocene requires a shift in the meaning and scope of responsibility. Drawing on Hans Jonas and Bruno Latour, I argue that responsibility is a defining feature of humanity which is nevertheless haunted by its opposite. Indeed, if to be responsible is primarily to be responsive to the claim of the Other, then the culture of 'personal responsibility' that prevails today is a betrayal of both humanity and the Earth. When Jonas formulated such thoughts in 1979 the 'Earth system' was neither a field of scientific study, nor a matter of existential concern. Few scholars took him seriously. However, recent developments in scientific, legal, and environmental thought have vindicated his vision. To test this hypothesis I turn to Latour, who was a careful reader-and critic-of Jonas. Both thinkers regarded the modernist belief that only humans are sources of valid moral claims as an error that ought to be corrected. As the Earth today 'reacts' to our interventions with extreme weather and zoonotic diseases, their message is resounding in growing circles. The Anthropocene upends an era in which only (some) humans were allowed to speak. Now we must teach ourselves how to listen and respond to other living beings and future generations. This responsiveness, I argue, will form the core of emerging regimes of planetary responsibility

Resumen:

El artículo ofrece una lectura de La Condición Humana (1958) a la luz de Vita Activa (1961). Tras hacer un recorrido por la recepción de La Condición Humana desde su publicación en 1958, argumento que se ha marginado a la obra como un todo para extraer de ella fragmentos y supuestos ‘modelos’ de la política. Considerada como un todo, la obra de Arendt es una reflexión acerca de las condiciones de posibilidad de nuestras experiencias de sentido. Como es evidente sobre todo en la versión alemana, se trata, así, de una investigación fenomenológica. Argumento que tanto la fuerza crítica como la aparente ceguera que encontramos en Arendt se deben a las premisas fenomenológicas de su pensamiento

Abstract:

The article offers a reading of The Human Condition (1958) in light of Vita Activa (1961). After tracing the reception of The Human Condition since its publication in 1958, I argue that scholarship on Arendt has marginalized the work as a whole to extract fragments for supposed ‘models’ of politics. Considered as a whole, Arendt’s work is a reflection on the conditions of possibility of our experiences of meaning. As is evident above all in the German version, it is thus a phenomenological investigation. I argue that both the critical force and the apparent blindness that we find in Arendt are due to the phenomenological premises of her thought

Resumo:

O artigo oferece uma leitura da Condição Humana (1958) à luz de Vita Activa (1961). Depois de fazer um tour pela recepção da Condição Humana desde sua publicação em 1958, o artigo argumenta que essa recepção marginalizou o trabalho como um todo para extrair fragmentos e supostos "modelos" da política. Considerado como um todo, o trabalho de Arendt é uma reflexão sobre as condições de possibilidade de nossas experiências de significado. Como é evidente acima de tudo na versão alemã, é, assim, uma investigação fenomenológica. Argumento que tanto a força crítica quanto a aparente cegueira que encontramos em Arendt se devem às premissas fenomenológicas de seu pensamento

Abstract:

Leo Strauss has been read as the author of a paradoxically nonpolitical political philosophy. This reading finds extensive support in Strauss’s work, notably in the claim that political life leads beyond itself to contemplation and in the limits this imposes on politics. Yet the space of the nonpolitical in Strauss remains elusive. The “nonpolitical” understood as the natural, Strauss suggests, is the “foundation of the political”. But the meaning of “nature” in Strauss is an enigma: it may refer either to the “natural understanding” of commonsense, or to nature “as intended by natural science,” or to “unchangeable and knowable necessity.” As a student of Husserl, Strauss sought both to retrieve and radically critique both the “natural understanding” and the “naturalistic” worldview of natural science. He also cast doubt on the very existence of an unchangeable nature. The true sense of the nonpolitical in Strauss, I shall argue, must rather be sought in his embrace of the trans-finite goals of philosophy understood as rigorous science. Nature may be the nonpolitical foundation of the political, but we can only ever approximate nature asymptotically. The nonpolitical remains as elusive in Strauss as the ordinary. To approximate both we need to delve deeper into his understanding of Husserl

Abstract:

Scholars of International Relations (IR) confront the unenviable task of conceiving and representing the world as a whole. Philosophy has deemed this impossible since the time of Kant. Today's populist reaction against "globalism" suggests that it is imprudent. Yet IR must persevere in its quest to diagnose emerging global realities and fault lines. To do so without stoking populist fears and mythologies, I argue, IR must enter into dialogue with the new realism in philosophy, and in particular with its ontological pluralism. The truth of what unites and divides us today is not one-dimensional, as the image of a networked world of "open" or "closed" societies suggests. Beyond anonymous networks, there are principles such as sovereignty; there are systemic dynamics of inclusion/exclusion, and there is the power of justifications

Abstract:

Leo Strauss has been understood as one of the foremost critics of Heidegger, and as having provided an alternative to his thought: against Heidegger’s Destruktion of Plato and Aristotle, Strauss enacted a recovery; against Heidegger’s “historicist turn,” Strauss rediscovered a superior alternative in the “Socratic turn.” This paper argues that, rather than opposing or superseding Heidegger, Strauss engaged Heidegger dialectically. On fundamental philosophical problems, Strauss both critiqued Heidegger and retrieved the kernel of truth contained in Heidegger’s position. This method is based on Strauss’s zetetic conception of philosophy, which has deep roots in Heidegger’s 1922 reading of Aristotle’s Metaphysics

Resumen:

Desde la década de 1990, el mundo ha visto una proliferación de estados de excepción. La detención indefinida de "combatientes enemigos", la acelerada construcción de barreras fortificadas entre países y el aumento de la población mundial considerada "ilegal" ejemplifican la tendencia mundial a normalizar las excepciones. Donald Trump es el caso más burdo -no por ello menos alarmante- de esta tendencia

Abstract:

Since the 1990s, states of exception have proliferated. The indefinite detention of "enemy combatants", the accelerated construction of fortified barriers between countries, and the growth of the world's population deemed "illegal" exemplify this global tendency to normalize exceptions. Donald Trump is the crudest case of this trend, though no less alarming for it

Abstract:

We study a reaction-diffusion system within a long channel in the regime in which the projected Fick-Jacobs-Zwanzig operator for confined diffusion can be used. We find that under this approximation, Turing instability conditions can be modified due to the channel geometry. The dispersion relation, the range of unstable modes where pattern formation occurs, and the spatial structure of the patterns themselves change as functions of the geometric parameters of the channel. This occurs for the three channels analyzed, for which the values of the projected operators can be found analytically. For the reaction term, we use the well-known Schnakenberg kinetics
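
For reference, the Schnakenberg kinetics cited above are commonly written as follows; the notation is the standard textbook one and is assumed here rather than quoted from the paper:

    \partial_t u = D_u \nabla^2 u + a - u + u^2 v, \qquad
    \partial_t v = D_v \nabla^2 v + b - u^2 v

Its homogeneous steady state (u^*, v^*) = (a + b, \; b/(a+b)^2) can undergo a Turing instability when the diffusivity ratio D_v/D_u is sufficiently large.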

Abstract:

Auctioneers often face the decision of whether to bundle two or more different objects before selling them. Under a Vickrey auction (or any other revenue-equivalent auction form), there is a unique critical number for each pair of objects such that the seller strictly prefers a bundled sale when the number of bidders is below that critical number and prefers unbundled sales when there are more bidders. This property holds even when a given bidder's valuations for the objects are correlated. The results are proved using a quantile-based mathematical technique that can be extremely useful for similar analyses
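
The critical-number property lends itself to a quick numerical check. The sketch below is a hypothetical Monte Carlo comparison of expected Vickrey revenue from bundled versus separate sales; the i.i.d. uniform valuations and all other modeling choices are assumptions for illustration, not details taken from the paper:

    # Compare expected Vickrey revenue: selling two objects bundled vs. separately,
    # as the number of bidders n varies. Valuations are i.i.d. uniform on [0, 1].
    import numpy as np

    rng = np.random.default_rng(0)

    def second_highest(x):
        # Vickrey revenue equals the second-highest valuation among bidders.
        return np.sort(x, axis=0)[-2]

    def revenues(n_bidders, trials=100_000):
        v1 = rng.uniform(size=(n_bidders, trials))
        v2 = rng.uniform(size=(n_bidders, trials))
        unbundled = second_highest(v1) + second_highest(v2)
        bundled = second_highest(v1 + v2)
        return unbundled.mean(), bundled.mean()

    for n in (2, 3, 5, 10):
        u, b = revenues(n)
        print(f"n={n:2d}  unbundled={u:.3f}  bundled={b:.3f}")

Under these assumptions, the bundled sale earns more with two or three bidders, while separate sales overtake it as the number of bidders grows, consistent with the existence of a critical number of bidders.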

Abstract:

We examine the evolution of adult female heights in twelve Latin American countries during the second half of the twentieth century based on demographic health surveys and related surveys compiled from national and international organizations. Only countries with more than one survey were included, allowing us to cross-examine surveys and correct for biases. We first show that average height varies significantly according to location, from 148.3 cm in Guatemala to 158.8 cm in Haiti. The evolution of heights over these decades mirrors other indicators of human development, showing a steady increase of 2.6 cm from the 1950s to the 1990s. Such gains compare favorably with other developing regions of the world, though less so with recently developed countries. Height gains were not evenly distributed in the region, however. Countries that achieved higher levels of income, such as Brazil, Chile, Colombia and Mexico, gained on average 0.9 cm per decade, while countries with shrinking economies, such as Haiti and Guatemala, gained only 0.25 cm per decade

Abstract:

The credibility of election outcomes hinges on the accuracy of vote tallies. We provide causal evidence on the drivers and the downstream consequences of variation in the quality of vote tallies. Using data for the universe of polling stations in Mexico in five national elections, we document that over 40% of polling-station-level tallies display inconsistencies. Our evidence strongly suggests these inconsistencies are nonpartisan. Using data for more than 1.5 million poll workers, we show that lower educational attainment, higher workload, and higher complexity of the tally cause more inconsistencies. Finally, using an original survey of close to 80,000 poll workers together with detailed administrative data, we find that inconsistencies cause recounts and recounts lead to lower trust in electoral institutions. We discuss policy implications

Abstract:

A synthesis is presented of recent work by the authors and others on the formation of localized patterns, isolated spots, or sharp fronts in models of natural processes governed by reaction-diffusion equations. Contrasting with the well-known Turing mechanism of periodic pattern formation, a general picture is presented in one spatial dimension for models on long domains that exhibit subcritical Turing instabilities. Localized patterns naturally emerge in generalized Schnakenberg models within the pinning region formed by bistability between the patterned state and the background. A further long-wavelength transition creates parameter regimes of isolated spots which can be described by semi-strong asymptotic analysis. In the species-conservation limit, another form of wave pinning leads to sharp fronts. Such fronts can also arise given only one active species and a weak spatial parameter gradient. Several important applications of this theory within natural systems are presented, at different length scales: cellular polarity formation in developmental biology, including root-hair formation, leaf pavement cells, and keratocyte locomotion; and the transitions between vegetation states on continental scales. Philosophical remarks are offered on the connections between different pattern formation mechanisms and on the benefit of subcritical instabilities in the natural world

Abstract:

This work presents a comparison of different deep learning models for the reconstruction of artistic images from compact representations generated using Principal Component Analysis. The reconstruction models correspond to different types of Convolutional Neural Networks. Our results show that the statistics captured by the principal components transformation are enough to obtain good approximations in the reconstruction process, especially in terms of color and object visual features, even when using compact representations whose length is only about 1% of the original image space's total number of features
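
As a rough illustration of the compression side of this pipeline, the following sketch projects images onto a number of principal components equal to about 1% of the pixel count and reconstructs them linearly. The data are random stand-ins, and PCA's own inverse transform replaces the paper's convolutional decoders:

    # Compress images to ~1% of their original dimensionality with PCA,
    # then reconstruct and measure the error. Placeholder data, for shape only.
    import numpy as np
    from sklearn.decomposition import PCA

    n_images, h, w = 500, 64, 64
    images = np.random.rand(n_images, h * w)      # stand-in for artwork images

    n_components = max(1, int(0.01 * h * w))      # ~1% of the original features
    pca = PCA(n_components=n_components).fit(images)

    codes = pca.transform(images)                 # compact representations
    reconstructions = pca.inverse_transform(codes)
    mse = np.mean((images - reconstructions) ** 2)
    print(f"{n_components} components, reconstruction MSE = {mse:.4f}")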

Resumen:

Esta nota de investigación presenta el Estudio Panel México 2006, un proyecto que permite estudiar la conducta electoral de los mexicanos. El estudio consiste en una encuesta tipo panel de tres rondas de entrevistas a 2,400 mexicanos adultos realizadas entre octubre de 2005 y julio de 2006, así como un extenso análisis de contenido de información noticiosa y anuncios políticos en televisión. En éste se registran cambios en las opiniones políticas de los mexicanos, los cuales pueden ser analizados a la luz de los eventos e información de las campañas presidenciales, así como del contexto político posterior a la elección. El estudio panel documenta qué tipo de votantes cambiaron su preferencia electoral y ofrece elementos para determinar por qué ocurrieron dichos cambios

Abstract:

This research note discusses the methods and key findings of the Mexico 2006 Panel Study, a multi-investigator project on Mexican voting behavior. This panel study consists of a three-wave panel survey of 2,400 Mexican adults between October 2005 and July 2006. It also includes an extensive content analysis of television news and advertisements. The Mexico 2006 Panel Study permits assessing the impact of events and information flows, along with the post-electoral context, on the political attitudes of Mexicans. These surveys document which types of voters changed their electoral preferences during the campaign and offer a host of variables to explain why these changes occurred

Abstract:

Long time existence and uniqueness of solutions to the Yang-Mills heat equation is proven over a compact 3-manifold with smooth boundary. The initial data is taken to be a Lie algebra valued connection form in the Sobolev space H1. Three kinds of boundary conditions are explored: Dirichlet type, Neumann type, and Marini boundary conditions. The last is a nonlinear boundary condition, specified by setting the normal component of the curvature to zero on the boundary. The Yang-Mills heat equation is a weakly parabolic nonlinear equation. We use gauge symmetry breaking to convert it to a parabolic equation and then gauge transform the solution of the parabolic equation back to a solution of the original equation. A priori estimates are developed by first establishing a gauge invariant version of the Gaffney-Friedrichs inequality. A gauge invariant regularization procedure for solutions is also established. Uniqueness holds upon imposition of boundary conditions on only two of the three components of the connection form because of weak parabolicity. This work is motivated by possible applications to quantum field theory
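
In common differential-geometric notation (assumed here, not quoted from the paper), the flow in question is

    \frac{\partial A}{\partial t} = - d_A^{*} F_A, \qquad F_A = dA + A \wedge A,

for a Lie-algebra-valued connection form A, and the Marini boundary condition requires the normal component of the curvature to vanish, (\iota_{n} F_A)\big|_{\partial M} = 0, where n is the outward unit normal.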

Abstract:

We study the set of eigenvalues of the Bochner Laplacian on a geodesic ball of an open manifold M, and find lower estimates for these eigenvalues when M satisfies a Sobolev inequality. We show that we can use these estimates to demonstrate that the set of harmonic forms of polynomial growth over M is finite dimensional, under sufficient curvature conditions. We also study in greater detail the dimension of the space of bounded harmonic forms on coverings of compact manifolds

Abstract:

In this paper we consider the Hodge Laplacian on differential k-forms over smooth open manifolds M^N, not necessarily compact. We find sufficient conditions under which the existence of a family of logarithmic Sobolev inequalities for the Hodge Laplacian is equivalent to the ultracontractivity of its heat operator. We will also show how to obtain a logarithmic Sobolev inequality for the Hodge Laplacian when there exists one for the Laplacian on functions. In the particular case of Ricci curvature bounded below, we use the Gaussian type bound for the heat kernel of the Laplacian on functions in order to obtain a similar Gaussian type bound for the heat kernel of the Hodge Laplacian. This is done via logarithmic Sobolev inequalities and under the additional assumption that the volume of balls of radius one is uniformly bounded below

Resumen:

En el presente artículo se examina la relación de la Filosofía del derecho de Hegel con la filosofía práctica aristotélica. Con ello se pretende mostrar, por una parte, que algunas de las tesis y motivos centrales de la filosofía del derecho hegeliana se entienden de mejor forma trayendo a primer plano ciertos planteamientos aristotélicos y, por otro lado, que dichos planteamientos son objeto de una reinterpretación y reelaboración por parte de Hegel ante ciertas exigencias históricas y filosóficas del contexto moderno. En particular, se examina la relación entre estos dos autores atendiendo a dos puntos concretos: la metodología holística de una filosofía práctica -entendida de modo amplio- y la teoría de la motivación ética

Abstract:

In this article I examine the relation of Hegel's Philosophy of Right to Aristotelian practical philosophy. My aim is to show, on the one hand, that some of the theses and central motifs of the Hegelian philosophy of right are better understood if one brings certain Aristotelian ideas to the forefront of the discussion and, on the other hand, that these ideas are subjected to a reinterpretation by Hegel due to the historical and philosophical demands of the Modern Age. In particular, the relationship between these two authors is analyzed by examining two specific subjects: the holistic methodology of a practical philosophy (broadly construed) and the theory of ethical motivation

Resumen:

Mi objetivo en este artículo es estudiar el papel de la belleza como símbolo de la moralidad en la Kritik der Urteilskraft. En primer lugar, realizo una comparación entre el esquematismo de los conceptos y el proceso del razonamiento analógico. Lo que sostengo es que el razonamiento analógico y simbólico conduce en Kant a una consideración sobre los objetos bajo las condiciones establecidas por el agente reflexivo mismo: la tesis que mostraré con ello es que los objetos simbólicos contribuyen a los propios propósitos de reflexión de uno sobre una temática particular. Posteriormente, explico por qué Kant considera que la belleza es un símbolo adecuado de la moralidad en vistas de la existencia de un interesante número de similitudes entre el ámbito de lo estético y de lo moral. Finalmente, discuto por qué únicamente la libertad puede exhibirse por medio de un símbolo según la estética kantiana, y por qué las otras ideas de la razón --Dios y el alma-- están mucho más asociadas con la experiencia de lo sublime

Abstract:

My aim in this article is to study the role of beauty as a symbol of morality within Kant's Kritik der Urteilskraft. First, I draw a comparison between the schematism of concepts and the process of analogical reasoning. I argue that analogical and symbolic reasoning leads in Kant to a consideration of objects under the conditions set by the reflective agent himself: the thesis that I will prove thereby is that symbolic objects serve one's own purposes of reflection on a given topic. I proceed then to explain why Kant considers beauty an adequate symbol of morality, on account of a number of noteworthy similarities between the aesthetic and the moral realms. Lastly, I clarify why only freedom can be exhibited by means of a symbol in Kantian aesthetics, and why the other two ideas of reason (namely, God and the soul) are much more closely associated with the experience of the sublime

Resumen:

Este artículo analiza la interpretación de Ricoeur sobre el concepto de Anerkennung en la filosofía temprana de Hegel. Se discute la importancia primordial de este concepto y se evalúa el significado del mismo dentro de un orden ético y político

Abstract:

This article analyzes Ricoeur's interpretation of the concept of Anerkennung (recognition) in Hegel's early philosophy. The paramount importance of this notion is discussed, and its meaning within an ethical and political order is assessed

Abstract:

Despite the presidential promise to create a universal health system, the lack of objectives and strategies has caused serious problems. Without coordination among institutions, no real progress can be made in this area

Abstract:

In recent years, the Mexican State has broken its word to protect and guarantee the fundamental right to health. Its omissions must not be ignored

Resumen:

El presente artículo tiene como objetivo demostrar que el sistema público de salud en México atraviesa su más severa crisis de las últimas décadas en el momento que debe dar respuesta a la pandemia de Sars-Cov-2. Para poder comprender el deterioro del sistema público de salud se describen las causas principales que han derivado en su debilitamiento, donde destacan la corrupción de la pasada administración y la reforma inconclusa del sistema de salud de la actual administración. Posteriormente, se explica la importancia de los órganos encargados de dar respuesta a la emergencia sanitaria, los Acuerdos de mayor impacto emitidos al comienzo de la pandemia y sus desaciertos, para demostrar cómo el conjunto de factores descritos y explicados afectan los derechos humanos, principalmente el goce efectivo del derecho a la protección de la salud que se vulnera aún más frente a la pandemia. El propósito último del estudio es demostrar la debilidad del sistema público de salud para responder a la emergencia sanitaria y la urgencia que existe de fortalecer el sistema público de salud para cumplir con el derecho a la protección de la salud de acuerdo con los principios establecidos en la Constitución

Abstract:

The objective of this article is to demonstrate that the public health system in Mexico is going through its most severe crisis in recent decades at the very moment it must respond to the SARS-CoV-2 pandemic. In order to understand the deterioration of the public health system, the main causes that have resulted in its weakening are described, underlining the corruption that took place during the previous administration and the unfinished health system reform of the current administration. Subsequently, the importance of the institutions in charge of responding to the health emergency, the Agreements with the greatest impact issued at the beginning of the pandemic, and their limitations are explained, to demonstrate how these factors affected human rights, mainly the effective enjoyment of the right to health protection, which is further undermined by the pandemic. The ultimate purpose of the study is to demonstrate the weakness of the public health system in addressing the sanitary emergency and the urgency of strengthening the public health system in order to comply with the right to health protection in accordance with the principles established in the Constitution

Resumen:

Objetivo. Analizar la cobertura en salud de cáncer pulmonar en México y ofrecer recomendaciones al respecto. Material y métodos. Mediante la conformación de un grupo multidisciplinario se analizó la carga de la enfermedad relativa al cáncer de pulmón y el acceso al tratamiento médico que ofrecen los diferentes subsistemas de salud en México. Resultados. Se documentan desigualdades importantes en la atención del cáncer de pulmón entre los distintos subsistemas de salud que sugieren acceso y cobertura en salud variable, tanto a los tratamientos tradicionales como a las innovaciones terapéuticas existentes, y diferencias en la capacidad de los prestadores de servicios de salud para garantizar el derecho a la protección de la salud sin distinciones. Conclusión. Se hacen recomendaciones sobre la necesidad de mejorar las acciones para el control del tabaco, el diagnóstico temprano y la inclusión de terapias innovadoras y la homologación entre los diferentes prestadores públicos de servicios de salud a través del financiamiento con la recaudación de impuestos al tabaco

Abstract:

Objective. To analyze the health coverage of lung cancer in Mexico and offer recommendations in this regard. Materials and methods. Through the formation of a multidisciplinary group, we analyzed the burden of disease attributable to lung cancer and the access to medical treatment offered by the different public health subsystems in Mexico. Results. Important inequalities in lung cancer care are documented among the different public health subsystems. Our data suggest differential access and coverage to both traditional treatments and existing therapeutic innovations, as well as differences in the capacity of health service providers to guarantee the right to health protection without distinction. Conclusions. Recommendations are made on the need to improve actions for tobacco control and early diagnosis of lung cancer, to include innovative therapies, and to harmonize coverage among the different public health service providers through financing via tobacco taxes

Abstract:

Priority setting is the process through which a country's health system establishes the drugs, interventions, and treatments it will provide to its population. Our study evaluated the priority-setting legal instruments of Brazil, Costa Rica, Chile, and Mexico to determine the extent to which each reflected the following elements: transparency, relevance, review and revision, and oversight and supervision, according to Norman Daniels's accountability for reasonableness framework and Sarah Clark and Albert Weale's social values framework. The elements were analyzed to determine whether priority setting, as established in each country's legal instruments, is fair and justifiable. While all four countries fulfilled these elements to some degree, there was important variability in how they did so. This paper aims to help these countries analyze their priority-setting legal frameworks to determine which elements need to be improved to make priority setting fair and justifiable

Abstract:

In 2010, the Mexican government implemented a multi-sector agreement to prevent obesity. In response, the Ministries of Health and Education launched a national school-based policy to increase physical activity, improve nutrition literacy, and regulate school food offerings through nutritional guidelines. We studied the Guidelines’ negotiation and regulatory review process, including government collaboration and industry response. Within the government, conflicting positions were evident: the Ministries of Health and Education supported the Guidelines as an effective obesity-prevention strategy, while the Ministries of Economics and Agriculture viewed them as potentially damaging to the economy and job generation. The food and beverage industries opposed and delayed the process, arguing that regulation was costly, with negative impacts on jobs and revenues. The proposed Guidelines suffered revisions that lowered standards initially put forward. We documented the need to improve cross-agency cooperation to achieve effective policymaking. The ‘siloed’ government working style presented a barrier to efforts to resist industry's influence and strong lobbying. Our results are relevant to public health policymakers working in childhood obesity prevention

Resumen:

Este artículo tiene tres objetivos: presentar las dificultades que plantea la exigibilidad del derecho a la salud en México; dar cuenta de los principales problemas que se están presentando en lo que, de modo general, podemos llamar “relaciones entre derecho y salud” y, por último, ofrecer algunas propuestas para alcanzar una relación eficaz entre ambas materias

Abstract:

This article pursues three goals: to describe the obstacles that the full enforceability of the right to health faces in Mexico; to present the main problems arising in what, generally speaking, could be referred to as “relations between law and health”; and, finally, to offer some proposals for achieving an effective relation between the two fields

Abstract:

How do legislators develop reputations to further their individual goals in environments with limited space for personalization? In this article, we evaluate congressional behavior by legislators with gubernatorial expectations in a unitary environment where parties control political activities and institutions hinder individualization. By analyzing the process of drafting bills in Uruguay, we demonstrate that deputies with subnational executive ambition tend to bias legislation towards their districts, especially those from small and peripheral units. The findings reinforce the importance of incorporating ambition into legislative studies and open a new direction for the analysis of multiple career patterns within a specific case

Abstract:

This article addresses the stability properties of a simple economy (characterized by a one-dimensional state variable) when the representative agent, confronted by trajectories that are divergent from the steady state, performs transformations in that variable in order to improve forecasts. We find that instability continues to be a robust outcome for transformations such as differencing and detrending the data, the two most typical approaches in econometrics to handle nonstationary time series data. We also find that inverting the data, a transformation that can be motivated by the agent reversing the time direction in an attempt to improve her forecasts, may lead the dynamics to a perfect-foresight path

Abstract:

This article examines dynamics in a model where agents forecast a one-dimensional state variable through ordinary least squares regressions on the lagged values of the state variable. We study the stability properties of alternative transformations of the state variable, such as taking logarithms, which the agent can adopt endogenously. Surprisingly, for the considered class of economies, we find that the transformations an econometrician would attempt are destabilizing, whereas alternative transformations that an econometrician would never consider, such as convex transformations, are stabilizing. We therefore find, ironically, that in our set-up an active agent, who is concerned about learning the economy's dynamics and who in an attempt to improve forecasting transforms the state variable using standard transformations, is more likely to deviate from the steady state than a passive agent

Abstract:

Consider a one-step forward-looking model where agents believe that the equilibrium values of the state variable are determined by a function whose domain is the current value of the state variable and whose range is the value for the subsequent period. An agent's forecast for the subsequent period uses this belief; the function chosen is allowed to depend on the current realization of an extrinsic random process, and the forecast is made with knowledge of the past values of the state variable but not the current value. The paper provides and characterizes the conditions for the existence of sunspot equilibria for the model described

Abstract:

We show that a perfect correlated equilibrium distribution of an N-person game, as defined by Dhillon and Mertens (1996), can be achieved using a finite number of copies of the strategy space as the message space

Abstract:

We reformulate the local stability analysis of market equilibria in a competitive market as a local coordination problem in a market game, where the map associating market prices to best-responses of all traders is common knowledge and well-defined both in and out of equilibrium. Initial expectations over market variables differ from their equilibrium values and are not common knowledge. This results in a coordination problem as traders use the structure of the market game to converge back to equilibrium. We analyse a simultaneous move and a sequential move version of the market game and explore the link with local rationalizability

Abstract:

This paper introduces, within the framework of a simple example, the notion of a subjective temporary equilibrium. The underlying relation linking forecasts to equilibrium values of the state variable is linear. However, agents perceive a non-linear law that governs the rate of adjustment between successive periods and forecast using linear approximations to the non-linear law of motion. This is shown to generate a non-linear law of motion for the state variable with the feature that the agent's model correctly describes the period-wise evolution of the economy. The resulting non-linear law of motion is referred to as a subjective temporary equilibrium, as its specification is determined largely by the subjective beliefs of the agents regarding the dynamics of the system. In a subjective equilibrium, agents' forecasts are generated by taking linear approximations to a correctly specified law of motion, and the forecasts may accordingly be interpreted as being boundedly rational in a first-order sense. There exist specifications that admit the possibility of cyclical behaviour

Abstract:

This paper provides conditions for the almost sure convergence of the least squares learning rule in a stochastic temporary equilibrium model, where regressions are performed on the past values of the endogenous state variable. In contrast to earlier studies (Evans and Honkapohja, 1998; Marcet and Sargent, 1989), which were local analyses, the dynamics are studied from a global viewpoint, which allows one to obtain an almost sure convergence result without employing projection facilities
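
For orientation, a standard recursive formulation of such a least-squares learning rule (in the style of Marcet and Sargent; the notation is assumed here, not quoted from the paper) regresses the state on its own lag and updates

    \beta_t = \beta_{t-1} + t^{-1} R_t^{-1}\, y_{t-2}\,\big(y_{t-1} - \beta_{t-1} y_{t-2}\big),
    \qquad
    R_t = R_{t-1} + t^{-1}\big(y_{t-2}^2 - R_{t-1}\big),

and the global convergence question is whether \beta_t settles almost surely on the equilibrium coefficient without the updates being artificially confined by a projection facility.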

Abstract:

Background and objective: This paper presents Alzheed, a mobile application for monitoring patients with Alzheimer's disease at day centers, together with a set of design recommendations for the development of healthcare mobile applications. The Alzheed project was conducted at the “Dorita de Ojeda” Day Center, which focuses on the care of patients with Alzheimer's disease. Materials and methods: A software design methodology based on participatory design was employed for the design of Alzheed. This methodology is both iterative and incremental and consists of two main iterative stages: evaluation of low-fidelity prototypes and evaluation of high-fidelity prototypes. Low-fidelity prototypes were evaluated by 11 of the day center's healthcare professionals (involved in the design of Alzheed), whereas high-fidelity prototypes were evaluated, using a questionnaire based on the technology acceptance model (TAM), by the same healthcare professionals plus 30 senior psychology undergraduate students uninvolved in the design of Alzheed. Results: Healthcare professional participants perceived Alzheed as extremely likely to be useful and extremely likely to be usable, whereas senior psychology undergraduate students perceived Alzheed as quite likely to be useful and quite likely to be usable. In particular, the median and mode of the TAM questionnaire were 7 (extremely likely) for healthcare professionals and 6 (quite likely) for psychology students, for both constructs: perceived usefulness and perceived ease of use. One-sample Wilcoxon signed-rank tests were performed to confirm the significance of the median for each construct
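
A one-sample Wilcoxon signed-rank test of this kind takes only a few lines. In the hypothetical sketch below, the ratings are invented and the null hypothesis (that responses on the 7-point TAM scale sit at the neutral midpoint of 4) is an assumption for illustration, not a detail reported above:

    # Test whether 7-point TAM ratings exceed the scale midpoint of 4.
    import numpy as np
    from scipy.stats import wilcoxon

    ratings = np.array([7, 6, 7, 7, 6, 7, 5, 7, 6, 7, 7])  # invented, e.g. perceived usefulness
    stat, p_value = wilcoxon(ratings - 4, alternative="greater")
    print(f"W = {stat}, p = {p_value:.4f}")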

Abstract:

We develop a novel method to decompose a straddle into two assets: a volatility risk asset and a jump risk asset. Using the price ratio of the jump risk asset to the straddle, we create a forward-looking measure (S-jump) that captures the stock price jump risk anticipated by the option market. We show that S-jump substantially increases before earnings announcements and strongly predicts the size and the probability of earnings-induced stock price jumps. We also find that S-jump amplifies the earnings response coefficient. Our jump risk asset captures the run-up and run-down return patterns observed for straddles around earnings announcements

Abstract:

Let A be an expansive linear map in Rd. Approximation properties of shift-invariant subspaces of L2(Rd) when they are dilated by integer powers of A are studied. Shift-invariant subspaces providing approximation order a or density order a associated to A are characterized. These characterizations impose certain restrictions on the behavior of the spectral function at the origin expressed in terms of the concept of point of approximate continuity. The notions of approximation order and density order associated to an isotropic dilation turn out to coincide with the classical ones introduced by de Boor, DeVore and Ron. This is no longer true when A is anisotropic. In this case the A-dilated shift-invariant subspaces approximate the anisotropic Sobolev space associated to A and a. Our main results are also new when S is generated by translates of a single function. The obtained results are illustrated by some examples

Abstract:

In this article, we consider the problem of voltage regulation of a proton exchange membrane fuel cell connected to an uncertain load through a boost converter. We show that, in spite of the inherent nonlinearities in the current-voltage behavior of the fuel cell, the voltage of a fuel cell/boost converter system can be regulated with a simple proportional-integral (PI) action designed following the passivity-based control approach. The system under consideration consists of a DC-DC converter interfacing a fuel cell with a resistive load. We show that the output voltage of the converter converges to its desired constant value for all of the system's initial conditions, with convergence ensured for all positive values of the PI gains. This latter feature facilitates the usually difficult task of tuning the PI gains. An immersion and invariance parameter estimator is afterward proposed, which allows the operation of the PI passivity-based control when the load is unknown, maintaining the output voltage at the desired level. The stable operation of the overall system is proved, and the approach is validated with extensive numerical simulations considering real-life scenarios, where robust behavior in spite of load variations and the presence of noise is obtained
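
For intuition only, here is a minimal simulation of PI voltage regulation on the standard average model of a boost converter feeding a resistive load. All parameter values and gains are invented, and the plain PI on the duty cycle shown here stands in for, rather than reproduces, the passivity-based design described above:

    # Euler integration of the average boost-converter model with a PI loop.
    E, L, C, R = 12.0, 1e-3, 470e-6, 10.0   # source voltage, inductance, capacitance, load
    v_ref = 24.0                            # desired output voltage
    kp, ki = 0.002, 1.0                     # illustrative PI gains

    i, v, xi = 0.0, E, 0.0                  # inductor current, output voltage, integrator
    dt = 1e-6
    for _ in range(int(0.2 / dt)):
        e = v_ref - v
        xi += ki * e * dt
        u = min(max(kp * e + xi, 0.0), 0.9) # duty cycle, saturated to [0, 0.9]
        di = (E - (1.0 - u) * v) / L        # L di/dt = E - (1-u) v
        dv = ((1.0 - u) * i - v / R) / C    # C dv/dt = (1-u) i - v/R
        i += di * dt
        v += dv * dt

    print(f"steady-state voltage = {v:.2f} V")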

Abstract:

In this note, we address the problem of parameter identification of nonlinear, input affine dissipative systems. It is assumed that the supply rate, the storage and the internal dissipation functions may be expressed as nonlinearly parameterized regression equations where the mappings (depending on the unknown parameters) satisfy a monotonicity condition-this encompasses a large class of physical systems, including passive systems. We propose to estimate the system parameters using the "power-balance" equation, which is the differential version of the classical dissipation inequality, with a new estimator that ensures global, exponential, parameter convergence under the very weak assumption of interval excitation of the power-balance equation regressor. The benefits of the proposed approach are illustrated with an example
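
In the standard notation of dissipative-systems theory (assumed here), the "power-balance" equation referred to is the differential version of the dissipation inequality:

    H(x(t)) - H(x(0)) \le \int_0^t u^\top(s)\, y(s)\, ds
    \quad \Longrightarrow \quad
    \dot H(x) = u^\top y - d(x), \quad d(x) \ge 0,

where H is the storage function, u^\top y the supply rate, and d the internal dissipation; in the setting above, the unknown parameters enter these functions through nonlinear but monotone mappings.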

Abstract:

In this paper we propose a new control scheme for a wind energy conversion system connected to a solid state transformer-enabled distribution microgrid. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load which is connected to the distribution grid dc bus. The scheme combines a classical PI, placed in a nested-loop configuration, with a passivity-based controller. Guaranteeing stability and endowed with disturbance rejection properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power is extracted from the wind - maximizing the generator efficiency. The fast response of the closed-loop system makes it possible to operate under fast-changing wind speed conditions. To assess and validate the controller's performance and robustness under parameter variations, realistic simulations comparing our proposal with a classical PI scheme are included

Abstract:

A controller, based on passivity, for a wind energy conversion system connected to a dc bus is proposed. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load. Guaranteeing stability and endowed with adaptive properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power is extracted from the wind - and maximizes the generator efficiency. The fast response of the closed-loop system makes it possible to operate under fast-changing wind speed conditions. To assess the controller's performance, realistic simulation results are included

Abstract:

In this note, an observer-based feedback control for tracking trajectories of the yaw and lateral velocities of a vehicle is proposed. The considered model consists of the vehicle's longitudinal, lateral and yaw velocity dynamics together with its roll dynamics. First, an observer for the vehicle's lateral velocity, roll angle and roll velocity is proposed. Its design is based on the well-known Immersion & Invariance technique and the Super-Twisting Algorithm. Tuning conditions on the observer gains are given such that the observation errors globally asymptotically converge to zero, provided that the yaw velocity reference is persistently excited. Next, a feedback control law depending on the observer estimates is designed using the Output Regulation technique. It is shown that the tracking errors converge to zero as the observation errors decay. To assess the performance of the controller, numerical simulations are performed in which the stable operation of the closed-loop system is verified
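
The Super-Twisting Algorithm invoked in the observer design has the well-known generic form (notation assumed here, not taken from the paper)

    u = -k_1 |s|^{1/2}\, \mathrm{sign}(s) + w, \qquad \dot{w} = -k_2\, \mathrm{sign}(s),

which drives the sliding variable s and its derivative to zero in finite time for sufficiently large gains k_1, k_2 > 0.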

Abstract:

A goal in the study of dynamics on the interval is to understand the transition to positive topological entropy. There is a conjecture from the 1980s that the only route to positive topological entropy is through a cascade of period-doubling bifurcations. We prove this conjecture in natural families of smooth interval maps, and use it to study the structure of the boundary of mappings with positive entropy. In particular, we show that in families of mappings with a fixed number of critical points the boundary is locally connected, and for analytic mappings that it is a cellular set

Abstract:

Purpose - The purpose of this study is to draw from conservation of resources theory to examine how employees' experience of resource-draining interpersonal conflict might diminish the likelihood that they engage in championing behaviour. Its specific focus is on the mediating effect of their motivation to leave the organization and the moderating effect of their peer-oriented social interaction in this connection. Design/methodology/approach - The research hypotheses are empirically assessed with quantitative survey data gathered from 632 employees who work in a large Mexican-based pharmacy chain. The statistical analyses involved an application of the Process macro, which enabled concurrent estimations of the direct, mediating and moderating effects predicted by the proposed conceptual framework. Findings - Emotion-based tensions in co-worker relationships decrease employees' propensity to mobilize support for innovative ideas, because employees make plans to abandon their jobs. This mediating role of turnover intentions is mitigated when employees maintain close social relationships with their co-workers. Practical implications - For organizational practitioners, this study identifies a core explanation (i.e. employees want to quit the company) for why frustrations with emotion-based quarrels can lead to a reluctance to promote novel ideas - ideas that otherwise could add to organizational effectiveness. It also highlights how this harmful process can be avoided if employees maintain good, informal relationships with their colleagues. Originality/value - For organizational scholars, this study explicates why and when employees' experience of interpersonal conflict translates into complacent work behaviours, in the form of tarnished idea championing. It also identifies informal peer relationships as critical contingency factors that disrupt this negative dynamic

Abstract:

This research investigates how employees' perceptions of role ambiguity might inhibit their propensity to engage in organizational citizenship behaviour (OCB), with a particular focus on the potential buffering roles of two personal resources in this process: political skill and organizational identification. Survey data collected from a manufacturing organization indicate that role ambiguity diminishes OCB, but this effect is attenuated when employees are equipped with political skill and have a strong sense of belonging to their organization. The buffering role of organizational identification also is particularly strong when employees have adequate political skills, suggesting the reinforcing, buffering roles of these two personal resources. Organizations that want to foster voluntary work behaviours, even if they cannot provide clear role descriptions for their employees, should nurture adequate personal resources within their employee ranks

Abstract:

This paper investigates how employees' experience of workplace incivility may steer them away from idea championing, with a special focus on the mediating role of their desire to quit their jobs and the moderating role of their dispositional self-control. Data collected from employees who work in a large retail organization reveal that an important reason that exposure to rude workplace behaviors reduces employees' propensity to champion innovative ideas is that they make concrete plans to leave. This mediating effect is mitigated when employees are equipped with high levels of self-control though. For organizations, this study accordingly pinpoints desires to seek alternative employment as a critical factor by which irritations about resource-draining incivility may escalate into a reluctance to add to organizational effectiveness through dedicated championing efforts. It also indicates how this escalation can be avoided, namely, by ensuring employees have access to pertinent personal resources

Abstract:

Purpose. The purpose of this research is to examine how employees' experience of career dissatisfaction might curtail their organizational citizenship behavior, as well as how this detrimental effect might be mitigated by employees' access to valuable peer-, supervisor- and organizational-level resources. The frustrations stemming from a dissatisfactory career might be better contained in the presence of these resources, such that employees are less likely to respond to this resource-depleting work circumstance by staying away from extra-role activities

Abstract:

This study examines how employees' exposure to interpersonal conflict might reduce their creative behaviour, with a particular focus on how this negative connection might be mitigated by their access to pertinent personal and contextual resources. Using data from employees who work in the construction sector, the empirical findings reveal that interpersonal conflict thwarts creativity, but this effect is weaker when employees can more easily control their emotions, have clear job descriptions, and are satisfied with how their employer communicates with them. Empathy exhibits a weak effect. This study accordingly reveals different means through which organizations can overcome the challenge that arises when emotion-based conflicts steer employees away from creativity

Résumé:

Les auteurs de cette étude s'appuient sur les données des employés exerçant dans le secteur de la construction pour examiner l'impact que l'exposition aux conflits interpersonnels a sur le comportement créatif des employés. Ils s'intéressent plus particulièrement au moyen qui pourrait permettre d'atténuer le lien négatif entre les conflits interpersonnels et la créativité, soit l'accès à des ressources personnelles et contextuelles pertinentes. Les résultats empiriques révèlent que les conflits interpersonnels entravent la créativité. Mais cet effet est plus faible lorsque les employés contrôlent plus facilement leurs émotions, reçoivent des descriptions de tâches précises et sont satisfaits de la façon dont leur employeur communique avec eux. L'empathie n'a qu'un faible effet. Cette étude révèle les différents moyens par lesquels les organisations peuvent surmonter le problème lié aux conflits émotionnels qui entravent la créativité des employés

Abstract:

Anchored in conservation of resources theory, this study considers how employees' experience of job stress might reduce their organizational citizenship behaviors (OCB), as well as how this negative relationship might be buffered by employees' access to two personal resources (passion for work and adaptive humor) and two contextual resources (peer communication and forgiving climate). Data from a Mexican-based organization reveal that felt job stress diminishes OCB, but the effect is subdued at higher levels of the four studied resources. This study accordingly adds to extant research by elucidating when the actual experience of job stress is more or less likely to steer employees away from OCB -that is, when they have access to specific resources that hitherto have been considered direct enablers of such efforts instead of buffers of employees' negative behavioral responses to job stress

Abstract:

Purpose. The purpose of this paper is to consider how employees' perceptions of psychological contract breach, due to their sense that their organization has not kept its promises, might diminish their creative behavior. Yet access to two critical personal resources -emotion regulation and humor skills- might buffer this negative relationship. Design/methodology/approach. Survey data were collected from employees in a large organization in the automobile sector. Findings. Employees' beliefs that their employer has not come through on its promises diminishes their engagement in creative activities. The effect is weaker among employees who can more easily control their emotions and who use humor in difficult situations. Practical implications. For organizations, the results show that the frustrations that come with a sense of broken promises can be contained more easily to the extent that their employee bases can rely on pertinent personal resources. Originality/value. This investigation provides a more comprehensive understanding of when perceived contract breach steers employees away from productive work activities, in the form of creativity. This damaging effect is less prominent when employees possess skills that enable them to control negative emotions or can use humor to cope with workplace adversity

Abstract:

This study investigates how employees' perceptions of work overload might reduce their creative behaviours and how this negative relationship might be buffered by employees' access to three energy-enhancing resources: their passion for work, their ability to share emotions with colleagues, and their affective commitment to the organization. Data from a manufacturing organization reveal that work overload reduces creative behaviour, but the effect is weaker with higher levels of passion for work, emotion sharing, and organizational commitment. The buffering effects of emotion sharing and organizational commitment are particularly strong when they combine with high levels of passion for work. These findings indicate how organizations marked by adverse work conditions, due to excessive workloads, can mitigate the likelihood that employees avoid creative behaviours

Abstract:

Drawing from conservation of resources theory, this article investigates the relationship between job control (a critical job resource) and idea championing, as well as how this relationship may be augmented by stressful work conditions that can lead to resource losses, such as conflicting work role demands and psychological contract violations. With quantitative data collected from employees of an organization that operates in the chemical sector, this study reveals that job control increases the propensity to champion innovative ideas. This effect is especially salient when employees experience high levels of role conflict and psychological contract violations. For organizations, the results demonstrate that giving employees more control over whether they invest in championing activities will be most beneficial when those employees also face resource-draining work conditions, in the form of either incompatible role expectations or unfulfilled employer obligations

Abstract:

Based on the job demands–resources model, this study considers how employees’ perceptions of organizational politics might reduce their engagement in organizational citizenship behavior. It also considers the moderating role of two contextual resources and one personal resource (i.e., supervisor transformational leadership, knowledge sharing with peers, and resilience) and argues that they buffer the negative relationship between perceptions of organizational politics and organizational citizenship behavior. Data from a Mexican-based manufacturing organization reveal that perceptions of organizational politics reduce organizational citizenship behavior, but the effect is weaker with higher levels of transformational leadership, knowledge sharing, and resilience. The buffering role of resilience is particularly strong when transformational leadership is low, thus suggesting a three-way interaction among perceptions of organizational politics, resilience, and transformational leadership. These findings indicate that organizations marked by strongly politicized internal environments can counter the resulting stress by developing adequate contextual and personal resources within their ranks

Abstract:

Drawing from the job demands-resources model, this study considers how task conflict reduces employees' job satisfaction, as well as how the negative task conflict-job satisfaction relationship might be buffered by supervisors' transformational leadership and employees' personal resources. Using data from a large organization, the authors show that task conflict reduces job satisfaction, but this effect is weaker at higher levels of transformational leadership, tenacity, and passion for work. The buffering roles of the two personal resources (tenacity and passion for work) are particularly salient when transformational leadership is low. These findings indicate that organizations marked by task-related clashes can counter the accompanying stress by developing adequate leadership and employee resources within their ranks

Abstract:

Purpose. This article investigates how employees' perceptions of role ambiguity might increase their turnover intentions, and how this harmful effect might be buffered by employees' access to relevant individual (innovation propensity), relational (goodwill trust), and organizational (procedural justice) resources. The findings reveal that role ambiguity enhances turnover intentions, but that this effect diminishes at higher levels of innovation propensity, goodwill trust, and procedural justice. Uncertainty due to unclear role descriptions decreases in the presence of these resources, so employees are less likely to respond to this adverse work situation with enhanced turnover intentions

Abstract:

We add to human resource literature by investigating how the contribution of task conflict to employee creativity depends on employees' learning orientation and their goal congruence with organizational peers. We postulate a positive relationship between task conflict and employee creativity and predict that this relationship is augmented by learning orientation but attenuated by goal congruence. We also argue that the mitigating effect of goal congruence is more salient among employees who exhibit a low learning orientation. Our results, captured from employees and their supervisors in a large, Mexican-based organization, confirm these hypotheses. The findings have important implications for human resource managers who seek to foster creativity among employees

Abstract:

This study considers how employees' tenacity might enhance their propensity to engage in knowledge exchange with organizational peers, as well as how the positive tenacity-knowledge exchange relationship is invigorated by two types of role conflict: within-work and between work and family. Using data from a large Mexican organization in the logistics sector, this study shows that tenacity increases knowledge exchange, and this effect is stronger at higher levels of within-work and work-family role conflict. The invigorating role of within-work role conflict is particularly salient when work-family role conflict is high. These findings inform organizations that the application of personal energy to knowledge-enhancing activities is particularly useful when employees encounter severe workplace adversity because of conflicting role demands

Abstract:

Purpose: Drawing from conservation of resources theory and affective events theory, this article examines the hitherto unexplored relationship between employees' tenacity levels and problem-focused voice behavior, as well as how this relationship may be augmented when employees encounter adversity in relationships with peers or in the organizational climate in general. Design/Methodology/Approach: The study draws on quantitative data collected through a survey administered to employees and their supervisors in a large manufacturing organization. Findings: Tenacity increases the likelihood of speaking up about problem areas, and this relationship is strongest when peer relationships are characterized by low levels of goal congruence and trust (relational adversity) or when the organization does not support change (organizational adversity). The augmenting effect of organizational adversity on the usefulness of tenacity is particularly salient when it combines with high relational adversity, which underscores the critical role of tenacity for spurring problem-focused voice behavior when employees negatively appraise different facets of their work environment simultaneously. Implications: The results inform organizations that the allocation of personal energy to reporting organizational problems is perceived as particularly useful by employees when they encounter significant adversity in their work environments. Originality/Value: This study extends research on voice behavior by providing a better understanding of the likelihood that employees speak up about problem areas, according to their levels of tenacity, and explicating when this influence of tenacity tends to be more prominent

Abstract:

This conceptual article centers on the relationship between intergenerational strategy involvement and family firms’ innovation pursuits, a relationship that may be contingent on the nature of the interactions among family members who belong to different generations. The focus is the potential contingency roles of two conflict management approaches (cooperative and competitive) and two dimensions of social capital (goal congruence and trust), in the context of intergenerational interactions. The article theorizes that although cooperative conflict management may invigorate the relationship between intergenerational strategy involvement and innovation pursuits, competitive conflict management likely attenuates it. Moreover, it proposes that both functional and dysfunctional roles for social capital might arise with regard to the contribution of intergenerational strategy involvement to family firms’ innovation pursuits. This article thus provides novel insights into the opportunities and challenges that underlie the contributions of family members to their firm’s innovative aspirations when more than one generation participates in the firm’s strategic management

Abstract:

This study investigates how employees’ perceptions of adverse work conditions might discourage innovative behavior, as well as the possible buffering roles of relational resources. Data from a Mexican-based organization reveal that perceptions of work overload negatively affect innovative behavior, but this effect is attenuated with greater knowledge sharing and interpersonal harmony. Further, although perceived organizational politics lead to lower innovative behavior when relational resources are low, they increase this behavior when resources are high. Organizations that seek to adopt innovative ideas in the presence of adverse work conditions should thus create relational conduits that can mitigate the associated stress

Abstract:

We develop and test a motivational framework to explain the intensity with which individuals sell entrepreneurial initiatives within their organizations. Initiative selling efforts may be driven by several factors that hitherto have not been given full consideration: initiative characteristics, individuals’ anticipation of rewards, and their level of dissatisfaction. On the basis of a survey in a mail service firm of 192 managers who proposed an entrepreneurial initiative, we find that individuals’ reported intensity of their selling efforts with respect to that initiative is greater when they (1) believe that the organizational benefits of the initiative are high, (2) perceive that the initiative is consistent with current organizational practices (although this effect is weak), (3) believe that their immediate organizational environment provides extrinsic rewards for initiatives, and (4) are satisfied with the current organizational situation. These findings extend previous expectancy theory-based explanations of initiative selling (by considering the roles of initiative characteristics and that of initiative valence for the proponent) and show the role of satisfaction as an important motivational driver for initiative selling

Abstract:

In this study, we examine the role of individuals’ commitment in small and medium-sized firms. More specifically, we argue that employees will commit themselves to their firm based on their current work status in the firm, their perception of the organizational climate, and the firm’s entrepreneurial orientation. We also examine how individuals’ commitment affects the actual effort they exert vis-à-vis their firm. The study’s hypotheses are tested by applying quantitative analyses to survey data collected from 863 Mexican small and medium-sized businesses. We found that individuals’ position and tenure in the firm, their perception of psychological safety and meaningfulness, and the firm’s entrepreneurial orientation all are positively related to organizational commitment. We also found a positive relationship between organizational commitment and effort. Finally, our findings show that organizational commitment mediates the relationship between many of the predictor variables and effort. We discuss the limitations and implications of our findings and provide directions for future research

Abstract:

Participation in social programs, such as clubs and other social organizations, results from a process in which an agent learns about the requirements, benefits, and likelihood of acceptance related to a program, applies to be a participant, and, finally, is accepted or rejected. We propose a model of this participation process and provide an application of the model using data from a social program in Mexico. Our empirical analysis illustrates that decisions at each stage of the process are responsive to expectations about the decisions and outcomes at the subsequent stages and that knowledge about the program can have a significant impact on participation outcomes. JEL codes: I38, D83. Keywords: program participation, take-up, information acquisition, targeting, undercoverage, leakage
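
A minimal formalization of the sequential structure described above (the notation is ours, not the paper's) factors the participation probability across the three stages:

$P(\text{participate}) = P(\text{learn}) \cdot P(\text{apply} \mid \text{learn}) \cdot P(\text{accept} \mid \text{apply}),$

with forward-looking behavior entering at the application stage: for instance, an agent applies only if $\pi_a \, \mathbb{E}[b] - c > 0$, where $\pi_a$ is the believed acceptance probability, $b$ the program benefit and $c$ the application cost. Information about the program shifts both $P(\text{learn})$ and the belief $\pi_a$, which is one way knowledge can move take-up, undercoverage and leakage.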

Abstract:

The judicialization of social rights is a reality in Latin America; however, little has been said about this phenomenon in Mexico or about the role of the Mexican Supreme Court (Suprema Corte de Justicia de la Nación, SCJN) in advancing an effective guarantee of the right to health. In several countries, courts have adopted either an active role in defining health policy and protecting the right to health or a passive one. Studying the ways in which health-related cases are resolved in Mexico enables us to evaluate the role of the SCJN when ruling for or against this right. This article aims to determine whether the SCJN, through the analysis of its rulings, is or could be a catalyst for change in the healthcare system. This article reports the results of a systematic content analysis of twenty-two SCJN rulings, examining the claimants, their claims as understood by the SCJN, and the elements considered by the justices in their decision-making process. The analysis of the way in which the SCJN ruled in these cases demonstrates that the SCJN must be uniform and consistent in applying constitutional and conventional principles in order to improve the predictability of its decisions and to be innovative in responding to the new requirements posed by economic, social, and cultural rights. The SCJN should strengthen its capacity to promote structural reforms where laws or policies are inconsistent with constitutional or conventional standards by maintaining a middle ground with respect to the executive and legislative branches

Resumen:

Objetivo. Conocer la opinión de actores clave respecto del proceso de judicialización del derecho a la protección de la salud en México. Material y métodos. Se realizaron 30 entrevistas semiestructuradas a representantes de los poderes Judicial (PJ), Legislativo (PL), Sector Salud (SS), industria farmacéutica, academia y organizaciones de la sociedad civil (OSC) durante mayo de 2017 a agosto de 2018, en distintos lugares de la Ciudad de México. Se transcribieron las grabaciones y se analizó el contenido con base en categorías de interés. Resultados. Las posturas respecto al fenómeno de la judicialización del derecho a la salud son disímiles. Hay tensiones entre quienes ven su potencial efecto como agente de cambio del sector y quienes la perciben como una interferencia ilegítima del PJ. No existe una estrategia coordinada entre los sectores para promover un cambio en el SS. Conclusiones. Las posturas respecto al fenómeno de la judicialización en México son disímiles. Hay tensiones entre quienes ven su potencial efecto como agente de cambio del sector y quienes la perciben como una interferencia ilegítima del PJ en el SS. Otros argumentan que no existe una estrategia coordinada entre los sectores para promover un cambio en el SS. Todos coinciden en que la judicialización en México es una realidad

Abstract:

Objective. To understand what key Mexican stakeholders think about the judicialization of the right to health in Mexico. Materials and methods. Thirty semi-structured interviews were conducted in different settings in Mexico City with representatives of the judiciary, the legislative power, the Health Sector (HS), the pharmaceutical industry, academia and nongovernmental organizations, from May 2017 to August 2018. Interviews were transcribed and analyzed based on different categories of interest. Results. There are different opinions regarding the judicialization of the right to health. Tensions exist between those who see its potential effect as a game changer for the HS and those who perceive it as an illegitimate interference of the judiciary. There is no coordinated strategy between sectors to promote change in the HS. Conclusions. There are different opinions regarding the judicialization of the right to health in Mexico. There are tensions between those who see its potential effect as a game changer for the HS and those who perceive it as an illegitimate interference of the judiciary. Others argue that there is no coordinated strategy between sectors to promote change in the HS. All agree that judicialization in Mexico is a reality

Resumen:

La disminución de la tasa de lactancia materna en México es un problema de salud pública. En este artículo discutimos un enfoque regulatorio –Regulación Basada en Desempeño– y su aplicación para mejorar las tasas de lactancia materna. Este enfoque obliga a la industria a asumir su responsabilidad por la falta de lactancia materna y sus consecuencias. Se considera una estrategia factible de ser aplicada al caso, ya que el mercado de sucedáneos tiene una estructura oligopólica, donde es relativamente fácil fijar la contribución de cada participante del mercado en el problema; incide en un grupo poblacional definido; tiene un objetivo regulatorio que puede ser fácilmente evaluado, y se pueden definir las sanciones bajo criterios objetivos. Para su aplicación se recomienda: modificar la política pública, crear convenios de concertación con la industria, establecer sanciones disuasorias, fortalecer los mecanismos de supervisión y alinear lo anterior al Código Internacional de Comercialización de Sucedáneos

Abstract:

The decreasing breastfeeding rate in Mexico is a public health concern. In this paper we discuss an innovative regulatory approach -Performance Based Regulation- and its application to improve breastfeeding rates. This approach forces industry to take responsibility for the lack of breastfeeding and its consequences. Failure to comply with these targets results in financial penalties. Applying performance-based regulation as a strategy to improve breastfeeding is feasible because: the breastmilk substitutes market is an oligopoly, hence it is easy to identify the contribution of each market participant; the regulation's target population is clearly defined; it has a clear regulatory standard which can be easily evaluated; and sanctions for infringement can be defined under objective parameters. Recommendations: modify public policy, conclude concertation agreements with the industry, establish deterrent sanctions, strengthen enforcement activities and align all of the above with the International Code of Marketing of Breast-milk Substitutes

Abstract:

What would have happened if a relatively looser fisheries policy had been implemented in the European Union (EU)? Using Bayesian methods, a Dynamic Stochastic General Equilibrium (DSGE) model is estimated to assess the impact of the European Common Fisheries Policy (CFP) on the economic performance of a Galician (northwest Spain) fleet highly dependent on the EU Atlantic southern stock of hake. Our counterfactual analysis shows that if a less effective CFP had been implemented during the period 1986–2012, fishing opportunities would have increased, leading to an increase in labour hours of 4.87%. However, this increase in fishing activity would have worsened the profitability of the fleet, with wages and the rental price of capital dropping by 6.79% and 0.88%, respectively. Welfare would also have been negatively affected since, in addition to the increase in hours worked, consumption would have fallen by 0.59%

Resumen:

Para mejorar las prácticas de lactancia materna es necesario fortalecer acciones de promoción, protección y apoyo, y establecer una política nacional multisectorial que incluya elementos indispensables de diseño, implementación, monitoreo y evaluación de programas y políticas públicas, financiamiento para acciones e investigación, desarrollo de abogacía y voluntad política, y promoción de la lactancia materna, todo coordinado por un nivel central. Recientemente, México ha iniciado un proceso de reformas conducentes a la conformación de una Estrategia Nacional de Lactancia Materna (ENLM). Esta estrategia es el resultado de la disponibilidad de evidencia científica sobre los beneficios de la lactancia materna en la salud de la población y el desarrollo de capital humano así como de los datos alarmantes de su deterioro. La implementación integral de una ENLM que incluya el establecimiento de un Comité Nacional Operativo, coordinación intra e intersectorial de acciones, establecimiento de metas claras, monitoreo y penalización de las violaciones al Código Internacional de Comercialización de Sucedáneos de la Leche Materna, y financiamiento de estas acciones es la gran responsabilidad pendiente de la agenda de salud pública del país

Abstract:

Evidence strongly supports that, to improve breastfeeding practices, it is necessary to strengthen actions of promotion, protection and support. To achieve this goal, it is necessary to establish a multisectoral national policy that includes elements such as the design, implementation, monitoring and evaluation of programs and policies, funding for actions and research, advocacy to develop political willingness, and the promotion of breastfeeding from the national to the municipal level, all coordinated centrally. Mexico has only now initiated a reform process to establish a National Strategy for Breastfeeding Action. This strategy is the result not only of consistent scientific evidence on the clear and strong benefits of breastfeeding for population health and the development of human capital, but also of alarming data on the deterioration of breastfeeding practices in the country. The comprehensive implementation of the National Strategy for Breastfeeding Action, including the establishment of a national operating committee, intra- and inter-sectoral coordination of actions, the setting of clear goals, the monitoring and penalization of violations of the International Code of Marketing of Breast-Milk Substitutes, and funding for these actions, is the pending responsibility of the country's public health agenda

Abstract:

Climate change is a core issue for sustainable development and exacerbates inequality. However, in both the WTO and the climate regime, disagreements over differential treatment have hampered efforts to address international inequalities in a way that facilitates effective responses to global issues. Sustainable globalization requires bridging the disparities between developed and developing countries in their capacities to address such matters of global concern. However, differential treatment now functions as a distraction from the global issues it was supposed to address. Cognitive biases distort perceptions regarding the climate crisis and the value of multilateralism. To what extent can cognitive science inform decision making by States? How do we change paradigms (cognitive background assumptions), which limit the options that decision-making elites in developed and developing countries perceive as useful and worth considering? To what extent do cognitive biases and cultural cognition create a barrier to multilateral cooperation on issues of global concern?

Abstract:

According to the 2018 Intergovernmental Panel on Climate Change (IPCC) report, if global warming continues to increase at the current rate (the so-called business-as-usual scenario), global temperatures are likely to reach 1.5 °C above pre-industrial levels between 2030 and 2052. Limiting global warming to 1.5 °C is affordable and feasible but requires immediate action. Climate-related risks for natural and human systems depend on the magnitude and rate of warming, geographic location, levels of development and vulnerability, and on the choices and implementation of adaptation and mitigation options. Climate-related risks to health, livelihoods, food security, water supply, human security, and economic growth are projected to increase with global warming of 1.5 °C and to increase further with 2 °C. Current nationally stated mitigation ambitions, as submitted under the Paris Agreement, are estimated to result in global warming of about 3 °C by 2100, with warming continuing afterward. The climate risks that we set out below are very serious, and the worst-case scenario is catastrophic. Additionally, climate risks create challenges and opportunities for the financial industry, particularly insurance. In this chapter we first set out the risks posed by climate change, and then we seek some possible solutions to reduce emissions (often referred to as climate change mitigation) and to adapt to climate change. We show that financial markets can be harnessed to reduce emissions through market mechanisms. In particular, markets can put a price on risk. That allows insurance/reinsurance companies to step in with appropriate solutions and create incentives for emissions reductions and climate change adaptation. Emissions reductions will reduce the risk of catastrophic climate change. However, some climate change is already inevitable. Therefore, adaptation is essential

Abstract:

The effects of accelerating climate change will have a destabilizing impact on trade negotiations, particularly for the worst-affected developing countries. The climate crisis will make it more difficult to make concessions in crucial areas, such as agriculture and intellectual property rights, owing to its effects on agricultural yields and the increased need for technology to adapt to a warming climate and to reduce greenhouse gas emissions. Rising sea levels, droughts, floods, and killer heat waves will provoke mass migration, with impacts on domestic politics that make trade concessions more difficult. In this context, multilateral trade negotiations are unlikely to advance in a significant way and megaregional trade agreements will become increasingly difficult to join. The result will be a warming world that is divided between those included in and those excluded from the megaregional trade regime. This will also hamper efforts to slow and to adapt to the climate crisis, given the key role that international trade plays in addressing both

Abstract:

The Appellate Body is considered the jewel in the crown of the WTO dispute settlement system. However, since blocking the re-appointment of Jennifer Hillman to the Appellate Body, the United States has become increasingly assertive in its efforts to control judicial activism at the WTO. This was a hot topic in the corridors at the eleventh WTO Ministerial Conference in Buenos Aires. This article examines judicial activism in the Appellate Body and discusses the efforts of the United States to constrain the Appellate Body in this context. It also analyses US actions and proposals regarding the dispute settlement systems of NAFTA, in order to place the WTO debate in a wider context. It concludes that reforms are necessary to break the negative feedback loop between deadlock in multilateral trade negotiations and judicial activism

Abstract:

In two early WTO cases, the Appellate Body found a failure to engage in negotiations to be arbitrary or unjustifiable discrimination under the GATT Article XX chapeau. Subsequent jurisprudence has not applied a negotiation requirement. Instead, it analyzes whether discrimination is arbitrary or unjustifiable by focusing on the cause of the discrimination, or the rationale put forward to explain its existence, which would exclude a duty to negotiate in many circumstances. The issue of whether there is a duty to negotiate is a systemic issue for international economic law. The Article XX chapeau language appears in other WTO agreements and in other international economic law treaties, including those that address environmental protection, regional trade and international investment. This article argues that there is no such duty in WTO law

Abstract:

The purpose of multilateral disciplines on subsidies is to avoid trade distortions, in order to increase production efficiencies through competition. However, this objective may be defeated due to defects in the structure of the WTO Agreement on Subsidies and Countervailing Measures and the resulting interpretations of WTO tribunals in cases involving clean energy subsidies. These defects, together with inefficient design of energy markets, could slow the transition to clean energy sources. However, the necessary reforms to the multilateral subsidies regime and energy markets are unlikely to be implemented any time soon, in the absence of a successful formula for multilateral negotiations. In this environment, the private sector will have to take the lead in making the transition to clean energy sources, in order to mitigate the disastrous effects of climate change to the extent that this goal remains attainable

Abstract:

Mexican energy reforms open the energy sector to foreign participation via different types of contracts, some of which may qualify as investments under North American Free Trade Agreement (NAFTA) Chapter 11. Mexican NAFTA reservations exclude some Mexican regulation from the scope of application of specific obligations in Chapter 11, such as those regarding performance requirements, most-favoured-nation treatment, and national treatment. However, Mexico’s legislative restrictions on foreign investors’ right to pursue investor-state arbitration are not covered by its NAFTA reservations and should not affect access to NAFTA Chapter 11, since Mexico cannot invoke its domestic laws to justify a violation of its international obligations. Moreover, Mexico’s reservations do not prevent the application of obligations regarding fair and equitable treatment and expropriation

Abstract:

This article analyzes how International Investment Agreements (IIAs) might constrain the ability of governments to adopt climate change measures. This article will consider how climate change measures can either escape the application of IIA obligations or be justified under exceptions. First, this article considers the role of treaty structure in preserving regulatory autonomy. Then, it analyzes the role that general scope provisions can play in excluding environmental regulation from the scope of application of IIAs. Next, this article will consider how the limited incorporation of environmental exceptions into IIAs affects their interpretation and application in cases involving environmental regulation. The article then analyzes non-discrimination obligations, the minimum standard of treatment for foreign investors and obligations regarding compensation for expropriation. This analysis shows that tribunals can exclude environmental regulation from the scope of application of specific obligations as well

Abstract:

This article analyzes how treaty structure affects regulatory autonomy by shifting the point in a treaty in which tribunals address public interest regulation. The article also analyzes how trade and investment treaties use a variety of structures that influence their interpretation and the manner in which they address public interest regulation. WTO and investment tribunals have addressed public interest regulation in provisions regarding a treaty’s scope of application, obligations and public interest exceptions. The structure of treaties affects a tribunal’s degree of deference to regulatory choices. In treaties that do not contain general public interest exceptions, tribunals have excluded public interest regulation from the scope of application of the treaty as a whole or the scope of application of specific obligations. If treaty parties wish to preserve a greater degree of regulatory autonomy, they can limit the general scope of application of a treaty, or the scope of application of specific obligations, which places the burden of proof on the complainant. In cases where the complexity of the facts or the law make the burden of proof difficult to meet, this type of treaty structure makes it more difficult to prove treaty violations and thus preserves regulatory autonomy to a greater degree

Abstract:

Debates between developing and developed countries over access to technology to mitigate or adapt to climate change tend to overlook the importance of plant varieties. Climate change will increase the importance of developing new plant varieties that can adapt to changing climatic conditions. This article compares Intellectual Property Rights (IPRs) for plant varieties in the World Trade Organization (WTO) Trade-Related Aspects of Intellectual Property (TRIPS) Agreement, the UPOV Convention and the Convention on Biological Diversity (CBD). It concludes that TRIPS Article 27.3(b) provides an appropriate degree of flexibility regarding the policy options available to confront climate change

Abstract:

Multilingualism is a sensitive and complex subject in a global organisation such as the World Trade Organization (WTO). In the WTO legal texts, there is a need for full concordance, not simply translation. This article begins with an overview of the issues raised by multilingual processes at the WTO in the negotiation, drafting, translation, interpretation and litigation phases. It then compares concordance procedures in the WTO, European Union and United Nations and proposes new procedures to prevent and correct linguistic discrepancies in the WTO legal texts. It then categorises linguistic differences among the WTO legal texts and considers the suitability of proposed solutions for each category. The conclusion proposes an agenda for further work at the WTO to improve best practices based on the experience of the WTO and other international organisations and multilingual governments

Abstract:

This article examines three issues: (1) the use, over time, of facemasks in a public setting to prevent the spread of a respiratory disease for which the mortality rate is unknown; (2) the difference between the responses of male and female subjects in a public setting to unknown risks; and (3) the effectiveness of mandatory and voluntary public health measures in a public health emergency. The use of facemasks to prevent the spread of respiratory diseases in a public setting is controversial. At the height of the influenza epidemic in Mexico City in the spring of 2009, the federal government of Mexico recommended that passengers on public transport use facemasks to prevent contagion. The Mexico City government made the use of facemasks mandatory for bus and taxi drivers, but enforcement procedures differed for these two categories. Using an evidence-based approach, we collected data on the use of facemasks over a 2-week period. In the specific context of the Mexico City influenza outbreak, these data showed that mask usage rates mimicked the course of the epidemic and revealed a gender difference in compliance rates among metro passengers. Moreover, there was no significant difference in compliance between mandatory and voluntary public health measures: the effect of the mandatory measures was diminished by insufficiently severe penalties, by the lack of market forces to create compliance incentives and by political influence sufficient to weaken enforcement, while voluntary compliance was diminished by lack of trust in the government

Abstract:

This article analyzes several unresolved issues in World Trade Organization (WTO) law that may affect the WTO-consistency of measures that are likely to be taken to address climate change. How should the WTO deal with environmental subsidies under the General Agreement on Tariffs and Trade (GATT), the Agreement on Agriculture and the Subsidies and Countervailing Measures (SCM) Agreement? Can the general exceptions in GATT Article XX be applied to other agreements in Annex 1A? Are processing and production methods relevant to determining the issue of ‘like products’ in GATT Articles I and III, the SCM Agreement and the Antidumping Agreement and the TBT Agreement? What is the scope of paragraphs b and g in GATT Article XX and the relationship between these two paragraphs? What is the relationship between GATT Article XX and multilateral environmental agreements in the context of climate change? How should Article 2 of the TBT Agreement be interpreted and applied in the context of climate change? The article explores these issues

Resumen:

En este trabajo se aplica la técnica de extracción de señales para realizar la desestacionalización de dos series de tiempo observadas. En ambos casos se considera explícitamente la simplificación del modelo ARMA y la estimación de los componentes de la serie. Para una de las series se obtiene una desestacionalización claramente aceptable, pero la segunda conduce a concluir que no cualquier serie, aparentemente estacional, debe ser desestacionalizada

Abstract:

This paper applies the theory of signal extraction to carry out the deseasonalization of two observed time series. For both series, we explicitly consider the ARMA model simplification and the estimation of the series components. For one series we obtain a reasonable deseasonalization, but the other case allows us to conclude that not every apparently seasonal series should be deseasonalized

Resumen:

En este trabajo se da cuenta del estado y la evolución geográfica de las ciencias sociales, a través del proceso de desconcentración en México de los integrantes del Sistema Nacional de Investigadores (SNI). Las preguntas que se busca responder y que sirven como dimensiones de análisis son: ¿Cuál es la distribución de los investigadores dentro de la República? ¿Cómo se distribuyen por sexo en las regiones geográficas del país? ¿Cómo es la composición por nivel del SNI en cada zona? ¿Existen cambios en la composición disciplinar de esta área del conocimiento y, en su caso, cuáles han sido?

Abstract:

In this paper, the state and geographical evolution of the social sciences are traced through the deconcentration process of the members of the National System of Researchers (SNI) in Mexico. The questions to be answered, which serve as dimensions of analysis, are: What is the geographical distribution of researchers across the country? How are they distributed by gender and academic level? What is the composition of the SNI by level in each area? Are there changes in the disciplinary composition of this area of knowledge and, if so, what are they?

Abstract:

The authors offer a report showing the deconcentration of the members of the Sistema Nacional de Investigadores, previously located predominantly in the Mexico City Metropolitan Area, and how the various regions of the country now participate increasingly in original, high-quality scientific and technological research

Abstract:

This article examines a calculus graphing schema and the triad stages of schema development from Action-Process-Object-Schema (APOS) theory. Previously, the authors studied the underlying structures necessary for students to process concepts and enrich their knowledge, thus demonstrating various levels of understanding via the calculus graphing schema. This investigation built on this previous work by focusing on the thematization of the schema with the intent to expose those possible structures acquired at the most sophisticated stages of schema development. Results of the analyses showed that these successful calculus students, who appeared to be operating at varying stages of development of their calculus graphing schemata, illustrated the complexity involved in achieving thematization of the schema, thus demonstrating that thematization was possible

Abstract:

We classify all planar four-body central configurations where two pairs of the bodies are on parallel lines. Using Cartesian coordinates, we show that the set of four-body trapezoid central configurations with positive masses forms a two-dimensional surface where two symmetric families, the rhombus and the isosceles trapezoid, are on its boundary. We also prove that, for a given position of the bodies, in some cases a specific order of the masses determines the geometry of the configuration, namely an acute or an obtuse trapezoid central configuration. We also prove the existence of non-symmetric trapezoid central configurations with two pairs of equal masses
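
For orientation, the standard defining equations of a planar central configuration (notation ours; the paper states them in its own form): positions $q_1,\dots,q_4 \in \mathbb{R}^2$ with masses $m_i > 0$ form a central configuration if there is a constant $\lambda$ such that, for every $i$,

$\sum_{j \neq i} \frac{m_j\,(q_j - q_i)}{\lVert q_j - q_i \rVert^{3}} + \lambda\,(q_i - c) = 0, \qquad c = \frac{\sum_j m_j q_j}{\sum_j m_j},$

that is, each body's gravitational acceleration points along the line through the center of mass $c$ with a common constant of proportionality. The trapezoid families studied here additionally require two pairs of the $q_i$ to lie on parallel lines.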

Abstract:

Let T be a second-order arithmetical theory, Λ a well-order, λ < Λ and X ⊆ ℕ. We use [λ|X]^Λ_T φ as a formalization of "φ is provable from T and an oracle for the set X, using ω-rules of nesting depth at most λ". For a set of formulas Γ, define predicative oracle reflection for T over Γ (Pred-O-RFN_Γ(T)) to be the schema that asserts that, if X ⊆ ℕ, Λ is a well-order and φ ∈ Γ, then for all λ < Λ, [λ|X]^Λ_T φ → φ. In particular, define predicative oracle consistency (Pred-O-Cons(T)) as Pred-O-RFN_{0=1}(T). Our main result is as follows. Let ATR₀ be the second-order theory of Arithmetical Transfinite Recursion, RCA₀* be Weakened Recursive Comprehension and ACA be Arithmetical Comprehension with Full Induction. Then ATR₀ ≡ RCA₀* + Pred-O-Cons(RCA₀*) ≡ RCA₀* + Pred-O-RFN_{Π¹₂}(ACA). We may even replace RCA₀* by the weaker ECA₀, the second-order analogue of Elementary Arithmetic. Thus we characterize ATR₀, a theory often considered to embody Predicative Reductionism, in terms of strong reflection and consistency principles

Abstract:

In the generalized Russian cards problem, the three players Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of a+b+c cards. Players only know their own cards and what the deck of cards is. Alice and Bob are then required to communicate their hand of cards to each other by way of public messages. For a natural number k, the communication is said to be k-safe if Cath does not learn whether or not Alice holds any given set of at most k cards that are not Cath’s, a notion originally introduced as weak k-security by Swanson and Stinson. An elegant solution by Atkinson views the cards as points in a finite projective plane. We propose a general solution in the spirit of Atkinson’s, although based on finite vector spaces rather than projective planes, and call it the ‘geometric protocol’. Given arbitrary c, k>0, this protocol gives an informative and k-safe solution to the generalized Russian cards problem for infinitely many values of (a,b,c) with b=O(ac). This improves on the collection of parameters for which solutions are known. In particular, it is the first solution which guarantees k-safety when Cath has more than one card
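
The smallest classical instance may help fix ideas. For (a,b,c) = (3,3,1) with a seven-card deck, the textbook announcement "the sum of Alice's hand modulo 7" is both informative and 1-safe; the brute-force check below verifies this. It is the standard warm-up example, not the geometric protocol of the paper.

# Classic (3,3,1) Russian cards instance: Alice announces the sum of her
# hand modulo 7. This is the well-known textbook solution, shown only to
# illustrate "informative" and "k-safe"; it is not the paper's protocol.
from itertools import combinations

DECK = set(range(7))

def deals():
    # All (alice, bob, cath) partitions of the deck into 3+3+1 cards.
    for a in combinations(sorted(DECK), 3):
        rest = DECK - set(a)
        for b in combinations(sorted(rest), 3):
            cath = next(iter(rest - set(b)))
            yield frozenset(a), frozenset(b), cath

def announce(hand):
    return sum(hand) % 7

# Informativity: given Bob's hand and the announcement, Alice's hand is unique.
for a, b, c in deals():
    candidates = [set(x) for x in combinations(sorted(DECK - b), 3)
                  if sum(x) % 7 == announce(a)]
    assert len(candidates) == 1 and candidates[0] == set(a)

# 1-safety: for every deal and every card Cath does not hold, there are deals
# consistent with Cath's view in which Alice holds that card and in which Bob
# holds it, so Cath never learns the owner of any single card.
for a, b, c in deals():
    msg = announce(a)
    consistent = [(a2, b2) for a2, b2, c2 in deals()
                  if c2 == c and announce(a2) == msg]
    for card in DECK - {c}:
        assert any(card in a2 for a2, _ in consistent)
        assert any(card in b2 for _, b2 in consistent)

print("mod-7 announcement is informative and 1-safe for (3,3,1)")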

Abstract:

In the generalized Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of size a + b + c. Alice and Bob must then communicate their entire hand to each other, without Cath learning the owner of a single card she does not hold. Unlike many traditional problems in cryptography, however, they are not allowed to encode or hide the messages they exchange from Cath. The problem is then to find methods through which they can achieve this. We propose a general four-step solution based on finite vector spaces, and call it the “colouring protocol”, as it involves colourings of lines. Our main results show that the colouring protocol may be used to solve the generalized Russian cards problem in cases where a is a power of a prime, c = O(a^2) and b = O(c^2). This improves substantially on the set of parameters for which solutions are known to exist; in particular, it had not been shown previously that the problem could be solved in cases where the eavesdropper has more cards than one of the communicating players

Abstract:

We find a new and simple inversion formula for the 2D Radon transform (RT) through a direct use of the shearlet system and of well-known properties of the RT. Since the continuum theory of shearlets has a natural translation to the discrete setting, we also obtain a computable algorithm that recovers a digital image from noisy samples of the 2D Radon transform while preserving edges. A well-known RT inversion in the applied harmonic analysis community is the biorthogonal curvelet decomposition (BCD). The BCD uses an intertwining relation of differential (unbounded) operators between functions in the Euclidean and Radon domains. Hence the BCD is ill-posed, since the inverse is unstable in the presence of noise. In contrast, our inversion method makes use of an intertwining relation of integral transformations with very smooth kernels and compact support away from the origin in the Fourier domain, i.e. bounded operators. Therefore we obtain at least the same asymptotic behavior of the mean-square error as the BCD (and its shearlet version) for the class of cartoon-like functions. Numerical simulations show that our inverse surpasses, by far, the inverse based on the BCD. Our algorithm uses only fast transformations
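
For context, the classical filtered back-projection pipeline against which shearlet- and curvelet-based inversions are usually measured can be reproduced in a few lines with scikit-image (assuming skimage >= 0.17 for the filter_name argument). This is only the standard baseline, not the inversion formula of the paper.

# Classical Radon transform / filtered back-projection round trip with
# scikit-image, shown as the standard baseline, not the paper's method.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

rng = np.random.default_rng(0)
image = rescale(shepp_logan_phantom(), 0.5)          # 200x200 test image
theta = np.linspace(0.0, 180.0, max(image.shape))    # projection angles
sinogram = radon(image, theta=theta)                 # 2D Radon transform
sinogram += rng.normal(scale=0.5, size=sinogram.shape)  # noisy samples
recon = iradon(sinogram, theta=theta, filter_name="ramp")  # FBP inverse
print("RMS reconstruction error:", np.sqrt(np.mean((recon - image) ** 2)))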

Abstract:

We built an infrared vision system to be used as the real-time 3D motion sensor in a prototype low-cost, high-precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. In this paper we present our choice of technology, including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high-bandwidth computer communication with the cameras and real-time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high-precision, real-time requirements of neuronavigation systems, we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications
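
A minimal sketch of the triangulation step, under the simplifying assumption of a rectified stereo pair with identical pinhole cameras (focal length f in pixels, baseline B in metres, principal point (cx, cy)); the function name and symbols are ours, and the real system involves full calibration matrices.

# Point triangulation for a rectified stereo pair: depth from disparity.
def triangulate(xl, xr, y, f, B, cx=0.0, cy=0.0):
    d = (xl - cx) - (xr - cx)        # horizontal disparity in pixels
    if d <= 0:
        raise ValueError("point at infinity or bad correspondence")
    Z = f * B / d                    # depth along the optical axis
    X = (xl - cx) * Z / f            # lateral position, left-camera frame
    Y = (y - cy) * Z / f             # vertical position
    return X, Y, Z

# Example: f = 800 px, B = 0.12 m, 20 px disparity -> Z = 4.8 m
print(triangulate(xl=410.0, xr=390.0, y=250.0, f=800.0, B=0.12, cx=400.0, cy=240.0))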

Abstract:

This paper evaluates the finite-sample performance of two methods for selecting the power transformation when applying seasonal adjustment with the X-13ARIMA-SEATS package: the automatic method, based on the Akaike Information Criterion (AIC), and Guerrero's method, which chooses the power transformation by minimizing a coefficient of variation. For this purpose, we generate time series with different Data Generating Processes, considering traditional Monte Carlo experiments as well as additive and multiplicative time series with linear and time-varying deterministic trends. We also illustrate the performance of both approaches with an empirical application, by seasonally adjusting the Mexican Global Economic Activity Indicator and its components. The results of different statistical tests indicate that Guerrero's method is more adequate than the automatic one. We conclude that using Guerrero's method generates better results when seasonally adjusting time series with the X-13ARIMA-SEATS program
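
A compact sketch of the idea behind Guerrero's criterion, under our simplifications (a fixed grid of candidate powers and consecutive groups of one seasonal period each): the chosen lambda is the one that makes the within-group dispersion most nearly independent of the level.

# Guerrero-style selection of a Box-Cox power: split the series into
# groups, compute group means m_i and standard deviations s_i, and pick
# the lambda minimizing the CV of s_i / m_i**(1 - lambda). Grid, group
# length and defaults are our illustrative choices.
import numpy as np

def guerrero_lambda(x, group_len=12, grid=np.linspace(-1.0, 2.0, 61)):
    x = np.asarray(x, dtype=float)
    n_groups = len(x) // group_len
    groups = x[: n_groups * group_len].reshape(n_groups, group_len)
    m = groups.mean(axis=1)
    s = groups.std(axis=1, ddof=1)

    def cv(lam):
        r = s / m ** (1.0 - lam)
        return r.std(ddof=1) / r.mean()

    return min(grid, key=cv)

# Example on a multiplicative series: dispersion grows with the level,
# so the chosen lambda should be near 0 (log transformation).
rng = np.random.default_rng(0)
t = np.arange(240)
y = np.exp(0.01 * t + 0.1 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, 240))
print("chosen lambda:", guerrero_lambda(y))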

Abstract:

This article presents a new method to reconcile directly and indirectly deseasonalized economic time series. The proposed technique uses a Combining Rule to merge, in an optimal manner, the directly deseasonalized aggregated series with its indirectly deseasonalized counterpart. The last-mentioned series is obtained by aggregating the seasonally adjusted disaggregates that compose the aggregated series. This procedure leads to adjusted disaggregates that satisfy Denton's movement-preservation principle relative to the originally deseasonalized disaggregates. First, we use as preliminary estimates the directly deseasonalized economic time series obtained with the X-13ARIMA-SEATS program applied at all disaggregation levels. Second, we contemporaneously reconcile the aforementioned seasonally adjusted disaggregates with their seasonally adjusted aggregate, using Vector Autoregressive models. Then we evaluate the finite-sample performance of our solution via a Monte Carlo experiment that considers six Data Generating Processes that may occur in practice when users apply seasonal adjustment techniques. Finally, we present an empirical application to the Mexican Global Economic Indicator and its components. The results allow us to conclude that the suggested technique is appropriate for indirectly deseasonalizing economic time series, mainly because we impose the movement-preservation condition on the preliminary estimates produced by a reliable seasonal adjustment procedure
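
For contrast with the paper's optimal combining rule, the crudest contemporaneous reconciliation is pro-rata distribution of each period's discrepancy. The sketch below only illustrates the accounting constraint involved, not the VAR-based, movement-preserving solution the article proposes.

# Pro-rata contemporaneous reconciliation: rescale the seasonally
# adjusted disaggregates so that, in every period, they sum to the
# directly adjusted aggregate. Names and data are illustrative.
import numpy as np

def prorata_reconcile(direct_aggregate, indirect_components):
    """direct_aggregate: (T,); indirect_components: (T, k).
    Returns (T, k) components rescaled to sum to the aggregate."""
    comp = np.asarray(indirect_components, dtype=float)
    agg = np.asarray(direct_aggregate, dtype=float)
    scale = agg / comp.sum(axis=1)            # per-period correction factor
    return comp * scale[:, None]

rng = np.random.default_rng(1)
comp = rng.uniform(50, 100, size=(8, 3))      # adjusted disaggregates
agg = comp.sum(axis=1) + rng.normal(0, 2, 8)  # direct adjustment differs slightly
rec = prorata_reconcile(agg, comp)
assert np.allclose(rec.sum(axis=1), agg)      # constraint now holds exactly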

Abstract:

This paper presents a control design for the lateral-directional dynamics of a fixed-wing aircraft. The objective is to drive the lateral-directional dynamics to the equilibrium point that corresponds to a coordinated turn. The proposed control is inspired by the total energy control system technique, with the objective of blending the aileron and rudder inputs. The control law is developed directly from the nonlinear aircraft model. Numerical simulation results are presented to show the closed-loop system behavior
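
The sketch below shows the generic closed-loop machinery involved: a linear lateral-directional model with state [sideslip, roll rate, yaw rate, roll angle] stabilized by state feedback on aileron and rudder. The matrices are placeholders chosen only so the example runs; the paper's actual law is nonlinear and energy-inspired, which this sketch does not reproduce.

# Illustrative stabilization of a linear lateral-directional model
# x = [beta, p, r, phi] with inputs [aileron, rudder]. A and B are
# invented placeholder matrices, not a real aircraft's data.
import numpy as np
from scipy.signal import place_poles

A = np.array([[-0.2,  0.06, -0.99, 0.04],
              [-4.0, -1.20,  0.50, 0.00],
              [ 1.5, -0.05, -0.30, 0.00],
              [ 0.0,  1.00,  0.06, 0.00]])
B = np.array([[0.00,  0.02],
              [6.00,  0.60],
              [0.20, -3.00],
              [0.00,  0.00]])

# Place stable closed-loop poles and simulate 5 s with Euler integration.
K = place_poles(A, B, [-1.0, -1.5, -2.0 + 1.0j, -2.0 - 1.0j]).gain_matrix
x = np.array([0.05, 0.0, 0.0, 0.1])   # initial sideslip and bank offsets
dt = 0.01
for _ in range(int(5.0 / dt)):
    u = -K @ x                        # mixed aileron and rudder commands
    x = x + dt * (A @ x + B @ u)
print("final state:", np.round(x, 4))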

Abstract:

The stratified proportional hazards model represents a simple solution to take into account heterogeneity within the data while keeping the multiplicative effect of the predictors on the hazard function. Strata are typically defined a priori by resorting to the values of a categorical covariate. A general framework is proposed, which allows the stratification of a generic accelerated lifetime model, including, as a special case, the Weibull proportional hazards model. The stratification is determined a posteriori, taking into account that strata might be characterized by different baseline survivals, and also by different effects of the predictors. This is achieved by considering a Bayesian nonparametric mixture model and the posterior distribution it induces on the space of data partitions. An optimal stratification is then identified following a decision theoretic approach. In turn, stratum-specific inference is carried out. The performance of this method and its robustness to the presence of right-censored observations are investigated through an extensive simulation study. Further illustration is provided analyzing a data set from the University of Massachusetts AIDS Research Unit IMPACT Study
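
As a point of reference for the special case named in the abstract, the sketch below fits a Weibull proportional hazards model with two known strata by maximum likelihood under right censoring; in the paper the stratification itself is inferred via a Bayesian nonparametric mixture, which this sketch does not attempt. All names and data are simulated for illustration.

# Two-stratum Weibull PH model: hazard h(t|x) = (k/lam)(t/lam)**(k-1) * exp(x*beta),
# with stratum-specific shape k_s and scale lam_s, fitted by censored MLE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 400
stratum = rng.integers(0, 2, n)                  # known stratum labels
x = rng.normal(size=n)                           # one covariate
shape = np.where(stratum == 0, 1.2, 0.8)         # stratum-specific baselines
scale = np.where(stratum == 0, 1.0, 2.0)
t = scale * rng.weibull(shape, n) * np.exp(-0.5 * x / shape)  # beta = 0.5
cens = rng.uniform(0.5, 3.0, n)                  # censoring times
obs, event = np.minimum(t, cens), (t <= cens).astype(float)

def negloglik(par):
    beta = par[0]
    k = np.exp(par[1 + stratum])                 # shapes k_s > 0
    lam = np.exp(par[3 + stratum])               # scales lam_s > 0
    H = (obs / lam) ** k * np.exp(x * beta)      # cumulative hazard
    logh = np.log(k / lam) + (k - 1) * np.log(obs / lam) + x * beta
    return -(event * logh - H).sum()

fit = minimize(negloglik, np.zeros(5), method="Nelder-Mead")
print("beta, log-shapes, log-scales:", np.round(fit.x, 3))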

Abstract:

This work presents a study of the smoothness attained by the methods most frequently used to choose the smoothing parameter in the context of splines: Cross Validation, Generalized Cross Validation, and the corrected Akaike and Bayesian Information Criteria, implemented with Penalized Least Squares. It is concluded that the amount of smoothness strongly depends on the length of the series and on the type of underlying trend, while the presence of seasonality, even though statistically significant, is less relevant. The intrinsic variability of the series is not statistically significant and its effect is taken into account only through the smoothing parameter
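
The hat-matrix computation behind criteria such as GCV can be illustrated with a discrete penalized-least-squares smoother (a Whittaker smoother with a second-order difference penalty), a simplification of the spline setting; the notation and grid below are ours.

# GCV selection for the smoother f = (I + lam * D'D)^(-1) y, where D is
# the second-difference matrix: GCV(lam) = n * RSS / (n - tr(H))**2.
import numpy as np

def whittaker_gcv(y, lambdas=10.0 ** np.arange(-2, 6)):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # second differences
    best = None
    for lam in lambdas:
        H = np.linalg.inv(np.eye(n) + lam * D.T @ D)   # hat matrix
        f = H @ y
        rss = ((y - f) ** 2).sum()
        gcv = n * rss / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, f)
    return best[1], best[2]

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
y = np.sin(4 * np.pi * t) + rng.normal(0, 0.3, 200)
lam, smooth = whittaker_gcv(y)
print("chosen lambda:", lam)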

Resumen:

Las reformas de 1928 y 1934 a la Constitución mexicana trajeron una serie de cambios de forma y fondo a la Suprema Corte de Justicia de la Nación, que han sido analizados desde la teoría constitucional y a partir de los cuales se puede estudiar el periodo liberal de la Corte, que muestra cómo, a pesar del trasfondo político imperante, la Corte construyó y desarrolló su propio modelo judicial

Abstract:

The 1928 and 1934 reforms to the Mexican Constitution brought a series of changes of form and substance to the Supreme Court of Justice of the Nation. These changes have been analyzed from the standpoint of constitutional theory, and they make it possible to study the Court's liberal period, which shows how, in spite of the prevailing political background, the Court constructed and developed its own judicial model

Resumen:

El nuevo modelo de control de la regularidad constitucional y el advenimiento del llamado "paradigma constitucionalista" demandan una buena cantidad de ajustes a nuestro sistema jurídico, tanto en el ámbito legislativo como en el jurisprudencial. Un mandato constitucional que condiciona estos cambios de manera preponderante es el principio pro persona. En este trabajo demostramos cómo la Suprema Corte de Justicia no ha sido precisamente consistente a la hora de conjugar este importante principio con los diferentes problemas que va resolviendo. A menos que pensemos que la jurisprudencia de la Corte es infalible, no encontramos ninguna razón que justifique su inaplicación a cargo de los jueces ordinarios mediante el control difuso. Tampoco podemos admitir que la Corte sea impermeable con relación al principio pro persona. En este trabajo, reflexionamos sobre estos problemas a propósito de un expediente de reciente resolución: la CT 299/2013

Abstract:

The new model of constitutional adjudication and the advent of the so-called "constitutionalist paradigm" demand quite a few adjustments in the Mexican legal system, both in the legislative field and in the judicial one. The "pro personae" principle must compel and inspire these changes. In this paper we demonstrate how the Supreme Court of Justice has not been consistent when expounding this important principle through judicial review. Unless we think that the Supreme Court is infallible, we do not find a reason that justifies this position. Nor can we admit that the Supreme Court is impermeable to the "pro personae" principle. In this paper, we reflect upon these issues by analyzing the recent decision in C.T. 299/2013 (a conflict between jurisdictional criteria)

Resumen:

El presente artículo tiene como objetivo plantear algunas reflexiones respecto a la forma en que el orden jurídico mexicano regula los cuidados paliativos y su conexión con el debate acerca de la muerte asistida. En él, los autores analizan el contenido de la Ley General de Salud –a partir de una reforma publicada en enero de 2009–, su Reglamento en Materia de Prestación de Servicios de Atención Médica y las demás disposiciones aplicables a los cuidados paliativos, con el objetivo de impulsar el debate público en torno a las diversas formas de asistencia que pueden recibir las personas con una enfermedad que no responde al tratamiento curativo

Abstract:

This article analyzes the Mexican regulation on palliative care and its relationship with the public debate on assisted death or suicide. This paper focuses on the rights that people with incurable diseases have, given the current contents of the General Health Statute and other applicable rules. Its main purpose is to activate the public debate on these matters

Resumen:

La Primera Sala de la Suprema Corte de Justicia de la Nación resolvió, por mayoría de cuatro votos, un amparo en el que había que dilucidar si ciertos artículos de la Norma Oficial Mexicana NOM-174-SSA1-1998, Para el Manejo Integral de la Obesidad resultaban o no violatorios de derechos humanos. En la sentencia aprobada por la mayoría se concluye que las restricciones son contrarias a la libertad prescriptiva o terapéutica, la cual, a su juicio, es parte esencial del derecho al trabajo. El ministro Cossío Díaz votó en contra y emitió un voto particular en el cual estimó que, en primer lugar, la libertad prescriptiva no es parte esencial del referido derecho, sino que funge como criterio orientador de la profesión médica. En segundo lugar, señala que el derecho al trabajo no es absoluto, toda vez que admite ciertos límites, siempre y cuando sean válidos constitucionalmente. Por ello, para determinar si son constitucionalmente válidos los requisitos que establece la Norma Oficial Mexicana (NOM), se debió haber solicitado la opinión de expertos, para poder justificar la introducción de aquéllos en la NOM. Finalmente, manifestó que el estudio debió de ponderar el derecho al trabajo con el de la salud del paciente, toda vez que este último es el que se pretendió tutelar con la NOM impugnada

Abstract:

The First Chamber of the Mexican Supreme Court of Justice decided, by a majority of four votes, a case in which it had to be determined whether certain articles of a Mexican Official Norm (NOM) on obesity violated human rights. The majority in the Chamber concluded that the restrictions went against physicians' prescriptive or therapeutic freedom and, therefore, their freedom to work. Justice Cossío Díaz voted against the judgment and wrote a separate opinion in which he holds, first of all, that prescriptive freedom works as a guideline for the medical profession and is not an essential element of the freedom to work. Secondly, he points out that the freedom to work is not an absolute right, for it has certain limits permitted by the Constitution. Consequently, experts' opinions should have been sought in order to determine whether the NOM's requirements were in accordance with the Constitution. Finally, he considers that the judgment should have introduced a balancing test between the freedom to work and the patient's right to health, since this last-mentioned right was what the NOM intended to protect. (Gac Med Mex. 2013;149:686-90)

Abstract:

In the session of January 23, 2013, the First Chamber of the Supreme Court of Justice of the Nation decided the above-cited case by a majority of three votes. Here I set out the considerations behind my disagreement with the approval of the final text of the judgment and with the reasons supporting that decision

Resumen:

En las sesiones del 17, 18, 20, 24, 25 y 27 de mayo de 2010, el Pleno de la Suprema Corte de Justicia reconoció la validez de la modificación de la “Norma oficial mexicana NOM-190-SSA1-1999, prestación de servicios de salud. Criterios para la atención médica de violencia familiar”, para quedar como “Norma oficial mexicana NOM-046-SSA2-2005, violencia familiar, sexual y contra las mujeres. Criterios para la prevención y atención”, publicada en el Diario Oficial de la Federación el 16 de abril de 2009, impugnada por el gobernador del estado de Jalisco, quien señaló que la norma era violatoria de los artículos 4, 5, 14, 16, 20, 21, 29, 31, fracción IV, 49, 73, 74, 89, fracción I, 123, 124 y 133 de la Constitución Federal. En dicha resolución, la Suprema Corte de Justicia determinó la constitucionalidad de la obligación a cargo de los integrantes del Sistema Nacional de Salud, de ofrecer y suministrar la denominada pastilla del “día siguiente” o de “emergencia” a las mujeres víctimas de violación sexual. De acuerdo con el artículo 5 de la Ley General de Salud, el Sistema Nacional de Salud abarca tanto los hospitales privados como los públicos, ya sean locales o federales. Lo que quiere decir que todos los hospitales se encuentran obligados a atender las disposiciones contenidas en la norma oficial impugnada. Dada la importancia de la determinación de la Corte en el ámbito médico nacional, en el presente artículo se analizan los puntos más relevantes del referido fallo y sus implicaciones

Abstract:

This article summarizes the Court's ruling regarding the constitutionality of the Official Norm "NOM-046-SSA2-2005". Jalisco's Governor challenged the validity of the referred norm, arguing that it was against articles 4, 5, 14, 16, 20, 21, 29, 31-IV, 49, 73, 74, 89-I, 123, 124 and 133 of the Federal Constitution. The Supreme Court disregarded the Governor's claim and determined that the members of the National Health System are obliged to offer and give the "day after pill" to rape victims. According to article 5 of the General Health Law, the National Health System includes private and public hospitals, whether local or federal. This means that all these health institutions have the obligation to observe the dispositions contained in the challenged Official Norm. Given the significance of the Court's ruling in the medical sphere, this article analyzes the most relevant issues of the Court's decision and its implications

Resumen:

La función que los médicos cumplen en la sociedad es muy importante desde diversos ángulos. No obstante, las actividades que desarrollan no pueden quedar fuera del control legal en la medida en que está en juego en muchos casos la salud o incluso la vida de otras personas. Por ello, en el presente artículo se analiza, a partir de una sentencia emitida por la Primera Sala de la Suprema Corte de Justicia de la Nación, el equilibrio que debe existir entre el derecho al trabajo de los médicos y el derecho de las personas a ver protegida su salud, tomando como referencia el análisis que dicho tribunal hizo en la revisión de un juicio de amparo respecto a la constitucionalidad del artículo 271 de la Ley General de Salud, destacando que dicho análisis se hizo teniendo en cuenta los estándares internacionales en materia de derechos humanos existentes. Asimismo, se analizan aspectos relacionados con quiénes son las autoridades competentes para otorgar títulos académicos médicos, y cómo el referido artículo de la Ley General de Salud era compatible con otros derechos constitucionales y la labor de los médicos

Abstract:

The role physicians play in society is very important from different perspectives. Nevertheless, their activities cannot remain outside the legal sphere, since in many cases the health and even the life of other people are at stake. Taking as a starting point a judgment issued by the First Chamber of Mexico's Supreme Court, we describe the balance that must exist between physicians' right to work and people's right to the protection of their health, with reference to the Court's review, in an amparo proceeding, of the constitutionality of article 271 of the General Health Law (Ley General de Salud), an analysis conducted in light of existing international human rights standards. Other aspects of the analysis include which authorities are competent to grant medical degrees and how the referred article of the General Health Law is compatible with physicians' daily work and other constitutional rights

Abstract:

For the adequate development of the subject of this work, it is convenient to note from the outset (even though we later develop the topic in detail) that the Mexican system of constitutional justice has the peculiarity that it can simultaneously be characterized as concentrated, in that only the organs of the Federal Judicial Power are competent to exercise the control of constitutional regularity, and diffuse, because such control is exercised by the different jurisdictional organs that make up that Power (Supreme Court of Justice, circuit courts, district judges). Thus, an adequate description of what we may call "Mexican constitutional justice" requires analyzing distinct organs (the Supreme Court of Justice, circuit courts and district courts) and diverse procedures (the juicio de amparo, constitutional controversies and actions of unconstitutionality); hence it is pertinent to introduce these distinctions from the outset

Abstract:

Water regulation in Mexico rests on the Mexican Constitution and on the interpretation of that law by the Mexican Supreme Court. Mexican lawyers, on the other hand, tend to ignore those interpretations and look to the text of the Constitution itself. This article argues against that approach and points to the importance of new ways of making decisions

Resumen:

Por diversos motivos un gobierno puede querer inducir a la banca comercial a que otorgue crédito a un determinado sector. Debido a ello, en este artículo se analiza una familia de contratos que puede generar tal resultado pero sin que traiga consigo un comportamiento perverso por parte de acreedores o deudores - situación que tradicionalmente se presenta con la banca de desarrollo. Para llevar a cabo tal análisis se consideró la existencia de información asimétrica en dos frentes: i) entre el gobierno y la banca comercial (el crédito puede ser desviado a miembros que no pertenezcan al grupo-objetivo del gobierno), y ii) entre la banca comercial y los deudores (estos últimos pueden utilizar el crédito en actividades distintas de las acordadas). Tomando esto en consideración, se muestra que el contrato aquí elaborado es superior a otro tipo de políticas en cuanto que genera menores tasas de interés y una mayor disposición de la banca a cooperar en la consecución del objetivo gubernamental

Abstract:

For several reasons, governments may want to encourage commercial banks to offer credit to some specific groups. This paper analyzes a contract scheme that may help achieve this objective without inducing financial disintermediation or other adverse behavior. Two sources of information asymmetry are considered: between the government and the bank (credit might be diverted to borrowers not belonging to the targeted group); and between the bank and the borrower (the latter may divert credit to purposes other than stated). Compared to other policies, the scheme analyzed here is superior since it brings about lower interest rates and higher cooperation from banks to achieve the government's objective

Resumen:

Se presenta la doctrina de Santo Tomás de Aquino, fundada en aquella de Aristóteles, relativa al auténtico sentido de la verdad práctica, concebida como diferente de la especulativa, caracterizada por su fin específico, consistente en poner en la existencia una obra exterior al hombre (poiesis) o bien una acción propiamente humana (praxis). Para actuar rectamente se requiere dirigir la acción en conformidad con un apetito recto. De este modo la vida moral no es obra de puro conocimiento sino que requiere necesariamente la participación de los apetitos, los cuales, si están moldeados por la prudencia, permitirán actuar rectamente

Abstract:

The doctrine of St. Thomas Aquinas, founded on that of Aristotle, is presented here. It concerns the authentic meaning of practical truth, conceived as different from speculative truth and characterized by its specific end, which consists in bringing into existence either a work external to man (poiesis) or a properly human action (praxis). In both practical domains, reason is directed toward movement that is enacted by the will. To act righteously, it is necessary to direct action in accordance with an upright appetite, which ultimately means to act in a prudent manner. In this way, the moral life is not the work of pure knowledge but necessarily requires the participation of the appetites, which, if molded by prudence, will allow one to act righteously

Abstract:

This paper reports an experiment designed to shed light on an empirical puzzle observed by Dufwenberg and Gneezy (Games and Economic Behavior 30: 163-182, 2000) that the size of the foregone outside option by the first mover does not affect the behavior of the second mover in a lost wallet game. Our conjecture was that the original protocol may not have made the size of the forgone outside option salient to second movers. Therefore, we change two features of the Dufwenberg and Gneezy protocol: (i) instead of the strategy method we implement a direct response method (sequential play) for the decision of the second mover; and (ii) we use paper money certificates that are passed between the subjects rather than having subjects write down numbers representing their decisions. We observe that our procedure yields qualitatively the same result as the Dufwenberg and Gneezy experiment, i.e., the second movers do not respond to the change in the outside option of the first movers

Abstract:

In the mid-1980s, Mexico undertook major trade reform, privatization and deregulation. This coincided with a rapid expansion in wages and employment that led to a rise in wage dispersion. This paper examines the role of industry- and occupation-specific effects in explaining the growing dispersion. We find that despite the magnitude and pace of the reforms, industry-specific effects explain little of the rising wage dispersion. In contrast occupation-specific effects can explain almost half of the growing wage dispersion. Finally, we find that the economy became more skill-intensive and that this effect was larger for the traded sector because this sector experienced much smaller low-skilled employment growth. We therefore suggest that competition from imports had an important role in the fall of the relative demand for less-skilled workers

Abstract:

Technology and mobile devices have been successfully integrated into people's everyday activities. Educational institutions around the world are increasingly interested in creating mobile learning (ML) environments, given the advantages of connectivity, situated learning, individualized learning, social interactivity, portability, affordability and, more broadly, ubiquity. Despite the fast development of ML environments, however, there is a lack of understanding about the factors that influence ML adoption. This paper proposes a framework for ML adoption integrating a modified Unified Theory of Acceptance and Use of Technology (UTAUT) with constructs from the Expectation-Confirmation Theory (ECT). Since the goal of education is learning, this research includes individual attributes such as learning styles (LS) and experience to understand how they moderate ML adoption and actual use. For this reason, the framework brings together adoption theory for initial use and the constructs of continuance intention for actual and habitual use as an outcome of learning. The framework is divided into two stages, acceptance and actual use. The purpose of this paper is to test the first stage, ML acceptance, using the structural equation modeling statistical technique. The data was collected from students who are already experiencing ML. Findings demonstrate that the performance and effort expectation constructs are significant predictors of ML adoption and that LS and experience exert some influence as moderators of ML adoption. The practical implication for educational services is to incorporate the influence of LS when designing strategies for learning enhanced by mobile devices

Abstract:

Mobile learning (ML) has been growing recently. The successful adoption of this technology to enhance learning environments could represent a major contribution to education. The purpose of this research is to investigate the effect of learning styles on ML adoption. The research will proceed in four stages: an exploratory study, a systematic literature review, model development and model testing. The results could serve as a guideline to encourage ML and to help mobile device manufacturers and developers generate new mobile applications for education

Abstract:

As mobile technology evolves, the possibilities for Mobile Learning (ML) are becoming increasingly attractive. However, the lack of perceived learning value and of institutional infrastructure is hindering ML attempts. The purpose of our study is to understand the use and adoption of mobile technologies by teachers in a business school. We developed a questionnaire based on current research on the use of technology in higher education and used it to interview 14 teachers. Participants provided insights about ML opportunities, such as availability, interactive environments, enhanced communication and inclusion in daily activities. Participants also recognized that current teaching practices should change in mobile environments to include relevant information, organize mobile materials, encourage reflection and create interactive activities with timely feedback. Further, they identified technological, institutional, pedagogical and individual obstacles that threaten ML practices

Abstract:

The evolution of mobile technology and mobile applications has increased the possibilities for mobile learning (ML). However, the lack of perceived learning value and of institutional infrastructure is hindering ML attempts. The purpose of our study is to understand the opportunities and obstacles of mobile technologies as perceived by teachers in higher education. A questionnaire was developed based on current research on technology adoption in higher education and was used to interview 14 teachers. Participants were asked to identify different issues associated with using mobile technology in education. In response, participants provided insights about ML perception, such as opportunities to enhance communication with students, availability of resources and immediate feedback. Finally, they identified technological, institutional, pedagogical and individual obstacles that threaten ML practices

Resumen:

Lawyers in New Spain were granted the right to wear bobbin lace cuffs on their robes, a privilege otherwise reserved to high ecclesiastical authorities and one preserved today in the robed sessions of the Colegio. The robe is the garment proper to the lawyer's profession, the professional dress of jurists. Alfonso X imposed the garnacha without vuelillos (lace cuffs) as the professional garment of jurists at the courts of Jerez de la Frontera in April 1267. In Spain today, the vuelillos remain reserved to judges and to members of the governing boards of the bar associations. The privilege of wearing white bobbin lace was granted at the request of the IRCAM, which since its foundation had shared in the preeminences and prerogatives enjoyed by the Bar Association of Madrid. This confirms the conception of a professional elite that distinguished the profession in the eighteenth century. The granting of the privilege sought to end the confusion between the attire of lawyers and that of other professions; it thus had a practical purpose: to distinguish lawyers from the rest of those who wore robes

Abstract:

Lawyers in New Spain were granted the right to adorn their robes with bobbin lace cuffs, a privilege otherwise reserved to high ecclesiastical authorities; this tradition has since been observed in the ceremonial robed sessions of the bar. The robe is the garment distinctive of the lawyer's profession, the professional dress of jurists. Alfonso X imposed the garnacha without lace cuffs as the professional garment of jurists at the courts of Jerez de la Frontera in April 1267. Those cuffs remain reserved in Spain today to judges and to members of the governing boards of the bar associations. The privilege of wearing white lace cuffs was granted at the request of the IRCAM, which since its establishment had enjoyed the same preeminences and prerogatives as the Bar Association of Madrid. The measures taken confirm the conception of a professional elite that distinguished the profession in the eighteenth century. The granting of the privilege was intended to end the prevailing confusion between the attire of lawyers and that of other learned professions; it thus served a practical purpose: to set lawyers apart from the other wearers of robes

Résumé:

The right to wear white lace cuffs on their robes was granted to the lawyers of New Spain, a privilege that was reserved to the highest ecclesiastical authorities and that is preserved today in the solemn sessions of the Bar. The robe is a garment proper to the lawyer's profession; it is the professional dress of jurists. Alfonso X imposed the "garnacha sin vuelillos" (forerunner of the robe as we know it today) as the official garment of jurists at the court of Jerez de la Frontera in April 1267. In Spain today, the cuffs remain reserved to judges and to members of the governing boards of the bar associations. The right to use white lace cuffs resulted from a petition by the IRCAM, which, since its foundation, benefited from the preeminence and prerogatives enjoyed by the Madrid bar. These usages confirm the concept of a professional elite that distinguished the lawyer's profession in the eighteenth century. The granting of this privilege was intended to end the confusion between the dress of lawyers and that of other professions. It thus served a practical purpose: to distinguish lawyers from the other professionals who wore robe-like garments

Abstract:

Information workers and software developers are exposed to work fragmentation, an interleaving of activities and interruptions during their normal work day. Small-scale observational studies have shown that this can be detrimental to their work. In this paper, we perform a large-scale study of this phenomenon for the particular case of software developers performing software evolution tasks. Our study is based on several thousand interaction traces collected by Mylyn and the Eclipse Usage Data Collector. We observe that work fragmentation is correlated with lower observed productivity at both the macro level (for entire sessions) and the micro level (around markers of work fragmentation); further, longer activity switches seem to strengthen the effect, and different activities seem to be affected differently. These observations give ground for subsequent studies investigating the phenomenon of work fragmentation

Resumen:

In 1976, Nagel and Williams presented, at a meeting of the Aristotelian Society, two celebrated texts aimed at exhibiting the challenge that chance and fortune pose to the Kantian imputation of moral responsibility. Since then, hundreds of articles focused on analyzing this dilemma have proliferated in the literature. This debate, however, is rarely situated within an analysis of the implausible and false premises that give rise to it. In this paper I reconstruct the central coordinates in which this philosophical problem originates. I then show that the imputation of ethical responsibility to an agent not only does not exclude, but even presupposes, what I will call an "impure" capacity for agency in which luck occupies a central place

Abstract:

In 1976, Nagel and Williams presented, at a meeting of the Aristotelian Society, two famous texts aimed at exposing the challenge that chance and fortune represent for the imputation of moral responsibility. Since then, hundreds of articles focused on analyzing this dilemma have proliferated in the literature. This debate, however, is rarely situated within an analysis of the implausible and false premises that give rise to it. In this paper I reconstruct the central coordinates in which this philosophical problem originates. I then show that the imputation of ethical responsibility to an agent not only does not exclude, but even presupposes, what I will call an "impure" capacity for agency in which luck occupies a central place

Resumen:

All kinds of political anxieties tend to be articulated around the category "populism". Jan-Werner Müller is right to point out that the frivolous use of this concept has ended up making it synonymous with everything we dislike in governments that enjoy broad social support. The truth is that "populism" has been used in academic and political debate as an evaluative term rather than as an analytical category. In this text I intend to show that, in order to develop a correct theory of populism, we must begin by avoiding evaluative language and instead undertake an analytical task. I will show that this is an urgent task, whose first step is to differentiate between populism and democratic authoritarianism

Abstract:

All kinds of political anxieties are usually articulated around the category "populism". Jan-Werner Müller is correct in stating that the frivolous use of this concept has ended up making it synonymous with everything that we dislike in governments with a large popular base. The truth is that "populism" has been used in academic and political debate as an evaluative term rather than as an analytical category. In this text I propose to show that, in order to develop a correct theory of populism, we must begin by avoiding evaluative language and instead undertake an analytical task. I will argue that this is an urgent task, whose first step is to differentiate between populism and democratic authoritarianism

Abstract:

The new Latin American constitutionalism (NLC) is the term that has been coined to refer to certain constitutional processes and constitutional reforms that have taken place relatively recently in Latin America. Constitutional theorists have not been very optimistic regarding the scope and nature of this new constitutionalism. I thoroughly agree with this critical skepticism as well as with the idea that this new phenomenon does not substantively change the organic element of the different constitutions in the region. However, I argue that it is a mistake to focus analysis on this characteristic. My intention is to show that the NLC should be evaluated in the context of its relationship with contemporary neo-constitutional theory

Résumé:

The new Latin American constitutionalism is the term coined to refer to certain constitutional processes and reforms that have taken place relatively recently in Latin America. Constitutional theorists have not been very optimistic about the scope and nature of this new constitutionalism. I fully agree with this critical skepticism, as well as with the idea that this new phenomenon does not significantly change the organic element of the various constitutions in force in the region. However, I argue that it is a mistake to focus the analysis on this characteristic. My intention is to show that the new Latin American constitutionalism must be examined in relation to contemporary neo-constitutional theory

Resumen:

Marx's work has given rise to a long-standing controversy among its scholars. Some have maintained that the language developed in it is strictly explanatory. Such language would express above all a scientific knowledge expurgated of all moral content (about the structure of capital, the forces that cause social dynamics and the laws that govern it). At the other extreme, others have argued that in Marx we find instead an ethical language aimed at denouncing the crimes and miseries of a given social formation in order to oppose another to it. In this article I defend the idea that Marx's work contains elements supporting both claims. However, I argue that the present relevance of Marxist thought resides essentially in the ethical and normative elements that configure the moral dimension of his approach

Abstract:

Marx's work has given rise to a long-standing controversy among its scholars. Some have maintained that the language developed throughout it is strictly explanatory. Such language would express above all a scientific knowledge expurgated of all moral content (about the structure of capital, the forces that cause social dynamics and the laws that govern it). At the other extreme, however, others have argued that in Marx we find instead an ethical language oriented toward denouncing the crimes and miseries of a given social formation with the aim of opposing another to it. In this article I defend the idea that Marx's work contains elements supporting both claims. Nevertheless, I argue that the present relevance of Marxist thought resides essentially in the ethical and normative elements that configure the moral dimension of his approach

Resumen:

Democratic decision-making processes (like constitutional restrictions on majority rule) can be evaluated by their results, by their intrinsic value, or by a combination of both. I will show that a thorough analysis of these alternatives brings to light the most serious weaknesses in the usual ways of justifying constitutionalism. The theoretical grounding of the articulation between democracy and constitutionalism has remained caught in a trap that I seek to break. I will conclude by showing the need to move beyond epistemic and counter-epistemic arguments, suggesting guidelines that I believe have so far received little consideration in the classical literature on the subject

Abstract:

Democratic decision-making processes (as well as constitutional limits to majority rule) may be evaluated on the basis of their results, their intrinsic value or a combination of both. I will show that an in-depth analysis of these alternatives uncovers the most serious weaknesses in the usual models of justification for constitutionalism. The theoretical basis for the relationship between democracy and constitutionalism has remained stuck in a trap that I seek to break. I conclude by showing the need to move beyond epistemic and counter-epistemic arguments, proposing guidelines that I believe have so far received little consideration in the classical literature on the subject

Resumen:

This essay reflects on the crisis of the citizen institutions of the State and of civil society as a consequence of the current globalization process. One effect of this process is that local governments find themselves increasingly obliged to orient their policies according to the criteria of global economic flows. As a result, states see their management capacity overwhelmed and tend to sacrifice the interests of sectors they had previously protected. This text reflects on the phenomena of exclusion, violence and subalternity that such exclusion generates. Its aim is to carry out a critical exploration of three central analytical categories: empire, imperialism and multitude, starting from the important work published in 2000 by Hardt and Negri. Finally, it shows their importance for unveiling various phenomena derived from this global condition and the violence it generates, as well as the need to analyze Hardt and Negri's thought from certain Latin American coordinates of reflection

Abstract:

This essay is a reflection on the crisis of the citizen institutions of the State and of civil society as a consequence of the current globalization process. One effect of this process is that local governments find themselves increasingly obliged to orient their policies according to the criteria of global economic flows. As a result, states see their management capacity overwhelmed and tend to sacrifice the interests of sectors until then protected by them. This text reflects on the phenomena of exclusion, violence and subalternity that such exclusion generates. Its interest is to carry out a critical exploration of three central analytical categories: empire, imperialism and multitude, starting from the important work published in 2000 by Hardt and Negri. Finally, it shows their importance for unveiling diverse phenomena derived from this worldwide condition and the violence it generates, as well as the need to analyze Hardt and Negri's thought from certain Latin American coordinates of reflection

Resumen:

I propose to critically analyze the idea of epistemic abstinence developed by an important group of liberal theorists beginning in the 1980s. For the purposes of political liberalism, the proposal of epistemic abstinence plays a crucial role. It consists in guaranteeing consensus around procedural rules and public principles of justice by demanding that the plurality of interests and substantive conceptions coexisting in society abstain from making truth attributions about their own moral conceptions when these are debated in the public sphere. My argument is that this strategy fails because epistemic abstinence does not withstand the application of its own clauses to itself

Abstract:

The purpose of this paper is to discuss a thesis of Epistemic Abstinence that was developed by an important group of political theorists starting in the 1980s. The thesis is of central importance to political liberalism. It is meant to secure a consensus on procedural rules and public principles of justice by insisting that the many interests and fundamental conceptions that coexist in society abstain from making claims about the truth of their own moral precepts within the public sphere. I argue that this strategy breaks down because the thesis of Epistemic Abstinence cannot be applied to itself

Abstract:

This article offers an articulation of liberation philosophy, a Latin American form of political and philosophical thought that is largely not followed in European and Anglo-American political circles. Liberation philosophy has posed serious challenges to Jürgen Habermas's and Karl-Otto Apel's discourse ethics. Here I explain what these challenges consist of and argue that Apel's response to Latin American political thought shows that discourse ethics can maintain internal consistency only if it is subsumed under the program of liberation philosophy

Resumen:

Liberalism, in essence, consists in relegating pluralism to the private sphere in order to secure consensus in the public sphere. In this way, all controversial questions (above all, the discussion about truth) are removed from the agenda to create the conditions for a "rational" consensus. Consequently, the realm of politics becomes a terrain in which individuals, stripped of their most fundamental passions and beliefs, agree to submit to arrangements they consider (or are imposed on them as) neutral. Liberalism thus denies the dimension of the political (that is, polemos, the dynamic, the conflictive) in order to redirect it to the realm of politics (the polis, the place where conflict is reconciled). The purpose of this paper is to analyze and discuss in depth some of the main arguments that have been offered to justify this liberal strategy (basically in authors such as Rorty or Rawls). My conclusion will show how far this liberal strategy is from being unproblematic

Abstract:

The essence of liberalism is the relegation of pluralism to the private sphere in order to ensure consensus in the public sphere. In this way, all controversial issues (most notably, the debate on truth) are removed from the agenda to create the conditions for a "rational" consensus. Accordingly, the realm of politics becomes an arena in which individuals, stripped of their most fundamental beliefs and passions, show their agreement with arrangements which they consider to be (or are imposed on them as) neutral. Thus, liberalism denies the political dimension (i.e., the polemos, the dynamic, the conflictive) and brings it instead to the realm of politics (the polis, the place of reconciliation of the conflict). The aim of this paper is to analyze and discuss in depth some of the main arguments that have been offered in order to justify this liberal strategy (basically in authors such as Rorty and Rawls). My conclusion will show that this liberal strategy is far from being unproblematic

Abstract:

We study the effects of a nongovernmental civic inclusion campaign on the democratic integration of demobilized insurgents. Democratic participation ideally offers insurgents a peaceful channel for political expression and addressing grievances. However, existing work suggests that former combatants' ideological socialization and experiences of violence fuel hard-line commitments that may be contrary to democratic political engagement, threatening the effectiveness of postwar electoral transitions. We use a field experiment with demobilized FARC combatants in Colombia to study how a civic inclusion campaign affects trust in political institutions, democratic political participation, and preferences for strategic moderation versus ideological rigidity. We find the campaign increased trust in democracy and support for political compromise. Effects are driven by the most educated ex-combatants moving from more hard-line positions to ones that are in line with their peers and by ex-combatants who had the most violent conflict experience similarly moderating their views

Abstract:

Self-interruptions account for a significant portion of task switching in information-centric work contexts. However, most of the research to date has focused on understanding, analyzing and designing for external interruptions. The causes of self-interruptions are not well understood. In this paper we present an analysis of 889 hours of observed task switching behavior from 36 individuals across three high-technology information work organizations. Our analysis suggests that self-interruption is a function of organizational environment and individual differences, but also of the external interruptions experienced. We find that people in open office environments interrupt themselves at a higher rate. We also find that people are significantly more likely to interrupt themselves to return to solitary work associated with central working spheres, suggesting that self-interruption occurs largely as a function of prospective memory events. The research presented contributes substantially to our understanding of attention and multitasking in context

Abstract:

Law search is fundamental to legal reasoning and its articulation is an important challenge and open problem in the ongoing efforts to investigate legal reasoning as a formal process. This article formulates a mathematical model that frames the behavioral and cognitive framework of law search as a sequential decision process. The model has two components: first, a model of the legal corpus as a search space and second, a model of the search process (or search strategy) that is compatible with that environment. The search space has the structure of a "multi-network" (an interleaved structure of distinct networks) developed in earlier work. In this article, we develop and formally describe three related models of the search process. We then implement these models on a subset of the corpus of U.S. Supreme Court opinions and assess their performance against two benchmark prediction tasks. The first is to predict the citations in a document from its semantic content. The second is to predict the search results generated by human users. For both benchmarks, all search models outperform a null model, with the learning-based model outperforming the other approaches. Our results indicate that through additional work and refinement, there may be the potential for machine law search to achieve human or near-human levels of performance

Abstract:

This work focuses on historical volatility models in which the temporal and spatial dependencies inherent in the mean and variance are "simple". The empirical time series used are trade-by-trade common stock time series from the U.S. and Mexico, and from the Mexican ADRs traded in the U.S. Results from these three data sets provide information on the liquidity of the markets in these two countries

Abstract:

By using Monte Carlo simulation, one can obtain a closed-form solution for the price of the pure discount bond. In order to do this, the paths of the stochastic variables n and r must be simulated first. To properly simulate from the tails of the Normal distribution, so that the expected value of the martingale n converges to one, several sampling procedures are applied that are tailored to specifically emphasize sampling from the tails of a distribution
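
As a rough illustration of a tail-emphasizing procedure of this kind (a minimal sketch, not the paper's exact method), the following draws from a wider Normal proposal and reweights with the likelihood ratio, so that tail-sensitive expectations, such as the mean of a martingale-like functional, converge with fewer draws:

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_weighted_mean(f, n_draws=100_000, scale=3.0):
    """E[f(Z)], Z ~ N(0,1), sampled from N(0, scale^2) with likelihood-ratio weights."""
    x = rng.normal(0.0, scale, n_draws)             # proposal with heavy tail coverage
    # weight = standard-normal pdf / proposal pdf
    w = scale * np.exp(-0.5 * x**2 + 0.5 * (x / scale)**2)
    return np.mean(f(x) * w)

# Example: a martingale-like exponential functional whose mean should equal 1
est = tail_weighted_mean(lambda z: np.exp(z - 0.5))  # E[exp(Z - 1/2)] = 1
print(round(est, 3))
```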

Abstract:

A moral right to health or health care, given reasonable resource constraints, implies a reasonable array of services, as determined by a fair deliberative process. Such a right can be embodied in a constitution where it becomes a legal right with similar entitlements. What is the role of the courts in deciding what these entitlements are? The threat of “judicialization” is that the courts may overreach their ability if they attempt to carry out this task; the promise of judicialization is that the courts can do better than health systems have done at determining such entitlements. We propose a middle ground that requires the health system to develop a fair, deliberative process for determining how to achieve the progressive realization of the same right to health or health care and that also requires the courts to develop the capacity to assess whether the deliberative process in the health system is fair

Abstract:

This article analyses the importance of training as a creator of human capital, which enables a company to obtain competitive advantages that are sustainable in the long term and that result in greater profitability. The study is based on the general theoretical framework of resource and capability theory. It not only analyses the impact of training on performance; it also attempts to analyse the nature of that relationship in greater depth. To this end, an attempt has been made to measure explanatory capacity from two different perspectives: the universalistic approach and the contingent approach. Two hypotheses are first formulated that attempt to quantify the relationship from a universalistic perspective; two further hypotheses then incorporate the potential moderating effect of strategy into the model, in order to verify whether this improves the explanatory power of our model of analysis

Abstract:

Purpose – The aim of this paper is to determine whether the effort invested by service companies in employee training has an impact on their economic performance. Design/methodology/approach – The study centres on a labor-intensive sector, where the perception of service quality depends on who renders the service. To overcome the habitual problems of cross-sectional studies, the time effect has been considered by measuring data over a period of nine years, treated as panel data with fixed effects. Findings – The estimated models give clear empirical support to the hypothesis that training activities have a positive influence on company performance. Research limitations/implications – The results contribute empirical evidence about a relationship that, hitherto, has not been satisfactorily demonstrated. However, there may be some limitations related to the use of a training indicator based on effort rather than on results obtained, to the low representation of what happens in smaller companies that lack structured training policies, and to the absence of differentiation between generic and more specific training. Practical implications – The results can contribute towards increased manager awareness that training should be treated as an investment and not considered an expense. Originality/value – The main contributions can be summarized in three points: a training measure based on three dimensions has been used, which is arguably an improvement on the more frequent ways of measuring this variable; a consistent methodology was applied that had not previously been used in the analysis of this relationship; and clear empirical evidence has been obtained concerning a relationship that is frequently asserted on theoretical grounds but needs more empirical support

Abstract:

In a simple public good economy, we propose a natural bargaining procedure, the equilibria of which converge to Lindahl allocations as the cost of bargaining vanishes. The procedure splits the decision over the allocation in a decision about personalized prices and a decision about output levels for the public good. Since this procedure does not assume price-taking behavior, it provides a strategic foundation for the personalized taxes inherent in the Lindahl solution to the public goods problem
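
As background, the Lindahl benchmark that such a procedure provides a strategic foundation for can be stated compactly in standard textbook notation (not the paper's own formalism): each agent i faces a personalized price p_i per unit of the public good, every agent demands the same level G*, and the personalized prices cover the marginal cost c:

```latex
\[
p_i = \mathrm{MRS}_i(G^*) \;\; \text{for each } i,
\qquad
\sum_{i=1}^{n} p_i = c
\;\Longrightarrow\;
\sum_{i=1}^{n} \mathrm{MRS}_i(G^*) = c ,
\]
```

the last equality being the Samuelson condition for efficient public-good provision.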

Abstract:

Retail petroleum markets in Mexico are on the cusp of a historic deregulation. For decades, all 11,000 gasoline stations nationwide have carried the brand of the state-owned petroleum company Pemex and sold Pemex gasoline at federally regulated retail prices. This industry structure is changing, however, as part of Mexico's broader energy reforms aimed at increasing private investment. Since April 2016, independent companies can import, transport, store, distribute, and sell gasoline and diesel. In this paper, we provide an economic perspective on Mexico's nascent deregulation. Although in many ways the reforms are unprecedented, we argue that past experiences in other markets give important clues about what to expect, as well as about potential pitfalls. Turning Mexico's retail petroleum sector into a competitive market will not be easy, but the deregulation has enormous potential to increase efficiency and, eventually, to reduce prices

Abstract:

The financial crisis has brought the problems of regulatory failure and unbridled counterparty risk to the forefront in financial discussions. In the last decade, central counterparties have been created in order to face those insidious problems. In Mexico, both the stock and the derivatives markets have central counterparties, but the money market has not. This paper addresses the issue of creating a central counterparty for the Mexican money market. Recommendations that must be followed in the design and the management of risk of a central counterparty, given by international regulatory institutions, are presented in this study. Also, two different conceptual designs for a central counterparty, appropriate for the Mexican market, are proposed. Final conclusions support the creation of a new central counterparty for the Mexican money market

Abstract:

If two elections are held on the same day, why do some people choose to vote in one but abstain in another? We argue that selective abstention is driven by the same factors that determine voter turnout. Our empirical analysis focuses on Sweden, where the (aggregate) turnout gap between local and national elections has been about 2-3%. Rich administrative register data reveal that people from higher socio-economic backgrounds, immigrants, women, older individuals, and people who have been less geographically mobile are less likely to selectively abstain

Abstract:

This paper demonstrates that in procedural contexts of free proof, the proof sentences of a judicial decision (i.e. sentences of the kind "it is proven that p") have normative illocutionary force. On the one hand, in that context, "it is proven that p" expresses a value judgment of the judge. On the other hand, it is shown that "it is proven that p" is, in that context, a practical reason aiming to justify an action of the decision-maker: the acceptance of the factual statement as a premise of the judicial decision

Resumen:

Some versions of legal realism seek to reconcile the claim that law is a set of norms with a strong commitment to empiricism. According to the latter, law is not constituted by abstract entities of any kind but by empirically verifiable facts. To carry out this reconciliation, Riccardo Guastini has in several works defended a conception of normative propositions, i.e. existential assertions about legal norms, as theoretical statements about the law in force, necessarily referring to certain facts. The law in force is thus conceived as the set of texts resulting from the stable, consolidated and dominant interpretations that judges have carried out in their decisions within the legal order in question. Starting from this version of legal realism, this paper seeks to raise some doubts. First, about this way of conceiving normative propositions. Second, about the way legal theory is thereby configured. Third, and more generally, about the claim to reconcile the view of law as a set of norms with the empiricist thesis

Abstract:

Some versions of legal realism seek to reconcile the claim that law is a set of rules with a commitment to empiricism. According to the latter, law is not constituted by abstract entities of any kind, but by facts instead. Embracing this orientation, Riccardo Guastini has defended a conception of normative propositions, i.e. existential assertions about legal norms, as necessarily referring to certain facts. Specifically, law is conceived as a set of texts that are the result of stable, consolidated and dominant interpretations that judges have carried out in their decisions. Starting from this version of legal realism, this work seeks to raise some doubts. First, on this way of conceiving normative propositions. Second, on the way in which, as a consequence, legal theory is understood. Third, and more generally, on the claim to reconcile the view of law as a set of rules with the empiricist thesis

Abstract:

The paper applies a factor model to the study of risk sharing among U.S. states. The factor model makes it possible to disentangle movements in output and consumption due to national, regional, or state-specific business cycles from those due to measurement error. The results of the paper suggest that some findings of the previous literature which indicate a substantial amount of inter-state risk sharing may be due to the presence of measurement error in output. When measurement error is properly taken into account, the evidence points towards a lack of inter-state smoothing

Abstract:

Motivated by the dollarization debate in Mexico, we estimate an identified vector autoregression for the Mexican economy, using monthly data from 1976 to 1997, taking into account the changes in the monetary policy regime which occurred during this period. We find that (i) exogenous shocks to monetary policy have had no impact on output and prices; (ii) most of the shocks originated in the foreign sector; (iii) disturbances originating in the U.S. economy have been a more important source of fluctuations for Mexico than shocks to oil prices. We also study the endogenous response of domestic monetary policy by means of a counterfactual experiment. The results indicate that the response of monetary policy to foreign shocks played an important part in the 1994 crisis
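
For a concrete starting point, a reduced-form VAR of the kind underlying such an identified-VAR exercise can be fit in a few lines; the sketch below uses placeholder data and illustrative variable names, and omits the paper's identification scheme and regime breaks:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder for 264 monthly observations (1976-1997); the real exercise would
# use Mexican macro series and impose identifying restrictions on the shocks.
df = pd.DataFrame(
    np.random.default_rng(1).normal(size=(264, 3)),
    columns=["output", "prices", "exchange_rate"],
)
res = VAR(df).fit(maxlags=12, ic="aic")   # lag order chosen by AIC
irf = res.irf(24)                         # impulse responses over 24 months
print(res.summary())
```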

Abstract:

The conference on "Global Monetary Integration" addressed a number of questions related to the adoption of the US dollar as legal tender in emerging-market economies. The goal of the conference was to foster the policy debate on dollarization, not to resolve it, and on that score it succeeded

Abstract:

Mexican manufacturing job loss induced by competition with China increases cocaine trafficking and violence, particularly in municipalities with transnational criminal organizations. When it becomes more lucrative to traffic drugs because changes in local labor markets lower the opportunity cost of criminal employment, criminal organizations plausibly fight to gain control. The evidence supports a Becker-style model in which the elasticity between legitimate and criminal employment is particularly high where criminal organizations lower illicit job search costs, where the drug trade implies higher pecuniary returns to violent crime, and where unemployment disproportionately affects low-skilled men

Abstract:

The paper investigates how the relative contribution of external factors to stock price movements varies with the degree of financial development. It is found that financial development makes stock markets more susceptible to external influences (both financial and macroeconomic). Interestingly, this effect is present even after having accounted for capital controls and international trade effects

Abstract:

We consider the spatial circular restricted three-body problem, on the motion of an infinitesimal body under the gravity of Sun and Earth. This can be described by a 3-degree of freedom Hamiltonian system. We fix an energy level close to that of the collinear libration point L1, located between Sun and Earth. Near L1 there exists a normally hyperbolic invariant manifold, diffeomorphic to a 3-sphere. For an orbit confined to this 3-sphere, the amplitude of the motion relative to the ecliptic (the plane of the orbits of Sun and Earth) can vary only slightly. We show that we can obtain new orbits whose amplitude of motion relative to the ecliptic changes significantly, by following orbits of the flow restricted to the 3-sphere alternatively with homoclinic orbits that turn around the Earth. We provide an abstract theorem for the existence of such ‘diffusing’ orbits, and numerical evidence that the premises of the theorem are satisfied in the three-body problem considered here. We provide an explicit construction of diffusing orbits. The geometric mechanism underlying this construction is reminiscent of the Arnold diffusion problem for Hamiltonian systems. Our argument, however, does not involve transition chains of tori as in the classical example of Arnold. We exploit mostly the ‘outer dynamics’ along homoclinic orbits, and use very little information on the ‘inner dynamics’ restricted to the 3-sphere. As a possible application to astrodynamics, diffusing orbits as above can be used to design low cost maneuvers to change the inclination of an orbit of a satellite near L1 from a nearly-planar orbit to a tilted orbit with respect to the ecliptic. We explore different energy levels, and estimate the largest orbital inclination that can be achieved through our construction
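
For orientation, the spatial circular restricted three-body problem is commonly written as the following 3-degree-of-freedom Hamiltonian in rotating (synodic) coordinates, with mass parameter \mu and primaries at (-\mu, 0, 0) and (1-\mu, 0, 0) (a textbook normalization; the paper's conventions may differ):

```latex
\[
H(x,y,z,p_x,p_y,p_z)
= \tfrac{1}{2}\left(p_x^2 + p_y^2 + p_z^2\right) + y\,p_x - x\,p_y
- \frac{1-\mu}{r_1} - \frac{\mu}{r_2},
\]
\[
r_1 = \sqrt{(x+\mu)^2 + y^2 + z^2},
\qquad
r_2 = \sqrt{(x-1+\mu)^2 + y^2 + z^2} .
\]
```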

Abstract:

Rapidly exploring Random Trees (RRTs) are effective for a wide range of applications ranging from kinodynamic planning to motion planning under uncertainty. However, RRTs are not as efficient when exploring heterogeneous environments and do not adapt to the space. For example, in difficult areas an expensive RRT growth method might be appropriate, while in open areas inexpensive growth methods should be chosen. In this paper, we present a novel algorithm, Adaptive RRT, that adapts RRT growth to the current exploration area using a two-level growth selection mechanism. At the first level, we select groups of expansion methods according to the visibility of the node being expanded. At the second, we use a cost-sensitive learning approach to select a sampler from the chosen group of expansion methods. We also propose a novel definition of visibility for RRT nodes which can be computed in an online manner and used by Adaptive RRT to select an appropriate expansion method. We present the algorithm and experimental analysis on a broad range of problems, showing not only its adaptability, but also the efficiency gains achieved by adapting exploration methods appropriately
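
The two-level selection mechanism can be pictured with a short, hypothetical sketch (illustrative method names and a simple cost-sensitive rule, not the authors' implementation): a group of expansion methods is chosen from the node's visibility, then a method within the group is drawn with probability favoring high reward per unit cost, updated online:

```python
import math, random

class MethodStats:
    def __init__(self):
        self.reward, self.cost = 1.0, 1.0   # optimistic prior to encourage exploration

# Hypothetical expansion methods, grouped by node visibility
GROUPS = {
    "low_visibility":  ["extend_small_step", "obstacle_guided_sampler"],
    "high_visibility": ["extend_greedy", "goal_biased_sampler"],
}
stats = {m: MethodStats() for ms in GROUPS.values() for m in ms}

def select_method(node_visibility, temperature=0.5):
    """Level 1: pick group by visibility. Level 2: softmax over reward/cost."""
    group = GROUPS["high_visibility" if node_visibility > 0.5 else "low_visibility"]
    weights = [math.exp((stats[m].reward / stats[m].cost) / temperature) for m in group]
    return random.choices(group, weights=weights)[0]

def update(method, succeeded, elapsed):
    """After attempting an expansion, credit the method with its outcome and cost."""
    s = stats[method]
    s.reward += 1.0 if succeeded else 0.0
    s.cost += elapsed
```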

Abstract:

Universal access to safe drinking water and sanitation facilities is an essential human right, recognised in the Sustainable Development Goals as crucial for preventing disease and improving human wellbeing. Comprehensive, high-resolution estimates are important to inform progress towards achieving this goal. We aimed to produce high-resolution geospatial estimates of access to drinking water and sanitation facilities

Abstract:

This paper proposes a strategy to estimate the community structure of a network, accounting for the empirically established fact that communities and links are formed based on homophily. It presents a maximum likelihood approach to rank community structures, where the set of possible community structures depends on the set of salient characteristics and the probability of a link between two nodes varies according to the characteristics of the two nodes. This approach has good large-sample properties, which lead to a practical algorithm for the estimation. To exemplify the approach, it is applied to data collected from four village clusters in Ghana
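
A stripped-down version of the likelihood-ranking idea, with a simplified Bernoulli link model in which homophily raises the within-community link probability (illustrative only, much coarser than the paper's estimator):

```python
import numpy as np
from itertools import combinations

def log_likelihood(adj, traits, partition, p_same=0.3, p_diff=0.05):
    """adj: 0/1 symmetric matrix; traits: node characteristics; partition: node -> community."""
    ll = 0.0
    for i, j in combinations(range(len(traits)), 2):
        # links between same-community, same-trait nodes are most likely (homophily)
        p = p_same if (partition[i] == partition[j] and traits[i] == traits[j]) else p_diff
        ll += np.log(p) if adj[i, j] else np.log(1 - p)
    return ll

# Rank two candidate community structures on a 3-node toy network
adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
traits = ["a", "a", "b"]
cands = [{0: 0, 1: 0, 2: 1}, {0: 0, 1: 1, 2: 1}]
best = max(cands, key=lambda part: log_likelihood(adj, traits, part))
print(best)   # the partition grouping the linked, same-trait nodes wins
```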

Abstract:

This paper examines the role of identity in the fragmentation of networks by incorporating the choice of commitment to identity characteristics into a non-cooperative network formation game. The Nash network features divisions based on identity, with layers within these divisions. Using the refinement of strictness I find structures where stars of highly committed players are linked together by less committed players

Abstract:

This paper analyzes the effect of the transfer of information by an informed strategic trader (owner) to another strategic player (buyer). It shows that while the owner will never fully divulge his information, he may transfer a noisy signal of his information to the buyer. With such a transfer, the owner loses some of his informational superiority and yet increases his trading profit. I also show that if the transfer can be made to more than one buyer, then, the owner’s profit is increasing in the number of other buyers to whom the transfer is made

Abstract:

Much of what we know about the alignment of voters with parties comes from mass surveys of the electorate in the postwar period or from aggregate electoral data. Using individual elector-level panel data from nineteenth-century United Kingdom poll books, we reassess the development of a party centered electorate. We show that (a) the electorate was party-centered by the time of the extension of the franchise in 1867, (b) a decline in candidate-centered voting is largely attributable to changes in the behavior of the working class, and (c) the enfranchised working class aligned with the Liberal left. This early alignment of the working class with the left cannot entirely be explained by a decrease in vote buying. The evidence suggests instead that the alignment was based on the programmatic appeal of the Liberals. We argue that these facts can plausibly explain the subsequent development of the party system

Abstract:

This article offers an analytical critique of the position of Basu, Haas, and Moraitis, who, by extending the conventional linear system for the simultaneous determination of value, argue that in Marx's economic theory the intensification of work generates absolute rather than relative surplus value. This position is also contrasted with Marx's original theory to verify its incompatibility. As an alternative, seeking to rectify the role of labor intensification as a generator of relative surplus value, this work incorporates labor intensity into the Temporal Single System Interpretation (TSSI), showing its full compatibility with Marx's original theory

Resumen:

In recent years, sustainability has begun to be taken into account on companies' agendas, with the objective of avoiding damage to nature and generating comprehensive change not only in environmental matters but also in social, economic and cultural ones. Hence the relevance of higher education institutions incorporating it into undergraduate curricula, among them Accounting, so that these programs are aligned with the United Nations Sustainable Development Goals

Abstract:

This article was meant to be about poetry and International Relations (IR). It ended up being about trans/feminist and cuir art and critique with love and care among people of color. This is what praxis does to academic thinking; it disrupts the methods as much as it troubles the aesthetics

Resumen:

This article analyzes the reasons why the electoral tribunal upholds or overturns the fines that the IFE imposes on Mexican political parties as a result of auditing their income and expenditures. The expectations of the judicial politics literature are partially confirmed: it predicts that specialized courts, such as the electoral tribunal in Mexico, are more likely to overturn, for strategic reasons, the decisions of the specialized agencies they review. Analyzing 1671 fines challenged between 1997 and 2010, it concludes that although the magistrates uphold three out of four fines, they overturn IFE decisions on visible issues, such as campaign spending, or when politically relevant elites are the ones challenging

Abstract:

I analyze the main determinants of why the electoral tribunal upholds or overturns fines imposed by the IFE on Mexico's political parties as a result of audits of political spending. I find evidence that partially supports the hypotheses developed by the judicial politics literature, which states that specialized courts, such as the electoral tribunal, are more likely to overturn the decisions of a specialized agency for strategic reasons. Analyzing 1671 fines challenged between 1996 and 2010, I conclude that although the magistrates uphold three out of four fines, they overturn the IFE's decisions when a salient issue, such as campaign spending, is involved or when relevant political elites challenge the fines imposed

Resumen:

Exploring the relationship between religion, evasion and involvement in Latin America, we test the evasion hypothesis, which predicts that time devoted to the Church reduces time devoted to politics. We analyze 24 countries surveyed by the AmericasBarometer in 2010, studying attendance at religious services, participation in church groups, the importance of religion, trust in churches, and the subnational presence of the Catholic Church, using data from the 2005 Pontifical Yearbook. We find evidence in favor of the evasion hypothesis among those who attend worship services and where the Catholic Church has a greater presence, while involvement increases among those who participate in church groups

Abstract:

This paper describes the integration of an artificial vision system into a flexible production cell. The production cell consists of a material storage box with an artificial vision system (AVS) and a 5-DOF Scorbot ER 4 robot. The camera system detects the geometry of the raw material, and this information is sent to the robot, which starts moving to pick up the material for further processing. The Cartesian coordinates are processed so that the robot joints can be positioned correctly. The described system is part of an ongoing development of a smart factory for research and educational purposes
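
As an illustration of the kind of coordinate processing involved, the sketch below solves standard planar two-link inverse kinematics for a camera-detected target; the link lengths are hypothetical, and the actual Scorbot ER 4 has five degrees of freedom:

```python
import math

def planar_ik(x, y, l1=0.22, l2=0.22):
    """Joint angles of a planar 2-link arm reaching Cartesian target (x, y)."""
    d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)   # cos(theta2)
    if abs(d) > 1:
        raise ValueError("target out of reach")
    theta2 = math.atan2(math.sqrt(1 - d**2), d)          # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Example: a target detected by the camera at (0.30 m, 0.10 m)
print([round(math.degrees(t), 1) for t in planar_ik(0.30, 0.10)])
```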

Abstract:

The proliferation of wireless sensor networks (WSNs) is one of the main hardware enablers of the Internet of Things. As sensor nodes are deployed in a wide variety of indoor and outdoor environments, they are in general battery-powered devices, and power provisioning is one of the main challenges faced by engineers when deploying IoT-based applications. This paper develops a cross-layer architecture integrating smart, power-aware protocols with a low-cost, high-efficiency power management module, which is the basis of long-lasting self-powered WSNs. The main physical components of the proposed architecture are a wireless node comprising a set of small solar cells responsible for harvesting energy and an ultracapacitor as the storage device. Energy consumption is reduced significantly by varying the sleep/wake duty cycle of the radio module. For environments with only a few hours of sunlight per day, we demonstrate the feasibility of ensuring long-lasting operation by adapting the duty-cycle scheme according to the energy stored in the ultracapacitor. Our experiments prove the feasibility of long-endurance outdoor operation with a low-complexity power management unit. This is an important advance towards the development of novel IoT-based applications
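
A minimal sketch of the adaptive duty-cycle idea, assuming illustrative capacitor parameters and thresholds (not the paper's firmware), where the stored energy follows E = ½CV²:

```python
C_FARADS = 50.0          # hypothetical ultracapacitor size
V_MAX, V_MIN = 2.7, 1.0  # hypothetical usable voltage window

def duty_cycle(v_cap, d_min=0.01, d_max=0.10):
    """Map the energy stored in the ultracapacitor to a sleep/wake duty cycle."""
    e = 0.5 * C_FARADS * v_cap**2
    e_min = 0.5 * C_FARADS * V_MIN**2
    e_max = 0.5 * C_FARADS * V_MAX**2
    frac = min(max((e - e_min) / (e_max - e_min), 0.0), 1.0)
    return d_min + frac * (d_max - d_min)   # more energy -> more radio awake time

for v in (1.2, 1.8, 2.6):
    print(v, round(duty_cycle(v), 4))
```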

Abstract:

In this article we explore the relationship between 19 of the most common anomalies reported for the US market and the cross-section of Mexican stock returns. We find that 1-month stock returns in Mexico are robustly predicted only by 3 of the 19 anomalies: momentum, idiosyncratic volatility, and the lottery effect. Momentum has a positive relation with future 1-month returns, while idiosyncratic volatility and the lottery effect have a negative relation. For longer horizons of 3 and 6 months, only the 3 most important factors in the US market predict returns: size, book-to-market, and momentum
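
Cross-sectional return predictability of this kind is commonly assessed with monthly Fama-MacBeth regressions; a schematic version on synthetic data (not the paper's dataset or exact specification) looks like this:

```python
import numpy as np

# Each month, regress next-month returns on lagged characteristics; then
# average the monthly slopes and compute t-statistics on those averages.
rng = np.random.default_rng(7)
T, N = 120, 80                                   # months, stocks
chars = rng.normal(size=(T, N, 3))               # e.g. momentum, ivol, lottery (synthetic)
rets = 0.5 * chars[..., 0] - 0.3 * chars[..., 1] + rng.normal(size=(T, N))

slopes = []
for t in range(T):
    X = np.column_stack([np.ones(N), chars[t]])
    beta, *_ = np.linalg.lstsq(X, rets[t], rcond=None)
    slopes.append(beta[1:])                      # drop the intercept
slopes = np.array(slopes)
mean = slopes.mean(0)
tstat = mean / (slopes.std(0, ddof=1) / np.sqrt(T))
print(np.round(mean, 2), np.round(tstat, 1))
```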

Resumen:

There are growing concerns in Mexico about CO2 emissions due to the use of fossil fuels in electricity generation. Several laws have recently been passed with the aim of increasing the share of non-fossil fuels in the energy mix. Although some targets have been set, they will be difficult to achieve if investment continues to be directed mainly toward fossil technologies. This article presents a decision-support model based on the system dynamics approach, as an alternative to traditional modeling techniques. The model is used to identify future generation capacity requirements and to evaluate them in various simulated scenarios

Abstract:

There are increasing concerns in Mexico regarding CO2 emissions due to fossil-fuel-based electricity generation. Recently, several laws have been passed with the objective of increasing the non-fossil share of the energy portfolio mix. Although several targets have been established, they will be hard to achieve if investments continue to be directed mainly to fossil fuel technologies. This article presents a system dynamics decision-support model as an alternative to traditional modelling approaches. The model is used to assess generation capacity requirements and to evaluate them in several simulated scenarios

Resumen:

This article presents a management flight simulator (MFS), as part of a learning environment, designed to be used by owners or managers of hunting parks or reserves, conservation groups, or government environmental policy makers, with the objective of evaluating various strategies and providing virtual experience in the sustainable and profitable strategic planning of such hunting-tourism parks or reserves. The MFS is based on a system dynamics model that assesses the risk of population depletion, potential economic benefits, and potential tax revenue in a virtual park. The MFS was designed to evaluate the impacts of policies ranging from free hunting to highly restrictive ones, such as hunting quotas, tax schemes, and pricing. The model structure is based on the economic phenomenon known as the "tragedy of the commons", which occurs when individuals, acting independently of one another, indiscriminately exploit a common-property resource in pursuit of short-term benefits while depleting it for long-term use. Use of the MFS shows that sustainability and profitability are indeed possible in a hunting-tourism reserve by applying combinations of rational system-level strategies or policies

Abstract:

This paper presents a management flight simulator (MFS) as part of a learning environment, designed to be used by wildlife hunting park owners or managers, conservationists and government environmental policy makers, with the aim of providing strategy assessment and virtual experience in the strategic planning of sustainable and profitable hunting parks or reserves. The MFS is based on a System Dynamics model that assesses the population's risk of depletion, economic benefits and tax collection in a virtual wildlife park. The MFS was designed to evaluate the impacts of different policies, from free shooting to highly restricted shooting quotas, tax schemes, and price policies. The model structure is based on the "tragedy of the commons" economic phenomenon, which occurs when individuals, acting independently of one another, overuse a common-property resource to obtain short-term benefits while depleting it for long-term use. Use of the MFS shows that sustainability and profitability are possible in a wildlife shooting reserve by applying a system-level combination of rational policies
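The core mechanism the abstract describes, a renewable stock overharvested when individual incentives are not coordinated, can be illustrated with a toy sketch. The logistic growth law, the fixed per-capita harvest rates and all numbers below are our own illustrative assumptions, not the authors' calibrated system dynamics model:

```python
import numpy as np

# Minimal sketch of the stock dynamics behind a "tragedy of the commons"
# hunting model: a logistic wildlife population harvested at a given shooting
# rate. All parameter values are illustrative assumptions, not the authors'
# calibrated system dynamics model.

def simulate(harvest_rate, years=50, dt=0.1, r=0.3, K=1000.0, p0=500.0):
    """Integrate dP/dt = r*P*(1 - P/K) - harvest_rate*P with Euler steps."""
    pop = p0
    revenue = 0.0
    for _ in range(int(years / dt)):
        harvest = harvest_rate * pop          # animals shot per unit time
        revenue += harvest * dt               # proxy for hunting income
        pop += (r * pop * (1.0 - pop / K) - harvest) * dt
        pop = max(pop, 0.0)
    return pop, revenue

for rate in (0.05, 0.15, 0.35):              # restricted ... free shooting
    final_pop, total_rev = simulate(rate)
    print(f"harvest rate {rate:.2f}: final population {final_pop:7.1f}, "
          f"cumulative harvest {total_rev:8.1f}")
```

With these numbers, a harvest rate above the intrinsic growth rate drives the population, and with it all future harvest income, to zero, which is the "tragedy of the commons" outcome the simulator lets users experience.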

Abstract:

A lack of effective understanding of the Resource Based View (RBV) in strategy courses and the quick-feedback learning style of the new generation of Business Administration students demand more than a traditional lecture teaching strategy. Based on two educator research questions: How could my students achieve an effective understanding of the RBV concepts? How could my students quickly experience the financial impacts of their strategic decisions? and one student question: How could I develop strategic resources in order to achieve the maximum cash flow?, an Interactive Learning Environment (ILE) is proposed with the following learning objectives: to understand the RBV concepts, to identify relationships between strategic resources and financial performance, and to experience the financial impacts of several resource development strategies as an iterative process. The proposed ILE is tested in a laboratory experiment conducted with graduate and undergraduate students to evaluate differences in key performance measures due to the investment strategy profiles of the two groups. The experiment results suggest that graduate students were more aggressive, obtaining worse results at the beginning, but in the end they achieved better results with a somewhat less aggressive strategy and by assigning more resources to productivity versus capacity than undergraduate students did

Abstract:

We present the results of applying a planning model that allows the construction and evaluation of scenarios for the Mexican manufacturing sector, analyzing the impacts of investment in human capital formation and in technological development on its future productivity levels. The planning scheme is based on the concepts of evolution that explain the behavior of open systems. The scheme is operationalized through a model composed of a system of simultaneous equations, whose parameters are estimated statistically using regression techniques. Scenarios are constructed and evaluated by establishing assumptions and simulating the model up to the year 2000. The model considers that investment in knowledge capital, comprising both human aspects (education and training) and technical aspects (technological development, research and development), is one of the fundamental elements influencing the productivity of any transformation system in competition, which is, in turn, one of the critical elements that determine its competitiveness and its market share. The construction and evaluation of technological scenarios for the Mexican manufacturing sector reveals the great importance of investment in knowledge capital for the sector's development, providing guidelines for resource allocation policies in the corresponding areas

Abstract:

For the problem of adjudicating conflicting claims, we propose the following method to extend a lower bound rule: (i) for each problem, assign the amounts recommended by the lower bound rule and revise the problem accordingly; (ii) assign the amounts recommended by the lower bound rule to the revised problem. The “recursive extension” of a lower bound rule is obtained by recursive application of this procedure. We show that if a lower bound rule satisfies positivity, then its recursive extension singles out a unique awards rule. We then study the relation between desirable properties satisfied by a lower bound rule and properties satisfied by its recursive extension
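The two-step procedure above translates directly into code. The following minimal sketch uses the securement bound min(c_i, E)/n (the lower bound discussed in a related abstract below) purely as an illustrative lower bound rule; the function names and stopping tolerance are ours:

```python
from typing import Callable, List

# Sketch of the "recursive extension" procedure: repeatedly assign the
# amounts recommended by a lower bound rule, revise the problem (reduce the
# claims and the endowment accordingly), and iterate. The securement bound
# min(c_i, E)/n is used here only as an illustrative lower bound rule.

def securement(claims: List[float], endowment: float) -> List[float]:
    n = len(claims)
    return [min(c, endowment) / n for c in claims]

def recursive_extension(lower_bound: Callable[[List[float], float], List[float]],
                        claims: List[float], endowment: float,
                        tol: float = 1e-12, max_iter: int = 10_000) -> List[float]:
    awards = [0.0] * len(claims)
    for _ in range(max_iter):
        b = lower_bound(claims, endowment)
        if sum(b) < tol:                              # nothing left to assign
            break
        awards = [a + bi for a, bi in zip(awards, b)]
        claims = [c - bi for c, bi in zip(claims, b)]  # revised claims
        endowment -= sum(b)                            # revised endowment
    return awards

print(recursive_extension(securement, claims=[100.0, 300.0], endowment=200.0))
```

Because positivity makes the revised endowment shrink at every round, the loop terminates with awards that exhaust the amount to divide (here, [75.0, 125.0] for claims (100, 300) and endowment 200).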

Abstract:

In economics, the main efficiency criterion is that of Pareto-optimality. For problems of distributing a social endowment, a central notion of fairness is no-envy (each agent should receive a bundle at least as good, according to her own preferences, as any other agent's bundle). For most economies there are multiple allocations satisfying these two properties. We provide a procedure, based on distributional implications of these two properties, which selects a single allocation that is Pareto-optimal and satisfies no-envy in two-agent exchange economies. There is no straightforward generalization of our procedure to more than two agents

Abstract:

For the problem of adjudicating conflicting claims, we consider the requirement that each agent should receive at least 1/n of his claim truncated at the amount to divide, where n is the number of claimants (Moreno-Ternero and Villar, 2004a). We identify two families of rules satisfying this bound. We then formulate the requirement that for each problem, the awards vector should be obtainable in two equivalent ways: (i) directly, or (ii) in two steps, first assigning to each claimant his lower bound and then applying the rule to the appropriately revised problem. We show that there is only one rule satisfying this requirement. We name it the “recursive rule”, as it is obtained by a recursion. We then undertake a systematic investigation of the properties of the rule
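In standard claims-problem notation, with claims vector c, endowment E and n claimants, the bound and the two-step requirement described above can be written as follows (the symbol S for the rule is our own shorthand):

```latex
% Hedged transcription of the bound and the consistency requirement above;
% c denotes the claims vector, E the amount to divide, S the awards rule.
\[
  b_i(c,E) = \frac{1}{n}\,\min\{c_i,\,E\}, \qquad i = 1,\dots,n,
\]
\[
  S(c,E) = b(c,E) + S\Bigl(c - b(c,E),\; E - \sum_{j} b_j(c,E)\Bigr).
\]
```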

Abstract:

This article proposes specification tests for economic models defined through conditional moment restrictions in which the conditioning variables are estimated. There are two main motivations for this situation. First, the case where the conditioning variables are not directly observable, as in economic models where innovations or latent variables appear as explanatory variables. Second, the case where the set of conditioning variables is too large to derive powerful tests and, hence, the original conditioning set is replaced by a constructed variable regarded as a good summary of it. We establish the asymptotic properties of the proposed tests, examine their finite sample behavior, and apply them to different econometric contexts. In some cases, the proposed approach leads to relevant tests that generalize well-known specification tests, such as Ramsey's RESET test

Abstract:

Despite their theoretical advantages, Integrated Conditional Moment (ICM) specification tests are not commonly employed in econometric practice. An important reason is that the employed test statistics are nonpivotal, and so critical values are not readily available. This article proposes an omnibus test in the spirit of the ICM tests of Bierens and Ploberger (1997, Econometrica 65, 1129–1151), where the test statistic is based on the minimized value of a quadratic function of the residuals of time series econometric models. The proposed test falls under the category of overidentification restriction tests started by Sargan (1958, Econometrica 26, 393–415). The corresponding projection interpretation leads us to propose a straightforward wild bootstrap procedure that requires only linear regressions to estimate the critical values, irrespective of the model functional form. Hence, contrary to other existing ICM tests, the critical values are easily calculated while the test preserves the admissibility property of ICM tests

Abstract:

In econometrics, models stated as conditional moment restrictions are typically estimated by means of the generalized method of moments (GMM). The GMM estimation procedure can render inconsistent estimates since the number of arbitrarily chosen instruments is finite. In fact, consistency of the GMM estimators relies on additional assumptions that imply unclear restrictions on the data generating process. This article introduces a new, simple and consistent estimation procedure for these models that is directly based on the definition of the conditional moments. The main feature of our procedure is its simplicity, since its implementation does not require the selection of any user-chosen number, and statistical inference is straightforward since the proposed estimator is asymptotically normal. In addition, we suggest an asymptotically efficient estimator constructed by carrying out one Newton-Raphson step in the direction of the efficient GMM estimator

Abstract:

Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution free, even asymptotically. This makes the test unfeasible. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover power and optimality of Bierens' original test. Bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap Nonlinear Least Squares Estimator under local alternatives

Abstract:

In this paper we consider testing whether an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect both linear and nonlinear alternatives. Given that the asymptotic distributions of the considered test statistics depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments
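As a concrete illustration of how such a bootstrap works, the sketch below tests the martingale difference hypothesis with a deliberately simple statistic (the largest standardized sample autocovariance over a few lags) and Rademacher multipliers; both choices are ours and are not necessarily the exact statistics or multiplier scheme proposed in the paper:

```python
import numpy as np

# Sketch of a wild-bootstrap test of the martingale difference hypothesis.
# The statistic and the Rademacher multiplier scheme are illustrative.

def mdh_stat(y, max_lag=5):
    e = y - y.mean()
    n = len(e)
    acov = [np.abs(np.dot(e[k:], e[:-k])) / n for k in range(1, max_lag + 1)]
    return np.sqrt(n) * np.max(acov) / np.var(e)

def wild_bootstrap_pvalue(y, stat_fn, n_boot=999, seed=None):
    rng = np.random.default_rng(seed)
    t_obs = stat_fn(y)
    exceed = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher multipliers
        if stat_fn(y * w) >= t_obs:                # dependence is destroyed,
            exceed += 1                            # marginals are preserved
    return (exceed + 1) / (n_boot + 1)

rng = np.random.default_rng(0)
iid = rng.standard_normal(500)                     # MDH holds
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.4 * ar1[t - 1] + rng.standard_normal()
print("p-value, iid noise:", wild_bootstrap_pvalue(iid, mdh_stat, seed=1))
print("p-value, AR(1):   ", wild_bootstrap_pvalue(ar1, mdh_stat, seed=2))
```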

Abstract:

In this paper we propose an iterative method for solving the inhomogeneous systems of linear equations associated with density fitting. The proposed method is based on a version of the conjugate gradient method that makes use of automatically built quasi-Newton preconditioners. The paper gives a detailed description of a parallel implementation of the new method. The computational performance of the new algorithms is analyzed by benchmark calculations on systems with up to about 35 000 auxiliary functions. Comparisons with the standard, direct approach show no significant differences in the computed solutions
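The paper's automatically built quasi-Newton preconditioners are specific to density fitting, but the surrounding iteration is the standard preconditioned conjugate gradient method. The sketch below shows that skeleton with a simple Jacobi (diagonal) preconditioner standing in for the quasi-Newton one; the matrix size and preconditioner choice are illustrative:

```python
import numpy as np

# Generic preconditioned conjugate gradient for A x = b with A symmetric
# positive definite. A Jacobi (diagonal) preconditioner stands in for the
# automatically built quasi-Newton preconditioners described in the paper;
# the structure of the iteration is the same.

def pcg(A, b, precond, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)              # SPD test matrix
b = rng.standard_normal(200)
d = np.diag(A)
x = pcg(A, b, precond=lambda r: r / d)       # Jacobi preconditioner
print("residual:", np.linalg.norm(A @ x - b))
```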

Abstract:

The problem of robustifying interconnection and damping assignment passivity-based control for underactuated mechanical systems vis-à-vis matched, constant, and unknown disturbances is addressed in this paper. This is achieved by adding an outer-loop controller to the interconnection and damping assignment passivity-based control. Three designs are proposed, the first one being a simple nonlinear PI, while the second and third are nonlinear PIDs. While all controllers ensure stability of the desired equilibrium in spite of the presence of the disturbances, the inclusion of the derivative term allows us to inject further damping, enlarging the class of systems for which asymptotic stability is ensured. Numerical simulations of the Acrobot system and experimental results on the disk-on-disk system illustrate the performance of the proposed controllers

Abstract:

In this paper we present a dynamic model of marine vehicles in both body-fixed and inertial momentum coordinates using port-Hamiltonian framework. The dynamics in body-fixed coordinates have a particular structure of the mass matrix that allows the application of passivity-based control design developed for robust energy shaping stabilisation of mechanical systems described in terms of generalised coordinates. As an example of application, we follow this methodology to design a passivity-based controller with integral action for fully actuated vehicles in six degrees of freedom that tracks time-varying references and rejects disturbances. We illustrate the performance of this controller in a simulation example of an open-frame unmanned underwater vehicle subject to both constant and time-varying disturbances. We also describe a momentum transformation that allows an alternative model representation of marine craft dynamics that resembles general port-Hamiltonian mechanical systems with a coordinate dependent mass matrix

Abstract:

The problem of robustifying Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) for underactuated mechanical systems vis-à-vis matched, constant, unknown disturbances is addressed in this paper. This is achieved by adding an outer-loop controller to the IDA-PBC. Three designs are proposed, the first one being a simple nonlinear PI, while the second and third are nonlinear PIDs. While all controllers ensure stability of the desired equilibrium in spite of the presence of the disturbances, the inclusion of the derivative term allows us to inject further damping, enlarging the class of systems for which asymptotic stability is ensured

Abstract:

Control of underactuated mechanical systems via energy shaping is a well-established, robust design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations (PDEs). In this technical note, a new, fully constructive procedure to shape the energy for a class of mechanical systems that obviates the solution of PDEs is proposed. The control law consists of a first stage of partial feedback linearization followed by a simple proportional plus integral controller acting on two new passive outputs. The class of systems for which the procedure is applicable is identified by imposing some (directly verifiable) conditions on the system's inertia matrix and its potential energy function. It is shown that these conditions are satisfied by three benchmark examples

Abstract:

To extend the realm of application of the well-known controller design technique of interconnection and damping assignment passivity-based control (IDA-PBC) of mechanical systems, two modifications to the standard method are presented in this article. First, similarly to Batlle et al. (2009) and Gómez-Estern and van der Schaft (2004), it is proposed to avoid the splitting of the control action into energy-shaping and damping injection terms, and instead to carry them out simultaneously. Second, motivated by Chang (2014), we propose to consider the inclusion of dissipative forces, going beyond the gyroscopic ones used in standard IDA-PBC. The contribution of our work is the proof that the addition of these two elements provides a non-trivial extension to the basic IDA-PBC methodology. It is also shown that several new controllers for mechanical systems, designed invoking other (less systematic) procedures and not satisfying the conditions of standard IDA-PBC, actually belong to this new class of SIDA-PBC

Abstract:

To extend the realm of application of the well-known controller design technique of interconnection and damping assignment passivity-based control (IDA-PBC) of mechanical systems, two modifications to the standard method are presented in this article. First, similarly to [1], [13], it is proposed to avoid the splitting of the control action into energy-shaping and damping injection terms, and instead to carry them out simultaneously. Second, motivated by [2], we propose to consider the inclusion of dissipative forces, going beyond the gyroscopic ones used in standard IDA-PBC. It can be shown that new controllers for mechanical systems that do not satisfy the conditions of standard IDA-PBC actually belong to this new class of SIDA-PBC

Abstract:

As Mexico slouches from economic meltdown to recalcitrant recovery, several questions loom large in the minds of pundits and investors, Mexicans and foreigners alike: Will President Ernesto Zedillo maintain current economic policy, or will he succumb to political pressures and electoral cycles? Will the social fabric unravel, or will it withstand the brunt of 'adjustment fatigue'? And is the predicted demise of the PRI likely, or will the party display its traditional resilience?

Abstract:

Sick pay is a common provision in most labor contracts. This paper employs an experimental gift exchange environment to explore two related questions, using both managers and undergraduates as subjects. First, do workers reciprocate generous sick pay with higher effort? Second, do firms benefit from offering sick pay? We find that workers do reciprocate generous sick pay with higher effort. However, firms benefit in terms of profits only if there is competition among firms for workers. Consequently, competition leads to a higher voluntary provision of sick pay relative to a monopsonistic labor market

Abstract:

A kinetic model for the Boltzmann equation is proposed and explored as a practical means to investigate the properties of a dilute granular gas. It is shown that all spatially homogeneous initial distributions approach a universal "homogeneous cooling solution" after a few collisions. The homogeneous cooling solution (HCS) is studied in some detail and the exact solution is compared with known results for the hard sphere Boltzmann equation. It is shown that all qualitative features of the HCS, including the nature of overpopulation at large velocities, are reproduced by the kinetic model. It is also shown that all the transport coefficients are in excellent agreement with those from the Boltzmann equation. Also, the model is specialized to one having a velocity independent collision frequency and the resulting HCS and transport coefficients are compared to known results for the Maxwell model. The potential of the model for the study of more complex spatially inhomogeneous states is discussed

Abstract:

Recent theoretical analyses of the two-time joint-probability density for electric-field dynamics in a strongly coupled plasma have included formal short-time expansions. Here we compare the short-time-expansion results for the associated generating function with molecular-dynamics-simulation results for the special case of fields at a neutral point in a one-component plasma with plasma parameter Γ = 10. The agreement is quite good for times ω_p t ≤ 2, although more general application of the short-time expansion requires some important qualifications

Abstract:

The dynamics of electric fields at a neutral or charged point in a one-component plasma is considered. The equilibrium joint probability density for electric-field values at two different times is defined, and several formally exact limits are described in some detail. The asymptotic short-time behavior for both neutral and charged-point cases is shown to be Gaussian with respect to the field differences, but with a half-width depending on their sum. In the strong-coupling limit, the joint probability density is dominated by weak fields (charged-point case), leading to a Gaussian distribution with time dependence entirely determined from the electric-field time-correlation function. The limit of large fields is shown to be determined by the time-dependent autocorrelation function for the density of ions around the field point; for the special case of fields at a neutral point, this result implies that the joint distribution at large fields is determined entirely by the dynamic structure factor. Finally, the full distribution (all field values and times) is studied in the weak-coupling limit

Abstract:

The equilibrium joint probability density for electric fields at two different times is considered for both neutral and charged points. The behavior of this distribution function is discussed in the Gaussian, short time, and high field limits. An approximate global description is proposed using an independent particle model as an extension of corresponding approximations for the single time field distribution

Abstract:

We develop a theory of media slant as a systematic filtering of political news that reduces multidimensional politics to the one-dimensional space perceived by voters. Economic and political choices are interdependent in our theory: expected electoral results influence economic choices, and economic choices in turn influence voting behaviour. In a two-candidate election, we show that media favouring the front-runner will focus on issues unlikely to deliver a surprise, while media favouring the underdog will gamble for resurrection. We characterize the socially optimal slant and show that it coincides with the one favoured by the underdog under a variety of circumstances. Balanced media, giving each issue equal coverage, may be worse for voters than partisan media

Abstract:

We model voting in juries as a game of incomplete information, allowing jurors to receive a continuum of signals. We characterize the unique symmetric equilibrium of the game, and give a condition under which no asymmetric equilibria exist under unanimity rule. We offer a condition under which unanimity rule exhibits a bias toward convicting the innocent, regardless of the size of the jury, and give an example showing that this bias can be reversed. We prove a "jury theorem" for our general model: as the size of the jury increases, the probability of a mistaken judgment goes to zero for every voting rule except unanimity rule. For unanimity rule, the probability of making a mistake is bounded strictly above zero if and only if there do not exist arbitrarily strong signals of innocence. Our results explain the asymptotic inefficiency of unanimity rule in finite models and establish the possibility of asymptotic efficiency, a property that could emerge only in a continuous model

Abstract:

We study diffeomorphisms that have one-parameter families of continuous symmetries. For general maps, in contrast to the symplectic case, existence of a symmetry no longer implies existence of an invariant. Conversely, a map with an invariant need not have a symmetry. We show that when a symmetry flow has a global Poincaré section there are coordinates in which the map takes a reduced, skew-product form, and hence allows for a reduction of dimensionality. We show that the reduction of a volume-preserving map is again volume preserving. Finally, we sharpen the Noether theorem for symplectic maps. A number of illustrative examples are discussed and the method is compared with traditional reduction techniques

Abstract:

Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require O(10^5) model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only O(10^2) model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter-to-data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate-emulate-sample (CES) approach in a perfect-model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties
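The three-stage pipeline can be sketched end to end on a toy problem with a scalar parameter. Everything below (the forward map, noise level, ensemble size, kernel, proposal scale, and the flat prior) is an illustrative assumption chosen so the example runs in seconds; it is not the paper's general circulation model setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy calibrate-emulate-sample (CES) pipeline for a scalar parameter theta.
rng = np.random.default_rng(0)
G = lambda theta: np.array([theta, theta ** 2])   # toy parameter-to-data map
theta_true, noise_sd = 1.5, 0.1
y = G(theta_true) + noise_sd * rng.standard_normal(2)
Gamma_inv = np.eye(2) / noise_sd ** 2

# (a) Calibrate: a few ensemble Kalman inversion (EKI) iterations.
ensemble = rng.normal(0.0, 2.0, size=40)
history_t, history_g = [], []
for _ in range(10):
    g = np.array([G(t) for t in ensemble])
    history_t.extend(ensemble); history_g.extend(g)
    t_mean, g_mean = ensemble.mean(), g.mean(axis=0)
    C_tg = ((ensemble - t_mean)[:, None] * (g - g_mean)).mean(axis=0)
    C_gg = (g - g_mean).T @ (g - g_mean) / len(ensemble)
    K = C_tg @ np.linalg.inv(C_gg + noise_sd ** 2 * np.eye(2))
    perturbed = y + noise_sd * rng.standard_normal((len(ensemble), 2))
    ensemble = ensemble + (perturbed - g) @ K

# (b) Emulate: Gaussian process trained on the model runs from stage (a).
X = np.array(history_t)[:, None]
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-4)).fit(
    X, np.array(history_g))

# (c) Sample: random-walk Metropolis on the GP emulator (flat prior assumed).
def log_post(theta):
    r = y - gp.predict(np.array([[theta]]))[0]
    return -0.5 * r @ Gamma_inv @ r

chain, cur = [], ensemble.mean()
lp = log_post(cur)
for _ in range(5000):
    prop = cur + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        cur, lp = prop, lp_prop
    chain.append(cur)
print("posterior mean ~", np.mean(chain[1000:]))
```

The key cost saving is visible even in this toy: the expensive forward map G is evaluated only inside stage (a), while the thousands of MCMC steps in stage (c) query only the cheap GP emulator.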

Abstract:

This article elaborates the concepts of techno-colonialism and sub-netizenship to explore the renewal of colonial processes through the digitalization of "democracy." Techno-colonialism is conceived as a frame - adopted consciously and unconsciously - that shapes capitalist social relations and people's political participation. Today, this frame appeals to the idealized netizen, a global, free, equal and networked subject that gains full membership to a political community. Meanwhile, sub-netizenship is the novel political subordination because of race, ethnicity, class, gender, language, temporality, and geography within a global matrix that crosses the analogue-digital dimensions of life. This techno-colonialism/sub-netizenship dynamic manifested in the experience of Marichuy as an indigenous independent precandidate for the Mexican presidential elections of 2018. In a highly unequal and diverse country, aspirants required a tablet or smartphone to collect citizen support via a monolinguistic app only accessible to Google or Facebook users. Our analysis reveals how some individuals are excluded and disenfranchised by digital innovation but still resist a legal system that seeks to homogenize them and render them into legible and marketable data

Abstract:

Despite ongoing interest in deploying information and communication technologies (ICTs) for sustainable development, their use in climate change adaptation remains understudied. Based on the integration of adaptation theory and the existing literature on the use of ICTs in development, we present an analytical model for conceptualizing the contribution of existing ICTs to adaptation, and a framework for evaluating ICT success. We apply the framework to four case studies of ICTs in use for early warning systems and managing extreme events in Latin American and Caribbean countries. We propose that existing ICTs can support adaptation by enabling access to critical information for decision-making, coordinating actors and building social capital. ICTs also allow actors to communicate and disseminate their decision experience, thus enhancing opportunities for collective learning and continual improvements in adaptation processes. In this way, ICTs can both communicate the current and potential impacts of climate change and engage populations in the development of viable adaptation strategies

Abstract:

We examine the welfare properties of surplus maximization by embedding a perfectly discriminating monopoly in an otherwise standard Arrow-Debreu economy. Although we discover an inefficient equilibrium, we validate partial equilibrium intuition by showing: (i) that equilibria are efficient provided that the monopoly goods are costly, and (ii) that a natural monopoly can typically use personalized two-part tariffs in these equilibria. However, we find that Pareto optima are sometimes incompatible with surplus maximization, even when transfer payments are used. We provide insight into the source of this difficulty and give some instructive examples of economies where a second welfare theorem holds

Abstract:

We ask when firms with increasing returns can cover their costs independently by charging two-part tariffs (TPTs), a condition we call independent viability. To answer, we develop notions of substitutability and complementarity that account for the total value of goods and use them to find the maximum extractable surplus. We then show that independent viability is a sufficient condition for existence of a general equilibrium in which regulated natural monopolies use TPTs. Independent viability also guarantees efficiency when the increasing returns arise solely from fixed costs. For arbitrary technologies, it ensures that a second welfare theorem holds

Abstract:

We study theoretically and experimentally committee decision making with common interests. Committee members do not know which of two alternatives is optimal, but each member can acquire a private costly signal before casting a vote under either majority or unanimity rule. In the experiment, as predicted by Bayesian equilibrium, voters are more likely to acquire information under majority rule, and vote strategically under unanimity rule. As opposed to Bayesian equilibrium predictions, however, many committee members vote when uninformed. Moreover, uninformed voting is strongly associated with a lower propensity to acquire information. We show that an equilibrium model of subjective prior beliefs can account for both these phenomena, and provides a good overall fit to the observed patterns of behavior both in terms of rational ignorance and biases

Abstract:

We conduct a laboratory study of group-on-group ultimatum bargaining with restricted within-group interaction. In this context, we concentrate on the effect of different within-group voting procedures on the bargaining outcomes. Our experimental observations can be summarized in two propositions. First, individual responder behavior does not show statistically significant variation across voting rules, implying that group decisions may be viewed as aggregations of independent individual decisions. Second, we observe that proposer behavior significantly depends (in the manner predicted by a simple model) on the within-group decision rule in force among the responders and is generally different from proposer behavior in one-on-one bargaining

Resumen:

Este trabajo considera el problema de pronosticar los siniestros ocurridos pero no reportados. El pronóstico sirve para que las compañías de seguros calculen la reserva que debe constituirse para los siniestros pendientes de pago, aunque aquí no se trata el cálculo de la reserva. Se revisan los métodos de pronóstico y se destaca uno que surge de un modelo estadístico y produce pronósticos con error cuadrático medio mínimo. Su uso se ilustra con datos reales sobre siniestralidad en el ramo de automóviles. Las ventajas del método, en comparación con otros, son una reducción de la subjetividad en su uso y la posibilidad de medir la incertidumbre asociada a los pronósticos

Abstract:

This research considers the problem of forecasting incurred but not reported claims. The forecast is used by insurance companies to calculate the reserve they must constitute for claims pending payment, although the reserve calculation itself is not discussed in this paper. Forecasting methods are reviewed and emphasis is placed on one that emerges from a statistical model and produces minimum mean square error forecasts. Its use is illustrated with real automobile claims data. The advantages of this method, compared with others, are a reduction of the subjectivity involved in its use and the possibility of measuring the uncertainty associated with the forecasts
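For readers unfamiliar with the setting, the classical chain-ladder method illustrates what forecasting outstanding claims from a run-off triangle looks like; it is only one of the methods the paper reviews, not its proposed statistical model, and the triangle below is synthetic:

```python
import numpy as np

# Minimal chain-ladder illustration for incurred-but-not-reported (IBNR)
# claims on a cumulative run-off triangle (rows: accident years, columns:
# development years). Synthetic numbers for illustration only.

tri = np.array([
    [1000., 1500., 1700., 1800.],
    [1100., 1650., 1870., np.nan],
    [1200., 1790., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

n = tri.shape[0]
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])           # accident years observed at j+1
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

completed = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(completed[i, j + 1]):
            completed[i, j + 1] = completed[i, j] * factors[j]

ultimate = completed[:, -1]
latest = np.array([tri[i, ~np.isnan(tri[i])][-1] for i in range(n)])
print("development factors:", np.round(factors, 3))
print("estimated reserve by accident year:", np.round(ultimate - latest, 1))
```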

Abstract:

Why does entrepreneurship training work? We argue that the feedback loop of the opportunity development process is a training element that can explain the effectiveness of entrepreneurship training. Building on action regulation theory, we model the feedback loop as a recursive cycle of changes in the business opportunity, goals, performance outcomes, and feedback. Furthermore, we hypothesize that error orientation and monitoring can strengthen or weaken the cycle, and that going through the feedback loop during training explains short- and long-term training outcomes. To test our hypotheses, we collected data before, during, and after an entrepreneurship training program. Results support our hypotheses, suggesting that the feedback loop of the opportunity development process is a concept that can explain why entrepreneurship training is effective

Abstract:

We examine how the possibility of a bank run affects the investment decisions made by a competitive bank. Cooper and Ross [1998. Bank runs: liquidity costs and investment distortions. Journal of Monetary Economics 41, 27–38] have shown that when the probability of a run is small, the bank will offer a contract that admits a bank-run equilibrium. We show that, in this case, the bank will choose to hold an amount of liquid reserves exactly equal to what withdrawal demand will be if a run does not occur; precautionary or “excess” liquidity will not be held. This result allows us to show that when the cost of liquidating investment early is high, an increase in the probability of a run will lead the bank to invest less. However, when liquidation costs are moderate, the level of investment is increasing in the probability of a run

Abstract:

This paper introduces an approach to the study of optimal government policy in economies characterized by a coordination problem and multiple equilibria. Such models are often criticized as not being useful for policy analysis because they fail to assign a unique prediction to each possible policy choice. We employ a selection mechanism that assigns, ex ante, a probability to each equilibrium indicating how likely it is to obtain. We show how such a mechanism can be derived as the natural result of an adaptive learning process. This approach leads to a well-defined optimal policy problem, and has important implications for the conduct of government policy. We illustrate these implications using a simple model of technology adoption under network externalities

Abstract:

We study optimal fiscal policy in an economy where (i) search frictions create a coordination problem and generate multiple, Pareto-ranked equilibria and (ii) the government finances the provision of a public good by taxing market activity. The government must choose the tax rate before it knows which equilibrium will obtain, and therefore an important part of the problem is determining how the policy will affect the equilibrium selection process. We show that when the equilibrium selection rule is based on the concept of risk dominance, higher tax rates make coordination on the Pareto-superior outcome less likely. As a result, taking equilibrium-selection effects into account leads to a lower optimal tax rate

Abstract:

We construct an endogenous growth model in which bank runs occur with positive probability in equilibrium. In this setting, a bank run has a permanent effect on the levels of the capital stock and of output. In addition, the possibility of a run changes the portfolio choices of depositors and of banks, and thereby affects the long-run growth rate. These facts imply that both the occurrence of a run and the mere possibility of runs in a given period have a large impact on all future periods. A bank run in our model is triggered by sunspots, and we consider two different equilibrium selection rules. In the first, a run occurs with a fixed, exogenous probability, while in the second the probability of a run is influenced by banks' portfolio choices. We show that when the choices of an individual bank affect the probability of a run on that bank, the economy both grows faster and experiences fewer runs

Resumen:

En el contexto de implementación de medidas para mitigar el cambio climático, diversas tecnologías han surgido o han sido implementadas con el objeto de reducir los efectos del mismo, incluyéndose dentro de estas la captura y almacenamiento de dióxido de carbono (CO2). Si bien esta tecnología es usada para capturar CO2 en procesos de altas emisiones de GEI, resulta una tecnología idónea en una etapa de transición energética. Como todo proceso de esta naturaleza, su aplicación conlleva la actualización de diversos riesgos, como daño ambiental o afectaciones a la salud de los seres humanos, sin embargo por la amplia temporalidad que conlleva el almacenamiento de CO2, la regulación en relación a la responsabilidad de los agentes responsables resulta fundamental. Ante la necesidad de implementar este tipo de tecnologías, resulta fundamental su regulación, por lo que el presente artículo propone como base de estudio para el modelo de regulación de la CAC, los Fondos Internacionales de Indemnización de Daños Debidos a Contaminación por Hidrocarburos (los FIDAC o los IOPC, por el acrónimo en inglés), esquemas que han demostrado dar certeza y seguridad ante contingencias en proyectos u operaciones a largo plazo

Abstract:

In the context of implementing measures to mitigate climate change, various technologies have emerged or have been implemented to reduce its effects, including the capture and storage of carbon dioxide (CO2). Although this technology is used to capture CO2 in processes with high GHG emissions, it is an ideal technology in a stage of energy transition. Like any process of this nature, its application entails various risks, such as environmental damage or effects on human health; however, given the long time spans that CO2 storage entails, regulation of the liability of the responsible agents is essential. Given the need to implement this type of technology, its regulation is fundamental, which is why this article proposes, as a basis of study for the CCS regulation model, the International Oil Pollution Compensation Funds (the IOPC Funds), schemes that have proven to provide certainty and security in the face of contingencies in long-term projects or operations

Resumen:

Tras una breve aproximación respecto a ciertas disputas en el sector energético latinoamericano, el autor comenta sobre tres mecanismos contractuales relevantes que tanto inversionistas como Estados anfitriones han diseñado conjuntamente, bajo una perspectiva de prevención. Al hacerlo, estos jugadores vuelven a enfocar sus incentivos una vez que emerge un cambio material de circunstancias. Estos mecanismos son los siguientes: (i) las cláusulas de renegociación; (ii) las cláusulas de estabilización y (iii) los factores de estandarización económica

Abstract:

Upon a brief introduction to certain disputes arising in connection with the Latin American energy sector, the author comments on three relevant contractual mechanisms that investors and Host States have jointly designed from a prevention perspective. In so doing, such players aim to re-focus their incentives when a material change of circumstances emerges. These mechanisms are the following: (i) renegotiation clauses; (ii) stabilization clauses; and (iii) economic standardization factors

Abstract:

This article formulates some criticism of traditional Comparative Law and elaborates on its transition to sustainable Comparative Law. The article further explains the need to incorporate cultural dialogues (multiculturalism and interculturalism) and the joint efforts of the sciences (transdisciplinarity) as fundamental elements of the contemporary comparative method

Resumen:

El artículo tiene por objeto analizar las críticas y réplicas a las funciones del FMI y el BM, con especial referencia al campo del derecho internacional de los derechos humanos. Se argumenta que los Estados partes de tratados internacionales en materia de derechos humanos están obligados a ser congruentes en sus negociaciones y acuerdos con el FMI y el BM, por lo que incurren en responsabilidad internacional en caso de incumplir esta obligación. Se explora asimismo el deber del FMI y del BM de respetar las obligaciones internacionales sobre derechos humanos, por su carácter de costumbre internacional

Abstract:

This article analyses the criticisms and answers regarding the functions of the IMF and the WB, with special reference to the field of International Law of Human Rights. It is argued that the State Parties to international treaties on human rights are obliged to be coherent in their negotiations and agreements with both the IMF and the WB, as breaching said obligations amounts to their international liability. Likewise, the article explores the duty of the IMF and the WB to respect international obligations on human rights, considering their character of International Customary Law

Resumen:

El objeto del presente artículo es analizar un conjunto de perspectivas críticas que sobre el derecho internacional económico existen actualmente. El artículo aboga porque los instrumentos internacionales de la disciplina comentada tengan en consideración valores universales, principios y normas internacionalmente aceptados, tanto por la sociedad civil en general como por los estados, a través de diversas fuentes de derecho internacional. El artículo forma parte de la primera fase de una investigación del autor, en torno a la aplicación de dichos valores, principios y normas en ciertos instrumentos internacionales del Banco Mundial (BM), la Organización Mundial del Comercio (OMC), el Programa de las Naciones Unidas para el Desarrollo (PNUD) y la Organización para la Cooperación y el Desarrollo Económicos (OCDE)

Abstract:

The purpose of the present paper is to analyze a group of current critical perspectives regarding International Economic Law. The paper advocates for the consideration of universal values, principles and rules within the instruments of said legal discipline, as accepted by both civil society and the States through various sources of Public International Law. The paper is part of the first stage of the author's research on the application of those values, principles and rules to certain international instruments of the World Bank (WB), the World Trade Organization (WTO), the United Nations Development Programme (UNDP), and the Organization for Economic Cooperation and Development (OECD)

Abstract:

A dynamic multi-level factor model with possible stochastic time trends is proposed. In the model, long-range dependence and short memory dynamics are allowed in global and local common factors as well as model innovations. Estimation of global and local common factors is performed on the prewhitened series, for which the prewhitening parameter is estimated semiparametrically from the cross-sectional and local average of the observable series. Employing canonical correlation analysis and a sequential least-squares algorithm on the prewhitened series, the resulting multi-level factor estimates have centered asymptotic normal distributions under certain rate conditions depending on the bandwidth and cross-section size. Asymptotic results for common components are also established. The selection of the number of global and local factors is discussed. The methodology is shown to lead to good small-sample performance via Monte Carlo simulations. The method is then applied to the Nord Pool electricity market for the analysis of price comovements among different regions within the power grid. The global factor is identified to be the system price, and fractional cointegration relationships are found between local prices and the system price, motivating a long-run equilibrium relationship. Two forecasting exercises are then discussed

Abstract:

Drawing upon signaling theory, charismatic leadership tactics (CLTs) have been identified as a trainable set of skills. Although organizations rely on technology-mediated communication, the effects of CLTs have not been examined in a virtual context. Preregistered experiments were conducted in face-to-face (Study 1; n = 121) and virtual settings (Study 2; n = 128) in the United States. In Study 3, we conducted virtual replications in Austria (n = 134), France (n = 137), India (n = 128), and Mexico (n = 124). Combined with past experiments, the meta-analytic effect of CLTs on performance (Cohen's d = 0.52 in-person, k = 4; Cohen's d = 0.21 overall, k = 10) and engagement in an extra-role task (Cohen's d = 0.19 overall; k = 6) indicate large to moderate effects. Yet, for performance in a virtual context Cohen's d ranged from −0.25 to 0.17 (Cohen's d = 0.01 overall; k = 6). Study 4 (n = 129) provided mixed support for signaling theory in a virtual context, linking CLTs to some positive evaluations. We conclude with guidance for future research on charismatic leadership and signaling theory

Abstract:

We assess the relative performance of three recently proposed instrument selection methods via a Monte Carlo study that investigates the finite sample behavior of the post-selection estimator of a simple linear IV model. Our results suggest that no one method dominates

Abstract:

In the normal linear simultaneous equations model, we demonstrate a close relationship between two recently proposed methods of instrument selection by presenting a fundamental relationship between the two sets of canonical correlations upon which the methods are based

Abstract:

This article introduces a data-driven Box-Pierce test for serial correlation. The proposed test is very attractive compared to the existing ones. In particular, implementation of this test is extremely simple for two reasons: first, the researcher does not need to specify the order of the autocorrelation tested, since the test automatically chooses this number; second, its asymptotic null distribution is chi-square with one degree of freedom, so there is no need to use a bootstrap procedure to estimate the critical values. In addition, the test is robust to the presence of conditional heteroskedasticity of unknown form. Finally, the proposed test presents higher power in simulations than the existing ones for models commonly employed in empirical finance
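A rough sketch of a test of this type is shown below: heteroskedasticity-robust standardized autocorrelation statistics are accumulated up to an order chosen by a penalized criterion, and the selected statistic is referred to a chi-square(1) distribution. The standardization and the BIC/AIC-style penalty switch follow a common recipe and are illustrative; they are not necessarily the paper's exact formulas:

```python
import numpy as np
from scipy.stats import chi2

# Sketch of a data-driven, heteroskedasticity-robust Box-Pierce test.
# The penalty constants (including q = 2.4) are illustrative choices.

def robust_t(y, lag):
    """Self-normalized lag-`lag` autocovariance, ~ N(0,1) under the null."""
    e = y - y.mean()
    prod = e[lag:] * e[:-lag]
    return prod.sum() / np.sqrt((prod ** 2).sum())

def automatic_box_pierce(y, max_lag=20, q=2.4):
    n = len(y)
    t = np.array([robust_t(y, k) for k in range(1, max_lag + 1)])
    Q = np.cumsum(t ** 2)                     # Q_p for p = 1, ..., max_lag
    # BIC-type penalty unless some correlation is clearly large (AIC-type):
    use_aic = np.max(np.abs(t)) > np.sqrt(q * np.log(n))
    p_grid = np.arange(1, max_lag + 1)
    penalty = 2.0 * p_grid if use_aic else p_grid * np.log(n)
    p_star = int(p_grid[np.argmax(Q - penalty)])
    stat = Q[p_star - 1]
    return stat, p_star, chi2.sf(stat, df=1)  # automatic order, chi2(1) tail

rng = np.random.default_rng(0)
print(automatic_box_pierce(rng.standard_normal(500)))        # white noise
x = np.convolve(rng.standard_normal(501), [1.0, 0.6])[:500]  # MA(1) series
print(automatic_box_pierce(x))
```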

Abstract:

This article introduces an automatic test for the correct specification of a vector autoregression (VAR) model. The proposed test statistic is a portmanteau statistic with an automatic selection of the order of the residual serial correlation tested. The test presents several attractive characteristics: simplicity, robustness, and high power in finite samples. The test is simple to implement since the researcher does not need to specify the order of the autocorrelation tested and the proposed critical values are simple to approximate, without resorting to bootstrap procedures. In addition, the test is robust to the presence of conditional heteroscedasticity of unknown form and accounts for estimation uncertainty without requiring the computation of large-dimensional inverses of near-to-singularity covariance matrices. The basic methodology is extended to general nonlinear multivariate time series models. Simulations show that the proposed test presents higher power than the existing ones for models commonly employed in empirical macroeconomics and empirical finance. Finally, the test is applied to the classical bivariate VAR model for GNP (gross national product) and unemployment of Blanchard and Quah (1989) and Evans (1989). Online supplementary material includes proofs and additional details

Abstract:

Multi-server queueing systems with Poisson arrivals and Erlangian service times are among the most applicable of what are considered "easy" systems in queueing theory. By selecting the proper order, Erlangian service times can be used to approximate reasonably well many general types of service times which have a unimodal distribution and a coefficient of variation less than or equal to 1. In view of their practical importance, it may be surprising that the existing literature on these systems is quite sparse. The probable reason is that, while it is indeed possible to represent these systems through a Markov process, serious difficulties arise because of (1) the very large number of system states that may be present with increasing Erlang order and/or number of servers, and (2) the complex state transition probabilities that one has to consider. Using a standard numerical approach, solutions of the balance equations describing systems with even a modest Erlang order and number of servers require extensive computational effort and become impractical for larger systems. In this paper we illustrate these difficulties and present the equally likely combinations (ELC) heuristic which provides excellent approximations to typical equilibrium behavior measures of interest for a wide range of stationary multiserver systems with Poisson arrivals and Erlangian service. As system size grows, ELC computational times can be more than 1000 times faster than those for the exact approach. We also illustrate this heuristic's ability to estimate accurately system response under transient and/or dynamic conditions
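The ELC heuristic itself is not reproduced here, but the brute-force benchmark it is compared against is easy to state: choose the Erlang order k ≈ 1/CV² to match a target coefficient of variation, then simulate the M/E_k/s queue directly. The discrete-event sketch below does exactly that, with illustrative parameters:

```python
import heapq
import numpy as np

# FCFS M/E_k/s simulation: Poisson arrivals, Erlang-k service times (a sum
# of k exponential stages), s parallel servers. The Erlang order is chosen
# as k = round(1/CV^2) to match a target coefficient of variation CV <= 1.
# All parameter values are illustrative.

def simulate_mek_s(lam=4.0, mean_service=1.0, cv=0.5, s=5,
                   n_customers=200_000, seed=0):
    rng = np.random.default_rng(seed)
    k = max(1, round(1.0 / cv ** 2))               # Erlang order from CV
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n_customers))
    services = rng.gamma(k, mean_service / k, n_customers)  # Erlang-k draws
    free_at = [0.0] * s                            # server free times (heap)
    heapq.heapify(free_at)
    total_wait = 0.0
    for t, x in zip(arrivals, services):
        earliest = heapq.heappop(free_at)          # next server to free up
        start = max(t, earliest)                   # FCFS start of service
        total_wait += start - t
        heapq.heappush(free_at, start + x)
    return k, total_wait / n_customers

k, wq = simulate_mek_s()
print(f"Erlang order k = {k}, estimated mean wait Wq = {wq:.4f}")
```

This is exactly the kind of computation that becomes expensive as the Erlang order and the number of servers grow, which is what motivates fast approximations such as ELC.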

Abstract:

The configuration of medieval and modern politics along its structural axes of "sacredness" and "secularity" cannot be understood without considering the reading that the intellectuals, philosophers and writers of Europe between the thirteenth and seventeenth centuries made of the classical authors. In the present research we study four Latin writers, Cicero, Seneca, Livy and Tacitus, as reference models which, through a "reconstructive" reading, shaped the formation of the principal political models of modern Europe. Through the dialogue that notable representatives of scholastic theology, Machiavellian realism, political Jesuitism and Baroque absolutism establish with these classical authors, molds are organized that would come to constitute the forms of government proper to late-medieval, Renaissance and Baroque princes and monarchs. The core of our analysis is precisely the dichotomy between the sacredness of the governing prince, which would become the principal motif of medieval and ecclesiastical politics, and its desacralization and semi-desacralization in Machiavellian (in the first case) and absolutist Counter-Reformation (in the second) forms of governing. We use the method of the aesthetics of reception to trace the classical-modern dialogue, observing to what extent the political writers of medieval and modern Europe "color", "concretize" and "fill in" their own "horizons of expectations", with different ideological slants, so as to configure political models adapted to the historical period under study

Resumen:

Si la virtus de la Antigüedad era la fuerza, la valentía y el coraje, y en la modernidad la diligencia, el trabajo, el mérito y el esfuerzo, actualmente se ha instalado la commoditas, una suerte de pseudovirtud nihilista que reivindica la ociosidad, el igualitarismo radical y el entretenimiento. La sociedad contemporánea posmoderna ha superado tanto la metafísica filosófica grecorromana como la teología cristiana, tanto el racionalismo y el empirismo ilustrado como el idealismo romántico decadentista y el positivismo materialista, y ha entrado en una suerte de fin de la historia de nihilismo lúdico autocomplaciente y autosatisfecho. Concluimos que la commoditas está para quedarse y reconfigurará la naturaleza del ser humano en función de una nueva mentalidad que empalidece a la tradición hasta prácticamente anularla

Abstract:

If the virtus of antiquity was strength, bravery and courage, and in modernity diligence, work, merit and effort, today the commoditas has taken hold: a kind of nihilistic pseudo-virtue that champions idleness, radical egalitarianism and entertainment. Contemporary postmodern society has surpassed both Greco-Roman philosophical metaphysics and Christian theology, both rationalism and enlightened empiricism, as well as romantic-decadent idealism and materialist positivism, and has entered a sort of end of history of self-indulgent and self-satisfied playful nihilism. We conclude that commoditas is here to stay and will reconfigure the nature of the human being according to a new mentality that eclipses tradition to the point of practically annulling it

Resumen:

Pestes y pandemias están de actualidad por la crisis de Covid-19. Aunque nuestro mundo no está acostumbrado a las epidemias, han sido muy frecuentes a lo largo de la historia de la humanidad. En el presente estudio se ofrece un análisis de distintas pestes que asolaron el mundo grecorromano, con las explicaciones que les dieron los autores clásicos y las crisis sociopolíticas que supusieron, muy similares a la actual

Abstract:

Plagues and pandemics are in the news because of the Covid-19 crisis. Although our world is not accustomed to epidemics, they have been very frequent throughout human history. This paper offers an analysis of the different plagues that devastated the Greco-Roman world, with the explanations given by classical authors and the socio-political crises they brought about, very similar to the present one

Abstract:

The notion of relative importance of criteria is central in multicriteria decision aid. In this work we define the concept of comparative coalition structure, as an approach for formally discussing the notion of relative importance of criteria. We also present a multicriteria decision aid method that does not require the assignment of weights to the criteria

Abstract:

This study identifies characteristics that positively affect entrepreneurial intention. To do so, the study compares personality traits with work values. Socio-demographic and educational characteristics act as control variables. The sample comprises 1210 public university students. Hierarchical regression analysis serves to test the hypotheses. Results show that personality traits affect entrepreneurial intention more than work values do

Resumen:

En la presente investigación se trata de analizar si las universidades, como organismos que podrían actuar como incubadoras de ideas de negocio, realmente están cumpliendo ese papel y están incentivando la actitud emprendedora entre sus estudiantes, a través de la estructura de sus estudios y las medidas específicas que acometen. Las hipótesis planteadas acerca de la mayor formación en su titulación, genérica y específica, son contrastadas utilizando una muestra de 668 estudiantes de una universidad madrileña. Las conclusiones obtenidas invitan a la reflexión, en tanto que los estudiantes de más reciente incorporación a la Universidad presentan tasas de actitud emprendedora superiores a las de sus compañeros más veteranos

Abstract:

The objective of the present research is to analyze whether universities, as organisms that might act as incubators of business ideas, are really fulfilling this role and stimulating an entrepreneurial attitude among their students, through the structure of their studies and the specific measures that they undertake. The hypotheses raised about greater training in their degree programs, both generic and specific, are tested using a sample of 668 students from a university in Madrid. The conclusions invite reflection, in that the students who have most recently entered the university show higher rates of entrepreneurial attitude than their more veteran colleagues

Abstract:

In this paper we propose an instrument for collecting sensitive data that allows each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures

Abstract:

In this article, we introduce the partition task problem class along with a complexity measure to evaluate its instances and a performance measure to quantify the ability of a system to solve them. We explore, via simulations, some potential applications of these concepts and present some results as examples that highlight their usefulness in policy design scenarios, where the optimal number of elements in a partition or the optimal size of the elements in a partition must be determined

Abstract:

In a negative representation, a set of elements (the positive representation) is depicted by its complement set. That is, the elements in the positive representation are not explicitly stored, while those in the negative representation are. The concept, feasibility, and properties of negative representations are explored in the paper, in particular their potential to address privacy concerns. It is shown that a positive representation consisting of n l-bit strings can be represented negatively using only O(ln) strings, through the use of an additional symbol. It is also shown that membership queries for the positive representation can be processed against the negative representation in time no worse than linear in its size, while reconstructing the original positive set from its negative representation is an NP-hard problem. The paper introduces algorithms for constructing negative representations as well as operations for updating and maintaining them
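
A quick illustration of the membership-query property described above, as a minimal Python sketch (a toy construction of my own, not the paper's algorithm): the complement is stored as patterns over an extra '*' symbol, and a query is answered with one pass over the negative representation, i.e. in time linear in its size.

    def matches(pattern: str, s: str) -> bool:
        # A pattern matches s if every non-'*' position agrees with s.
        return all(p in ('*', c) for p, c in zip(pattern, s))

    def in_positive_set(ndb: list[str], s: str) -> bool:
        # s belongs to the positive set iff it matches no entry of the
        # negative database: one linear pass over the NDB.
        return not any(matches(p, s) for p in ndb)

    # Positive set {00, 01} over 2-bit strings; its complement {10, 11}
    # is covered compactly by the single pattern '1*'.
    ndb = ['1*']
    assert in_positive_set(ndb, '00') and in_positive_set(ndb, '01')
    assert not in_positive_set(ndb, '10') and not in_positive_set(ndb, '11')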

Abstract:

This paper proposes a strategy for administering a survey that is mindful of sensitive data and individual privacy. The survey seeks to estimate the population proportion of a sensitive variable and does not depend on anonymity, cryptography, or legal guarantees for its privacy-preserving properties. Our technique presents interviewees with a question and t possible answers, and asks participants to eliminate one of the t-1 alternatives at random. We introduce a specific setup that requires just a single coin as a randomizing device, and that limits the amount of information each respondent is exposed to by presenting her/him with only a subset of the question's alternatives. Finally, we conduct a simulation study to provide evidence of the robustness of the suggested procedure against response and nonresponse bias
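
Under one plausible reading of this design (each respondent eliminates, uniformly at random, one alternative other than his/her true answer), alternative j is eliminated at rate (1 - pi_j)/(t - 1), so pi_j can be recovered as 1 - (t - 1) times the observed elimination rate. The following Python sketch, with invented proportions, checks that recovery:

    import random

    def simulate_eliminations(true_props, n):
        # Each respondent eliminates one alternative, chosen uniformly
        # among those that are not his/her true answer.
        t = len(true_props)
        counts = [0] * t
        for _ in range(n):
            truth = random.choices(range(t), weights=true_props)[0]
            others = [j for j in range(t) if j != truth]
            counts[random.choice(others)] += 1
        return counts

    t, n = 4, 100_000
    true_props = [0.1, 0.2, 0.3, 0.4]
    counts = simulate_eliminations(true_props, n)
    # Elimination rate of j is (1 - pi_j) / (t - 1); invert it:
    estimates = [1 - (t - 1) * c / n for c in counts]
    print(estimates)  # close to true_props for large n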

Abstract:

In this paper we present a method for hiding a list of data by mixing it with a large amount of superfluous items. The technique uses a device known as a negative database, which stores the complement of a set rather than the set itself, to include an arbitrary number of garbage entries efficiently. The resulting structure effectively hides the data, without encrypting it, and obfuscates the number of data items hidden; it prevents arbitrary data lookups, while supporting simple membership queries; and can be manipulated to reflect relational algebra operations on the original data

Abstract:

A set DB of data elements can be represented in terms of its complement set, known as a negative database. That is, all of the elements not in DB are represented, and DB itself is not explicitly stored. This method of representing data has certain properties that are relevant for privacy-enhancing applications. The paper reviews the negative database (NDB) representation scheme for storing a negative image compactly, and proposes using a collection of NDBs to represent a single DB, that is, one NDB is assigned to each record in DB. This method has the advantage of producing negative databases that are hard to reverse in practice, i.e., from which it is hard to obtain DB. This result is obtained by adapting a technique for generating hard-to-solve 3-SAT formulas. Finally, we suggest potential avenues of application

Abstract:

The benefits of negative detection for obscuring information are explored in the context of Artificial Immune Systems (AIS). AIS based on string matching have the potential for an extra security feature in which the "normal" profile of a system is hidden from its possible hijackers. Even if the model of normal behavior falls into the wrong hands, reconstructing the set of valid or "normal" strings is an NP-hard problem. The data-hiding aspects of negative detection are explored in the context of an application to negative databases. Previous work is reviewed describing possible representations and reversibility properties for privacy-enhancing negative databases. New algorithms are presented that allow on-line creation, updates and clean-up of negative databases, and some experimental results illustrate the impact of these operations on the size of the negative database. Finally, some future challenges are discussed

Abstract:

In anomaly detection, the normal behavior of a process is characterized by a model, and deviations from the model are called anomalies. In behavior-based approaches to anomaly detection, the model of normal behavior is constructed from an observed sample of normally occurring patterns. Models of normal behavior can represent either the set of allowed patterns (positive detection) or the set of anomalous patterns (negative detection). A formal framework is given for analyzing the tradeoffs between positive and negative detection schemes in terms of the number of detectors needed to maximize coverage. For realistically sized problems, the universe of possible patterns is too large to represent exactly (in either the positive or negative scheme). Partial matching rules generalize the set of allowable (or unallowable) patterns, and the choice of matching rule affects the tradeoff between positive and negative detection. A new match rule is introduced, called r-chunks, and the generalizations induced by different partial matching rules are characterized in terms of the crossover closure. Permutations of the representation can be used to achieve more precise discrimination between normal and anomalous patterns. Quantitative results are given for the recognition ability of contiguous-bits matching together with permutations
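
A minimal Python sketch of the r-chunks rule for binary strings may help fix ideas (a toy negative-detection setup of my own, not the paper's experiments): a detector is a pair (position, window) of length r that occurs in no normal string, and a sample is flagged anomalous if any detector matches one of its chunks.

    from itertools import product

    def rchunk_detectors(normal, l, r):
        # All (position, window) pairs of length r that match no string
        # in the normal set: the negative-detection repertoire.
        seen = {(i, s[i:i + r]) for s in normal for i in range(l - r + 1)}
        universe = {(i, ''.join(w))
                    for i in range(l - r + 1)
                    for w in product('01', repeat=r)}
        return universe - seen

    def is_anomalous(detectors, s, r):
        return any((i, s[i:i + r]) in detectors
                   for i in range(len(s) - r + 1))

    normal = ['0011', '0110']
    detectors = rchunk_detectors(normal, l=4, r=2)
    print(is_anomalous(detectors, '0011', r=2))  # False: a normal string
    print(is_anomalous(detectors, '1100', r=2))  # True: has unseen chunks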

Abstract:

Dementia is characterized by a progressive deterioration in cognitive functions and behavioral problems. Due to its importance, in the domain of Internet of Things (IoT), where physical objects are connected to the internet, a myriad of systems have been proposed to support people with dementia, their caregivers, and medical experts. However, the vast and increasing number of research efforts has led to a complex state of the art, which is in need of a methodological analysis and a characterization of its key aspects. Based on the PRISMA guidelines, this article presents a systematic review aimed at investigating the state of the art of the IoT in dementia regardless of the dementia category and/or its cause. Articles published within the period of January 2017 to November 2022 were searched in well-known scientific databases. The searches retrieved a total of 2733 records, which were narrowed down to 104 relevant studies by applying inclusion, exclusion, and quality criteria. A set of 13 research questions at the intersection of IoT and dementia were posed, which guided the analysis of the selected studies. The systematic review contributes (i) an in-depth methodological analysis of recent and relevant IoT systems in the domain of dementia; (ii) a taxonomy that identifies, characterizes, and categorizes key aspects of IoT research focused on dementia; and (iii) a series of future work directions to advance the field of IoT in the dementia domain

Abstract:

Background and objective: In day centers, people with dementia are assigned to specific groups to receive care according to the progression of the disease. This article presents the design and evaluation of a dashboard aimed at facilitating the comprehension of the progression of people with dementia to support the decision-making of healthcare professionals (HCPs) when determining patient-group assignment. Materials and method: A participatory design methodology was followed to build the dashboard. The grounded theory methodology was utilized to identify requirements. A total of 8 HCPs participated in the design and evaluation of a low-fidelity prototype. The perceived usefulness and perceived ease of use of the high-fidelity prototype were evaluated by 15 HCPs (from several day centers) and 38 psychology students utilizing a questionnaire based on the technology acceptance model. Results: HCPs perceived the dashboard as extremely likely to be useful (Mdn = 6.5 out of 7) and quite likely to be usable (Mdn = 6 out of 7). Psychology students perceived the dashboard as quite likely to be useful and usable (both with Mdn = 6)

Resumen:

The increase in penalties for privacy violations motivates the definition of a methodology for evaluating the utility of information and the privacy preservation of data to be published. Through a case study, a framework for measuring privacy preservation is provided. Problems in measuring data utility are presented and related to privacy preservation in data publishing. Machine learning models are developed to determine the risk of prediction of sensitive attributes and as a means of verifying data utility. The findings motivate the need to adapt the measurement of privacy preservation to current requirements and to sophisticated means of attack such as machine learning

Abstract:

The growing penalties for privacy violations motivate the definition of a methodology for evaluating the usefulness of information and privacy-preserving data publishing. We develop a case study and provide a framework for measuring privacy preservation. Problems in measuring the usefulness of the data are exposed and related to privacy-preserving data publishing. Machine learning models are developed to determine the risk of predicting sensitive attributes and as a means of verifying the usefulness of the data. The findings motivate the need to adapt privacy measures to current requirements and to sophisticated attacks such as machine learning
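
A hedged sketch of the kind of attribute-disclosure check described above, on synthetic stand-in data (the columns and the leakage mechanism are invented for illustration): a classifier is trained to predict the sensitive attribute from the quasi-identifiers left in the release, and accuracy well above the majority-class baseline signals residual disclosure risk.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    age_band = rng.integers(0, 6, n)          # quasi-identifier
    zip_area = rng.integers(0, 10, n)         # quasi-identifier
    sensitive = (age_band + rng.integers(0, 3, n) > 4).astype(int)

    X = np.column_stack([age_band, zip_area])
    X_tr, X_te, y_tr, y_te = train_test_split(X, sensitive, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Compare predictive accuracy against always guessing the majority class.
    baseline = max(np.mean(y_te), 1 - np.mean(y_te))
    print(accuracy_score(y_te, model.predict(X_te)), baseline)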

Abstract:

Scholars argue that electoral management bodies staffed by autonomous, non-partisan experts are best for producing credible and fair elections. We inspect the voting record of Mexico's Instituto Federal Electoral (IFE), an ostensibly independent bureaucratic agency regarded as extremely successful in organizing clean elections in a political system marred by fraud. We discover that the putative non-partisan experts of "autonomous" IFE behave as "party watchdogs" that represent the interests of their political party sponsors. To validate this party influence hypothesis, we examine roll-call votes cast by members of IFE's Council-General from 1996 to 2006. Aside from shedding light on IFE's failure to achieve democratic compliance in 2006, our analysis suggests that election arbiters that embrace partisan strife are quite capable of organizing free, fair, and credible elections in new democracies

Resumen:

This paper proposes a methodology for constructing climate change scenarios at the local scale. Multivariate time series models are used to obtain restricted forecasts, advancing the literature on statistical downscaling methods in several respects. This achieves: i) a better representation of the climate at the local scale; ii) the avoidance of possible spurious relationships between large- and small-scale variables; iii) an appropriate representation of the variability of the series in the climate change scenarios; and iv) the assessment of compatibility and the combination of information from observed climate variables and those derived from climate models. The proposed methodology is useful for integrating scenarios of the evolution of the small-scale factors that influence local climate. Thus, by choosing different evolutions representing, for example, different public policies on land use or pollution control, the methodology offers a way to evaluate the desirability of such policies in terms of their effects in amplifying or attenuating the impacts of climate change

Abstract:

This paper proposes a new methodology for generating climate change scenarios at the local scale based on multivariate time series models and restricted forecasting techniques. This methodology offers considerable advantages over the current statistical downscaling techniques such as: (i) it provides a better representation of climate at the local scale; (ii) it avoids the occurrence of spurious relationships between the large and local scale variables; (iii) it offers a more appropriate representation of variability in the downscaled scenarios; and (iv) it allows for compatibility assessment and combination of the information contained in both observed and simulated climate variables. Furthermore, this methodology is useful for integrating scenarios of local scale factors that affect local climate. As such, the convenience of different public policies regarding, for example, land use change or atmospheric pollution control can be evaluated in terms of their effects for amplifying or reducing climate change impacts

Abstract:

The urge for higher resolution climate change scenarios has been widely recognized, particularly for conducting impact assessment studies. Statistical downscaling methods have proven very convenient for this task, mainly because of their lower computational requirements in comparison with nested limited-area regional models or very high resolution Atmosphere-Ocean General Circulation Models. Nevertheless, although some of the limitations of statistical downscaling methods are widely known and have been discussed in the literature, in this paper it is argued that the current approach for statistical downscaling does not guard against misspecified statistical models and that the occurrence of spurious results is likely if the assumptions of the underlying probabilistic model are not satisfied. In this case, the physics included in climate change scenarios obtained by general circulation models could be replaced by spatial patterns and magnitudes produced by statistically inadequate models. Illustrative examples are provided for monthly temperature for a region encompassing Mexico and part of the United States. It is found that the assumptions of the probabilistic models do not hold for about 70% of the gridpoints, parameter instability and temporal dependence being the most common problems. As our examples reveal, automated statistical downscaling “black-box” models are to be considered highly prone to produce misleading results. It is shown that the Probabilistic Reduction approach can be incorporated as a complete and internally consistent framework for securing the statistical adequacy of the downscaling models and for guiding the respecification process, in a way that prevents the lack of empirical validity that affects current methods

Abstract:

We design a laboratory experiment to study behavior in a multidivisional organization. The organization faces a trade-off between coordinating its decisions across the divisions and meeting division-specific needs that are known only to the division managers, who can communicate their private information through cheap talk. While the results show close to optimal communication, we also find systematic deviations from optimal behavior in how the communicated information is used. Specifically, subjects' decisions show worse than predicted adaptation to the needs of the divisions in decentralized organizations and worse than predicted coordination in centralized organizations. We show that the observed deviations disappear when uncertainty about the divisions' local needs is removed and discuss the possible underlying mechanisms

Abstract:

We design a laboratory experiment in which an interested third party endowed with private information sends a public message to two conflicting players, who then make their choices. We find that third-party communication is not strategic. Nevertheless, a hawkish message by a third party makes hawkish behavior more likely while a dovish message makes it less likely. Moreover, how subjects respond to the message is largely unaffected by the third party’s incentives. We argue that our results are consistent with a focal point interpretation in the spirit of Schelling

Abstract:

Forward induction (FI) thinking is a theoretical concept in the Nash refinement literature which suggests that earlier moves by a player may communicate his future intentions to other players in the game. Whether and how much players use FI in the laboratory is still an open question. We designed an experiment in which detailed reports were elicited from participants playing a battle of the sexes game with an outside option. Many of the reports show an excellent understanding of FI, and such reports are associated more strongly with FI-like behavior than reports consistent with first mover advantage and other reasoning processes. We find that a small fraction of subjects understands FI but lacks confidence in others. We also explore individual differences in behavior. Our results suggest that FI is relevant for explaining behavior in games

Resumen:

This article reviews the Norwegian welfare model, the aim of which has been the protection and maintenance of the country's middle class. To understand it, the concept of the middle class and its relationship with public policy are analyzed. In the Scandinavian countries, the role the State plays in the economy is crucial to the creation of the middle class, which has made possible a successful socio-economic model that is difficult to apply in other countries where the concept of the State differs substantially

Abstract:

This article reviews the Norwegian welfare model, the object of which has been the protection and maintenance of the middle class in the country. In order to understand it, the concept of middle class and its relationship with public policies is analyzed. In Scandinavian countries, the role of the State in the economy is crucial for the generation of the middle class, which has allowed a successful socio-economic model that is difficult to apply in other countries where the concept of the State differs substantially

Abstract:

Using new census-type data and a dynamic structural model, we study the effect of credit supply on investment by manufacturing firms during the Greek depression. Real factors (profitability, uncertainty, and taxes) account for only a fraction of the substantial drop in investment observed in the data. The reduction in credit supply has significant real effects, explaining 11–32% of the investment slump. We also find that exporting firms, which reduce investment and deleverage despite their improved profitability during the crisis, face a contraction in credit supply similar to that of non-exporters, suggesting that the credit-supply shock has a significant common component

Resumen:

The main objective of this paper is to document the extensive institution-level heterogeneity that exists within countries, as well as to investigate which institutional factors are the most relevant for multinational franchisors

Abstract:

The purpose of this paper is to document the extensive heterogeneity in institutions within countries and investigate which institutional factors are the most relevant for international brands

Resumo:

The main objective of this research is to account for the great heterogeneity that exists within countries, as well as to investigate which institutional factors are the most important for multinational franchises

Abstract:

Introduction: Mathematical models and field data suggest that human mobility is an important driver of Dengue virus transmission. Nonetheless, little is known on this matter due to the lack of instruments for precise mobility quantification and to study design difficulties. Materials and methods: We carried out a cohort-nested, case-control study with 126 individuals (42 cases, 42 intradomestic controls and 42 population controls) with the goal of describing the human mobility patterns of recently Dengue virus-infected subjects, and comparing them with those of non-infected subjects living in an urban endemic locality. Mobility was quantified using a GPS data logger registering waypoints at 60-second intervals for a minimum of 15 natural days. Results: Although absolute displacement was highly biased towards the intradomestic and peridomestic areas, occasional displacements exceeding a 100-km radius from the center of the studied locality were recorded for all three study groups, and individual displacements were recorded across six states of central Mexico. Additionally, cases made a larger number of visits outside the municipality's administrative limits when compared to intradomestic controls (cases: 10.4 versus intradomestic controls: 2.9, p = 0.0282). We were able to identify extradomestic places within and outside the locality that were independently visited by apparently unrelated infected subjects, consistent with houses, working and leisure places. Conclusions: The results of this study show that human mobility in a small urban setting exceeded that considered by the local health authority's administrative limits, and was different between recently infected and non-infected subjects living in the same household. These observations provide important insights about the role that human mobility may have in Dengue virus transmission and persistence across endemic geographic areas that need to be taken into account when planning preventive and control ...

Resumen:

A study was conducted with 38 university students in a Physics I course to investigate the mathematical knowledge they had acquired about the transformation of functions after completing Differential Calculus. The research is framed within Duval's Theory of Semiotic Representations. A diagnostic assessment was carried out concerning the transformations Af(Bx+C)+D applied to the functions x² and sin x. The article contributes by showing the possible causes of the difficulties presented by the students in light of the theoretical framework, and by proposing strategies to help improve their learning

Abstract:

A study was carried out with 38 university students enrolled in the Physics I course in order to investigate what they had learned about the transformation of functions as part of their mathematical knowledge, after having completed a course on Differential Calculus. The research is framed by Duval's Theory of Semiotic Representations. A diagnostic assessment test was carried out concerning transformations of the form Af(Bx+C)+D applied to the functions x² and sin x when the parameters included in them are varied one at a time. The article contributes by showing possible reasons to explain students' difficulties in the light of the theoretical framework, and by proposing strategies that may contribute to improving students' learning

Resumo:

A study was conducted with 38 university students in a Physics I course to investigate the mathematical knowledge they had learned about the transformation of functions after completing Differential Calculus. The research is framed within Duval's Theory of Semiotic Representations. A diagnostic assessment was carried out concerning the transformations Af(Bx+C)+D applied to the functions x² and sin x. The article contributes by showing the possible causes of the difficulties presented by the students in light of the theoretical framework, as well as by proposing strategies that contribute to improving their learning

Resumen:

The objective of this research is to identify the determinants of debt maturity for Mexican firms listed on the BMV, using an alternative definition of this dependent variable. In particular, maturity is defined as "time to contract expiration", considering the weighted average time to maturity, an original contribution of this work. Panel data models and Heckman selection models are used, since the use of longitudinal data in an unbalanced panel can present selection problems in the form of attrition. The results suggest that attrition bias is significant, and that average debt maturity is determined by variables such as size and leverage, among other firm characteristics, as well as by the market interest rate. The main limitation is the missing data in the information sources used, which produces a short, unbalanced panel. It is concluded that this way of measuring maturity yields better results for analyzing debt maturity than the metrics traditionally used in the literature

Abstract:

This research aims to identify the determinants of debt maturity for Mexican companies listed on the BMV, using an alternative definition of this dependent variable. Maturity is defined as "time to contract expiration", considering the weighted average of the time to expiration, which constitutes the original contribution of this work. Panel data models and Heckman selection models are used, since the use of longitudinal data in an unbalanced panel can present selection problems due to attrition. The results suggest that the attrition bias is significant and that the average maturity of the debt is determined by firm characteristics such as size and leverage, among others, as well as by the Mexican market interest rate. As a limitation, and due to omissions in the data reported by the information sources used for the analysis, a short and unbalanced panel is used. It is concluded that, by using this alternative maturity measurement, better results are obtained for analyzing debt maturity compared to the traditional metrics in the literature

Abstract:

This work is concerned with a reaction-diffusion system that has been proposed as a model to describe acid-mediated cancer invasion. More precisely, we consider the properties of travelling waves that can be supported by such a system, and show that a rich variety of wave propagation dynamics, both fast and slow, is compatible with the model. In particular, asymptotic formulae for admissible wave profiles and bounds on their wave speeds are provided

Abstract:

A numerical study of model-based methods for derivative-free optimization is presented. These methods typically include a geometry phase whose goal is to ensure the adequacy of the interpolation set. The paper studies the performance of an algorithm that dispenses with the geometry phase altogether (and therefore does not attempt to control the position of the interpolation set). Data are presented describing the evolution of the condition number of the interpolation matrix and the accuracy of the gradient estimate. The experiments are performed on smooth unconstrained optimization problems with dimensions ranging between 2 and 15
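
The following Python sketch illustrates the kind of diagnostic reported in the study, using a linear interpolation model as a simplified stand-in for the quadratic models typical of model-based DFO codes (the sample points are synthetic): the condition number of the interpolation matrix deteriorates as the sample set degenerates.

    import numpy as np

    def interpolation_matrix(points):
        # Rows [1, x_i]; the coefficients of the linear model
        # m(x) = c + g.T x solve A @ [c, g] = f-values.
        return np.hstack([np.ones((len(points), 1)), np.asarray(points)])

    rng = np.random.default_rng(0)
    dim = 5
    points = rng.standard_normal((dim + 1, dim))     # well-spread set
    print(np.linalg.cond(interpolation_matrix(points)))

    # Let two sample points nearly coincide: conditioning blows up.
    points[1] = points[0] + 1e-6 * rng.standard_normal(dim)
    print(np.linalg.cond(interpolation_matrix(points)))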

Abstract:

Hospitals are convenient settings for the deployment of context-aware applications. The information needs of hospital workers are highly dependent on contextual variables, such as location, role and activity. While some of these parameters can be easily determined, others, such as activity, are much more complex to estimate. This paper describes an approach to estimating the activity being performed by hospital workers. The approach is based on information gathered from a workplace study conducted in a hospital, in which 196 hours of detailed observation of hospital workers were recorded. Contextual information, such as the location of hospital workers, the artifacts being used, the people with whom they collaborate and the time of day, is used to train a back-propagation neural network to estimate hospital workers' activities. The activities estimated include clinical case assessment, patient care, preparation, information management, coordination, and classes and certification. The results indicate that user activity can be correctly estimated 75% of the time (on average), which is good enough for several applications. We discuss how these results can be used in the design of activity-aware applications, arguing that recent advances in pervasive and networking technologies hold great promise for the deployment of such applications
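
As a rough illustration of the estimation step, the sketch below trains a small feed-forward network (fit by backpropagation) to map contextual features to activity labels; the features, labels and data are hypothetical placeholders, not the study's dataset.

    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical observations: (location, artifact, role, hour of day).
    X_raw = [['ward', 'chart', 'nurse', '9'],
             ['office', 'computer', 'physician', '11'],
             ['ward', 'phone', 'physician', '16']] * 50
    y = ['patient care', 'information management', 'coordination'] * 50

    enc = OneHotEncoder(handle_unknown='ignore')
    X = enc.fit_transform(X_raw)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)
    print(clf.predict(enc.transform([['ward', 'chart', 'nurse', '9']])))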

Abstract:

Hospitals are convenient settings for the deployment of ubiquitous computing technology. Not only are they technology-rich environments, but their workers experience a high level of mobility, resulting in information infrastructures with artifacts distributed throughout the premises. Hospital information systems (HISs) that provide access to electronic patient records are a step in the direction of providing accurate and timely information to hospital staff in support of adequate decision-making. This has motivated the introduction of mobile computing technology in hospitals based on designs that respond to their particular conditions and demands. Among those conditions is the fact that worker mobility does not exclude the need for having shared information artifacts at particular locations. In this paper, we extend a handheld-based mobile HIS with ubiquitous computing technology and describe how public displays are integrated with handhelds and the services offered by these devices. Public displays become aware of the presence of physicians and nurses in their vicinity and adapt to provide users with personalized, relevant information. An agent-based architecture allows the integration of proactive components that offer information relevant to the case at hand, either from medical guidelines or from previous similar cases

Abstract:

We consider the motion of a planar rigid body in a potential two-dimensional flow with circulation, subject to a certain nonholonomic constraint. This model can be related to the design of underwater vehicles. The equations of motion admit a reduction to a two-dimensional nonlinear system, which is integrated explicitly. We show that the reduced system comprises both asymptotic and periodic dynamics separated by a critical value of the energy, and we give a complete classification of the types of motion. We then describe the whole variety of trajectories of the body on the plane

Abstract:

In the Black–Scholes–Merton model, as well as in more general stochastic models in finance, the price of an American option solves a parabolic variational inequality. When the variational inequality is discretized, one obtains a linear complementarity problem (LCP) that must be solved at each time step. This paper presents an algorithm for the solution of these types of LCPs that is significantly faster than the methods currently used in practice. The new algorithm is a two-phase method that combines the active-set identification properties of the projected successive over-relaxation (SOR) iteration with the second-order acceleration of a (recursive) reduced-space phase. We show how to design the algorithm so that it exploits the structure of the LCPs arising in these financial applications and present numerical results that show the effectiveness of our approach
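
For reference, a minimal projected SOR iteration for an LCP (find z >= 0 with Mz + q >= 0 and z.T(Mz + q) = 0) is sketched below in Python; it corresponds only to the first, active-set phase of the method described above, and the tridiagonal matrix is an illustrative stand-in for a discretized pricing operator.

    import numpy as np

    def projected_sor(M, q, omega=1.5, tol=1e-10, max_iter=10_000):
        # Gauss-Seidel sweep with over-relaxation, projected onto z >= 0.
        z = np.zeros(len(q))
        for _ in range(max_iter):
            z_old = z.copy()
            for i in range(len(q)):
                r = q[i] + M[i] @ z            # residual of row i
                z[i] = max(0.0, z[i] - omega * r / M[i, i])
            if np.linalg.norm(z - z_old) < tol:
                break
        return z

    n = 50
    M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD tridiagonal
    q = np.linspace(-1.0, 1.0, n)
    z = projected_sor(M, q)
    w = M @ z + q
    print(z.min(), w.min(), np.max(np.abs(z * w)))  # z, w >= 0 and z*w ~ 0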

Abstract:

We develop a model of the politics of state capacity building undertaken by incumbent parties that have a comparative advantage in clientelism rather than in public goods provision. The model predicts that, when challenged by opponents, clientelistic incumbents have the incentive to prevent investments in state capacity. We provide empirical support for the model's implications by studying policy decisions by the Institutional Revolutionary Party that affected local state capacity across Mexican municipalities and over time. Our difference-in-differences and instrumental variable identification strategies exploit a national shock that threatened the Mexican government's hegemony in the early 1960s

Abstract:

We document how informal employment in Mexico is countercyclical, lags the cycle and is negatively correlated with formal employment. This contributes to explaining why total employment in Mexico displays low cyclicality and variability over the business cycle when compared to Canada, a developed economy with a much smaller share of informal employment. To account for these empirical findings, we build a business cycle model of a small, open economy that incorporates formal and informal labor markets and calibrate it to Mexico. The model performs well in terms of matching conditional and unconditional moments in the data. It also sheds light on the channels through which informal economic activity may affect business cycles. Introducing informal employment into a standard model amplifies the effects of productivity shocks. This is linked to productivity shocks being imperfectly propagated from the formal to the informal sector. It also shows how imperfect measurement of informal economic activity in national accounts can translate into stronger variability in aggregate economic activity

Abstract:

At the beginning of 2003, the debate within the United Nations Security Council about the Iraq war was one of the few international events to draw so much attention from Mexican public opinion. This can be explained by the seriousness of the United States' violation of international law, but also by the difficult bilateral relations between Mexico, a non-permanent member of the Security Council, and the United States at the heart of the Iraqi crisis. Given the polarization of the debate between France and the United States at the United Nations, Mexico decided to side with France for three major reasons: the necessity of counterbalancing American power, cultural affinities with France, and an ingenuous attitude towards the specific interests defended by France

Abstract:

The parameter space of nonnegative trigonometric sums (NNTS) models for circular data is the surface of a hypersphere; thus, constructing regression models for a circular-dependent variable using NNTS models can comprise fitting great (small) circles on the parameter hypersphere that can identify different regions (rotations) along the great (small) circle. We propose regression models for circular- (angular-) dependent random variables in which the original circular random variable, which is assumed to be distributed (marginally) as an NNTS model, is transformed into a linear random variable such that common methods for linear regression can be applied. The usefulness of NNTS models with skewness and multimodality is shown in examples with simulated and real data

Abstract:

The probability integral transform of a continuous random variable X with distribution function F_X is a uniformly distributed random variable U = F_X(X). We define the angular probability integral transform (APIT) as Θ_U = 2πU = 2πF_X(X), which corresponds to a uniformly distributed angle on the unit circle. For circular (angular) random variables, the sum modulo 2π of absolutely continuous independent circular uniform random variables is a circular uniform random variable; that is, the circular uniform distribution is closed under summation modulo 2π, and it is a stable continuous distribution on the unit circle. If we consider the sum (difference) of the APITs of two random variables, X1 and X2, then testing the circular uniformity of their sum (difference) modulo 2π is equivalent to testing the independence of the original variables. In this study, we used a flexible family of nonnegative trigonometric sums (NNTS) circular distributions, which includes the circular uniform distribution as a member of the family, to evaluate the power of the proposed independence test by generating samples from NNTS alternative distributions that can lie in close proximity to the circular uniform null distribution
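
A small Python sketch of the test idea, with two simplifications of my own: empirical ranks replace the unknown distribution functions, and a plain Rayleigh test stands in for the NNTS likelihood-ratio test of circular uniformity.

    import numpy as np
    from scipy import stats

    def apit(x):
        # Angular probability integral transform via empirical ranks.
        return 2 * np.pi * stats.rankdata(x) / (len(x) + 1)

    def rayleigh_pvalue(theta):
        # Rayleigh test of circular uniformity (chi-square approximation).
        n = len(theta)
        rbar = abs(np.mean(np.exp(1j * theta)))
        return stats.chi2.sf(2 * n * rbar ** 2, df=2)

    rng = np.random.default_rng(1)
    x1 = rng.normal(size=500)
    for x2 in (rng.normal(size=500),              # independent of x1
               x1 + 0.5 * rng.normal(size=500)):  # dependent on x1
        theta = np.mod(apit(x1) - apit(x2), 2 * np.pi)
        print(rayleigh_pvalue(theta))  # small only in the dependent case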

Abstract:

Recent technological advances have enabled the easy collection of consumer behavior data in real time. Typically, these data contain the time at which a consumer engages in a particular activity such as entering a store, buying a product, or making a call. The occurrence times of certain events must be analyzed as circular random variables, with 24:00 corresponding to 0:00. To effectively implement a marketing strategy (pricing, promotion, or product design), consumers should be segmented into homogeneous groups. This paper proposes a methodology based on circular statistical models from which we construct a clustering algorithm based on the use patterns of consumers. In particular, we model temporal patterns as circular distributions based on nonnegative trigonometric sums (NNTSs). Consumers are clustered into homogeneous groups based on their vectors of parameter estimates by using a spherical k-means clustering algorithm. For this purpose, we define the parameter space of NNTS models as a hypersphere. The methodology is applied to three real datasets comprising the times at which individuals send short message service (SMS) messages and start voice calls, and the check-in times of users of the mobile application Foursquare

Abstract:

Fernández-Durán [Circular distributions based on nonnegative trigonometric sums. Biometrics. 2004;60:499–503] developed a new family of circular distributions based on non-negative trigonometric sums that is suitable for modelling data sets that present skewness and/or multimodality. In this paper, a Bayesian approach to deriving estimates of the unknown parameters of this family of distributions is presented. Because the parameter space is the surface of a hypersphere and the dimension of the hypersphere is an unknown parameter of the distribution, the Bayesian inference must be based on transdimensional Markov Chain Monte Carlo (MCMC) algorithms to obtain samples from the high-dimensional posterior distribution. The MCMC algorithm explores the parameter space by moving along great circles on the surface of the hypersphere. The methodology is illustrated with real and simulated data sets

Abstract:

The statistical analysis of circular, multivariate circular, and spherical data is very important in different areas, such as paleomagnetism, astronomy and biology. The use of nonnegative trigonometric sums allows for the construction of flexible probability models for these types of data to model datasets with skewness and multiple modes. The R package CircNNTSR includes functions to plot, fit by maximum likelihood, and simulate models based on nonnegative trigonometric sums for circular, multivariate circular, and spherical data. For maximum likelihood estimation of the models for the three different types of data an efficient Newton-like algorithm on a hypersphere is used. Examples of applications of the functions provided in the CircNNTSR package to actual and simulated datasets are presented and it is shown how the package can be used to test for uniformity, homogeneity, and independence using likelihood ratio tests

Resumen:

The objective of this article is to test the Kuznets inverted U-curve hypothesis for the relationship between per capita water consumption for agricultural and livestock use, which represents on average 75% of total consumption, and per capita GDP for the municipalities of the Lerma-Chapala basin. In the context of climate change, the relationship between water consumption and GDP is very important, since the variability in water availability has increased, forcing users and governments to consider strategies for its efficient use that include the possible economic and environmental impacts. When carrying out the analysis at the level of a hydrographic basin, it is necessary to consider the spatial effects among neighboring municipalities through the application of spatial autoregressive models. When spatially correlated errors are included in the regression models, the Kuznets inverted U-curve hypothesis is not rejected. Therefore, any climate change mitigation strategy related to the efficient use of water should have its costs and benefits evaluated in terms of municipal GDP in relation to the Kuznets curve estimated in this article

Abstract:

The main objective of this article is to test the Kuznets inverted U-curve hypothesis in the relation between water consumption, of which agriculture and farming represent on average 75%, and GDP in municipalities located in the Lerma-Chapala hydrographic basin. Given the context of climate change, it is essential to understand the relationship between water consumption and GDP: the increased variability in the availability of water has forced governments and users to implement strategies for the efficient use of water resources, and thus they must consider not only likely environmental problems but also economic impact. Using data at the municipal level in a hydrographic basin, we consider the spatial effects among the different municipalities; these effects are modeled using spatial autoregressive models. The Kuznets inverted U-curve hypothesis is not rejected when allowing for spatially correlated errors. Thus, any strategy for mitigating climate change by making efficient use of water resources must be evaluated in terms of its costs and benefits in the GDP of the municipality in relation to the fitted Kuznets curve presented in this article

Abstract:

The generational cohort theory states that groups of individuals who experienced the same social, economic, political, and cultural events during early adulthood (17–23 years) would share similar values throughout their lives. Moreover, they would act similarly when making decisions in different aspects of life, particularly when making decisions as consumers. Thus, these groups define market segments, which is relevant in the design of marketing strategies. In Mexico, marketing researchers commonly use U.S. generational cohorts to define market segments, despite sufficient evidence that the generational cohorts in the two countries are not identical because of differences in national historic events. This paper proposes a methodology based on change-point analysis and ordinal logistic regressions to obtain a new classification of generational cohorts for Mexican urban consumers, using data from a 2010 nationwide survey on the values of individuals across age groups

Abstract:

Fernández-Durán and Gregorio-Domínguez, Seasonal Mortality for Fractional Ages in Life Insurance. Scandinavian Actuarial Journal. A uniform distribution of deaths between integral ages is a widely used assumption for estimating future lifetimes; however, this assumption does not necessarily reflect the true distribution of deaths throughout the year. We propose the use of a seasonal mortality assumption for estimating the distribution of future lifetimes between integral ages: this assumption accounts for the number of deaths that occur in given months of the year, including the excess mortality that is observed in winter months. The impact of this seasonal mortality assumption on short-term life insurance premium calculations is then examined by applying the proposed assumption to Mexican mortality data

Abstract:

A family of distributions for a random pair of angles that determine a point on the surface of a three-dimensional unit sphere (three-dimensional directions) is proposed. It is based on the use of nonnegative double trigonometric (Fourier) sums (series). Using this family of distributions, data that possess rotational symmetry, asymmetry or one or more modes can be modeled. In addition, the joint trigonometric moments are expressed in terms of the model parameters. An efficient Newton-like optimization algorithm on manifolds is developed to obtain the maximum likelihood estimates of the parameters. The proposed family is applied to two real data sets studied previously in the literature. The first data set is related to the measurements of magnetic remanence in samples of Precambrian volcanics in Australia and the second to the arrival directions of low mu showers of cosmic rays

Abstract:

Fernández-Durán, J. J. (2004): “Circular distributions based on nonnegative trigonometric sums,” Biometrics, 60, 499–503, developed a family of univariate circular distributions based on nonnegative trigonometric sums. In this work, we extend this family of distributions to the multivariate case by using multiple nonnegative trigonometric sums to model the joint distribution of a vector of angular random variables. Practical examples of vectors of angular random variables include the wind direction at different monitoring stations, the directions taken by an animal on different occasions, the times at which a person performs different daily activities, and the dihedral angles of a protein molecule. We apply the proposed new family of multivariate distributions to three real data-sets: two for the study of protein structure and one for genomics. The first is related to the study of a bivariate vector of dihedral angles in proteins. In the second real data-set, we compare the fit of the proposed multivariate model with the bivariate generalized von Mises model of [Shieh, G. S., S. Zheng, R. A. Johnson, Y.-F. Chang, K. Shimizu, C.-C. Wang, and S.-L. Tang (2011): “Modeling and comparing the organization of circular genomes,” Bioinformatics, 27(7), 912–918.] in a problem related to orthologous genes in pairs of circular genomes. The third real data-set consists of observed values of three dihedral angles in γ-turns in a protein and serves as an example of trivariate angular data. In addition, a simulation algorithm is presented to generate realizations from the proposed multivariate angular distribution

Abstract:

The Bass Forecasting Diffusion Model is one of the most used models to forecast the sales of a new product. It is based on the idea that the probability of an initial sale is a function of the number of previous buyers. Almost all products exhibit seasonality in their sales patterns, and these seasonal effects can be influential in forecasting the weekly/monthly/quarterly sales of a new product, which is also relevant to making different decisions concerning production and advertising. The objective of this paper is to estimate these seasonal effects using a new family of distributions for circular random variables based on nonnegative trigonometric sums, and to use this family of circular distributions to define a seasonal Bass model. Additionally, comparisons in terms of one-step-ahead forecasts between the Bass model and the proposed seasonal Bass model are included for products such as iPods, DVD players, and the Wii Play video game
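
For concreteness, the sketch below produces a seasonal Bass forecast in Python; a fixed vector of monthly multipliers stands in for the NNTS circular density used in the paper, and the parameters p, q and m are invented values rather than estimates.

    import numpy as np

    def bass_F(t, p, q):
        # Cumulative adoption fraction of the Bass diffusion model.
        e = np.exp(-(p + q) * t)
        return (1 - e) / (1 + (q / p) * e)

    p, q, m = 0.03, 0.38, 1_000_000       # illustrative annual parameters
    months = np.arange(1, 37)
    base = m * (bass_F(months / 12, p, q) - bass_F((months - 1) / 12, p, q))

    season = np.array([0.8, 0.8, 0.9, 1.0, 1.0, 1.0,
                       0.9, 0.9, 1.0, 1.1, 1.3, 1.3])
    season = season / season.mean()       # average multiplier of one
    forecast = base * season[(months - 1) % 12]
    print(forecast[:12].round(0))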

Abstract:

In medical and epidemiological studies, the importance of detecting seasonal patterns in the occurrence of diseases makes testing for seasonality highly relevant. There are different parametric and nonparametric tests for seasonality. One of the most widely used parametric tests in the medical literature is the Edwards test, which considers a parametric alternative that is a sinusoidal curve with one peak and one trough. The Cave and Freedman test is an extension of the Edwards test that is also frequently applied and considers a sinusoidal curve with two peaks and two troughs as the alternative hypothesis. The Kuiper, Hewitt, and David and Newell tests are common non-parametric tests. Fernández-Durán (2004) developed a family of univariate circular distributions based on non-negative trigonometric (Fourier) sums (series) (NNTS) that can account for an arbitrary number of peaks and troughs. In this article, this family of distributions is used to construct a likelihood ratio test for seasonality considering parametric alternative hypotheses that are NNTS distributions

Resumen:

Social capital can be defined as the capacity of individuals or groups to obtain benefits through the use of social networks (Robinson, Siles and Schmid, and Flores and Rello, 2003). This article uses data from the second round of interviews of the Panel of Low-Income Households in the Álvaro Obregón borough, Mexico City (PAO), to fit instrumental variables regression models with the aim of identifying variables associated with social capital in social networks that are significant in explaining the percentage of income that PAO households typically spend on food. In addition, a factor analysis is presented to identify the variables measured in PAO that are related to the dimensions of trust, social networks, and acceptance of social norms that constitute the concept of social capital

Abstract:

Social capital can be defined as the capacity of individuals or groups to obtain benefits by participating in social networks (Robinson, Siles and Schmid, and Flores and Rello, in Atria and Siles, eds., 2003). In this paper, we use data from the second round of the Panel of Low Income Households in the Delegación Álvaro Obregón, México, D.F. (PAO) to fit regression models with instrumental variables in order to identify significant proxy variables related to social capital in social networks that explain the percentage of income that households in PAO spend on food. We also present the results of a factor analysis to identify the variables in PAO that are related to the dimensions of trust, social networks and accepted norms that are the main elements of the definition of social capital

Abstract:

In Fernández-Durán, a new family of circular distributions based on nonnegative trigonometric sums (NNTS models) is developed. Because the parameter space of this family is the surface of a hypersphere, an efficient Newton-like algorithm on manifolds is constructed in order to obtain the maximum likelihood estimates of the parameters

Abstract:

Johnson and Wehrly (1978, Journal of the American Statistical Association 73, 602-606) and Wehrly and Johnson (1980, Biometrika 67, 255-256) show one way to construct the joint distribution of a circular and a linear random variable, or the joint distribution of a pair of circular random variables, from their marginal distributions and the density of a circular random variable, referred to in this article as the joining circular density. To construct flexible models, the joining circular density must be able to present multimodality and/or skewness in order to model different dependence patterns. Fernández-Durán (2004, Biometrics 60, 499-503) constructed circular distributions based on nonnegative trigonometric sums that can present multimodality and/or skewness. Furthermore, they can be conveniently used as a model for circular-linear or circular-circular joint distributions. In the current work, joint distributions for circular-linear and circular-circular data constructed from circular distributions based on nonnegative trigonometric sums are presented and applied to two data sets, one with circular-linear data related to air pollution patterns in Mexico City and the other with circular-circular data related to the pairs of dihedral angles between consecutive amino acids in a protein

Abstract:

In many practical situations it is common to have information about the values of the mean and range of a certain population. In these cases it is important to specify the population distribution from the given values of its mean and range. The article by de Alba, Fernández-Durán and Gregorio-Domínguez (2004) includes a bibliography with articles dealing with the case of making inference about the mean and standard deviation of a normal population in terms of the observed sample mean and range. In this paper, the maximum entropy principle is used to specify the population distribution given its mean and range. This problem has the particular difficulty that it is necessary to determine the unknown support of the maximum entropy distribution given its length, which is equal to the specified value of the range
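
Schematically, in my notation rather than the paper's, the problem is to choose both the density and the location a of its support of length r:

    \max_{a,\,f}\; -\int_a^{a+r} f(x)\,\log f(x)\,dx
    \quad\text{subject to}\quad
    \int_a^{a+r} f(x)\,dx = 1,
    \qquad
    \int_a^{a+r} x\,f(x)\,dx = \mu

For a fixed support, the Lagrangian conditions give a truncated exponential, f(x) \propto e^{-\lambda x} on [a, a+r]; the difficulty noted above is that the location a must be determined jointly with the multiplier \lambda.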

Resumen:

Mexico is a country where various natural phenomena occur, such as floods, hurricanes, and earthquakes, which can turn into disasters requiring large sums of money to mitigate their economic effect on the affected population. Generally, these large sums of money are provided by the federal and/or local government. The objective of this work is to develop an actuarial methodology for pricing catastrophe bonds for natural disasters in Mexico, so that, if the catastrophic event occurs during the term of the bond, the government has additional funds and, if it does not occur, the investors who bought the bond obtain interest rates above the market risk-free rate. Unlike similar bonds issued in other countries, these bonds have the particular feature that, if the catastrophic event occurs, the investor does not lose the entire investment but only a part of it, or the total or part of the capital is deferred to a date after the end of the catastrophe bond contract, in order to make the bonds more attractive to the investor

Abstract:

Floods, hurricanes and earthquakes occur every year in Mexico. These natural phenomena can be considered catastrophes if they produce large economic damages in the affected areas. In these cases a huge amount of money is required to provide relief to the catastrophe victims and areas. Usually, in Mexico it is the local and/or federal governments that are responsible for providing these funds. The main objective of this article is to develop an actuarial methodology for the pricing of CAT bonds in Mexico in order to allow the government to have additional funds to provide relief to the affected victims and areas in case the catastrophic event occurs during the CAT bond period. If the catastrophic event does not occur during the CAT bond period, then the CAT bondholders get a higher interest rate than the (risk-free) reference interest rate in the market. To make the CAT bond more attractive to investors, the CAT bonds considered in this work have the additional characteristic that the CAT bondholders do not necessarily lose all their initial investment if the catastrophic event occurs. Instead, a percentage of the CAT bond principal is lost, or their initial investment is paid on a date after the end of the CAT bond period
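
A stylized valuation identity consistent with this payoff description is sketched below in Python; the annual catastrophe probability, coupon and principal write-down are illustrative values, not calibrated to Mexican data or to the paper's methodology.

    def cat_bond_price(face, coupon, rf, p_cat, alpha, T):
        # Expected present value at the risk-free rate rf, assuming one
        # catastrophe check per year with probability p_cat; on a
        # catastrophe the bond terminates paying (1 - alpha) of principal.
        pv, survive = 0.0, 1.0
        for t in range(1, T + 1):
            disc = (1 + rf) ** -t
            pv += survive * (1 - p_cat) * coupon * face * disc
            pv += survive * p_cat * (1 - alpha) * face * disc
            survive *= 1 - p_cat
        return pv + survive * face * (1 + rf) ** -T

    print(cat_bond_price(face=100, coupon=0.09, rf=0.05,
                         p_cat=0.02, alpha=0.5, T=3))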

Abstract:

A new family of distributions for circular random variables is proposed. It is based on nonnegative trigonometric sums and can be used to model data sets which present skewness and/or multimodality. In this family of distributions, the trigonometric moments are easily expressed in terms of the parameters of the distribution. The proposed family is applied to two data sets, one related with the directions taken by ants and the other with the directions taken by turtles, to compare their goodness of fit versus common distributions used in the literature
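
For readers unfamiliar with the family, the density can be written compactly as the squared modulus of a trigonometric sum (this is the form used in the NNTS literature, modulo normalization conventions):

    f(\theta) = \Bigl|\sum_{k=0}^{M} c_k e^{ik\theta}\Bigr|^{2},
    \qquad c_k \in \mathbb{C},
    \qquad \sum_{k=0}^{M} |c_k|^{2} = \frac{1}{2\pi}

Nonnegativity is automatic because f is a squared modulus, the constraint makes it integrate to one over [0, 2\pi), and expanding the square shows that f is a trigonometric sum of order M, which is why the trigonometric moments are simple functions of the coefficients c_k.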

Abstract:

Many random variables occurring in nature are circular random variables, i.e., their probability density function has period 2π and its support is the unit circle. The support of a linear random variable is a subset of the real line. When one is interested in the relation between a circular random variable and a linear random variable, it is necessary to construct their joint distribution. The support of the joint distribution of a circular and a linear random variable is a cylinder. In this paper, we use copulas and circular distributions based on non-negative trigonometric sums to construct the joint distribution of a circular and a linear random variable. As an application of the proposed methodology, the hourly quantile curves of ground-level ozone concentration for a monitoring station in Mexico City are analyzed. In this case the circular random variable has a uniform distribution, since there is the same number of observations in each hour of the day, and the linear random variable is the ground-level ozone concentration

Abstract:

Consider a portfolio of personal motor insurance policies in which, for each policyholder in the portfolio, we want to assign a credibility factor at the end of each policy period that reflects the claim experience of the policyholder compared with the claim experience of the entire portfolio. In this paper we present the calculation of credibility factors based on the concept of relative entropy between the claim size distribution of the entire portfolio and the claim size distribution of the policyholder
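
One natural reading of the construction is via the Kullback-Leibler divergence between the policyholder's claim size density f_j and the portfolio density f:

    D(f_j \,\|\, f) = \int f_j(x)\,\log \frac{f_j(x)}{f(x)}\,dx

D is zero exactly when the policyholder's claim experience matches that of the portfolio, and the credibility factor is then taken as a suitable monotone, normalized function of D (the precise functional form is the paper's and is not reproduced here).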

Abstract:

By using data from the elections for the Chamber of Deputies of 1997 and 2000 in Mexico, we fit spatial autologistic models with temporal effects to test the significance of spatial and temporal effects in those elections. The binary variable of interest is the one that indicates a win by the National Action Party (PAN) or the alliance that it formed. By spatial effect, we refer to the fact that neighbouring constituencies present dependence in their electoral results. The temporal effect refers to the existence of dependence, for the same constituency, between the result of the election and the result of the previous election. The model that we use to test the significance of spatial and temporal effects is the spatial autologistic model with temporal effects, whose estimation is complex and requires simulation techniques. Defining an urban constituency as one that contains at least one population center of 200,000 inhabitants or more, among our principal results we find that, for the Mexican election of 2000, the spatial effect is significant only when neighbouring constituencies are both urban. For the election of 1997, the spatial effect is significant independently of the type of neighbouring constituencies. The temporal effect is significant in both elections
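
The flavor of such a model can be seen from its full conditionals: the log-odds that a constituency is won receive a contribution from winning neighbours and from the previous election. The coefficients below are illustrative placeholders, not estimates from the paper.

```python
import numpy as np

def conditional_win_prob(beta0, spatial, temporal, winning_neighbours, won_before):
    """Full conditional P(win | rest) in a sketch of an autologistic model
    with a temporal term: logit(p) = beta0 + spatial * (number of winning
    neighbouring constituencies) + temporal * (previous result).
    Fitting the real model requires simulation techniques, as noted above."""
    eta = beta0 + spatial * winning_neighbours + temporal * won_before
    return 1.0 / (1.0 + np.exp(-eta))

# Purely illustrative coefficients:
p = conditional_win_prob(beta0=-0.4, spatial=0.35, temporal=0.9,
                         winning_neighbours=3, won_before=1)
print(f"P(PAN win | 3 winning neighbours, won previous election) = {p:.3f}")
```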

Abstract:

When a fixed-term loan is granted to a person to buy a durable consumer good, there is a possibility that the borrower will be unable to meet the loan payments because of involuntary job loss. This situation is a problem for both the debtor and the creditor. The creditor may incur an operational loss and the debtor may be dispossessed of the good. This paper establishes a methodology for calculating the premium of an unemployment insurance whose benefits, in case of involuntary unemployment of the debtor, are the payment of a maximum number (predetermined in the insurance contract) of the monthly loan installments during the unemployment spell. The proposed insurance has the same term as the loan, and only the first unemployment spell during the term of the loan is considered for the payment of benefits. The cost of the insurance is obtained by estimating the transition rates of a continuous-time Markov chain with two states (employed and unemployed). These transition rates are modeled as functions of covariates such as gender, marital status, age and education. The transition rates are estimated using data from the Encuesta Nacional de Empleo Urbano (ENEU) (INEGI, 1998). By considering all possible employment-unemployment trajectories over the duration of the loan, together with the probability of each of them, we obtain the probability distribution of the monthly amount required by the insurance, defined as the monthly amount the insured would have to pay to cover the monthly loan installments during the first unemployment spell if the trajectory under consideration occurred. From this probability distribution it is possible to calculate different measures, such as the standard deviation, or to quantify the risk of the insurance contract

Abstract:

When a person makes an installment purchase of a durable good, it is possible that he/she will be unable to make the periodic payments because he/she loses his/her employment involuntarily. This is an issue both for the borrower and for the creditor. The borrower can be deprived of the good and the creditor can incur an operational loss. In this paper we develop a methodology to calculate the net premium of an insurance against involuntary unemployment for installment purchases of durable goods. The benefits of this insurance are a predetermined number of installments paid during the unemployment spell. The insurance has the same starting date and duration as the installment purchase, and only the first involuntary unemployment spell is considered for benefits. The cost of the insurance is obtained by using a two-state (employed-unemployed) continuous-time Markov chain. The transition rates of the chain are modelled as functions of the covariates gender, marital status, age and educational level. By using these covariates it is possible to identify different risk groups. The transition rates are estimated by using data from the National Urban Employment Survey (Encuesta Nacional de Empleo Urbano, ENEU; INEGI, 1998) in Mexico. By considering all the possible employed-unemployed trajectories during the installment purchase period and the probability of each of these trajectories, it is possible to obtain the probability distribution of the required amount for the insurance, defined as the monthly amount that the borrower would have to pay in order to cover the payments of the credit during the first unemployment spell if the considered trajectory occurs. From this probability distribution it is possible to calculate different measures, such as the standard deviation, or quantiles to set safety limits on the cost of the insurance and to give insight into the riskiness of the insurance contract
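
A Monte Carlo sketch of this premium calculation under the stated two-state model; the monthly transition rates, installment amount and benefit cap below are invented for illustration and are not the ENEU-based estimates.

```python
import random

def simulate_premium(lam, mu, months, payment, max_benefit, n_sims=100_000):
    """Monte Carlo sketch (illustrative rates, not ENEU estimates):
    employment follows a two-state continuous-time Markov chain with
    monthly rates lam (employed -> unemployed) and mu (back to work).
    Only the first unemployment spell pays, at most max_benefit
    monthly installments, and never beyond the credit term."""
    total = 0.0
    for _ in range(n_sims):
        onset = random.expovariate(lam)          # time of first job loss
        if onset < months:                       # loss within credit term
            spell = random.expovariate(mu)       # length of the spell
            k = min(int(spell) + 1,              # installments in the spell
                    max_benefit,
                    months - int(onset))         # cannot outlive the credit
            total += k * payment
    return total / n_sims                        # net single premium

print(f"premium ~ {simulate_premium(0.01, 0.25, 36, 2000.0, 6):,.2f} MXN")
```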

Abstract:

A procedure based on the combination of a Bayesian changepoint model and ordinary least squares is used to identify and quantify regions where a radar signal has been attenuated (i.e. diminished) as a consequence of intervening weather. A graphical polar display is introduced that illustrates the location and importance of the attenuation

Abstract:

Democracy does not generate the same expectations among Mexicans. The conception they have of it varies according to their belief systems, and their level of information defines the terms in which Mexicans think of democracy as a form of government. This essay seeks to explain the differences that exist in conceptions of democracy across the country's regions. These differences cast doubt on the arguments claiming that there is a democratic deficit in the values of Mexicans. Their conception of democracy is a reflection of the environment in which the individual lives and, to a great extent, a mirror of the characteristics of the region they inhabit

Abstract:

We study a general scenario where confidential information is distributed among a group of agents who wish to share it in such a way that the data becomes common knowledge among them but an eavesdropper intercepting their communications would be unable to obtain any of said data. The information is modeled as a deck of cards dealt among the agents, so that after the information is exchanged, all of the communicating agents must know the entire deal, but the eavesdropper must remain ignorant about who holds each card. This scenario was previously set up in Fernández-Duque and Goranko (2014) as the secure aggregation of distributed information problem and provided with weakly safe protocols, where given any card c, the eavesdropper does not know with certainty which agent holds c. Here we present a perfectly safe protocol, which does not alter the eavesdropper's perceived probability that any given agent holds c. In our protocol, one of the communicating agents holds a larger portion of the cards than the rest, but we show how for infinitely many values of a, the number of cards may be chosen so that each of the m agents holds more than a cards and less than 4m²a cards

Abstract:

We consider the generic problem of Secure Aggregation of Distributed Information (SADI), where several agents acting as a team have information distributed amongst them, modelled by means of a publicly known deck of cards distributed amongst the agents, so that each of them knows only her cards. The agents have to exchange and aggregate the information about how the cards are distributed amongst them by means of public announcements over insecure communication channels, intercepted by an adversary "eavesdropper", in such a way that the adversary does not learn who holds any of the cards. We present a combinatorial construction of protocols that provides a direct solution of a class of SADI problems and develop a technique of iterated reduction of SADI problems to smaller ones which are eventually solvable directly. We show that our methods provide a solution to a large class of SADI problems, including all SADI problems with sufficiently large size and sufficiently balanced card distributions

Abstract:

This article uses possible-world semantics to model the changes that may occur in an agent's knowledge as she loses information. This builds on previous work in which the agent may forget the truth-value of an atomic proposition, to a more general case where she may forget the truth-value of a propositional formula. The generalization poses some challenges, since in order to forget whether a complex proposition π is the case, the agent must also lose information about the propositional atoms that appear in it, and there is no unambiguous way to go about this. We resolve this situation by considering expressions of the form [‡π]ϕ, which quantify over all possible (but 'minimal') ways of forgetting whether π. Propositional atoms are modified non-deterministically, although uniformly, in all possible worlds. We then represent this within action model logic in order to give a sound and complete axiomatization for a logic with knowledge and forgetting. Finally, some variants are discussed, such as when an agent forgets π (rather than forgets whether π) and when the modification of atomic facts is done non-uniformly throughout the model

Abstract:

Dynamic topological logic (DTL) is a polymodal logic designed for reasoning about dynamic topological systems. These are pairs (X,f), where X is a topological space and f : X → X is continuous. DTL uses a language L which combines the topological S4 modality with temporal operators from linear temporal logic. Recently, we gave a sound and complete axiomatization DTL* for an extension of the logic to the language L*, where the topological modality is allowed to act on finite sets of formulas and is interpreted as a tangled closure operator. No complete axiomatization is known in the language L, although one proof system, which we shall call KM, was conjectured to be complete by Kremer and Mints. In this article, we show that given any language L' such that L ⊆ L' ⊆ L*, the set of valid formulas of L' is not finitely axiomatizable. It follows, in particular, that KM is incomplete

Abstract:

Provability logics are modal or polymodal systems designed for modeling the behavior of Gödel's provability predicate and its natural extensions. If Λ is any ordinal, the Gödel-Löb calculus GLPΛ contains one modality [λ] for each λ < Λ, representing provability predicates of increasing strength. GLPω has no non-trivial Kripke frames, but it is sound and complete for its topological semantics, as was shown by Icard for the variable-free fragment and more recently by Beklemishev and Gabelaia for the full logic. In this paper we generalize Beklemishev and Gabelaia's result to GLPΛ for countable Λ. We also introduce provability ambiances, which are topological models where valuations of formulas are restricted. With this we show completeness of GLPΛ for the class of provability ambiances based on Icard polytopologies

Abstract:

This article studies the transfinite propositional provability logics GLPΛ and their corresponding algebras. These logics have for each ordinal ξ < Λ a modality 〈ξ〉. We will focus on the closed fragment of GLPΛ (i.e. where no propositional variables occur) and worms therein. Worms are iterated consistency expressions of the form 〈ξn〉…〈ξ1〉⊤. Beklemishev has defined well-orderings <ξ on worms whose modalities are all at least ξ and presented a calculus to compute the respective order-types. In the current article, we present a generalization of the original <ξ orderings and provide a calculus for the corresponding generalized order-types oξ. Our calculus is based on so-called hyperations, which are transfinite iterations of normal functions. Finally, we give two different characterizations of those sequences of ordinals which are of the form 〈oξ(A)〉ξ∈On for some worm A. One of these characterizations is in terms of a second kind of transfinite iteration called cohyperation

Abstract:

Ordinal functions may be iterated transfinitely in a natural way by taking pointwise limits at limit stages. However, this has disadvantages, especially when working in the class of normal functions, as pointwise limits do not preserve normality. To this end we present an alternative method to assign to each normal function f a family of normal functions Hyp[f] = (f^ξ)_{ξ∈On}, called its hyperation, in such a way that f^0 = id, f^1 = f and f^{α+β} = f^α ∘ f^β for all α, β. Hyperations are a refinement of the Veblen hierarchy of f. Moreover, if f is normal and has a well-behaved left-inverse g, called a left adjoint, then g can be assigned a cohyperation coH[g] = (g^ξ)_{ξ∈On}, which is a family of initial functions such that g^ξ is a left adjoint to f^ξ for all ξ

Abstract:

For any ordinal Λ, we can define a polymodal logic GLP(Λ), with a modality [ξ] for each ξ < Λ. These represent provability predicates of increasing strength. Although GLP(Λ) has no Kripke models, Ignatiev showed that indeed one can construct a Kripke model of the variable-free fragment with natural number modalities, denoted GLP⁰(ω). Later, Icard defined a topological model for GLP⁰(ω) which is very closely related to Ignatiev's. In this paper we show how to extend these constructions to arbitrary Λ. More generally, for each Θ, Λ we build a Kripke model J^Θ(Λ) and a topological model I^Θ(Λ), and show that GLP⁰(Λ) is sound for both of these structures, as well as complete, provided Θ is large enough

Abstract:

We show that given a finite, transitive and reflexive Kripke model 〈 W, ≼, [ ⋅ ] 〉 and w∈W , the property of being simulated by w (i.e., lying on the image of a literal-preserving relation satisfying the 'forth' condition of bisimulation) is modally undefinable within the class of S4 Kripke models. Note the contrast to the fact that lying in the image of w under a bisimulation is definable in the standard modal language even over the class of K4 models, a fairly standard result for which we also provide a proof. We then propose a minor extension of the language adding a sequent operator ♮ ('tangle') which can be interpreted over Kripke models as well as over topological spaces. Over finite Kripke models it indicates the existence of clusters satisfying a specified set of formulas, very similar to an operator introduced by Dawar and Otto. In the extended language L⁺ = L□♮, being simulated by a point on a finite transitive Kripke model becomes definable, both over the class of (arbitrary) Kripke models and over the class of topological S4 models. As a consequence of this we obtain the result that any class of finite, transitive models over finitely many propositional variables which is closed under simulability is also definable in L⁺, as well as Boolean combinations of these classes. From this it follows that the μ-calculus interpreted over any such class of models is decidable

Abstract:

Young citizens vote at relatively low rates, which contributes to political parties de-prioritizing youth preferences. We analyze the effects of low-cost online interventions in encouraging young Moroccans to cast an informed vote in the 2021 elections. These interventions aim to reduce participation costs by providing information about the registration process and by highlighting the election's stakes and the distance between respondents' preferences and party platforms. Contrary to preregistered expectations, the interventions did not increase average turnout, yet exploratory analysis shows that the interventions designed to increase benefits did increase the turnout intention of uncertain baseline voters. Moreover, information about parties' platforms increased support for the party closest to the respondents' preferences, leading to better-informed voting. Results are consistent with motivated reasoning, which is surprising in a context with weak party institutionalization

Abstract:

The present study introduces the two-sided and right-sided Quaternion Hyperbolic Fourier Transforms (QHFTs) for analyzing two-dimensional quaternion-valued signals defined in an open rectangle of the Euclidean plane endowed with a hyperbolic measure. The different forms of these transforms are defined by replacing the Euclidean plane waves with the corresponding hyperbolic plane waves in one dimension, giving the hyperbolic counterpart of the corresponding Euclidean Quaternion Fourier Transforms. Using hyperbolic geometry tools, we study the main operational and mapping properties of the QHFTs, such as linearity, shift, modulation, dilation, symmetry, inversion, and derivatives. Emphasis is placed on novel hyperbolic derivative and hyperbolic primitive concepts, which lead to the differentiation and integration properties of the QHFTs. We further prove the Riemann-Lebesgue Lemma and Parseval's identity for the two-sided QHFT. In addition, we establish the Logarithmic, Heisenberg-Weyl, Donoho-Stark, and Benedicks' uncertainty principles associated with the two-sided QHFT by invoking hyperbolic counterparts of the convolution, Pitt's inequality, and the Poisson summation formula. This work is motivated by the potential applications of the QHFTs and the analysis of the corresponding hyperbolic quaternionic signals

Abstract:

Developing new paradigms of user interaction is always challenging. The introduction of the Google Glass platform presents a novel way to deliver content to users. Clearly, the Glass platform is not going to become a mainstream consumer electronics product as it is; however, it was an experimental program from which important practical lessons can be learned. We, as part of the Google Glass Explorer Community, present this study as a contribution to the practical understanding of products that can be core for the development of micro-interaction-based interfaces for wearable gadgets in urban contexts. Throughout this paper we detail the development process of this kind of application by focusing on the challenges presented, the implementation and design decisions, and the usability tests we performed. The main results were that the use of the app is intuitive in general, but users had problems identifying several components that were adapted to the size of the screen and to the concept of the device

Abstract:

The complete twisted graph of order n, denoted by Tn, is a complete simple topological graph with vertices u1, u2, …, un such that two edges uiuj and ui'uj' cross if and only if i < i' < j' < j or i' < i < j < j'. The convex geometric complete graph of order n, denoted by Gn, is a convex geometric graph with vertices v1, v2, …, vn placed counterclockwise, in which every pair of vertices is adjacent. A biplanar tree of order n is a labeled tree with vertex set {v1, v2, …, vn} having the property of being planar when embedded in both Tn and Gn. Given a connected graph G, the (combinatorial) tree graph T(G) is the graph whose vertices are the spanning trees of G, and two trees P and Q are adjacent in T(G) if there are edges e ∈ P and f ∈ Q such that Q = P - e + f. For all positive integers n, we denote by T(n) the graph T(Kn). The biplanar tree graph, B(n), is the subgraph of T(n) induced by the biplanar trees of order n. In this paper we give a characterization of the biplanar trees and we study the structure, the radius and the diameter of the biplanar tree graph
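
The crossing rule for Tn stated above is easy to operationalize; the snippet below is a direct transcription of it.

```python
def cross_in_twisted(i, j, ip, jp):
    """Edges u_i u_j and u_i' u_j' of the complete twisted graph T_n cross
    iff i < i' < j' < j or i' < i < j < j' (the rule quoted above);
    indices are assumed to satisfy i < j and ip < jp."""
    return (i < ip < jp < j) or (ip < i < j < jp)

# Nested index pairs cross in T_n, while in the convex geometric graph G_n
# it is the interleaved pairs that cross:
print(cross_in_twisted(1, 4, 2, 3))   # True  (nested: 1 < 2 < 3 < 4)
print(cross_in_twisted(1, 3, 2, 4))   # False (interleaved)
```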

Abstract:

This article reviews a theory of mathematics teaching (APOE, the Spanish acronym for APOS theory), presenting its main characteristics and its use in the design of activities and didactic materials. In particular, we describe how it has been applied successfully in several linear algebra courses

Abstract:

This study presents a contribution to research on the undergraduate teaching and learning of linear algebra, in particular, the learning of matrix multiplication. A didactical experience consisting of a modeling situation and a didactical sequence to guide students' work on the situation was designed and tested using APOS theory. We show results of research on students' activity and learning while using the sequence, through analysis of students' work and assessment questions. The didactic sequence proved to have potential to foster students' learning of function, matrix transformations and matrix multiplication. A detailed analysis of those constructions that seem to be essential for students' understanding of this topic, including linear transformations, is presented. These results are contributions of this study to the literature

Abstract:

A minimum feedback arc set of a digraph D is a minimum set of arcs whose removal leaves the resulting digraph free of directed cycles; its cardinality is denoted by τ1(D). The acyclic disconnection of D, ω(D), is defined as the maximum number of colors in a vertex coloring of D such that every directed cycle of D contains at least one monochromatic arc. In this article we study the relationship between the minimum feedback arc set and the acyclic disconnection of a digraph, and we prove that the acyclic disconnection problem is NP-complete. We define the acyclic disconnection and the minimum feedback arc set for graphs. We also prove that ω(G) + τ1(G) = |V(G)| if G is a wheel, a grid or an outerplanar graph
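
For very small digraphs, τ1(D) can be computed by brute force directly from its definition (the general problem is NP-hard, in line with the complexity theme of the abstract); a sketch:

```python
from itertools import combinations

def has_cycle(n, arcs):
    """DFS test for a directed cycle on vertices 0..n-1."""
    adj = {v: [] for v in range(n)}
    for u, v in arcs:
        adj[u].append(v)
    color = [0] * n                      # 0 = new, 1 = active, 2 = done
    def dfs(u):
        color[u] = 1
        for w in adj[u]:
            if color[w] == 1 or (color[w] == 0 and dfs(w)):
                return True
        color[u] = 2
        return False
    return any(color[v] == 0 and dfs(v) for v in range(n))

def tau1(n, arcs):
    """Brute-force minimum feedback arc set size: smallest k such that
    some k arcs can be removed to leave the digraph acyclic."""
    for k in range(len(arcs) + 1):
        for removed in combinations(range(len(arcs)), k):
            kept = [a for i, a in enumerate(arcs) if i not in removed]
            if not has_cycle(n, kept):
                return k

# A directed triangle plus an opposite chord; removing (2, 0) suffices.
print(tau1(3, [(0, 1), (1, 2), (2, 0), (0, 2)]))   # 1
```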

Abstract:

In this paper we relate the global irregularity and the order of a c-partite tournament T to the existence of certain cycles and to the problem of finding the maximum strongly connected subtournament of T. In particular, we give results related to the following problem of Volkmann: how close to regular must a c-partite tournament be to secure a strongly connected subtournament of order c?

Abstract:

Given a set of cycles C of a graph G, the tree graph of G defined by C is the graph T(G,C) whose vertices are the spanning trees of G and in which two trees R and S are adjacent if the union of R and S contains exactly one cycle and this cycle lies in C. Li et al. [Discrete Math 271 (2003), 303-310] proved that if the graph T(G,C) is connected, then C cyclically spans the cycle space of G. Later, Yumei Hu [Proceedings of the 6th International Conference on Wireless Communications Networking and Mobile Computing (2010), 1-3] proved that if C is an arboreal family of cycles of G which cyclically spans the cycle space of a 2-connected graph G, then T(G, C) is connected. In this note we present an infinite family of counterexamples to Hu's result
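
The adjacency rule for T(G, C) can be tested directly: the union of two spanning trees that differ by one edge exchange is unicyclic, and the resulting cycle must belong to C. A small self-contained sketch (not code from the note):

```python
def union_cycle(n, tree_r, tree_s):
    """If the union of two spanning trees of an n-vertex connected graph
    contains exactly one cycle, return it as a frozenset of edges."""
    edges = set(map(frozenset, tree_r)) | set(map(frozenset, tree_s))
    if len(edges) != n:              # a connected unicyclic union has n edges
        return None
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    while True:                      # prune leaves until only the cycle remains
        leaves = [v for v, d in deg.items() if d == 1]
        if not leaves:
            break
        for v in leaves:
            for e in [e for e in edges if v in e]:
                edges.remove(e)
                for w in e:
                    deg[w] -= 1
            del deg[v]
    return frozenset(edges)

def adjacent_in_TGC(n, tree_r, tree_s, allowed_cycles):
    """R and S are adjacent in T(G, C) iff their union has exactly one
    cycle and that cycle belongs to C."""
    cyc = union_cycle(n, tree_r, tree_s)
    return cyc is not None and cyc in allowed_cycles

C4 = frozenset(map(frozenset, [(0, 1), (1, 2), (2, 3), (3, 0)]))
R = [(0, 1), (1, 2), (2, 3)]
S = [(1, 2), (2, 3), (3, 0)]
print(adjacent_in_TGC(4, R, S, {C4}))   # True
```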

Abstract:

Let T be a 3-partite tournament and F3(T) be the set of vertices of T not in triangles. We prove that, if the global irregularity of T, ig(T), is one and |F3(T)| > 3, then F3(T) must be contained in one of the partite sets of T and |F3(T)| <= ((k+1)/4) + 1, which implies |F3(T)| <= ((n+5)/12) + 1, where k is the size of the largest partite set and n the number of vertices of T. Moreover, we give some upper bounds on the number of such vertices, as well as results on their structure within the digraph, depending on its global irregularity

Abstract:

The history of the Second Empire in Mexico is framed within the international context by studying the influence of the European situation and conflicts of the time, the diplomacy conducted by Maximilian, and the role of the United States in the face of the French Intervention

Abstract:

This paper frames the history of the Second Empire in Mexico within the international context, studying the influence of the European situation and conflicts of the time, the diplomacy developed by Maximilian, and the role of the United States in the face of the French Intervention

Abstract:

The editor provides biographical notes on Francisco de Arrangoiz, the author of a little-known pamphlet published in French under the title La chute de l'empire du Mexique, par un mexicain, in which Arrangoiz attacks Maximilian's Empire from his conservative perspective for its liberal policy and defends the Mexican clergy and the Pontifical State. The pamphlet, translated into Spanish, is presented at the end

Abstract:

The editor produces biographical notes about Francisco de Arrangoiz, who wrote a little-known pamphlet published in French entitled La chute de l'empire du Mexique, par un mexicain, in which Arrangoiz, from his conservative perspective, attacks Maximilian's Empire for its liberal policy and defends the Mexican clergy and the Pontifical State. The pamphlet, translated into Spanish, is presented at the end of the text

Abstract:

This article compares and analyzes the view of Frances Erskine Inglis, Madame Calderón de la Barca, of two military uprisings in Mexico in 1840 and 1841 and a revolution in Madrid in 1854. The author was a Scottish writer married to the Spanish diplomat and statesman Ángel Calderón de la Barca, whom she accompanied on his diplomatic missions in Mexico and the United States and remained beside during his service as Minister of State in Spain; her time in Mexico was captured in La vida en México, and her life in the peninsula in The Attaché in Madrid, works in which one can read her account of and commentary on these political events

Abstract:

This paper compares and analyzes Frances Erskine Inglis, Madame Calderón de la Barca's point of view on the 1840 and 1841 military rebellions in Mexico and the 1854 Spanish revolution. The author was a Scottish writer, married to the Spanish diplomat and statesman Ángel Calderón de la Barca, whom she followed during his diplomatic missions in Mexico and the United States, later staying by his side during his service as Minister of State in Spain. Her time in Mexico was embodied in La vida en México, and her life in the Spanish peninsula in The Attaché in Madrid, literary works in which it is possible to read the account and commentary of the political events mentioned above

Abstract:

This article compares and analyzes the perception that Frances Erskine Inglis, Madame Calderón de la Barca, had of two military revolts taking place in Mexico in 1840 and 1841 and of the Spanish revolution of 1854. The author was a Scottish writer married to the Spanish diplomat and statesman Ángel Calderón de la Barca, whom she accompanied on his diplomatic missions in Mexico and the United States, later remaining at his side while he served as Minister of State in Spain. Her time in Mexico materialized in La vida en México, and her life in the Spanish peninsula in The Attaché in Madrid, works in which one can read the account and commentary of the aforementioned political events

Abstract:

This article narrates and analyzes the main political, social, economic and international events that shaped the history of Mexico between 1855 and 1867, the period of the creation of the republican, liberal, federal and secular State, which had to confront the conservative, centralist and clerical option. In the end, it opposed the monarchist, liberal, centralist and regalist project: this is the watershed of modern Mexico

Abstract:

This article chronicles and analyzes the major Mexican political, social, economic, and international events of the period 1855-1867. This was the time of the creation of the republican, liberal, federal, and secular State in opposition to the conservative, centralist, and clerical alternative. By the end of that period, it faced the monarchist, liberal, centralist, and royalist agenda. It is the turning point in modern Mexican history

Abstract:

This paper focuses on the analysis of El Correo Español, the newspaper of the Spanish colony in Mexico, and its attitude during the war of 1898

Abstract:

In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not computationally feasible. To address this we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as in a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets
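
The paper's Bayes-factor and diagnostic computations are not reproduced here, but the underlying comparison can be imitated with off-the-shelf variational truncations of DPMs: fit a joint mixture and a product of marginal mixtures, then compare held-out predictive log-likelihoods. A sketch using scikit-learn's BayesianGaussianMixture:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.7, size=n)       # a dependent pair
data = np.c_[x, y]
train, test = data[: n // 2], data[n // 2 :]

def dpm(samples):
    """Truncated Dirichlet-process mixture of Gaussians (variational)."""
    return BayesianGaussianMixture(
        n_components=20,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500, random_state=0).fit(samples)

joint = dpm(train)                                # dependence model
margs = [dpm(train[:, [j]]) for j in (0, 1)]      # independence model

ll_joint = joint.score(test)                      # mean held-out log-lik.
ll_indep = sum(m.score(test[:, [j]]) for j, m in enumerate(margs))
print(f"diagnostic (joint - independent): {ll_joint - ll_indep:.3f} nats")
```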

Abstract:

We study how proximate neighbors affect one's propensity to vote using data on 12 million registered voters in Mexico. To identify this effect, we exploit idiosyncratic variation at the neighborhood block level resulting from approximately one million relocation decisions. We find that when individuals move to blocks where people vote more (less) they themselves start voting more (less). We show that this finding is not the result of selection into neighborhoods or of place-based factors that determine turnout, but rather peer effects. Consistent with this claim, we find a contagion effect for non-movers and show that neighbors from the same block are much more likely to perform an electoral procedure on the same exact day as neighbors who live on different blocks within a neighborhood

Abstract:

We present an integrated platform for the visualization, analysis, segmentation and reconstruction of MR brain images. Our tool allows the user to interactively analyze a stack of MR images, or to automatically segment multiple anatomical structures and perform 3D reconstructions. The user can also interactively guide the segmentation process to produce better quality results. The tool is light and fast, lending itself to be used as general-purpose MR imaging manipulation software

Abstract:

The article presents a possible solution to a typical problem of generating tomographic images from data of an industrial process located in a pipeline or vessel. These data are capacitance measurements obtained non-invasively according to the well-known ECT technique (Electrical Capacitance Tomography). Each 313-pixel image frame is derived from 66 capacitance measurements sampled from the real-time process. The neural nets have been trained using the backpropagation algorithm, with training samples created synthetically from a computational model of the real ECT sensor. To create the image, 313 neural nets, each with 66 inputs and one output, are used in parallel. The resulting image is finally filtered and displayed. The different ECT system stages, along with the different tests performed with synthetic and real data, are reported. We show that our method is a faster and more precise practical alternative to previously reported ones
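
A toy version of the per-pixel architecture: one small backprop-trained network per pixel, each mapping the 66 capacitance measurements to one pixel intensity. The synthetic data below stands in for the simulated sensor model, and only a few of the 313 nets are trained to keep the demo fast.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

N_MEASUREMENTS, N_PIXELS = 66, 313     # as in the ECT setup described above

# Synthetic stand-in for the simulated training pairs:
rng = np.random.default_rng(1)
X = rng.random((500, N_MEASUREMENTS))             # capacitance vectors
W = rng.random((N_MEASUREMENTS, N_PIXELS))
Y = (X @ W > 0.5 * W.sum(axis=0)).astype(float)   # per-pixel "images"

# One small backprop-trained net per pixel; training all 313 of them is
# embarrassingly parallel, but only 3 are fitted here for speed.
nets = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=300,
                     random_state=0).fit(X, Y[:, p])
        for p in range(3)]

frame = np.array([net.predict(X[:1])[0] for net in nets])
print("first pixels of the reconstructed frame:", frame.round(2))
```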

Abstract:

Health sector resources have fallen alarmingly in recent years. More than ever, sustainable policies and financing strategies are needed so as not to leave fifty million Mexicans adrift without access to health services

Abstract:

Universal coverage is a goal that must be reached as the result of a process, not of a decree. This article offers reflections on how to pursue that objective

Abstract:

The Mexican scientific production published in mainstream journals included in the Web of Science (Clarivate Analytics) for the period 1995-2015 is analyzed. For this purpose, the bibliometric data of the 32 states were organized into five groups according to the following criteria: research production, institutional sectors, and number of research centers. Our findings suggest that there has been an important deconcentration of scientific activities, mainly towards public state universities, as a consequence of various public policies. While institutions located in Mexico City (CX) published 48.1% of the whole Mexican scientific production during this period, there are two other groups of states with rather different productivity: 13 of them published 40% of the output and the remaining entities (18) published just 11%. Our findings suggest that the highest research performance corresponds to those federal entities where there are branches of higher education institutions located in CX. We also identify the institutional sectors that contribute importantly to a specific research output for each federal entity. The results of this study could be useful to improve science and technology public policies in each state

Abstract:

This paper presents a networking of two theories, the APOS Theory and the ontosemiotic approach (OSA), to compare and contrast how they conceptualize the notion of a mathematical object. As a context for reflection, we designed an APOS genetic decomposition for the derivative and analyzed it from the point of view of OSA. Results of this study show some commonalities and some links between these theories and signal the complementary nature of their constructs

Abstract:

In this paper, after a brief overview of APOS theory and the OSA, we analyze a genetic decomposition of the derivative, built using the theoretical constructs of APOS, from the perspective of the OSA. This OSA-based reading focuses on the encapsulation of processes into objects that, according to APOS, takes place in that genetic decomposition. It allows us to conclude that the way the encapsulation of processes into objects is conceptualized in APOS provides no information about the nature of the object that has emerged or about its changes of nature

Abstract:

In this paper, after a brief overview of APOS and the OSA, we analyze a genetic decomposition of the derivative, built using the theoretical constructs of APOS, from the perspective of the OSA. This OSA perspective focuses on the encapsulation of processes into objects that, according to APOS, takes place in this genetic decomposition. This view allows us to conclude that the way of conceptualizing the encapsulation of processes into objects in APOS does not account for the nature of the object that has emerged, nor for its changes of nature

Abstract:

As robotic systems become increasingly capable of complex sensory, motor and information processing functions, the ability to interact with them in an ergonomic, real-time and adaptive manner becomes an increasingly pressing concern. In this context, the physical characteristics of the robotic device should become less of a direct concern, with the device being treated as a system that receives information, acts on that information, and produces information. Once the input and output protocols for a given system are well established, humans should be able to interact with these systems via a standardized spoken language interface that can be tailored, if necessary, to the specific system. The objective of this research is to develop a generalized approach for human-machine interaction via spoken language that allows interaction at three levels. The first level is that of commanding or directing the behavior of the system. The second level is that of interrogating or requesting an explanation from the system. The third and most advanced level is that of teaching the machine a new form of behavior. The mapping between sentences and meanings in these interactions is guided by a neuropsychologically inspired model of grammatical construction processing. We explore these three levels of communication on two distinct robotic platforms. The novelty of this work lies in the use of the construction grammar formalism for binding language to meaning extracted from video in a generative and productive manner, and in thus allowing the human to use language to command, interrogate and modify the behavior of the robotic systems

Abstract:

We consider a curved Sitnikov problem, in which an infinitesimal particle moves on a circle under the gravitational influence of two equal masses in Keplerian motion within a plane perpendicular to that circle. There are two equilibrium points, whose stability we study. We show that one of the equilibrium points undergoes stability interchanges as the semi-major axis of the Keplerian ellipses approaches the diameter of that circle. To derive this result, we first formulate and prove a general theorem on stability interchanges, and then we apply it to our model. The motivation for our model resides in the n-body problem in spaces of constant curvature

Abstract:

We study the restricted 3-body problem with motion restricted to the unit circle. First, we study the 2-body problem on the unit circle and give explicit solutions for a regularized version of the equations of motion for any initial data. We classify the motions into elliptic, parabolic and hyperbolic types and an equilibrium state. Then, we analyze the restricted 3-body problem on the unit circle when the primary bodies perform elliptic and hyperbolic motions. We show the existence of just one equilibrium state when the masses of the primary bodies are equal, and we exhibit the hyperbolic structure of this equilibrium point via an exponential dichotomy. In the last part we regularize the equations of motion. We show the global dynamics and some periodic solutions with their respective periods

Abstract:

We introduce a modular methodology for evaluating the perceived quality of video streams in simulated packet networks through the use of artificial neural networks. One particular implementation of the test bed is presented, and the results obtained with it under simple network configurations are discussed. Our tool was able to accurately predict the MOS scores of human viewers. Other applications of the test bed are also presented. From the data analysis, we found that the usual parameters that can be controlled in an MPEG-4 codec do not have as strong an influence on the perceived video quality as a good network design that protects the video flows may have

Abstract:

We analyze the convergence of a continuous interior penalty (CIP) method for a singularly perturbed fourth-order elliptic problem on a layer-adapted mesh. On this anisotropic mesh, we prove, under reasonable assumptions, uniform convergence of almost order k - 1 for finite elements of degree k ≥ 2. This result is of better order than the known robust result on standard meshes. A by-product of our analysis is an analytic lower bound for the penalty of the symmetric CIP method. Finally, our convergence result is verified numerically

Abstract:

Conversational agents and dialogue systems (voice control interfaces, chatbots, personal assistants) are gaining momentum as human-computer interaction techniques in the digital society. Chatting with Mitsuku (a conversational artificial intelligence), you can see that it is capable of following a conversation, remembering data and even accepting corrections. Although we still have to wait a while for it to be truly conversational, and virtual assistants are not yet made for real conversations, we must be prepared, because they are expanding rapidly and making the leap to everyday services

Abstract:

Conversation agents and dialogue systems (voice control interfaces, chatbots, personal assistants) are gaining momentum as human-computer interaction techniques in the digital society. When you talk with Mitsuku (a conversational artificial intelligence), you realize that it is able to follow a logical conversation, accept corrections, and remember data and information. But we still have to wait a little longer for it to be truly conversational; virtual assistants are not made for real conversations yet. We must be prepared, because they are in full expansion and are making the leap to daily-life services

Abstract:

Since 1997, Mexico -like many Latin American countries- has seen significant public and private investment aimed at incorporating information technology into the classroom. In-class information technology holds the great promise of revolutionizing the teaching-learning process. The reality is different, often showing negligible impact from such efforts. One common criticism has been the failure to properly train and support teachers before rolling out technology for use in their classrooms. This article looks at a recent effort of the Mexican government to address the issue of teacher training and support. @prende 2.0 was a program of the Mexican federal government that involved 2,700 digital trainers who trained more than 63,000 teachers in the use of the technological equipment they would be provided. Drawing on administrative information and hard data from @prende 2.0, this article analyzes the program's successes and challenges in order to fashion a series of recommendations regarding similar training and support efforts

Abstract:

In this work we present a computational system for learning support called SAGE (Sistema de Apoyo Generalizado para la Enseñanza Individualizada), designed to offer a teaching plan for each student according to their skills and knowledge, based on a taxonomy of learning objectives. To achieve this, a content map and Bloom's Taxonomy were used. The content map organizes subjects from the general to the particular through a Morganov-Heredia matrix in which dependencies are established and the existing relationships between the course subjects are represented. The model of skills and knowledge is based on the first four cognitive levels of Bloom's Taxonomy; it is obtained from a diagnostic test and updated according to the students' progress. The system consists of three modules that were created according to the object-oriented methodology: the student module, the teacher module and the interface module. With the student module, students take the lessons and consult their evaluations; with the teacher module, teachers register students in the course and follow up on their progress; and the interface module provides simple interaction with the users, keeping the student's attention during the lessons and facilitating the teacher's queries. Our final system is content-free, integrates support tools such as games and practice exercises, and allows students and teachers to check the progress of the course by comparing scores with the group average, showing positions within the group, medians and standard deviations, as well as charts that show progress from one chapter to another. We also propose three improvements for the system: a clustering analysis of the students' cognitive characteristics to determine grouping profiles, and the Bayesian Knowledge Tracing method based on those profiles so that progress depends on the probabilities of students having the required knowledge but failing

Abstract:

Blockchain has the potential to transform the financial services industry, institutional functions, business operations, and other areas such as education. The current paper focuses on one real-world illustration of blockchain's potential: a pilot project that used a blockchain (hosted on Ethereum) to store certifications for 1,518 teachers who participated in a teacher training in Mexico

Abstract:

The objective of our investigation was to design a formal mentoring program for novice professors who come from another culture and are recent graduates from a doctoral program. We studied a sample of eight international novice professors in the program to demonstrate its effectiveness. What distinguishes this program from others is that it offers mentoring to help professors both improve the quality of their instruction and adapt to the culture of the country and of the university. The methodology used was a case study with a pre-test, intervention, post-test design. The professors who participated in the mentoring program demonstrated an improvement of 0.95 points (on a five-point scale, p < 0.01) in their student evaluations. The principal causes of deficient teaching performance among the novice international faculty were the absence of pedagogical knowledge and the lack of teaching experience, as well as a lack of familiarity with the country's culture and the organizational culture of the university. Our study shows that a mentoring program can help improve the low student evaluations of novice professors who come from another culture and are recent graduates of a doctoral program

Abstract:

Advances in serious games have influenced education and transformed the teaching-learning processes, among other aspects. There are educational games that have improved particular math skills but lack the goals of an educational paradigm or engaging elements. Our work follows a pedagogical-approach methodology, and our final goal is to improve math competences: according to the PISA exam, 55% of Mexican students do not have the mathematics skills needed to compete in the working world

Abstract:

This article makes two different contributions: an analysis of learning styles among undergraduate students in different academic programs, and a proposed regrouping of programs in order to improve teaching practice. The study was conducted in Mexico City in a Mexican private university (Instituto Tecnológico Autónomo de México - ITAM), among a sample of 753 first-year students in 11 undergraduate degree programs, applying the learning styles questionnaire developed by Felder and Silverman. The results of our research showed that there were similarities between the learning styles of some programs, which can be grouped into four major categories: 1) active, sensitive, visual and sequential learning styles in the Administration, Business Engineering, Economics, Industrial Engineering and Law programs; 2) active-reflective, sensitive, visual and sequential learning styles in the Actuarial and Accounting programs; 3) active-reflective, sensitive-intuitive, visual and sequential-global in the Applied Mathematics, Computer Engineering and Telematics Engineering programs; 4) active, sensitive-intuitive, visual and sequential-global in the International Relations program. The results of our investigation imply that courses should be planned taking into account learning styles shared by the students in different programs, adjusting teaching techniques (electronic media, for example) in order to optimize learning

Abstract:

Nowadays the use of video is a natural process for digital-native students. Several aspects of instructional video in e-learning or in traditional learning have not yet been well investigated. A major problem with the use of instructional video has been the lack of interactivity [1]. It is difficult to manage video: students cannot directly jump to a particular part of a video, and neither the teacher nor the student can add explanations to a specific part. Browsing a non-interactive video is more difficult and time-consuming, because people have to view and listen to the video sequentially; it remains a linear process. We designed and developed an interactive online video platform that allows proactive and random access to video content based on questions or search targets, the use of an interactive word glossary, a dictionary, online books, educational video resources, extra explanations from teachers and comments from students in real time. If learners can determine what to construct or create, they are more likely to engage in learning. Interactive video increases learner-content interactivity, thus potentially motivating students and improving learning effectiveness [1]. We are evaluating this system at several universities in Mexico

Abstract:

Advances in Information and Communications Technology (ICT) have influenced education and transformed the teaching-learning processes, among other aspects. It has been established that teachers' knowledge of digital media, their design and their pedagogical application is extremely relevant to improving their teaching activities. As Salinas (1997) says, "teachers are essential at the time of initiating any change. Their knowledge and skills are essential for the correct operation of a program". Therefore, there is a need to extend the variety of educational experiences they can offer to students when technology is available in their environment and has become part of their culture. In this paper, we discuss how important the use of technological means is as part of teaching strategies, and we propose a teacher's guide for selecting suitable technology for specific didactic strategies based on students' learning styles

Abstract:

Research on learning processes has shown that students tend to learn in different ways and prefer to use different teaching resources. Understanding learning styles can serve to identify and implement better teaching and learning strategies, so that students acquire new knowledge more effectively and efficiently. Here, we analyze similarities and differences between the learning styles of students enrolled in computing courses in Engineering and Social Sciences programs at the Instituto Tecnológico Autónomo de México (ITAM). Additionally, we analyze similarities and differences in the teaching strategies of their corresponding professors. A comparative analysis of the students' learning profiles and course outcomes suggests that there are strong similarities between the students' learning styles and their professors' teaching strategies, despite the differences between their academic programs. There is also a consistent pattern of how these students learn: Active, Sensitive, Visual and Sequential. The last part of this article discusses how these findings could have significant implications for the development of effective pedagogical strategies and of specific multimedia didactic materials for each educational program

Abstract:

Research on learning processes has shown that students tend to learn in different ways and prefer to use different teaching resources. The understanding of learning styles can be used to identify, and implement, better teaching and learning strategies, in order to allow students to acquire new knowledge in a more effective and efficient way. In this study we analyze similarities and differences in learning styles among students enrolled in computing courses, in engineering and social sciences programs at the Instituto Tecnológico Autónomo de México (ITAM). In addition, we also analyze similarities and differences among the teaching strategies shown by their corresponding teachers. A comparative analysis of student learning profiles and course outcomes allows us to suggest that, despite academic program differences, there are strong similarities among the students' learning styles, as well as among the teaching styles of their professors. Seemingly, a consistent pattern of how these students learn also exists: Active, Sensitive, Visual and Sequential. At the end of the paper, we discuss how these findings might have significant implications in developing effective pedagogic strategies, as well as didactic multimedia-based materials, for each one of these academic programs

Abstract:

Recent research on the learning process has shown that students tend to learn in different ways and that they prefer to use different teaching resources as well. Many researchers agree on the fact that learning materials shouldn't just reflect the teacher's style, but should be designed for all kinds of students and all kinds of learning styles. Even though they agree on the importance of applying these learning styles to different learning systems, various problems still need to be solved, such as matching teaching contents with the student's learning style. In this paper, we describe the design of a personalized teaching method that is based on an adaptive taxonomy using Felder and Silverman's learning styles, combined with the selection of the appropriate teaching strategy and the appropriate electronic media. Students are able to learn and to efficiently improve their learning process with such a method

Abstract:

In distance learning, the intervention of an adviser is essential for coaching students. In this modality there are no space and time restrictions: students have control over when and how they carry out their lessons, and the adviser is responsible for responding to all their questions. Often the advisers are unable to answer immediately because of the amount and diversity of questions posed by students. Students need support at all times in order to continue learning when problems arise; if they don't have answers right away, they aren't able to go on. This is why the adviser can benefit from a software tool that stores his experience and knowledge for reuse and quickly generates solutions to students' problems. In this paper, we describe the design of an information system (MC) that uses ideas from Case-Based Reasoning (CBR) in order to reply flexibly, efficiently and immediately to the questions students encounter

Abstract:

Recent research on the learning process has shown that students tend to learn in different ways and that they prefer to use different teaching resources as well. Many researchers agree on the fact that learning materials shouldn't just reflect the teacher's style, but should be designed for all kinds of students and all kinds of learning styles [8]. Even though they agree on the importance of applying these learning styles to different learning systems, various problems still need to be solved, such as matching teaching contents with the student's learning style. In this paper, we describe the design of a personalized teaching environment that is based on an adaptive taxonomy using Felder and Silverman's learning styles, combined with the selection of the appropriate teaching strategy and the appropriate electronic media. Students are able to learn and to efficiently improve their learning process with such a method

Abstract:

Nowadays, new educational scenarios are emerging along with technological breakthroughs in Information Technologies (IT), which allow us to modify traditional teaching methods. Due to this situation, we ought to think about satisfying growing educational needs using new didactic resources, new tools which make teaching-learning environments more flexible, adding electronic media provided by communication networks and by informatics. Regarding learning, we find that not everyone learns the same way. Each person has a particular set of learning abilities, and thus we can identify the preferences that constitute his or her learning style. Knowing learning styles helps both teachers and researchers: better teaching-learning strategies can be elaborated to assimilate new information and knowledge in an effective and more efficient way. In this research, the challenge is to use the vast resources offered by informatics to create a suitable environment for the development of individuals with different skills, for example, fostering intellectual growth and the expansion of abilities based on the correct use of electronic media and of teaching-learning methods when learning a new subject. In this work, a computer program for instructional aid is provided, in which two educational aspects that so far have been only partially integrated are incorporated into one educational environment: computer science and educational psychology (although both of them have previously been used in education)

Abstract:

Fraser compared the 101 ABET-accredited industrial engineering programs by location, size, and other descriptors, as well as by the inclusion of different courses in the curricula. Except for two programs in Puerto Rico, all of these programs are in the United States. In this paper, we extend that comparison to include industrial engineering programs in other countries in order to find ideas that US programs (and programs in other countries that use the US model) should consider adopting from IE programs outside the US. We found differences in the total number of credit hours and in the number of years required for the IE degree, in the amount of general education included in the degree, and in the strength of ties to industry. We noted trends toward standardization of degrees in certain countries and regions and toward international links among programs. We make two recommendations related to partners: IE programs should seek partnerships with mechanical engineering and with business programs, and IE programs should seek partners among universities in other countries

Abstract:

The COVID-19 disease constitutes a global health contingency. It has left millions of people infected, and its spread has increased dramatically. This study proposes a new method based on a Convolutional Neural Network (CNN) and a temporal Component Transformation (CT), called CNN-CT. The method is applied to confirmed cases of COVID-19 in the United States, Mexico, Brazil, and Colombia. The CT changes daily predictions and observations to weekly components and vice versa. In addition, CNN-CT adjusts the predictions made by the CNN using AutoRegressive Integrated Moving Average (ARIMA) and Exponential Smoothing (ES) methods. This combination of strategies provides better predictions than most of the individual methods by themselves. In this paper, we present the mathematical formulation of this strategy. Our experiments encompass the fine-tuning of the parameters of the algorithms. We compare the best hybrid methods obtained with CNN-CT against the individual CNN, Long Short-Term Memory (LSTM), ARIMA, and ES methods. Our results show that our hybrid method surpasses the performance of LSTM and consistently achieves competitive results in terms of the MAPE metric, as opposed to the individual CNN and ARIMA methods, whose performance varies considerably across scenarios
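
As a rough illustration of the temporal Component Transformation step, the sketch below (our own simplification, not the paper's code) aggregates a daily series into weekly components and maps a weekly prediction back to days through a weekday profile; the CNN forecaster is replaced here by a naive persistence placeholder.

    # Hedged sketch of the CT idea: daily <-> weekly component transformation.
    import numpy as np

    def daily_to_weekly(daily):
        """Aggregate a daily series into weekly components (sums of 7 days)."""
        n = len(daily) // 7 * 7
        return daily[:n].reshape(-1, 7).sum(axis=1)

    def weekly_to_daily(weekly, profile):
        """Disaggregate weekly totals back to days using a weekday profile."""
        profile = np.asarray(profile) / np.sum(profile)
        return np.concatenate([w * profile for w in weekly])

    rng = np.random.default_rng(0)
    daily = rng.poisson(100, size=70).astype(float)   # toy daily case counts

    weekly = daily_to_weekly(daily)
    next_week = weekly[-1]                    # placeholder for the CNN forecast
    profile = daily.reshape(-1, 7).mean(axis=0)       # average weekday shape
    print(weekly_to_daily([next_week], profile).round(1))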

Abstract:

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field circumstances: complex lighting conditions and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground-truth segmentation with pixel precision, to facilitate the comparison among different algorithms
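
As a minimal sketch of the encoder-decoder idea (illustrative only; this is not the authors' released architecture), the following network downsamples the RGB input and upsamples back to one crop/non-crop logit per pixel.

    # Tiny encoder-decoder for per-pixel binary segmentation (PyTorch).
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                     # downsample to H/2 x W/2
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # upsample
                nn.Conv2d(16, 1, 1),                 # one logit per pixel
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinySegNet()
    logits = model(torch.randn(1, 3, 64, 64))        # -> (1, 1, 64, 64)
    loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(1, 1, 64, 64))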

Abstract:

The automatic classification of plants with nutrient deficiencies or excesses is essential in precision agriculture. In particular, early detection of nutrient concentrations would increase crop yields and allow appropriate use of fertilizers. RGB cameras are a low-cost sensor alternative for plant monitoring, but the task is complicated when it is purely visual and samples are limited. In this paper, we analyze the Curriculum by Smoothing technique with a small dataset of RGB images (144 images per class) to classify nitrogen concentrations in greenhouse basil plants. This Deep Learning method changes the texture found in the images during training by convolving each feature map (the output of a convolutional layer) of a Convolutional Neural Network with a Gaussian kernel whose width increases as training progresses. We observed that this controlled information extraction allows a state-of-the-art deep neural network to perform well using little training data with high variance between items of the same class. As a result, Curriculum by Smoothing provides an average accuracy 7% higher than traditional transfer learning for classifying the nitrogen concentration level of greenhouse basil 'Nufar' plants with little data
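
The core operation is easy to state in code. The sketch below (our paraphrase of the described mechanism, with an arbitrary example schedule) blurs every feature map of a batch with the same Gaussian kernel, whose width is tied to training progress.

    # Gaussian smoothing of CNN feature maps, as in Curriculum by Smoothing.
    import torch
    import torch.nn.functional as F

    def gaussian_kernel(sigma, size=5):
        ax = torch.arange(size, dtype=torch.float32) - size // 2
        g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
        k = torch.outer(g, g)
        return k / k.sum()

    def smooth_feature_maps(fmap, sigma):
        """Convolve each channel of (N, C, H, W) features with the kernel."""
        c = fmap.shape[1]
        k = gaussian_kernel(sigma).to(fmap).repeat(c, 1, 1, 1)   # (C, 1, 5, 5)
        return F.conv2d(fmap, k, padding=2, groups=c)

    fmap = torch.randn(2, 8, 32, 32)
    for sigma in (2.0, 1.0, 0.5):       # example schedule over training stages
        out = smooth_feature_maps(fmap, sigma)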

Abstract:

We study relative equilibria (RE) for the three-body problem on S^2, under the influence of a general potential which depends only on cos σij, where σij are the mutual angles among the masses. Explicit conditions for the masses mk and cos σij to form a relative equilibrium are shown. Using these conditions, we study the equal-mass case under the cotangent potential. We show the existence of scalene, isosceles, and equilateral Euler RE, and of isosceles and equilateral Lagrange RE. We also show that equilateral Euler RE on a rotating meridian exist for a general potential Σi

Resumen:

La comprensión auditiva es fundamental en la socialización y en el desempeño académico y profesional; sin embargo, en México no se enseña de manera explícita en el aula, y su evaluación apenas inicia: los primeros indicadores de desempeño los ofreció el Examen de Habilidades Lingüísticas (EXHALING). A partir de los resultados de dicho examen se analiza la forma en que la comprensión auditiva podría desempeñar un papel protagónico en el aprovechamiento escolar y el fortalecimiento del español

Abstract:

Listening comprehension is fundamental in socialization processes as well as in academic and professional development; nevertheless, in Mexico teachers do not explicitly teach it in the classroom, and its evaluation is just beginning: the Test for Linguistic Skills in Spanish (EXHALING, for its abbreviation in Spanish) offered its first performance indicators. In this paper, based on the results of EXHALING, we show how listening comprehension could play a leading role in college achievement as well as in strengthening Spanish learning

Abstract:

Genetic algorithms (GAs) have control parameters such as the probability of bit mutation or the probability of crossover. These are normally given a priori by the user (programmer) of the algorithm. There exists a wide variety of values for control parameters, and it is difficult to find the best choice of these values in order to optimize the behaviour of a particular GA. We introduce a self-adaptive GA (SAGA) with its control parameters encoded in the genome of the individuals of the population. This algorithm is used to optimize a set of twenty functions from R^2 to R, and its behaviour is compared with that resulting from the execution of a traditional GA with varying control parameter values. We obtain a set of measurements which demonstrate statistically that SAGA yields results which compare favourably with the mean values from an extensive set of runs of a traditional GA (TGA)
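
A minimal self-adaptive GA can be sketched as follows, assuming each individual carries its own mutation rate in its genome; the test function, rates, and selection scheme here are illustrative, not those of the paper.

    # Self-adaptive GA sketch: the mutation rate evolves with the solution.
    import random

    def fitness(ind):
        return -(ind["x"] ** 2 + ind["y"] ** 2)      # toy R^2 -> R objective

    def make_individual():
        return {"x": random.uniform(-5, 5), "y": random.uniform(-5, 5),
                "pm": random.uniform(0.01, 0.5)}     # encoded mutation rate

    def mutate(ind):
        child = dict(ind)
        # the control parameter itself mutates (log-normal perturbation)
        child["pm"] = min(max(ind["pm"] * random.lognormvariate(0, 0.2), 0.01), 0.5)
        for gene in ("x", "y"):
            if random.random() < child["pm"]:
                child[gene] += random.gauss(0, 0.5)
        return child

    pop = [make_individual() for _ in range(30)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                           # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in range(20)]
    best = max(pop, key=fitness)
    print(best["x"], best["y"], best["pm"])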

Resumen:

El presente texto es un análisis crítico literario de un cuento de Ignacio Padilla. Se indaga la relación entre la literatura y la filosofía, especialmente la ética. El cuento invita a reflexionar acerca de los cambios civilizatorios obligados sobre ciertos seres humanos y el resultado de deshumanización que acontece cuando no se toman en cuenta las diferencias, que pueden ser educativas, culturales o físicas

Abstract:

This article is a critical literary analysis of a short story by Ignacio Padilla. We investigate the relationship between literature and philosophy, particularly ethics. The story invites us to reflect on the civilizing changes imposed on certain human beings and the dehumanization that results when their differences, whether educational, cultural, or physical, are not taken into account

Abstract:

We study three classes of diversity relations over menus arising from an unobserved categorization of alternatives in the form of a partition. A basic diversity relation declares a menu to be more diverse than another if and only if for every alternative in the latter there is an alternative in the former which belongs to the same category. An extension of a basic diversity relation preserves its weak and strict parts and possibly makes additional diversity judgements between hitherto incomparable menus; a minimal formalization is sketched below. A cardinality-based extension is an extension which ranks menus on the basis of the number of categories present in each menu. We characterize each class axiomatically. Two axioms satisfied by each of the three classes are Monotonicity, which says that larger menus are at least as diverse, and No Complements, which rules out certain complementarities between alternatives in generating diversity
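
In our own notation (an assumption for illustration: the partition assigns each alternative x a category C(x)), the basic relation and its cardinality-based extension can be written as:

    % basic diversity relation and cardinality-based extension (our notation)
    A \succsim B \iff \forall b \in B \;\exists a \in A : C(a) = C(b)
    A \succsim_{\#} B \iff |\{C(a) : a \in A\}| \ge |\{C(b) : b \in B\}|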

Resumen:

El autor describe el beligerante movimiento laboral que surgió en la fábrica textil La Magdalena hacia finales del Porfiriato y que continuó durante el primer tercio del siglo XX; muestra el efecto que tuvo sobre su capacidad productiva y cómo los "avances" institucionales en materia laboral, entre el gobierno, empresarios y obreros, no ofrecieron incentivos para la transformación tecnológica de la fábrica, que requirió de mayores tarifas para operar. Esta situación, en conjunto con la entrada de las fibras sintéticas como sustitutos del algodón, obligó al cierre inminente de más de una fábrica textil porfiriana hacia la segunda mitad del siglo pasado

Abstract:

The author describes the belligerent labor movement at the La Magdalena textile factory from the end of the Porfiriato until the first third of the twentieth century. He shows its effect on the factory's productive capacity and how institutional "advances" in labor matters, negotiated among the government, entrepreneurs, and workers, offered no incentives for the technological transformation of the factory, which required higher tariffs to operate. This situation, together with the entry of synthetic fibers as substitutes for cotton, led to the imminent closure of more than one Porfirian textile factory by the second half of the past century

Resumen:

Análisis sobre las conclusiones de algunos trabajos que abordan el tema de la persistencia de la élite terrateniente e industrial en México en la primera mitad del siglo XX. Se presenta el efecto que tuvieron sucesos como la Revolución mexicana y los programas de reforma agraria, así como la inestabilidad política y la incertidumbre institucional, en la persistencia o debilitamiento de las élites económicas consolidadas durante el Porfiriato

Abstract:

In this article, we analyze the findings of research concerning the persistence of the industrial and landowning elites in Mexico in the first half of the twentieth century. We evaluate the effect that events such as the Mexican Revolution and the agrarian reform programs, as well as political instability and institutional uncertainty, had on the persistence or weakening of the economic elites consolidated during the Porfiriato

Resumen:

La parroquia de San José, en la ciudad de Toluca, cuenta con registros de bautizos a partir de 1642, de matrimonios desde 1666 y de defunciones desde 1648. La información está dividida por castas. El objetivo de este trabajo es realizar un análisis de algunas de las características demográficas de la población de la Villa de Toluca, en especial, mortalidad y fecundidad para el periodo 1684-1760. Aunque se enfrentaron diferentes problemas debidos a la calidad o carencia de información a lo largo del periodo, se pudieron obtener estimadores, como la tasa bruta de mortalidad, la tasa de mortalidad infantil y la tasa bruta de natalidad, algunas por casta, por lo que pueden compararse, sobre todo para las épocas de epidemias

Abstract:

The parochial archives of the Village of Toluca, 1684-1760. The parish of Sagrario (San José) in the city of Toluca holds records of burials from 1648, baptisms from 1642, and marriages from 1666. The information is divided on the basis of castes. The objective of this work is to analyze some of the demographic characteristics of the population of the Village of Toluca, especially mortality and fertility, for the period from 1684 to 1760. Even though several problems were faced due to the quality or lack of information throughout the period, estimators were obtained, namely the crude death rate, the infant mortality rate, and the crude birth rate, some per caste, so they can be compared, mainly for times of epidemics

Abstract:

The decades-long Colombian civil war nearly came to an official end with the 2016 Peace Plebiscite, which was ultimately defeated in a narrow vote. This conflict has deeply divided Colombian civil society, and non-political public figures have played a crucial role in structuring debate on the topic. To understand the mechanisms underlying the influence of members of civil society on political discussion, we performed a randomized experiment on Colombian Twitter users shortly before this election. Sampling from a pool of subjects who had been frequently tweeting about the Plebiscite, we tweeted messages that encouraged subjects to consider different aspects of the decision. We varied the identity (a general, a scientist, and a priest) of the accounts we used and the content of the messages we sent. We found little evidence that any of our interventions were successful in persuading subjects to change their attitudes. However, we show that our pro-Peace messages encouraged liberal Colombians to engage in significantly more public deliberation on the subject

Abstract:

This paper proposes an alternative to the methodology that the Ministry of Finance and Public Credit (SHCP) applies to estimate the annual Mexican Crude Oil Mix Export Price (MXM), a crucial element of the General Economic Policy Criteria in the Economic Package. We first identify the relation between the MXM and the West Texas Intermediate (WTI), computing tail conditional dependence between both series. Subsequently, we use a market-risk approach that considers several methodologies to estimate the value at risk (VaR), including an ARIMA-TGARCH model for the innovations of the MXM price, to forecast its behavior using daily data from January 3rd, 1996, to December 30th, 2021. Once we identify the VaR and the ARIMA-TGARCH components, we design an alternative method to estimate the annual average MXM price
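
To make the variance-modeling step concrete, the sketch below (synthetic data; our stand-in for the paper's ARIMA-TGARCH specification) fits an AR(1) mean with a threshold-GARCH variance using the Python arch package and reads off a one-day parametric VaR.

    # AR mean + GJR/threshold-GARCH variance, then a 1% one-day VaR.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(1)
    returns = 100 * rng.normal(0, 0.02, size=1500)   # toy daily returns (%)

    # o=1 adds the threshold (leverage) term to GARCH(1,1)
    model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1)
    result = model.fit(disp="off")

    forecast = result.forecast(horizon=1)
    mu = forecast.mean.values[-1, 0]
    sigma = np.sqrt(forecast.variance.values[-1, 0])
    var_99 = mu - 2.326 * sigma      # 1% quantile under normal innovations
    print(round(var_99, 2))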

Resumen:

Tras hacer un repaso por la presencia del tema de la prueba en los trabajos académicos de Manuel Atienza, se presentan cinco ideas para abordar el estudio de la prueba desde un enfoque argumentativo del Derecho: 1. La prueba en el Derecho no se reduce a las reglas sobre la prueba. 2. El estudio de la prueba tiene un carácter multidisciplinar y representa un punto de encuentro en el que convergen distintas disciplinas. 3. El estudio de la prueba no puede prescindir de su dimensión jurídica e institucional. 4. La prueba en el Derecho tiene una dimensión valorativa. 5. El estudio de la prueba puede abordarse desde un enfoque argumentativo que integre diversas aproximaciones (conceptual, empírica, historiográfica y metodológica)

Abstract:

Manuel Atienza has acknowledged the importance of reasoning about facts throughout his academic work. In this paper I present five ideas about the grounds of an argumentative approach to evidence. 1. The subject of evidence is not co-extensive with the rules of evidence. 2. Evidence is a multidisciplinary field in which several disciplines find common ground. 3. The study of evidence should take account of its institutional dimension. 4. The subject of evidence is connected with a series of legal, procedural, and substantive values. 5. An argumentative approach to evidence could integrate several approaches (analytical, empirical, historiographic, and methodological) in a coherent framework

Resumen:

Este trabajo tiene como objetivo analizar la concepción racional de la prueba de Jordi Ferrer y su propuesta para formular estándares de prueba precisos y objetivos. En primer lugar se plantea que la presentación que hace este autor de la concepción racional de la prueba resulta problemática en tres aspectos: los antecedentes históricos de los que parte, las notas características de dicha concepción y la contraposición entre una concepción racional centrada en pruebas y una concepción persuasiva centrada en las creencias del juez. En segundo lugar se argumenta que la búsqueda de estándares de prueba precisos y objetivos debe abandonarse de raíz, no sólo porque resulta inviable, sino porque es contraria a la naturaleza probabilística del razonamiento probatorio. Finalmente, se plantea que el problema de la suficiencia probatoria requiere articular varios componentes: (i) el estándar de prueba aplicable, con todo y los problemas de imprecisión y de subjetividad que contiene, (ii) las pruebas existentes en un caso concreto, con un trabajo riguroso de análisis de los hechos y de valoración de las pruebas y (iii) la convicción del órgano jurisdiccional. Prueba y convicción son dos componentes que deben armonizarse

Abstract:

This paper examines the rationalist conception of evidence advocated by Jordi Ferrer and its proposal to formulate precise and objective standards of proof. First, three concerns are raised about the characterization of the rationalist conception: i) its historical background, ii) its defining features, and iii) the contrast between a rationalist conception that focuses exclusively on evidence and a persuasive conception that focuses on the beliefs of the trier of facts. Second, it is argued that the search for an objective and precise standard of proof should be abandoned, both because it is futile and because it contradicts the probabilistic nature of evidential reasoning. Finally, it is suggested that an adequate theory of the sufficiency of evidence should be able to accommodate and explain (a) the current formulation of standards of proof notwithstanding the problems of subjectivity and imprecision, (b) a rigorous analysis of evidence that includes both an individual and an overall evaluation of evidence, and (c) the beliefs of the trier of facts. Evidence and persuasion are two components that should be harmonized

Resumen:

Este trabajo ofrece una aproximación inicial a la relación entre prueba y perspectiva de género. En primer lugar se plantea que al hablar de prueba con perspectiva de género habría que reconocer la vinculación de esta última con el feminismo y las perspectivas feministas sobre la prueba. Desvinculada de los movimientos feministas, la perspectiva de género corre el riesgo de convertirse en una visión desprovista del potencial reivindicativo y crítico que le dio origen. En segundo lugar, el alcance de la perspectiva de género en el ámbito probatorio comprende la prueba en general. Prácticamente todos los temas y problemas probatorios son susceptibles de examinarse con perspectiva de género. Finalmente, la tesis sobre la exigencia de corroboración de la declaración de la víctima requiere examinarse con perspectiva de género. Una regla de esa naturaleza opera en detrimento de las víctimas y refuerza el escepticismo estructural hacia su credibilidad

Abstract:

This paper offers a preliminary analysis of a gender perspective on evidence. It shows the connection between a gender perspective on evidence and feminism and feminist perspectives on evidence. It also shows that the scope of a gender perspective on evidence covers the entire field of evidence. It finally shows that the corroboration requirement could be examined from a gender perspective: such a requirement is inimical to victims and strengthens a structural skepticism about their credibility

Abstract:

This paper explores two persistent questions in the legal literature on presumptions: the place and the nature of presumptions in law and legal argumentation. First, it shows that the thesis that presumptions belong to argumentation is a common thread in the literature on this subject, from its foundations in the Middle Ages to modern times. The civilians clearly saw that illumination for the treatment of this topic should be found in rhetoric, not in the law, since presumptions belong to the provinces of rhetoric and argumentation. Second, the paper shows that the analysis of presumptions is problematic for at least two reasons. On the one hand, "presumption" is an ambiguous term in legal discourse: the word has been used in many different senses and for a variety of purposes, and argumentation scholars who rely on legal models of presumption should be aware of this persistent ambiguity. On the other hand, there are at least four conceptions of presumptions, each offering a distinctive answer to the question "What is a presumption?". The picture portrayed here may help to shed light on the possibilities and limits of an interdisciplinary dialogue about presumptions

Resumen:

En este trabajo se plantean algunos comentarios a "Los usos de los estándares de prueba" de Rodrigo Coloma. El análisis del término en el lenguaje ordinario y en el lenguaje jurídico puede arrojar luz sobre la noción de estándar de prueba. Sin embargo, resulta problemática la propuesta de entender el estándar de prueba en el derecho como un umbral de carácter cuantitativo o como un prototipo de carácter cualitativo que permite establecer relaciones de semejanza para considerar un hecho como probado. Por otra parte, es relevante la tarea de examinar los usos de los estándares de prueba, pero encuentro algunos problemas en la formulación de algunos de los usos que detecta Coloma. Finalmente, se plantea que la discusión sobre la posibilidad de formular un estándar de prueba realmente objetivo suele presentarse en términos dilemáticos: o bien el estándar de prueba es completamente objetivo o bien no es un estándar en absoluto. Frente a esta postura se destaca la tesis de Susan Haack en el sentido de que el estándar de prueba es en parte psicológico y en parte epistemológico

Abstract:

This paper examines Rodrigo Coloma's "The uses of the standards of proof". The analysis of the term in both ordinary and legal language may shed light on the concept of a standard of proof. However, Coloma's proposal to understand the standard of proof either as a quantitative threshold or as a qualitative prototype is problematic. I also find problems in the formulation of some of the uses of the standards of proof that Coloma identifies. Finally, legal scholars traditionally argue that a subjective standard is not a standard at all; against this view, the paper stresses Susan Haack's thesis that the legal standard of proof is in part psychological and in part epistemological

Abstract:

The problem of joint position regulation of a self-balancing robot moving on a slope via a PID passivity-based controller is addressed in the present paper. It is assumed that the angle of the slope is known and that the robot can move up or down. The contributions are the design and practical implementation, for comparison purposes, of two PID passivity-based control laws for position regulation of a self-balancing robot, together with the corresponding asymptotic stability analyses. Experimental results illustrate the performance of the proposed controllers on a self-balancing robot, where they were evaluated against a different passivity-based controller reported in the control literature and against a linear control law. Finally, the experiments were extended to deal with disturbance rejection, where one of the PID passivity-based control laws, the one that does not use partial feedback linearization, proved better than the other three controllers used for comparison
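
For readers unfamiliar with the control structure, the sketch below shows a generic discrete-time PID regulation loop on a unit-mass toy plant; the gains, sampling time, and dynamics are illustrative assumptions, not the paper's passivity-based design.

    # Generic discrete PID loop driving a double integrator to a setpoint.
    def pid_step(error, state, kp, ki, kd, dt):
        """One PID update; state = (integral, previous error)."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        return u, (integral, error)

    x, v, dt = 0.0, 0.0, 0.01
    state, target = (0.0, 0.0), 1.0
    for _ in range(2000):
        u, state = pid_step(target - x, state, kp=20.0, ki=5.0, kd=8.0, dt=dt)
        v += u * dt               # unit-mass dynamics: acceleration = force
        x += v * dt
    print(round(x, 3))            # settles near the 1.0 target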

Abstract:

Market reputation is often perceived as a cheaper alternative to product liability in the provision of safety incentives. We explore the interaction between legal and reputational sanctions using the idea that inducing safety through reputation requires implementing costly "market sanctioning" mechanisms. We show that law positively affects the functioning of market reputation by reducing its costs. We also show that reputation and product liability are not just substitutes but also complements. We analyze the effects of different legal policies, showing in particular that negligence reduces reputational costs more intensely than strict liability and that court errors in determining liability interfere with the reputational cost reduction achieved through law. A more general result is that any variant of an ex post liability rule will improve the functioning of market reputation in isolation. We extend the basic analysis to endogenous prices and to consumers' observability of the outcomes of court decisions

Resumen:

La dinámica de Leibniz, como programa de una ciencia de la fuerza, incluía en sus inicios una metafísica de los cuerpos. Cuando vio la luz a finales de la década de 1670, Leibniz sostuvo que su ciencia de la fuerza requería la rehabilitación de las formas sustanciales. Pero al mismo tiempo, el interés de Leibniz por las unidades verdaderas en tanto que constituyentes últimas del mundo lo condujo a postular un mundo de sustancias corpóreas, entidades individualizadas en virtud de una forma sustancial. A principios de la década de 1680, parecía haber dos caminos convergentes hacia la misma metafísica, la rehabilitación de la forma sustancial. Con los años, estos dos caminos metafísicos evolucionaron. A mediados de la década de 1690, la metafísica dinámica añadió la materia prima, entendida como forma pasiva, a la forma sustancial, entendida como fuerza activa. Al mismo tiempo, las unidades que fundan el mundo evolucionaron de sustancias corpóreas a mónadas, sustancias inextensas, similares a la mente (esprit) y constituyentes últimos de las cosas. Argumento que cuando esto ocurrió, ya no estaba claro que estas dos representaciones metafísicas siguieran siendo coherentes: la metafísica dinámica, basada en la fuerza, y la metafísica de la unidad, ahora entendida en términos de mónadas, parecían cada vez más incompatibles

Abstract:

From the beginning of Leibniz’s dynamics, his program for a science of force, there was a metaphysics of body. When the program first emerged in the late 1670s, Leibniz argued that his science of force entailed the reestablishment of substantial form. But in the same period, Leibniz’s interest in genuine unities as the ultimate constituents of the world led him to posit a world of corporeal substances, bodies made one by virtue of a substantial form. In this way, by the early 1680s, there seemed to be two convergent paths to a single metaphysics: the revival of substantial form. Over the years, both of these metaphysics evolved. By the mid 1690s, to substantial form, understood as active force, the dynamical metaphysics added materia prima, understood as passive force. Meanwhile unities that ground the world evolved from corporeal substances to monads, now considered non-extended, mindlike, and the ultimate constituents of things. When this happened, I argue, it was no longer obvious that these two metaphysical pictures were still consistent with one another: the dynamical metaphysics, grounded in force, and the metaphysics of unity, now understood in terms of monads, seemed increasingly to be in tension with one another

Resumen:

En este documento se presenta una perspectiva a futuro de la energía eólica en el contexto de la transición de una economía basada en el uso de combustibles fósiles a una anhelada economía libre de carbono. En la primera sección se desarrolla la idea de la energía como elemento disruptivo que ha cambiado los paradigmas del ser humano, llegando a los nuevos paradigmas, entre los que destaca la posibilidad de utilizar masivamente la energía renovable. En la segunda sección se discute cómo se prevé que sea la evolución de la transición energética, dependiendo de las políticas para acelerarla o mantener el ritmo actual, que es inferior al necesario para lograr la descarbonización. En la tercera sección se presenta el tema de la variabilidad de la energía renovable (cuando no hay viento o el sol se pone) y los mecanismos para mitigar sus efectos, en particular contar con un mercado de capacidad, una matriz eléctrica diversificada y baterías. En la cuarta sección se desarrolla la contribución que ha tenido y tendrá la energía eólica en el futuro de la energía, tanto a nivel internacional como en México. Finalmente se presenta una última sección de conclusiones

Abstract:

This paper presents a future perspective of wind energy in the context of the transition from an economy based on the use of fossil fuels to a desired carbon-free economy. The first section develops the idea of energy as a disruptive element that has changed the paradigms of mankind, arriving at the new paradigms, notably the possibility of large-scale use of renewable energy. The second section discusses how the energy transition is expected to evolve, depending on whether policies accelerate it or maintain the current pace, which is slower than necessary to achieve decarbonization. The third section presents the issue of variability in renewable energy (when there is no wind or the sun goes down) and the mechanisms to mitigate its effects, in particular having a capacity market, a diversified electricity matrix, and batteries. The fourth section develops the contribution that wind energy has had and will have in the future of energy, both internationally and in Mexico. Finally, a concluding section is presented

Resumen:

Este trabajo examina los determinantes internos que influyen en la estructura de capital de las pequeñas y medianas empresas familiares en México (Pymef). Se sugiere que el tamaño y la antigüedad de la empresa, la formalidad en la planeación administrativa y estratégica, la actitud hacia el control familiar y la edad del director influyen en las decisiones sobre el tipo de fuentes de financiamiento por utilizar. A través de un análisis de trayectorias se validan estadísticamente varias hipótesis tomando como base una muestra de 240 Pymef mexicanas. Los resultados indican relaciones significativas entre tamaño y deuda, así como entre edad del director, capital social y utilidades acumuladas. Asimismo, aplicando el análisis por sector de actividad y por tamaño, se encontró evidencia empírica sobre las influencias entre el tamaño de la empresa y el capital social, así como entre la formalidad de la planeación administrativa y estratégica y la deuda. Los resultados sugieren líneas de investigación sobre las Pymef mexicanas

Abstract:

This paper examines the determinants of capital structure in family-owned small and medium-sized enterprises in Mexico. We suggest that firm size and age, managerial planning formality, attitude toward family control, and the age of the CEO or owner influence financing decisions. The study's hypotheses are tested with path analysis on survey data collected from 240 Mexican SMEs. We found that the size of the firm has a positive relationship with debt, and that the age of the SME's owner influences equity and retained-earnings funding. By splitting the data by sector and size, we also found two additional relationships: between size and equity, and between managerial planning formality and debt. Our findings suggest directions for further research on Mexican family-owned SMEs

Abstract:

We show that if L_r(X), 1 < r < infinity, has an asymptotically uniformly convex renorming of power type, then X admits a uniformly convex norm of power type

Abstract:

Drawing on psychological theory, we created a new approach to classify negative-sentiment tweets and presented a subset of unclassified tweets to humans for categorization. With these results, a tweet classification distribution was built to visualize how the tweets fit into different categories. The approach, developed through visualization and classification of data, could be an important basis for measuring the efficiency of a machine classifier built on psychological diagnostic criteria (Thelwall et al. in J Assoc Inf Sci Technol 62(4):406–418, 2011). The proposed system is intended to identify red flags in an at-risk population for further intervention, although it still needs to be validated through therapy with an expert

Abstract:

New technologies are opening novel ways to help people in their decision-making while shopping. From crowd-generated customer reviews to geo-based recommendations, the information used to make the decision can come from different social circles with varied degrees of expertise and knowledge. Such differences affect how much influence the information has on shopping decisions. In this work, we aim to identify how social influence, when mediated by modern and ubiquitous communication (such as that provided by smartphones), can affect people's shopping experience and especially their emotions while shopping. Our results showed that large amounts of information affect customers' emotional state, which can be measured in their physiological response. Based on our results, we conclude that integrating smartphone technologies with biometric sensors can create new models of customer experience based on the emotional effects of social influence while shopping

Abstract:

Stress is becoming a major problem in our society, and most people do not know how to cope with it. We propose a novel approach to quantify stress levels using psychophysiological measures. With an automatic stress detector that can be implemented in a mobile application, we created an alternative that automatically detects stress using sensors from a wearable device and, once stress is detected, calculates a score for each record. By identifying stress during the day and assigning it a numeric value derived from biological signals, a visualization can be produced to help users analyze the information
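
One simple way to turn wearable signals into a per-record score is sketched below; the baselines, weights, and signal choices are our illustrative assumptions, not the paper's calibrated detector.

    # Toy stress score from heart-rate and electrodermal-activity windows.
    import numpy as np

    def stress_score(hr_window, eda_window, hr_base=70.0, eda_base=2.0):
        """Map deviations above personal baselines to a 0-100 score."""
        hr_dev = max(np.mean(hr_window) - hr_base, 0) / hr_base
        eda_dev = max(np.mean(eda_window) - eda_base, 0) / eda_base
        return float(min((0.6 * hr_dev + 0.4 * eda_dev) * 100, 100))

    print(stress_score([88, 92, 95], [3.1, 3.4, 3.2]))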

Resumen:

En el artículo se destacan los puntos más relevantes de la regulación sobre reparación del daño en materia penal en México desde el Código Penal Federal vigente hasta el Código Nacional de Procedimientos Penales recientemente aprobado. De esta forma se demuestra que la regulación de la reparación del daño en materia penal en México ha sido históricamente deficiente e ineficiente. Igualmente, se enfatizan las consecuencias negativas de no haber entregado el ejercicio de la reparación del daño a la víctima del delito y haber convertido esta acción en una facultad del Ministerio Público. Por último, se destaca la oportunidad que el legislador perdió para realizar una acabada reglamentación de los aspectos procedimentales de la reparación del daño en materia penal en el Código Nacional de Procedimientos Penales, pues solo aparecen normas dispersas a lo largo del CNPP, muchas de las cuales se quedan solo en el nivel de loables principios teóricos

Abstract:

Sensor networks have experienced extraordinary growth in the last few years. From niche industrial and military applications, they are now deployed in a wide range of settings as sensors become smaller, cheaper, and easier to use. Sensor networks are a key player in the so-called Internet of Things, generating exponentially increasing amounts of data. Nonetheless, there are very few documented works that tackle the challenges related to the collection, manipulation, and exploitation of the data generated by these networks. This paper presents a proposal for integrating Big Data tools (at rest and in motion) for the gathering, storage, and analysis of data generated by a sensor network that monitors air pollution levels in a city. The authors provide a proof of concept that combines Hadoop and Storm for data processing, storage, and analysis, and Arduino-based kits for constructing their sensor prototypes

Abstract:

The rapid pace of technological advances in recent years has enabled a significant evolution and deployment of Wireless Sensor Networks (WSN). These networks are a key player in the so-called Internet of Things, generating exponentially increasing amounts of data. Nonetheless, there are very few documented works that tackle the challenges related to the collection, manipulation, and exploitation of the data generated by these networks. This paper presents a proposal for integrating Big Data tools (at rest and in motion) for the gathering, storage, and analysis of data generated by a WSN that monitors air pollution levels in a city. We provide a proof of concept that combines Hadoop and Storm for data processing, storage, and analysis, and Arduino-based kits for constructing our sensor prototypes
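
The division of labor between the streaming and batch layers can be illustrated with a toy pipeline (generic Python standing in for the Storm and Hadoop roles; the reading format and threshold are our assumptions).

    # "Data in motion" (alerting) vs. "data at rest" (archival) paths.
    import json, time

    ARCHIVE = []                               # stands in for HDFS storage

    def stream_path(reading, threshold=150):
        """Real-time path: flag readings above a pollution threshold."""
        if reading["pm10"] > threshold:
            print("alert:", reading["sensor_id"], reading["pm10"])

    def batch_path(reading):
        """Batch path: keep raw readings for later offline analysis."""
        ARCHIVE.append(json.dumps(reading))

    for pm10 in (80, 120, 180):                # toy readings from one sensor
        reading = {"sensor_id": "arduino-01", "ts": time.time(), "pm10": pm10}
        stream_path(reading)
        batch_path(reading)
    print(len(ARCHIVE), "readings archived")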

Resumen:

A lo largo de los siglos XIX y XX, la relación entre la Iglesia y el Estado estuvo llena de enfrentamientos legales, políticos e incluso de carácter militar. Dichas disputas se caracterizaron por una lucha de poder, libertad y soberanía en el proceso de construcción del Estado mexicano. El patronato real y la tolerancia religiosa fueron temas claves en estas diatribas, que adoptaron matices, definiciones y posturas distintas a lo largo del tiempo. El análisis de la visión de la jerarquía católica en este devenir resulta indispensable para entender la historia de dos instituciones que comparten territorio, sociedad y cultura y que todavía en nuestros días siguen modificándose para alcanzar la plena coexistencia

Abstract:

Throughout the 19th and 20th centuries, the relationship between the Church and the State was full of legal, political, and even military confrontations. These disputes were characterized by a struggle over power, freedom, and sovereignty in the process of building the Mexican State. Royal patronage and religious tolerance were key themes in these diatribes, which took on different nuances, definitions, and positions over time. The analysis of the Catholic hierarchy's vision throughout this process is indispensable for understanding the history of two institutions that share territory, society, and culture and that even today continue to change in pursuit of full coexistence

Resumen:

La invasión de Francia a España en 1808 y la abdicación de la familia real a favor de Napoleón situó a la población novohispana en una disyuntiva: defender los derechos de Fernando VII, como postulaban los peninsulares, o defender el derecho del pueblo soberano de escoger sus autoridades, como afirmaban los criollos. La multiplicación de las conspiraciones criollas y la fuerza del poder y de las armas desembocaron en la lucha por la independencia. La presencia e intervención del clero a lo largo del proceso independentista, de 1810 a 1821, formó un pensamiento moderno en lo político, pero tradicional en su concepción social y cultural. La tradición católica, sembrada con la cruz y la espada, marcó a la sociedad mexicana con un carácter tradicional difícil de erradicar

Abstract:

The invasion of Spain by France in 1808 and the abdication of the royal family in favour of Napoleon raised a dilemma for the people of New Spain: to defend the rights of Ferdinand VII, as the peninsular Spaniards postulated, or to defend the right of the sovereign people to choose their own authorities, as the Creoles affirmed. The proliferation of Creole conspiracies and the force of power and arms led to the fight for independence. The presence and intervention of the clergy throughout the independence process, from 1810 to 1821, shaped a way of thinking that was modern in politics but traditional in its social and cultural conceptions. The Catholic tradition, sown with the cross and the sword, marked Mexican society with a traditional character that is hard to eradicate

Resumo:

A invasão da Espanha pela França em 1808, e a abdicação da Família Real em favor de Napoleão, colocou o povo do México perante um dilema: defender os direitos de Fernando VII, como postulavam os espanhóis, ou defender o direito do povo soberano a escolher as suas autoridades, como afirmavam os crioulos. A proliferação de conspirações crioulas e a força do poder das armas resultaram numa luta pela independência. A presença e a intervenção do clero em todo o processo de Independência, de 1810 a 1821, formou um pensamento moderno em termos políticos, mas tradicional em termos de ideias sociais e culturais. A tradição católica, disseminada com a cruz e a espada, deu à sociedade mexicana um carácter tradicional de difícil erradicação

Abstract:

A contragenic function in a domain Ω ⊂ R^3 is a reduced-quaternion-valued (i.e., the last quaternionic coordinate is zero) harmonic function which is orthogonal in L^2(Ω) to all reduced-quaternion monogenic functions and their conjugates. Contragenicity is not a local property. For spheroidal domains of arbitrary eccentricity, we relate the standard orthogonal bases of harmonic and contragenic functions of one spheroid to those of another via computational formulas. This permits us to show that there exist nontrivial contragenic functions common to the spheroids of all eccentricities

Abstract:

We construct bases of polynomials for the spaces of square-integrable harmonic functions that are orthogonal to the monogenic and antimonogenic R^3-valued functions defined in a prolate or oblate spheroid

Resumen:

En los últimos años, la divulgación de sostenibilidad en materia financiera está cada vez más ligada a la sigla "ASG", usada cuando las organizaciones miden, identifican y cuantifican sus impactos y prácticas en las tres esferas del desarrollo sostenible. Este artículo analiza temas sostenibles relacionados con liderazgo y gobernanza basados en los estándares del Sustainability Accounting Standards Board

Abstract:

We classify and analyze the stability of all relative equilibria for the two-body problem in the hyperbolic space of dimension 2 and we formulate our results in terms of the intrinsic Riemannian data of the problem

Abstract:

Differential diagnosis among different types of dementia, mainly between Alzheimer's disease (AD) and Vascular Dementia (VD), presents great difficulties due to the overlap among the symptoms and signs presented by patients suffering from these illnesses. A differential diagnosis of AD and VD can be obtained with 100% confidence through the analysis of brain tissue (i.e., a cerebral biopsy). This gold-standard test involves an invasive technique and thus is rarely applied. Despite these difficulties, an efficient differential diagnosis of AD and VD is essential, because the therapeutic treatment a patient needs differs depending on the illness. In this paper, we explore the use of artificial neural network technology to build a system to assist neurologists in the differential diagnosis of AD and VD. First, different networks are analyzed in order to identify minimum sets of clinical tests, from those normally applied, that still allow a differential diagnosis of AD and VD; second, an artificial neural network is developed, using backpropagation and data based on these minimum sets, to assist physicians during the differential diagnosis. Our results suggest that, by using our neural network, neurologists may improve their efficiency in reaching a correct differential diagnosis of AD and VD; additionally, some tests contribute little to the diagnosis, and under some combinations they make it rather more difficult
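
The classifier side of such a system can be sketched with a small multilayer perceptron trained by backpropagation; the six "clinical test" features and labels below are synthetic stand-ins, not the paper's data.

    # Small MLP over a reduced set of test scores (scikit-learn).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                  # 6 hypothetical test scores
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy AD (0) vs VD (1) rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))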

Abstract:

This paper analyzes the effect of offshore outsourcing on the export performance of firms, based on the theories of international business, the resource-based view of the firm, and transaction cost theory. Outsourcing can reduce production costs and increase flexibility. It can also provide new resources and market knowledge. However, the impact of offshore outsourcing depends on the resources and capabilities of firms to manage a network of foreign suppliers and to absorb knowledge of foreign markets. Using a database of about 1,000 manufacturing companies in Mexico in 2011, we find that offshore outsourcing increases export performance. The effects are stronger in export markets from which the company also imports intermediate goods. The results also show that the size of the company, the organization of intra-firm imports, and export experience positively moderate the effects of outsourcing

Resumen:

La posibilidad de reflexión sobre la historia radica en una detención poética de la narrativa de los acontecimientos, lo cual implica que pensar acerca de lo que somos a lo largo del tiempo requiere tanto a la historia y la filosofía como a la poesía. Para desarrollar el argumento, se recurre a la digresión de Theodor Adorno y Max Horkheimer, "Odiseo, o mito e Ilustración", contenida en su Dialéctica de la Ilustración, y a las Tesis sobre la filosofía de la historia de Walter Benjamin

Abstract:

The possibility of reflection on history lies in a poetic arrest of the narrative of events, which implies that thinking about who we are over time requires history and philosophy as well as poetry. To develop the argument, we draw on Theodor Adorno and Max Horkheimer's digression "Odysseus, or Myth and Enlightenment", contained in their Dialectic of Enlightenment, and on Walter Benjamin's Theses on the Philosophy of History

Resumen:

Se muestran algunas de las formas más significativas en que Michel Foucault aborda la relación entre verdad y poder. Se expone el papel que desempeña tal relación en algunos de sus análisis más representativos, como el de las experiencias de la locura y la sexualidad, y el de las técnicas de cuidado de sí en la Antigüedad, particularmente en la práctica de la escritura de los estoicos y epicúreos, en la tragedia de Edipo y en el surgimiento de la filosofía

Abstract:

In this article, we examine some of the most significant ways in which Michel Foucault approaches the relationship between truth and power. We present the role this relationship plays in some of his most representative analyses, namely those of the experiences of madness and sexuality and of the techniques of care of the self in Antiquity, particularly in the writing practice of the Stoics and Epicureans, in the tragedy of Oedipus, and in the emergence of philosophy

Resumen:

En el presente texto se intenta mostrar la perspectiva de Theodor Adorno en relación a la producción cultural, desde tres aspectos fundamentales de su filosofía. Se partirá, en primera instancia, de su crítica a la “industria cultural”, la cual es caracterizada como un orden de producción en el que los criterios se basan en lo meramente comercial y en el que la cultura pierde cualquier tipo de autenticidad y legitimidad. Posteriormente, se analizarán las propuestas de opciones auténticas de cultura, tanto en la crítica cultural, como en la creación artística

Abstract:

This work illustrates Theodor Adorno's perspective on cultural production from three fundamental aspects of his philosophy. First, we delve into his critique of the "culture industry", which he characterizes as an order of production whose criteria are purely commercial and in which culture loses any kind of authenticity and legitimacy. Then, his proposals for authentic options of culture, in cultural criticism as well as in artistic creation, are assessed

Resumen:

El psicoanálisis es, posiblemente, la teoría de la mente que desde su creación, ha tenido más influencia en el estudio de las manifestaciones culturales. Sus explicaciones dinámicas, a partir de un sistema fundamentado en los conceptos de energía psíquica y de representación, en relación a una comprensión tópica de la psique, permiten explicar las creaciones culturales en función de los diferentes conflictos afectivos que se puedan generar en contextos variables. Se pretende mostrar de manera general, las posibilidades de dicha teoría en lo concerniente a la crítica de arte, partiendo de los principales textos de Freud en materia de estética y de algunas de las críticas más significativas que se han hecho al respecto

Abstract:

Psychoanalysis is arguably the theory of mind that, since its creation, has had the most influence on the study of cultural manifestations. Its dynamic explanations, within a system based on the concepts of psychic energy and representation, in relation to a topical understanding of the psyche, allow us to explain cultural creations in light of the various emotional conflicts that may arise in diverse contexts. In this article, we broadly present the possibilities of this theory in the field of art criticism, drawing on Freud's main writings on aesthetics and on some of the most significant criticisms they have received

Abstract:

This paper compares the relative performance of different organizational structures for the decision of accepting or rejecting a project of uncertain quality. When the principal is uninformed and relies on the advice of an informed and biased agent, cheap-talk communication is persuasive and it is equivalent to delegation of authority, provided that the agent's bias is small. When the principal has access to additional private information, cheap-talk communication dominates both (conditional) delegation and more democratic organizational arrangements such as voting with unanimous consensus

Abstract:

The present article intends to assess, in a systematic and critical way, what has been done in Portugal regarding the so-called Regulatory Impact Assessment (RIA), while at the same time contributing to its development and maturity. A careful and in-depth analysis of the Portuguese institutional discourse on regulatory reform since its early days is provided, with special emphasis on the legal instruments deployed by the current Government. Our aim is to provide a sound and rigorous explanation for the inexistence of a "true" or "substantive" RIA system actually operating in Portugal. Reflecting on and borrowing from international experiences and practices, from OECD and EU documents, and from Portuguese scholarship, we demonstrate that there is a huge gap between the reality of impact assessment and its political discourse in Portugal. We argue that the lack of comprehensive RIA inevitably stems from the shortcomings of the current Portuguese system. Indeed, the "formal" system has so far been incapable of providing a single example of an impact assessment study

Resumo:

O presente artigo procura avaliar, de maneira sistemática e crítica, o que tem sido feito em Portugal no que respeita ao denominado Regulatory Impact Assessment (RIA) e, ao mesmo tempo, contribuir para o seu desenvolvimento e maturidade. Apresenta-se uma análise cuidadosa e profunda do discurso institucional português sobre a reforma da legislação desde os seus primeiros dias, com especial ênfase para os instrumentos legais apresentados pelo XVII Governo Constitucional. O nosso objectivo é fornecer uma explicação profunda e rigorosa para a inexistência de um «verdadeiro» ou «substantivo» sistema de RIA a funcionar actualmente em Portugal. Reflectindo e aprendendo com experiências e práticas internacionais, através de documentos da OCDE e UE, e com a experiência derivada de uma bolsa de investigação portuguesa, demonstramos que há, em Portugal, uma enorme distância entre a realidade da avaliação de impacto e o discurso político. Sustentamos que a falta de um verdadeiro RIA deriva inevitavelmente do insucesso do actual sistema português. De facto, o sistema «formal» foi, até hoje, incapaz de nos apresentar um único exemplo de um estudo de avaliação de impacto

Abstract:

Sunspot equilibrium and lottery equilibrium are two stochastic solution concepts for nonstochastic economies. We compare these concepts in a class of completely finite, (possibly) nonconvex exchange economies with perfect markets, which requires extending the lottery model to the finite case. Every equilibrium allocation of our lottery model is also a sunspot equilibrium allocation. The converse is almost always true. There are exceptions, however: For some economies, there exist sunspot equilibrium allocations with no lottery equilibrium counterpart

Abstract:

In nonconvex environments, a sunspot equilibrium can sometimes be destroyed by the introduction of new extrinsic information. We provide a simple test for determining whether or not a particular equilibrium survives, or is robust to, all possible refinements of the state space. We use this test to provide a characterization of the set of robust sunspot-equilibrium allocations of a given economy; it is equivalent to the set of equilibrium allocations of the associated lottery economy. Journal of Economic Literature Classification Numbers: D51, D84, E32

Abstract:

We analyze sunspot-equilibrium prices in nonconvex economies with perfect markets and a continuous sunspot variable. Our primary result is that every sunspot equilibrium allocation can be supported by prices that, when adjusted for probabilities, are constant across states. This result extends to the case of a finite number of equally-probable states under a nonsatiation condition, but does not extend to general discrete state spaces. We use our primary result to establish the equivalence of the set of sunspot equilibrium allocations based on a continuous sunspot variable and the set of lottery equilibrium allocations. Journal of Economic Literature Classification Numbers: D51, D84, E32

Abstract:

We exploit a unique field experiment to recover the willingness to pay (WTP) for shorter waiting times at a cataract detection clinic in Mexico City, and compare the results with those obtained through a hypothetical dichotomous choice questionnaire. The WTP to avoid a minute of wait obtained from the field experiment ranges from 0.59 to 0.82 Mexican pesos (1 USD = 12.5 Mexican pesos at the time of the survey), while that from the hypothetical choice experiment ranges from 0.33 to 0.48 Mexican pesos. WTP to avoid the wait is lower for lower-income individuals, and it is larger the more accurately the announced expected waiting time matches the true values. Finally, we find evidence that the marginal disutility of waiting is not constant

Abstract:

Objective. To characterize the impact of Mexico's Covid-19 vaccination campaign for older adults. Materials and methods. We estimated the absolute change in symptomatic cases, hospitalizations, and deaths for vaccine-eligible adults (aged >60 years) and the relative change compared to vaccine-ineligible groups since the campaign started. Results. By May 3, 2021, the odds of Covid-19 cases among adults over 60 compared to 50-59-year-olds decreased by 60.3% (95% CI: 53.1, 66.9), and 2,003 cases (95% CI: 1,156, 3,130) were avoided. Hospitalizations and deaths showed similar trends. Conclusions. Covid-19 events decreased after vaccine rollout among those eligible for vaccination

Resumen:

¿Hasta dónde la elaboración de políticas públicas enfrenta los problemas sociales? Resultados del programa “70 y más” de México Las investigaciones realizadas hasta ahora indican que la elaboración de políticas públicas es importante para afrontar los problemas sociales, en particular para reducir la pobreza. Sin embargo, las conclusiones sobre los programas de reducción de la pobreza revelan que caminan muy lentamente hacia el cumplimiento de sus objetivos principales. Este ensayo da cuenta, primero, de las investigaciones que analizan hasta dónde la elaboración de políticas sociales ha contribuido a resolver los problemas sociales, en particular la pobreza. En segundo lugar, el ensayo examina si la línea de la pobreza ha vinculado la elaboración de políticas públicas a los problemas sociales. Finalmente, el ensayo revela que no se aborda la reducción de la pobreza a la hora de elaborar políticas públicas y, además, que la línea de la pobreza no relaciona la elaboración de políticas con la reducción de la pobreza

Abstract:

Previous research has revealed that social policy design is relevant for addressing social problems, particularly for reducing poverty. However, evidence on poverty reduction exposes a sluggish trend towards achieving its main goals. This paper first reports on research examining to what extent social policy design has addressed social problems, poverty in particular. Second, this paper examines whether poverty lines have linked social policy design and social problems. Finally, this paper reveals that social policy design does not address poverty reduction and that poverty lines have not linked policy design and poverty reduction

Résumé:

Dans quelle mesure la conception de politiques sociales résout-elle les problèmes sociaux? Données tirées du programme «70 y más» du Mexique Des recherches menées dans le passé ont révélé que la conception de politiques sociales est pertinente pour résoudre des problèmes sociaux, en particulier pour réduire la pauvreté. Cependant, il existe des données relatives à la réduction de la pauvreté qui révèlent une tendance lente vers la réalisation de ses principaux buts. Cet article présente en premier lieu des recherches qui examinent la mesure dans laquelle la conception de politiques sociales a abordé les problèmes sociaux, et la pauvreté en particulier. Deuxièmement, cet article examine la question de savoir si les seuils de pauvreté ont relié la conception de politiques sociales et les problèmes sociaux. Enfin, cet article révèle que la conception de politiques sociales ne se penche pas sur la réduction de la pauvreté et que les seuils de pauvreté n'ont pas relié la conception de politiques et la réduction de la pauvreté

Resumo:

Até que ponto a formulação de políticas sociais enfrenta os problemas sociais? Evidências do programa “70 y más” no México Uma pesquisa anterior revelou que o desenho de políticas sociais é relevante para enfrentar problemas sociais, particularmente para redução da pobreza. Porém, as evidências sobre a redução da pobreza mostram uma ritmo lento na conquista de seus principais objetivos. Este artigo primeiramente apresenta um relato sobre pesquisas que examinam até que ponto o desenho de políticas sociais tem tratado de problemas sociais, e a questão da pobreza em particular. Em segundo lugar, este artigo discute se as políticas relativas à pobreza têm conectado a política social e os problemas sociais. Por fim, o artigo revela que o desenho de políticas sociais não enfrenta a questão da redução da pobreza e que programas relacionados à pobreza ainda não fizeram a conexão entre desenho de políticas e redução da pobreza

Abstract:

The Mexican constitution guarantees its citizens the right to submit individual requests to the government. Public officials are obligated to read and respond to citizen requests in a timely manner. Each request goes through three processing steps during which human employees read, analyze, and route requests to the appropriate federal agency depending on its content. The Mexican government recently created a centralized online submission system. In the year following the release of the online system, the number of submitted requests doubled. With limited resources to manually process each request, the Sistema Atención Ciudadana (SAC) office in charge of handling requests has struggled to keep up with the increasing volume, resulting in longer processing time. Our goal is to build a machine learning system to process requests in order to allow the government to respond to citizen requests more efficiently
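
A minimal sketch of the kind of routing classifier the abstract describes, assuming a TF-IDF bag-of-words representation with logistic regression; the example requests and agency labels are invented, and the real SAC pipeline is certainly richer:

```python
# Illustrative sketch (not the SAC system): route citizen requests to
# agencies with a TF-IDF + logistic-regression text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples; in practice these would be thousands of
# historical requests already routed by staff. Agency labels are
# hypothetical.
requests = [
    "Solicito apoyo para reparar la escuela de mi comunidad",
    "Pido informacion sobre mi pension del programa de adultos mayores",
    "Denuncio un problema con el suministro de agua potable",
    "Necesito una copia de mi acta de nacimiento",
]
agencies = ["SEP", "SEDESOL", "CONAGUA", "SEGOB"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(requests, agencies)

new_request = ["Quisiera saber el estado de mi pension de 70 y mas"]
print(model.predict(new_request))  # likely SEDESOL in this toy example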

Abstract:

Bureaucratic compliance is often crucial for political survival, yet eliciting that compliance in weakly institutionalized environments requires that political principals convince agents that their hold on power is secure. We provide a formal model to show that electoral manipulation can help to solve this agency problem. By influencing beliefs about a ruler’s hold on power, manipulation can encourage a bureaucrat to work on behalf of the ruler when he would not otherwise do so. This result holds under various common technologies of electoral manipulation. Manipulation is more likely when the bureaucrat is dependent on the ruler for his career and when the probability is high that even generally unsupportive citizens would reward bureaucratic effort. The relationship between the ruler’s expected popularity and the likelihood of manipulation, in turn, depends on the technology of manipulation

Abstract:

We analyze the removal of the credit-risk guarantees provided by the government-sponsored enterprises (GSEs) in a model with agents heterogeneous in income and house price risk. We find that wealth inequality increases, driven by higher mortgage spreads and housing rents. Housing holdings become more concentrated. Foreclosures fall. The removal benefits high-income households, while hurting low- and mid-income households (renters and highly leveraged mortgagors with conforming loans). GSE reform requires compensating transfers, sufficiently high elasticity of rental supply, or linking GSE reform with the elimination of the mortgage interest deduction

Abstract:

Sampling-based motion planning is the state-of-the-art technique for solving challenging motion planning problems in a wide variety of domains. While generally successful, these planners suffer as problem complexity increases. In many cases, the full problem complexity is not needed for the entire solution. We present a hierarchical aggregation framework that groups and models sets of obstacles based on the currently needed level of detail. The hierarchy enables sampling to be performed using the simplest and most conservative representation of the environment possible in that region. Our results show that this scheme improves planner performance irrespective of the underlying sampling method and input problem. In many cases, the improvement is significant, with running times often less than 60% of the original planning time
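
To picture the aggregation idea, here is a small sketch of our own (not the paper's framework), assuming obstacles grouped under axis-aligned bounding boxes, with the cheap conservative test tried before the detailed one:

```python
# Minimal sketch of obstacle aggregation for sampling-based planning:
# a sample is tested against a group's cheap AABB first, and against
# the individual obstacles only when it falls inside the box.
import random

class Obstacle:
    def __init__(self, xmin, ymin, xmax, ymax):
        self.box = (xmin, ymin, xmax, ymax)
    def contains(self, p):
        x, y = p
        xmin, ymin, xmax, ymax = self.box
        return xmin <= x <= xmax and ymin <= y <= ymax

class ObstacleGroup:
    """Aggregates obstacles; refines only when the coarse test hits."""
    def __init__(self, obstacles):
        self.obstacles = obstacles
        xs = [v for o in obstacles for v in (o.box[0], o.box[2])]
        ys = [v for o in obstacles for v in (o.box[1], o.box[3])]
        self.aabb = Obstacle(min(xs), min(ys), max(xs), max(ys))
    def blocked(self, p):
        if not self.aabb.contains(p):          # cheap, conservative test
            return False
        return any(o.contains(p) for o in self.obstacles)  # refine

group = ObstacleGroup([Obstacle(1, 1, 2, 2), Obstacle(2.5, 1, 3.5, 2)])
samples = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(1000)]
free = [p for p in samples if not group.blocked(p)]
print(len(free), "of", len(samples), "samples are collision-free")
```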

Abstract:

Legal designers use different mechanisms to entrench constitutions. This article studies one mechanism that has received little attention: constitutional "locks," or forced waiting periods for amendments. We begin by presenting a global survey, which reveals that locks appear in sixty-seven national constitutions. They vary in length from nine days to six years, and they vary in reach, with some countries "locking" their entire constitution and others locking only select parts. After presenting the survey, we consider rationales for locks. Scholars tend to lump locks with other tools of entrenchment, such as bicameralism and supermajority rule, but we argue that locks have distinct and interesting features. Specifically, we theorize that locks can cool passions better than other entrenchment mechanisms, promote principled deliberation by placing lawmakers behind a veil of ignorance, and protect minority groups by creating space for political bargaining. Legislators cannot work around locks, and because locks are simple and transparent, lawmakers cannot "break" them without drawing attention. For these reasons, we theorize that locks facilitate constitutional credibility and self-enforcement, perhaps better than other entrenchment mechanisms

Abstract:

This paper addresses the following question: how could Mexico's fiscal, supply-side, and trade reforms lead it into the crisis caused by the December 1994 devaluation? The 1994 political events in Mexico (an armed insurrection combined with terrorism, the assassination of the leading presidential candidate and of the president of the ruling political party, and kidnappings of prominent businessmen as well as political scandals) are most often mentioned in passing or merely as the trigger of a foregone conclusion. We analyze the hypotheses that have been proposed to explain the crisis and conclude that the crisis had a political origin and that some of the financial disequilibria, including the maintenance of a fixed nominal exchange rate in the face of the recent explosion in international transactions, contributed to the crisis

Abstract:

In this paper we explore the phenomenon of Chinese counterfeits smuggled into Mexico, the world's fourth largest counterfeit market, particularly highlighting the role played by Chinese transnational crime in the production and financing of these illegal exports and the role of the Korean diaspora in the distribution of Chinese counterfeits within Mexico. Based on this case study and the extant literature, we suggest propositions concerning diaspora criminal enterprises and diaspora participants in informal markets that amend current theory in two streams of literature: diaspora homeland investment and trade facilitation by diasporas

Abstract:

Globalization forces many managers to increasingly interact with new cultures, even if these managers remain in their home countries. This may be particularly true of managers in emerging markets, many of whom experience an encroaching US culture due to media, migration, and trade, as well as the importation of US-style business education. This study explores the possibility of applying acculturation insights developed in the immigrant and sojourner contexts to the context of local managers in emerging markets. By exploring the acculturation of Mexican managers in Mexico, we help to redress what has been identified as a key omission in prior acculturation research – the acculturation of a majority population. Our results suggest that Mexican managers who are bicultural or culturally independent (cosmopolitan) are more likely to be in upper management positions in Mexico. Our study supplements earlier work supporting the efficacy of biculturalism in minority populations. It also supports a growing body of research that conceptualizes individuals who rate themselves low on similarity to two cultures as being cosmopolitans and not marginalized individuals who experience difficulty in life

Abstract:

Economists and policymakers have lauded the adoption of liberal trade policies in many of the emerging markets. From the outside it may appear that governments in these countries have cemented a new set of rules governing economic behavior within their borders. Yet the authors have found that these countries are likely to see the emergence or resurgence of smuggling and contraband distribution in response to trade liberalization. In order to survive under trade liberalization, smugglers will rely on cost savings associated with circumventing legal import channels. In addition, they may employ violence to bolster a diminished competitive advantage and may seek new illegal sources, both local and international, for the consumer products they distribute. In a market environment in which organized crime competes alongside more legitimate channels of distribution, U.S. multinationals will face new challenges relating to strategic planning, maintaining alliance relationships, and corporate control of global brands and pricing

Abstract:

In this paper we contemplate three considerations with regard to gender, culture, and ethnomathematics: The long-term trend in Western culture of excluding women and women’s activities from mathematics, an analysis of some activities typically associated with women with the goal of showing that mathematical processes are frequently involved in said activities, and an examination of some specific examples of cultural tendencies in which certain skills of women are highly regarded. The cultures we will consider are from the regions of the Andes of South America, India, and Mesoamerica

Abstract:

In this essay I will explore the important connection between conformism as an adaptive psychological strategy, and the emergence of the phenomenon of ethnicity. My argument will be that it makes sense that nature made us conformists. And once humans acquired this adaptive strategy, I will argue further, the development of ethnic organization was inevitable. Understanding the adaptive origins of conformism, as we shall see, is perhaps the most useful way to shed light on what ethnicity is—at least when examined from the functional point of view, which is to say from the point of view of the adaptive problems that ethnicity solves. I shall begin with a few words about our final destination

Abstract:

It has been difficult to make progress in the study of ethnicity and nationalism because of the multiple confusions of analytic and lay terms, and the sheer lack of terminological standardization (often even within the same article). This makes a conceptual cleaning-up unavoidable, and it is especially salutary to attempt it now that more economists are becoming interested in the effects of identity on behavior, so that they may begin with the best conceptual tools possible. My approach to these questions has been informed by anthropological and evolutionary-psychological questions. I will focus primarily on the terms ‘ethnic group’, ‘nation’, and ‘nationalism’, and I will make the following points: (1) so-called ‘ethnic groups’ are collections of people with a common cultural identity, plus an ideology of membership by descent and normative endogamy; (2) the ‘group’ in ‘ethnic group’ is a misleading misnomer—these are not ‘groups’ but categories, so I propose to call them ‘ethnies’; (3) ‘nationalism’ mostly refers to the recent ideology that ethnies—cultural communities with a self-conscious ideology of self-sufficient reproduction—be made politically sovereign; (4) it is very confusing to use ‘nationalism’ also to stand for ‘loyalty to a multi-ethnic state’ because this is the exact opposite; (5) a ‘nation’ truly exists only in a politician’s imagination, so analysts should not pretend that establishing whether something ‘really’ is or is not ‘a nation’ matters; (6) a big analytic cost is paid every time an ‘ethnie’ is called a ‘nation’ because this mobilizes the intuition that nationalism is indispensable to ethnic organization (not true), which thereby confuses the very historical process—namely, the recent historical emergence of nationalism—that must be explained; (7) another analytical cost is paid when scholars pretend that ethnicity is a form of kinship—it is not

Abstract:

I argue that (1) the accusation that psychological methods are too diverse conflates "reliability" with "validity"; (2) one must not choose methods by the results they produce: what matters is whether a method acceptably models the real-world situation one is trying to understand; and (3) one must also distinguish methodological failings from differences that arise from the pursuit of different theoretical questions

Abstract:

If ethnic actors represent ethnic groups as essentialized ‘natural’ groups despite the fact that ethnic essences do not exist, we must understand why. This article presents a hypothesis and evidence that humans process ethnic groups (and a few other related social categories) as if they were ‘species’ because their surface similarities to species make them inputs to the ‘living-kinds’ mental module that initially evolved to process species-level categories. The main similarities responsible are (1) category-based endogamy and (2) descent-based membership. Evolution encouraged this because processing ethnic groups as species—at least in the ancestral environment—solved adaptive problems having to do with interactional discriminations and behavioral prediction. Coethnics (like conspecifics) share many strongly intercorrelated ‘properties’ that are not obvious on first inspection. Since interaction with out-group members is costly because of coordination failure due to different norms between ethnic groups, thinking of ethnic groups as species adaptively promotes interactional discriminations towards the in-group (including endogamy). It also promotes inductive generalizations, which allow acquisition of reliable knowledge for behavioral prediction without too much costly interaction with out-group members. The relevant cognitive-science literature is reviewed, and cognitive field-experiment and ethnographic evidence from Mongolia is advanced to support the hypothesis

Abstract:

As a result of a spate of studies geared to investigating Brazilian racial categories, it is now believed by many that Brazilians reason about race in a manner quite different to that of Americans. This paper will argue that this conclusion is premature, as the studies in question have not, in fact, investigated Brazilian categories. What they have done is elicit sorting tasks on the basis of appearances, but the cognitive models of respondents have not been investigated in order to determine what are the boundaries of their concepts. Sorting based on appearances is not sufficient to infer the boundaries of concepts whenever appearance is not a defining criterion for the concepts in question, as the case appears to be for racial and ethnic categories. Following a critique of the methods used, I review a terminological and theoretical confusion concerning the use of the terms ‘emic’ and ‘etic’ in anthropology that appears directly responsible for the failure so far to choose methods appropriate to parsing the conceptual domain of ‘race’ in Brazil

Abstract:

An investigation of the cognitive models underlying ethnic actors' own ideas concerning the acquisition/transmission of an ethnic status is necessary in order to resolve the outstanding differences between "primordial" and "circumstantial" models of ethnicity. This article presents such data from a multi-ethnic area in Mongolia that found ethnic actors to be heavily primordialist, and uses these data to stimulate a more cogent model of ethnicity that puts the intuitions of both primordialists and circumstantialists on a more secure foundation. Although many points made by the circumstantialists can be accommodated in this framework, the model argues that ethnic cognition is at core primordialist, and ethnic actors' instrumental considerations - and by implication their behaviours - are conditioned and constrained by this primordialist core. The implications of this model of ethnicity for ethnic processes are examined, and data from other parts of the world are revisited for their relevance to its claims

Abstract:

Screening potential entrants is a major challenge to any system of immigration. At bottom, the problem is one of information asymmetry, in which migrants hold private information as to their abilities and intentions. We propose a new approach that leverages information that potential entrants have about each other. Certain potential entrants to the United States would have to apply as a small group, called a trust circle. Once inside the country, all members would be subject to onerous bureaucratic requirements, but these would be waived over time for trust circles that remain in good standing. However, if anyone within a trust circle becomes involved in hostile or criminal activities, every member of the group would summarily lose his or her privileges. Knowing this, potential migrants will associate only with others they trust and would have incentives to expose others in the group who adopt bad behaviors after entry

Abstract:

This paper connects trade flows to deviations from the law of one price (LOOP) in a structural model of trade and retailing. It accounts for the observed cross-country dispersion in prices of goods, based on retail price survey data, by focusing on two sources of goods market segmentation: (i) international trade costs, and (ii) non-traded input costs of distribution. I find that a multi-sector Ricardian trade model, à la Eaton-Kortum, augmented with a distribution sector, can fully account for the average price dispersion for a basket of goods and generates 70% of the variation in price dispersion across goods within the basket. While tradability of goods is important in explaining the average price dispersion for the basket of goods, distribution costs are important in explaining why, within the basket, some goods show more price dispersion than others

Abstract:

In this article we consider the advantages of applying lot streaming in a multiple job flow-shop context. The lot streaming process of splitting jobs into sublots to allow overlapping between successive operations has been shown to reduce makespan and thus increase customer satisfaction. Efficient algorithms are available in the literature for solving the problem for a single job. However, for multiple jobs, job sequencing, as well as lot sizing, is involved, and the problem is therefore NP-hard. We consider two special cases for which we provide polynomial time solutions. In one case, we eliminate diversity of the jobs, and hence the job sequencing decision, and in the other we restrict the number of machines. We show that for jobs with identical processing times and number of sublots, no advantage is obtained by allowing inconsistency in sublot sizing of consecutive jobs. For the two-machine case, we also explain why the sequencing and sublot size decision can be approached independently, and supply a polynomial time algorithm for minimising makespan, taking account of attached set-ups on the first machine and transportation times
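
To see why splitting a job into sublots reduces makespan, here is a small illustration of our own (not the paper's polynomial-time algorithm), assuming one job with equal sublots on two machines and the standard flow-shop completion-time recursion:

```python
# One job of `units` identical items, per-unit processing times p1, p2
# on machines 1 and 2, split into s equal sublots; machine 2 can start
# a sublot only once machine 1 has finished it (overlapping allowed).
def makespan_two_machines(units, p1, p2, sublots):
    size = units / sublots              # equal, consistent sublots
    c1 = c2 = 0.0
    for _ in range(sublots):
        c1 += size * p1                 # machine 1 finishes this sublot
        c2 = max(c2, c1) + size * p2    # machine 2 starts when both free
    return c2

for s in (1, 2, 4, 8):
    print(s, "sublots -> makespan", makespan_two_machines(100, 1.0, 1.0, s))
# Makespan falls from 200 with one lot to 112.5 with eight sublots.
```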

Abstract:

This paper describes and develops the conditions that make the demand side policy of vehicle use restrictions part of a cost-effective set of environmental control policies. Mexico City's experience with vehicle use restrictions is described and its failure analysed. It is argued that Mexico City took a step in the right direction, but failed to make the restrictions flexible, thereby making the policy perverse. A programme of tradable vehicle use permits is presented and described that would provide the needed flexibility and promote urban sustainability

Abstract:

Many large cities in the world have serious ground-level ozone problems, largely the product of vehicular emissions, and thus the argued unsustainability of current urban growth patterns is frequently blamed on unrestricted private vehicle use. This article reviews Mexico City's experience with vehicle use restrictions as an emissions control program and develops the conditions for optimal quantitative restrictions on vehicle use and for complementary abatement technologies. The stochastic nature of air pollution outcomes is modelled explicitly in both the static and dynamic formulations of the control problem, in which for the first time in the literature the use of tradeable vehicle use permits is proposed as a cost-effective complement to technological abatement for mobile emissions control. This control regime gives the authorities a broader and more flexible set of instruments with which to deal more effectively with vehicle emissions, and with seasonal and stochastic variation of air quality outcomes. The market in tradeable vehicle use permits would be very competitive with low transactions costs. This control policy would have very favorable impacts on air quality, vehicle congestion and on urban form and development. Given the general political resistance to environmental taxes, this program could constitute a workable and politically palatable set of policies for controlling greenhouse gas emissions from the transport sector

Abstract:

We consider a pure exchange economy consisting of a single risky asset whose dividend drift rate is modeled as an Ornstein-Uhlenbeck process, and a representative agent with power-utility who, in equilibrium, consumes the dividend paid by the risky asset. Endogenously determined interest rates are found to be of the Vasicek (1977) type. The mean and variance of the equilibrium stock price are stochastic and have mean-reverting components. A closed-form solution for a standard call option is determined for the case of log-utility. Equilibrium values have interesting implications for the equity premium puzzle observed by Mehra and Prescott (1985)
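
As a quick numerical companion, here is a minimal sketch of the dividend drift dynamics the abstract names, simulated by Euler-Maruyama; the parameter values are illustrative and not taken from the paper:

```python
# Euler-Maruyama simulation of an Ornstein-Uhlenbeck drift rate,
# dX_t = kappa * (theta - X_t) dt + sigma dW_t.
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma = 2.0, 0.05, 0.1   # speed, long-run mean, volatility
T, n = 1.0, 1000
dt = T / n

x = np.empty(n + 1)
x[0] = 0.02
for t in range(n):
    x[t + 1] = x[t] + kappa * (theta - x[t]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

print("terminal drift rate:", x[-1])   # mean-reverts toward theta
```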

Abstract:

We consider planar and spatial autonomous Newtonian systems with Coriolis forces and study the existence of branches of periodic orbits emanating from equilibria. We investigate both degenerate and nondegenerate situations. While Lyapunov's center theorem applies locally in the nondegenerate, nonresonant context, our result provides a global answer which is significant in some degenerate cases. We apply our abstract results to a problem from Celestial Mechanics. More precisely, in the three-dimensional version of the Restricted Triangular Four-Body Problem with possibly different primaries our results show the existence of at least seven branches of periodic orbits emanating from the stationary points

Abstract:

We introduce a framework to theoretically and empirically examine electoral maldistricting: the intentional drawing of electoral districts to advance partisan objectives, compromising voter welfare. We identify the legislatures that maximize voter welfare and those that maximize partisan goals, and incorporate them into a maldistricting index. This index measures the intent to maldistrict by comparing distances from the actual legislature to the nearest partisan and welfare-maximizing legislatures. Using 2008 presidential election data and 2010 census-based district maps, we find a Republican-leaning bias in district maps. Our index tracks court rulings in intuitive ways
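
The abstract only says the index compares the two distances; the normalized form below is our assumption, not the paper's definition, and legislatures are summarized here by a single seat share for illustration:

```python
# Hypothetical maldistricting-style index in [-1, 1]: positive values
# mean the actual map sits closer to the nearest partisan benchmark
# than to the nearest welfare-maximizing benchmark.
def index(actual, partisan_set, welfare_set):
    d_p = min(abs(actual - p) for p in partisan_set)  # nearest partisan
    d_w = min(abs(actual - w) for w in welfare_set)   # nearest welfare-max
    return (d_w - d_p) / (d_w + d_p) if d_w + d_p else 0.0

# Toy numbers: actual Republican seat share 0.62; welfare-maximizing
# maps would yield 0.52-0.55; partisan maps would yield 0.63-0.65.
print(index(0.62, partisan_set=[0.63, 0.65], welfare_set=[0.52, 0.55]))
```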

Abstract:

Coattails and the forces behind them have important implications for the understanding of electoral processes and their outcomes. By focusing our attention on neighboring electoral sections that face the same local congressional election, but different municipal elections, and assuming that political preferences for local legislative candidates remain constant across neighboring electoral sections, we exploit variation in the strength of the municipal candidates in each of these electoral sections to estimate coattails from municipal to local congressional elections in Mexico. A one percentage point increase in vote share for a municipal candidate translates, depending on his or her party, into an average increase of between 0.45 and 0.78 percentage points in vote share for the legislative candidates from the same party (though this effect may not have been sufficient to affect an outcome in any electoral district in our sample). In addition, we find that a large fraction of the effect is driven by individuals switching their vote decision in the legislative election, rather than by an increase in turnout

Abstract:

In this paper I consider choice correspondences defined on a novel domain: the decisions are assumed to be taken not by individuals, but by committees, whose membership is observable and variable. In particular, for the case of two alternatives I provide a full characterization of committee choice structures that may be rationalized with two common decision rules: unanimity with a default and weighted majority

Abstract:

We propose a model of endogenous party platforms with stochastic membership. The parties' proposals depend on their membership, while the membership depends both on the proposals of the parties and on the unobserved idiosyncratic preferences of citizens over parties. An equilibrium of the model obtains when the members of each party prefer the proposal of the party to which they belong rather than the proposal of the other party. We prove the existence of such an equilibrium and study its qualitative properties. For the cases in which parties use either the average or the median to aggregate the preferences of their members, we show that if the unobserved idiosyncratic characteristics of the parties are similar, then parties make different proposals in the stable equilibria. Conversely, we argue that if parties differ substantially in their unobserved idiosyncratic characteristics, then the unique equilibrium is convergent

Abstract:

We construct a unique dataset that includes the total number of ads placed by all competing political parties during Mexico's 2012 presidential campaign, and detailed information on the content of the ads aired every day during the course of the campaign by each of the competing parties. To illustrate its potential usefulness, we describe the evolution of each party's negative advertising strategies (defined as ads that explicitly mention any of the other competing candidates or parties) over the course of the campaign, and relate it to the expected vote share in the general election for each of the competing candidates based on the available surveys. We show that parties' choices of negative advertising strategies are consistent with a model in which ads do affect voting intentions, and negative (positive) advertising affects negatively (positively) the vote share of the mentioned party, and positively (negatively) that of all other competing parties

Abstract:

We analyze existence of equilibrium in a one-dimensional model of endogenous party platforms and more than two parties. The platforms proposed by parties depend on their membership composition. The policy implemented is a function of the different proposals and the vote distribution among such proposals. It is shown that if voters are sincere there is always an equilibrium regardless of the number of parties. In the case of strategic voting behavior, existence of equilibrium can be shown provided a subadditivity condition on the outcome function holds

Abstract:

In this paper I provide an example of sorting equilibrium nonexistence in a three-community model of the type introduced in Caplin and Nalebuff (1997; Journal of Economic Theory 72, 306-342). With two communities, such an example has been shown to exist only when the dimension of the policy space is even. It turns out, however, that with three communities existence may fail regardless of whether the policy space dimension is odd or even. This suggests that the original odd/even dichotomy can, at least in part, be explained by the evenness of the number of communities

Abstract:

In a social choice model with an infinite number of agents, there may occur "equal size" coalitions that a preference aggregation rule should treat in the same manner. We introduce an axiom of equal treatment with respect to a measure of coalition size and explore its interaction with common axioms of social choice. We show that, provided the measure space is sufficiently rich in coalitions of the same measure, the new axiom is the natural extension of the concept of anonymity, and in particular plays a similar role in the characterization of preference aggregation rules

Abstract:

We develop a model of endogenous party platform formation in a multidimensional policy space. Party platforms depend on the composition of the parties' primary electorate. The overall social outcome is taken to be a weighted average of party platforms and individuals vote strategically. Equilibrium is defined to obtain when no group of voters can shift the social outcome in its favor by deviating and the party platforms are consistent with their electorate. We provide sufficient conditions for existence of equilibria

Abstract:

This paper analyzes a general model of an economy with heterogeneous individuals choosing among two jurisdictions, such as towns or political parties. Each jurisdiction is described by its constitution, where a constitution is defined as a mapping from all possible population partitions into the (possibly multidimensional) policy space. This study is the first to establish sufficient conditions for existence of sorting equilibria in a two-jurisdiction model for a policy space of an arbitrary dimension

Resumen:

El 22 de julio de 2011 una tragedia despertó abruptamente a Europa del ensueño democrático-liberal. Anders Behring Breivik hizo explotar una bomba en Oslo y provocó un tiroteo en la isla de Utøya. ¿Cuál es la trascendencia de lo ocurrido? Tal vez el bienestar social no impida las manifestaciones políticas extremas

Abstract:

On July 22, 2011, Europe was shaken by a tragedy that woke it from its liberal-democratic daydream. Anders Behring Breivik set off a bomb in Oslo and carried out a shooting on Utøya island. What are the implications of these incidents? Social well-being is perhaps not a deterrent to such extreme political expressions

Abstract:

This paper describes a system for the design or redesign of movie posters. The main reasoning strategy used by the system is case-based reasoning (Leake 1996), which requires two important sub-tasks to be performed: case retrieval and case adaptation. We have used the random forest algorithm to implement the case retrieval subtask, as it allows the system to formulate a generalization of the features of all the top matching cases that are determined to be most relevant to the new (re)design desired. We have used heuristic rules to implement the case adaptation task which results in generating the suggestion for poster composition. These heuristic rules are based on the Gestalt theory of composition (Arnheim 1974) and the requirements specified by the user
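
A simplified sketch of the retrieve-then-adapt loop under our own reading of the abstract: a random forest trained on past poster cases supplies feature importances, which weight the similarity used for retrieval; a heuristic rule would then adapt the best case. The features and the "genre" label are hypothetical, not the system's actual case representation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy case base: [n_characters, text_area_ratio, brightness] per poster.
X = np.array([[1, 0.2, 0.8], [3, 0.1, 0.3], [2, 0.4, 0.6], [1, 0.3, 0.2]])
y = np.array(["drama", "action", "comedy", "thriller"])

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
w = forest.feature_importances_            # learned feature relevance

def retrieve(query):
    d = np.sqrt(((X - query) ** 2 * w).sum(axis=1))  # weighted distance
    return d.argmin()

best = retrieve(np.array([2, 0.35, 0.55]))
print("retrieved case:", best, y[best])
# Adaptation (heuristic, e.g. Gestalt balance rules) would then adjust
# the retrieved composition to the new poster's requirements.
```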

Abstract:

The interrelated fields of computational creativity and design computing, sometimes also referred to as design science, have been gaining momentum over the past two or three decades. Many frequent international conference series, as well as more sporadic stand-alone academic events, have emerged to prove this. As maturing fields, it is time to take stock of what has come before and try to come up with a cohesive description of the theoretical foundations and practical advances that have been made. This paper presents such a description in the hope that it helps to communicate what the fields are about to people that are not directly involved in them, hopefully drawing some of them in

Abstract:

This paper presents some details on how to use concepts from computational creativity to inform the machine learning task of clustering. Specifically, clustering involves structuring exemplar-based knowledge. The novelty and usefulness of the way the knowledge ends up being structured can be measured. These are characteristics that traditionally computational creativity focuses on whereas machine learning doesn’t, but they can aid in selecting the best value for the parameters of the learning task. Doing so also provides us with a way to find an adequate balance between novelty and usefulness, something that still hasn’t been fully formalized in computational creativity. Thus both fields, machine learning and computational creativity, can benefit from this type of hybrid research
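
A minimal sketch of our own formalization of this idea: each candidate number of clusters k is scored by a blend of "usefulness" (silhouette, a standard cluster-quality measure) and "novelty" (here crudely proxied by the mean distance between cluster centers); the blend weight and the rescaling are assumptions, not the paper's measures:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

def novelty(centers):
    # mean pairwise distance between cluster centers (crude proxy)
    diffs = centers[:, None, :] - centers[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).mean()

alpha = 0.5                              # usefulness/novelty trade-off
for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    useful = silhouette_score(X, km.labels_)
    novel = novelty(km.cluster_centers_) / 10   # ad hoc rescaling
    print(k, round(alpha * useful + (1 - alpha) * novel, 3))
```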

Abstract:

Evolutionary algorithms (EAs) have been used in varying ways for design and other creative tasks. One of the main elements of these algorithms is the fitness function used by the algorithm to evaluate the quality of the potential solutions it proposes. The fitness function ultimately represents domain knowledge that serves to bias, constrain, and guide the algorithm’s search for an acceptable solution. In this paper, we explore the degree to which the fitness function’s implementation affects the search process in an evolutionary algorithm. To investigate this, the reliability and speed of the algorithm, as well as the quality of the designs produced by it, are measured for different fitness function implementations. These measurements are then compared and contrasted

Abstract:

We use an evolutionary algorithm in which we change the fitness function periodically to model the fact that objectives can change during creative problem solving. We performed an experiment to observe the behavior of the evolutionary algorithm regarding its response to these changes and its ability to successfully generate solutions for its creative task despite the changes. An analysis of the results of this experiment sheds some light into the conditions under which the evolutionary algorithm can respond with varying degrees of robustness to the changes
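
A minimal sketch of the mechanism (ours, with a deliberately trivial task): a bit-string EA whose objective is swapped every fixed number of generations, mimicking objectives that change during creative problem solving:

```python
import random

random.seed(1)
N, L, GENS, PERIOD = 30, 20, 60, 20

def fitness(ind, phase):
    # the objective alternates: maximize ones, then maximize zeros
    return sum(ind) if phase == 0 else L - sum(ind)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for g in range(GENS):
    phase = (g // PERIOD) % 2                 # fitness change happens here
    pop.sort(key=lambda ind: fitness(ind, phase), reverse=True)
    parents = pop[: N // 2]
    children = []
    for _ in range(N - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(L)
        child = a[:cut] + b[cut:]             # one-point crossover
        i = random.randrange(L)
        child[i] ^= 1                         # point mutation
        children.append(child)
    pop = parents + children
    if g % PERIOD == PERIOD - 1:
        print("gen", g, "phase", phase, "best", fitness(pop[0], phase))
```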

Abstract:

This paper aims to discuss a method for autonomously analyzing a musical style based on the random forest learning algorithm. This algorithm needs to be shown both positive and negative examples of the concept one is trying to teach it. The algorithm uses the Hidden Markov Model (HMM) of each positive and negative piece of music to learn to distinguish the desired musical style from melodies that don't belong to it. The HMM is acquired from the coefficients that are generated by the Wavelet Transform of each piece of music. The output of the random forest algorithm codifies the solution space describing the desired style in an abstract and compact manner. This information can later be used for recognizing and/or generating melodies that fit within the analyzed style, a capability that can be of much use in computational models of design and computational creativity
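
A reconstruction of the described pipeline under stated assumptions: toy synthetic signals stand in for melodies, and the PyWavelets, hmmlearn, and scikit-learn libraries are assumed available. The flow matches the abstract (wavelet coefficients, then an HMM per piece, then HMM parameters as random-forest features), but the details are ours:

```python
import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def hmm_features(signal, n_states=3):
    coeffs = pywt.wavedec(signal, "db4", level=3)[0]  # approx. coeffs
    X = coeffs.reshape(-1, 1)
    hmm = GaussianHMM(n_components=n_states, n_iter=25,
                      random_state=0).fit(X)
    # fixed-length feature vector from the fitted HMM's parameters
    return np.concatenate([hmm.transmat_.ravel(), hmm.means_.ravel()])

def toy_piece(in_style):    # "style" pieces are smooth, others noisy
    t = np.linspace(0, 8 * np.pi, 512)
    return np.sin(t) + 0.1 * rng.standard_normal(512) if in_style \
        else rng.standard_normal(512)

X = np.array([hmm_features(toy_piece(s)) for s in [1, 1, 1, 0, 0, 0]])
y = [1, 1, 1, 0, 0, 0]      # positive and negative examples of the style
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([hmm_features(toy_piece(1))]))   # expect [1]
```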

Abstract:

We propose a computational method for producing novel constructs that fall within an existing design or artistic style. The method is based on evolutionary algorithms, and we discuss related knowledge representation issues. We then present an implementation of this method that we used in order to imitate the style of the Dutch painter Mondrian. Finally, we explain and give the results of a cognitive experiment designed to determine the effectiveness of the method, and provide a discussion of these results

Abstract:

This paper presents a multi-agent computational simulation of the effects on creativity of designers' simple social interactions with both other designers and consumers. This model is based on ideas from situated cognition and uses indirect observation to produce potential changes to the knowledge that each designer and consumer uses to perform their activities. These changes result in global, social behaviors emerging from the indirect interaction of the players with relatively simple individual behaviors. The paper provides results to illustrate these emergent behaviors and how the social interactions affect creativity

Abstract:

This paper presents an approach to understanding designing that includes societal effects. We use a multi-agent system to simulate the interactions between two types of agents, those that produce designs and those that receive the designs proposed by the producers. Interactions occur when receiver agents give their opinion to the producers on the designs they produced. Based on this information some producers may choose to adopt knowledge from other producers and some receivers may choose to adopt knowledge from other receivers. As a result of this exchange of knowledge global behaviors emerge in the society of agents. We provide the results of preliminary experiments with the model in which we measure variations in the producers’ successes, which allows us to observe and analyze some of these emergent behaviors under different knowledge transfer scenarios

Abstract:

In this paper we propose a process model for producing novel constructs that fall within an existing design or artistic style. The process model is based on evolutionary algorithms. We then present an implementation of the process model that we used in order to imitate the style of the Dutch painter Mondrian. Finally, we explain and give the results of a cognitive experiment designed to determine the effectiveness of the process model, and provide a discussion of these results

Abstract:

In this paper we use computational techniques to explore the Aztec board game of Patolli. Rules for the game were documented by the Spanish explorers that ultimately destroyed the Aztec civilization, yet there is no guarantee that the few players of Patolli that still exist follow the same strategies as the Aztec originators of the game. We implemented the rules of the game in an agent-based system and designed a series of experiments to pit game-playing agents using different strategies against each other to try to infer what makes a good strategy (and therefore what kind of information would have been taken into account by expert Aztec players back in the days when Patolli was an extremely popular game). In this paper we describe the game, explain our implementation, and present our experimental setup, results and conclusion
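
The harness below sketches the experimental setup with a drastically simplified race game standing in for Patolli (the documented rules are much richer); what it illustrates is the method of pitting agents with different strategies against each other over many simulated games:

```python
import random

TRACK, TOKENS = 20, 3   # toy track length and tokens per player

def play(strategy_a, strategy_b, rng):
    pos = {0: [0] * TOKENS, 1: [0] * TOKENS}
    strat = {0: strategy_a, 1: strategy_b}
    turn = 0
    while True:
        throw = rng.randint(1, 5)                     # stand-in for bean throw
        movable = [i for i, p in enumerate(pos[turn]) if p < TRACK]
        i = strat[turn](pos[turn], movable)
        pos[turn][i] = min(TRACK, pos[turn][i] + throw)
        if all(p == TRACK for p in pos[turn]):
            return turn                               # this player wins
        turn = 1 - turn

def lead(tokens, movable):   # always push the most advanced token
    return max(movable, key=lambda i: tokens[i])

def trail(tokens, movable):  # always push the least advanced token
    return min(movable, key=lambda i: tokens[i])

rng = random.Random(7)
wins = sum(play(lead, trail, rng) == 0 for _ in range(2000))
print("'lead' strategy wins", wins, "of 2000 games vs 'trail'")
```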

Abstract:

In Spanish-speaking countries, the game of dominoes is usually played by four people divided into two teams, with teammates sitting opposite each other. The players cannot communicate with each other or receive information from onlookers while a game is being played. Each player only knows for sure which tiles he/she has available in order to make a play, but must make inferences about the plays that the other participants can make in order to try to ensure that his/her team wins the game. The game is governed by a set of standardized rules, and successful play involves the use of highly-developed mathematical, reasoning and decision-making skills. In this paper we describe a computer system designed to simulate the game of dominoes by using four independent game-playing agents split into teams of two. The agents have been endowed with different strategies, including the traditional strategy followed by experienced human players. An experiment is described in which the success of each implemented strategy is evaluated compared to the traditional one. The results of the experiment are given and discussed, and possible future extensions mentioned

Resumen:

La industria automotora utiliza sistemas de software complejos para realizar la ingeniería de sus productos en todas las etapas del proceso de producción, desde el diseño conceptual hasta la manufactura. Para cada una de estas etapas, se emplean herramientas de software de gran complejidad, y la gente que utiliza dichas herramientas necesita apoyo constante para continuar siendo productiva aun cuando puedan surgir problemas en el uso del software. Dada la cantidad y variedad de consultas hechas por el personal de ingeniería de producto al personal de soporte de software, estos últimos se pueden beneficiar si tuvieran una herramienta de software que almacene su experiencia y conocimiento, y que pueda ser consultada para generar soluciones rápidamente a los problemas que pueda haber entre los usuarios del software de diseño. Una forma de almacenar dicha experiencia es generar una base (memoria) de casos a la que se pueda acceder utilizando distintos descriptores de los posibles problemas como índices. Dado un nuevo problema, la idea es realizar una búsqueda en la base de casos para tratar de encontrar todo el conocimiento que se pueda acerca de problemas similares que se hayan tenido anteriormente, y sugerir las soluciones a dichos problemas como potenciales soluciones al nuevo problema. Al mismo tiempo, aun si el conocimiento previo de la base de casos no ha sido de gran utilidad en alguna situación en particular, el sistema puede almacenar las soluciones a estos nuevos problemas para que en el futuro, cuando surjan los problemas de nuevo, se incremente su utilidad. En este artículo describimos un sistema de información que utiliza estas ideas para responder flexible y eficientemente cuando el personal de ingeniería de producto de una empresa de la industria automotora tiene problemas con el uso de su software de diseño

Abstract:

The automotive industry uses complex software systems to perform product engineering at all stages of the production process, from conceptual design to manufacture. For each of these stages, different software tools of great complexity are employed, and the people using these tools need constant support in order to continue being productive even when problems arise with the software they are using. Given the amount and variety of queries made by product engineering personnel to the software support staff, said staff can benefit from a software tool that stores their expertise and can be queried in order to generate quick solutions to problems with product engineering software. One way of storing such expertise is to generate a case base which is indexed by different problem descriptors. Given a new problem, the case base is searched for knowledge about similar problems encountered in the past, and their solutions suggested as potential ways of solving the new problem. At the same time, even if its prior knowledge was not able to provide help in a particular situation, this information system can store new solutions to new problems in order to be able to help with similar problems in the future. In this paper we describe an information system that uses these ideas in order to flexibly and efficiently reply when problem situations are encountered by the product engineering staff of a major automobile manufacturer. This setup can relieve some of the burden placed on the software support staff and reduce their response time

Abstract:

While there have been plenty of applications of case-based reasoning (CBR) to different design tasks, rarely has the methodology been used for generating new works of art. If the goal is to produce completely novel artistic styles, then perhaps other reasoning methods offer better opportunities for producing interesting artwork. However, if the goal is to produce new artwork that fits a previously-existing style, then it seems to us that CBR is the ideal strategy to use. In this paper we present some ideas for integrating CBR with other artificial intelligence techniques in order to generate new artwork that imitates a particular artistic style. As an example we show how we have successfully implemented our ideas in a system that produces new works of art in the style of the Dutch painter Piet Mondrian. Along the way we discuss the implications that a task of this nature has for CBR and we describe and provide the results of some experiments we performed with the system

Abstract:

In this paper we present a computer system based on the notion of evolutionary algorithms that, without human intervention, generates artwork in the style of the Dutch painter Piet Mondrian. Several implementation-related decisions that have to be made in order to program such a system are then discussed. The most important issue that has to be considered when implementing this type of system is the subroutine for evaluating the multiple potential artworks generated by the evolutionary algorithm, and our method is discussed in detail. We then present the set-up and results of a cognitive experiment that we performed in order to validate the way we implemented our evaluation subroutine, showing that it makes sense. Finally, we discuss our results in relation to other research into the computer generation of artwork that fits particular styles existing in the real world

Abstract:

The problem of assigning gates to aircraft that are due to arrive at an airport is one that involves a dynamic task environment. Airport gates can only be assigned if they are currently available, but deciding which gate to assign to which flight also involves satisfying multiple additional constraints. Once a solution has been found, new incoming flights will have approached the airspace of the airport in question, and these will require arrival gates to be assigned to them, so the entire process must be repeated. We have come up with a combined knowledge-based and evolutionary approach for performing the airport gate scheduling task. In this paper we present our model from a theoretical point of view, and then discuss a particular implementation of it for the scheduling of arrival gates at a specific airport and show some experimental results

Abstract:

The problem of assigning gates to aircraft that are due to arrive at an airport is one that involves a dynamic task environment. Airport gates can only be assigned if they are currently available, but deciding which gate to assign to which flight also involves satisfying multiple additional constraints. Once a solution has been found, new incoming flights will have approached the airspace of the airport in question, and these will require arrival gates to be assigned to them, so the entire process must be repeated. These observations have led us to propose a coevolutionary model for automating the airport gate scheduling problem. We represent the genotypes of two species. One species corresponds to the current problem to be addressed (a list of departing and arriving flights at a given time-step) and the other species corresponds to the solutions being proposed for that problem (a list of possible gate assignments for the arriving flights). An evolutionary algorithm which operates on a population of solution genotypes is used to solve each instance of the airport gate scheduling problem. A coevolutionary algorithm in which the two species influence each other, which incorporates the previously-mentioned evolutionary algorithm once at each time-step, models the fact that multiple instances of the problem occur over time as an airport operates
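
A minimal sketch of the inner evolutionary algorithm only (our simplification; the paper's coevolutionary layer is omitted): genotypes are gate assignments for arriving flights, and fitness penalizes overlapping occupancy of the same gate. The flight times and gate count are invented toy data:

```python
import random

random.seed(3)
flights = [(0, 40), (10, 50), (20, 60), (45, 90), (55, 95)]  # (in, out)
GATES, POP, GENS = 3, 40, 80

def conflicts(assign):
    n = 0
    for i in range(len(flights)):
        for j in range(i + 1, len(flights)):
            if assign[i] == assign[j]:
                (a1, d1), (a2, d2) = flights[i], flights[j]
                n += a1 < d2 and a2 < d1     # occupancy intervals overlap
    return n

pop = [[random.randrange(GATES) for _ in flights] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=conflicts)
    survivors = pop[: POP // 2]              # truncation selection
    pop = survivors + [
        [g if random.random() > 0.2 else random.randrange(GATES)
         for g in random.choice(survivors)]  # per-gene mutation
        for _ in range(POP - len(survivors))
    ]
pop.sort(key=conflicts)
print("best assignment:", pop[0], "conflicts:", conflicts(pop[0]))
```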

Resumen:

El ingreso de México al GATT en 1986 trajo consigo la disminución de barreras a la importación. A este hecho se le denomina: recibir el trato de nación más favorecida. México signó con Japón un Acuerdo de Complementariedad Económica que entró en vigor a partir de 2005. Japón es un gran consumidor de carne de puerco y paga excelentes precios por ella. Entre sus principales proveedores se encuentran Estados Unidos y Canadá, miembros del TLCAN que establece preferencias arancelarias para la importación y exportación de productos agropecuarios que podrían propiciar la triangulación. El objetivo de este trabajo es determinar la competitividad de la carne de puerco de los países integrantes del TLCAN en el mercado japonés. El periodo de análisis comprendió de 1998 a 2012. Se utilizó el modelo no lineal SDAIDS (por sus siglas en inglés). Las pruebas de significancia se realizaron utilizando el procedimiento de Chalfant. En relación con las elasticidades de ingreso, cabe observar que todas resultan positivas, acordes con la teoría que indica una relación directa entre los gastos por importaciones y la cantidad demandada de productos cárnicos. En el caso de Estados Unidos de Norteamérica y México tienen un nivel de significancia del 1%; las de los tres países del TLCAN resultan todas menores que uno, lo que indica que las importaciones provenientes de países del resto del mundo son más sensibles a los cambios en el gasto. México tiene una ventana de oportunidad de mercado para ampliar sus exportaciones de carne de puerco a Japón

Abstract:

The entrance of Mexico into the GATT in 1986 diminished considerably its import barriers, and in particular produced a continuously increasing commercial exchange with Japan. In fact, later in 2005 Mexico and Japan signed an Economic Complementarity Agreement. Japan is an important pork meat consumer and has recently paid high prices for quality meat. On the other hand, NAFTA countries are important suppliers in Japan's meat import market. The objective of this work is to determine the competitiveness of the different NAFTA countries in the pork meat import market of Japan. Our analysis covers the period from 1998 to 2012. We employed a non-linear Source Differentiated Almost Ideal Demand System (SDAIDS) to estimate the elasticities of the corresponding demand functions. The significance tests were realized using Chalfant's procedure. We obtained positive expenditure elasticities, as expected. However, in the case of Mexico and the USA the elasticities turned out to be less than 1, while the expenditure elasticity for the Rest of the World is greater than 1. This shows that pork meat imports from other countries are more sensitive to changes in expenditure. We conclude that Mexico has an important opportunity to expand its exports of pork meat to Japan

Abstract:

In this paper we study delegated portfolio management when the manager’s ability to short-sell is restricted. Contrary to previous results, we show that under moral hazard, linear performance-adjusted contracts do provide portfolio managers with incentives to gather information. We find that the risk-averse manager’s effort is an increasing function of her share in the portfolio’s return. This result affects the risk-averse investor’s choice of contracts. Unlike previous results, the purely risk-sharing contract is now shown to be suboptimal. Using numerical methods we show that under the optimal linear contract, the manager’s share in the portfolio return is higher than under a purely risk-sharing contract. Additionally, this deviation is shown to be: (i) increasing in the manager’s risk aversion and (ii) larger for tighter short-selling restrictions. As the constraint is relaxed the deviation converges to zero

Resumen:

En simulación de eventos discretos a veces existe la necesidad de clasificar una entidad de acuerdo con algún criterio o característica. En muchos procesos de servicios la característica más fácil de observar es el tiempo en el que ocurre cierto evento, y éste es incluido de manera intrínseca dentro de la tasa de arribos de un proceso de Poisson para después obtener la distribución de probabilidad y hacer predicciones. En este documento se muestra que también se puede obtener una estimación de densidad de los arribos condicionada en el tiempo durante todo el periodo del proceso de Poisson, la cual puede ser multimodal, y mediante un clasificador bayesiano realizar un muestreo para hacer la clasificación. Los resultados muestran que después de hacer la clasificación los eventos convergen a la proporción a priori deseada, lo que se observa con una gráfica de medias acumuladas, además de que la distribución de probabilidad o verosimilitud observada de los datos originales se mantiene en los eventos clasificados

Abstract:

In discrete-event simulation there is sometimes a need to classify an entity according to some criterion or characteristic. In many service processes the easiest feature to observe is the time at which a certain event occurs, which is included intrinsically in the arrival rate of a Poisson process in order to obtain the probability distribution and make predictions. This document shows that it is also possible to obtain a density estimate of the arrivals, conditioned on time over the entire period of the Poisson process, which can be multimodal, and to use a Bayesian classifier that samples from the posterior to perform the classification. The results show that after the classification the events converge to the desired a priori proportion, as observed in a plot of cumulative means, and that the probability distribution, or observed likelihood, of the original data is preserved in the classified events
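
A sketch under stated assumptions (two invented event classes whose arrival times cluster at different hours, giving a multimodal overall density): a Gaussian KDE estimates each class-conditional density p(t | c), and labels are sampled from the posterior built with the desired prior:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
t_a = rng.normal(9, 1.0, 500)          # class A arrivals (hour of day)
t_b = rng.normal(16, 1.5, 500)         # class B arrivals
kde_a, kde_b = gaussian_kde(t_a), gaussian_kde(t_b)
prior_a = 0.3                          # desired a priori proportion

def classify(t):
    pa = prior_a * kde_a(t)[0]         # prior x likelihood for class A
    pb = (1 - prior_a) * kde_b(t)[0]
    # sample the label from the posterior rather than taking the argmax
    return "A" if rng.uniform() < pa / (pa + pb) else "B"

new_arrivals = rng.uniform(6, 20, 1000)
labels = [classify(t) for t in new_arrivals]
print("fraction classified A:", labels.count("A") / len(labels))
```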

Abstract:

A restricted forecasting compatibility test for Vector Autoregressive Error Correction models is analyzed in this work. It is shown that a variance–covariance matrix associated with the restrictions can be used to cancel out model dynamics and interactions between restrictions. This allows us to interpret the joint compatibility test as a composition of the corresponding single restriction compatibility tests. These tests are useful for appreciating the contribution of each and every restriction to the joint compatibility between the whole set of restrictions and the unrestricted forecasts. An estimated process adjustment for the test is derived and the resulting feasible joint compatibility test turns out to have better performance than the original one. An empirical illustration of the usefulness of the proposed test makes use of Mexican macroeconomic data and the targets proposed by the Mexican Government for the year 2003
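
For orientation, joint compatibility statistics in this literature typically take a Wald form; the following is a sketch in our notation, and the paper's exact construction may differ in its details:

```latex
% Let \hat{Y} be the unrestricted forecast with MSE matrix \Sigma, and
% let C'Y = r collect q linearly independent restrictions. A Wald-type
% joint compatibility statistic is
K \;=\; \bigl(r - C'\hat{Y}\bigr)'\,\bigl(C'\,\Sigma\,C\bigr)^{-1}\,\bigl(r - C'\hat{Y}\bigr)
\;\sim\; \chi^{2}_{q}.
```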

Abstract:

In this work we present a truncated Newton method able to deal with large scale bound constrained optimization problems. These problems are posed during the process of identifying parameters in mathematical models. Specifically, we propose a new termination rule for the inner iteration, that is effective in this class of applications. Preliminary numerical experimentation is presented in order to illustrate the merits of the rule
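
As background for readers unfamiliar with truncated Newton methods, here is a minimal sketch (ours) using a standard Dembo-Steihaug-type inner termination rule, not the paper's new rule: the Newton system H d = -g is solved by conjugate gradients, stopped once the residual is small relative to the gradient norm:

```python
import numpy as np

def truncated_newton_step(g, hess_vec, n, eta=0.5):
    d, r, p = np.zeros(n), -g.copy(), -g.copy()
    # classic forcing sequence: eta_k = min(eta, sqrt(||g||))
    tol = min(eta, np.sqrt(np.linalg.norm(g))) * np.linalg.norm(g)
    for _ in range(2 * n):                  # inner CG iterations
        Hp = hess_vec(p)
        curv = p @ Hp
        if curv <= 0:                       # nonconvexity safeguard
            break
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:    # truncation (inner stop) rule
            break
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    return d

# Toy quadratic f(x) = 0.5 x'Ax - b'x, so grad = Ax - b and H = A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(5):
    x = x + truncated_newton_step(A @ x - b, lambda v: A @ v, 2)
print("solution:", x, "exact:", np.linalg.solve(A, b))
```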

Resumen:

A partir de las respuestas de los alumnos en la Olimpiada de Mayo, evento donde son aplicadas pruebas a estudiantes de primaria y secundaria en todo México, se intenta plantear un enfoque diferente para la enseñanza de las matemáticas en la educación básica. Los problemas “Tipo Olimpiada” trascienden el plan de estudios y el bagaje académico de los estudiantes. Es por ello que postulamos que presentar problemas de este tipo, los cuales llevan al alumno a considerar diferentes formas de llegar a una respuesta, tiene un impacto positivo en el desarrollo lógico matemático de los jóvenes y se traduce en un mejor entendimiento de las matemáticas

Abstract:

From the students’ answers in the Olimpiada de Mayo, a competition administered to elementary and secondary school students all over Mexico, we propose a different approach to the teaching of mathematics in elementary education. “Olympiad-type” problems transcend the study plan and the students’ academic background. This is why we postulate that presenting problems of this sort, which push the student to consider different ways of reaching an answer, has a positive impact on the development of young people’s mathematical logic and translates into a better understanding of mathematics

Abstract:

A notion of almost regular inductive limits is introduced. Every sequentially complete inductive limit of arbitrary locally convex spaces is almost regular

Abstract:

A regular inductive limit of sequentially complete spaces is sequentially complete. For the converse of this theorem we have a weaker result: if ind En is a sequentially complete inductive limit, and each constituent space En is closed in ind En, then ind En is α-regular
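
For context, the standard terminology (our paraphrase of the usual definitions in the regularity literature on inductive limits):

```latex
% Let E = \mathrm{ind}\, E_n be an inductive limit of locally convex spaces.
E \text{ is regular} \iff \forall B \subset E \text{ bounded } \exists n:\;
    B \subset E_n \text{ and } B \text{ is bounded in } E_n;
\qquad
E \text{ is } \alpha\text{-regular} \iff \forall B \subset E \text{ bounded }
    \exists n:\; B \subset E_n.
```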

Abstract:

In shale plays, as with all reservoirs, it is desirable to achieve the optimal development strategies, particularly well spacing, as early as possible, without overdrilling. This paper documents a new technology that can aid in determining optimal development strategies in shale reservoirs. We integrate a decline-curve-based reservoir model with a decision model that incorporates uncertainty in production forecasts. Our work extends previous work by not only correlating well spacing and other completion parameters with performance indicators, but also developing an integrated model that can forecast production probabilistically and determine the impact of development decisions on long-term production. A public data set of 64 horizontal wells in the Barnett shale play in Cooke, Montague and Wise Counties, Texas, was used to construct the integrated model. This part of the Barnett shale is in the oil window and wells produce significant volumes of hydrocarbon liquids. The data set includes directional surveys, completion and stimulation data, and oil and gas production data. Completion and stimulation parameters, such as perforated interval, fluid volume, proppant mass, and well spacing, were correlated with decline curve parameters, such as initial oil rate and a proxy for the initial decline rate, the ratio of cumulative production at 6 months to 1 month (CP6to1), using linear regression. In addition, a GOR model was developed based on thermal maturity and average GOR versus time. Thousands of oil and gas production forecasts were generated from linear regression and GOR models using Monte Carlo simulation, which serve as the input to the decision model. The decision model then determines the impact of well spacing and other completion/stimulation decisions on long-term production performance.
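
An illustrative sketch of the probabilistic forecasting step only (not the paper's calibrated model): hyperbolic decline curves with initial rate and decline drawn from distributions that stand in for the regression-with-error step, summarized over Monte Carlo trials. All parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 241)        # years, roughly monthly resolution
dt = t[1] - t[0]
b = 1.2                                # hyperbolic exponent (assumed)

def eur(qi, di):
    # hyperbolic decline q(t) = qi / (1 + b*Di*t)^(1/b), rate in bbl/day;
    # trapezoidal integration, times 365 to convert to barrels
    q = qi / (1.0 + b * di * t) ** (1.0 / b)
    return (q[:-1] + q[1:]).sum() * dt / 2.0 * 365.0

trials = 5000
qis = rng.lognormal(np.log(300.0), 0.4, trials)   # initial rate (bbl/d)
dis = rng.lognormal(np.log(0.8), 0.3, trials)     # initial decline (1/yr)
eurs = np.array([eur(q, d) for q, d in zip(qis, dis)])
p90, p50, p10 = np.percentile(eurs, [10, 50, 90])  # oil-field convention
print(f"EUR P90/P50/P10 (bbl): {p90:,.0f} / {p50:,.0f} / {p10:,.0f}")
```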

Abstract:

Public goods games played by a group of individuals collectively performing actions towards a common interest characterize social dilemmas where both competition and cooperation are present. By using agent-based simulation, this paper investigates how collective action in the form of repeated n-person linear public goods games is affected when the interaction of individuals is driven by the underlying hierarchical structure of an organization. The proposed agent-based simulation model is based on generally known empirical findings about public goods games and takes into account that individuals may change their profiles from conditional cooperators to rational egoists or vice versa. To do so, a fuzzy logic system was designed to allow the cumulative modification of agents' profiles in the presence of vague variables such as individuals' attitude towards group pressure and/or their perception of the cooperation of others. From the simulation results, it can be concluded that collective action is affected by the structural characteristics of hierarchical organizations. The major findings are as follows: (1) Cooperation in organizational structures is fostered when a collegial model defines the structure of the punishment mechanisms employed by organizations. (2) Having multiple small organizational units fosters group pressure and highlights the positive perception of the cooperation of others, resulting in organizations achieving relatively high aggregate contribution levels. (3) The greater the number of levels, the higher the aggregate contribution level of an organization when effective sanctioning systems are introduced
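
A minimal sketch of the building block of such a simulation: one round of an n-person linear public goods game, with a deliberately simplified profile rule standing in for the paper's fuzzy logic system. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def public_goods_round(contributions, mpcr=0.5):
    """One round of an n-person linear public goods game with endowment 1:
    each agent keeps (1 - contribution) plus an equal share of the scaled
    pot; mpcr is the marginal per-capita return (1/n < mpcr < 1)."""
    contributions = np.asarray(contributions, dtype=float)
    return (1.0 - contributions) + mpcr * contributions.sum() / contributions.size

def next_contributions(contributions, profiles):
    """Toy profile rule: conditional cooperators match the group mean,
    rational egoists free-ride (the paper's fuzzy system is richer)."""
    return np.where(profiles == "conditional", contributions.mean(), 0.0)

contribs = rng.uniform(0.0, 1.0, size=8)
profiles = np.array(["conditional"] * 6 + ["egoist"] * 2)
for _ in range(5):
    payoffs = public_goods_round(contribs)
    contribs = next_contributions(contribs, profiles)
print(contribs.round(2), payoffs.round(2))
```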

Abstract:

Though most countries have established public defense systems to represent indigent defendants, this is far from implying their offices are in good shape. Indeed, significant variation likely exists in the systems' effectiveness, across societies and at the subnational level. Defense agencies' performance likely depends on their configuration, including their funding, their internal arrangements, and their selection and retention mechanisms. Centered on public defense in Argentina, this article compares the performance of public and retained counsel at the country's Supreme Court. Public defenders' offices have received a boost in the last two decades and are institutionally well positioned to square off against prosecutors, putting them at least on par with the average retained counsel. Using a fresh dataset of around 3000 appeal decisions from 2008 to 2013, the study largely tests representational capabilities by looking at whether counsel meets briefs' formal requirements, a subset of decisions particularly valuable for reducing potential biases. It finds that formal dismissals are significantly less frequent when a public defender is named in an appeal, particularly when a federal defender is involved. It also discusses and tests alternative mechanisms. The article's findings illuminate discussions of support structures for litigation, criminal justice reform, and criminal defendants' rights

Abstract:

Does a young football (soccer) player's birthdate affect his prospects in the sport? Scholars have found a correlation between early births in the competition year among young players within the same cohort and improved chances as they advance to later stages of the sport. This article is one of the first studies to ask this question about a male premier league in Latin America: the Argentinian 'A' league. It uses a large-N dataset of all players in the period 2000-2012, around 3000 players. The article finds a large effect of a player's relative age on his prospects of becoming a professional, though the effect is only present for Argentinian-born players. The effect evaporates once a set of measures is employed to compare professional players with one another. The article contributes to the discussion of the biased effects of seemingly neutral institutional policies, and its conclusions may shed light on other areas

Abstract:

This paper presents an estimation of ideal points for the Justices of the Supreme Court of Argentina for 1984–2007. The estimated ideal points allow us to focus on political cycles in the Court as well as possible coalitions based on presidential appointments. We find strong evidence to support the existence of such coalitions in some periods (such as President Carlos Menem’s term) but less so in others (including President Néstor Kirchner’s term, a period of swift turnover in the Court due to impeachment processes and resignations). Implications for comparative judicial politics are discussed

Abstract:

Engineers make things, make things work, and make things work better and easier. This kind of knowledge is crucial for innovation, and much of the explicit knowledge developed by engineers is embodied in scientific publications. In this paper, we analyze the evolution of publications and citations in engineering in a middle-income country such as Mexico. Using a database of all Mexican publications in Web of Science from 2004 to 2017, we explore the characteristics of publications that tend to have the greatest impact, that is, the highest number of citations. Among the variables studied are the type of collaboration (no collaboration, domestic, bilateral, or multilateral), the number of coauthors and countries, controlling for a coauthor from the USA, and the affiliation institution of the Mexican author(s). Our results emphasize the overall importance of joint international efforts and suggest that the publications with the highest number of citations are those with multinational collaboration (coauthors from three or more countries) and those where one of the coauthors is from the USA. Another interesting result is that single-authored papers have had a higher impact than those written through domestic collaboration

Abstract:

The purpose of this paper is to analyze the extent to which productivity and formal research collaboration have changed in the fields of social sciences in Mexico. The results show that all fields have seen extensive growth in the number of publications, mainly since 2005, when the number of Social Sciences journals indexed in Web of Science (WoS) began to grow significantly. However, there are important variations among areas of knowledge. The four most productive fields, considering only publications in WoS, are Business & Economics; Education & Educational Research; Social Sciences Other Topics; and Psychology. The mean number of coauthors per paper has not grown steadily over the period of analysis; on the contrary, its evolution has been almost flat in nearly all fields of knowledge. The evolution of communication and information technologies does not seem to have substantially influenced co-authorship in the Social Sciences in Mexico. Nor has there been a big change in terms of collaboration. On average, 42% of the publications in all fields of knowledge were by solo authors, and 26% were local collaborations, i.e. collaborations among authors affiliated with Mexican institutions. Regarding international collaboration, 24% of the publications were bilateral collaborations (Mexico and another country) and only 8% involved researchers from three or more countries (multilateral collaboration)

Abstract:

Nations consider R&D a fundamental way to spur business innovation, increase the international competitiveness of domestic firms, achieve higher levels of economic growth, and increase the social welfare of their citizens. The empirical evidence indicates that, in the Latin America region, investment in R&D is comparatively low, largely depends on public funds, and is highly concentrated in academic research with limited business applications. Empirical evidence suggests a lack of connection in the region between those who produce knowledge (academia) and those who use that knowledge (business practitioners). This paper argues that business schools in the region have a role to play in filling this gap by conducting more research with real-world business applications and by fostering innovative entrepreneurship among business school students

Abstract:

This paper analyzes science productivity for nine developing countries. Results show that these nations are reducing their science gap, with R&D investments and scientific impact growing at more than double the rate of the developed world. But this “catching up” hides a very uneven picture among these nations, especially in what they are able to generate in terms of impact and output relative to their levels of investment and available resources. Moreover, contrary to what one might expect, it is clear that the size of the nations and the relative scale of their R&D investments are not the key drivers of efficiency

Abstract:

This paper provides useful insights for the design of networks that promote research productivity. The results suggest that the different dimensions of social capital affect scientific performance differently depending on the area of knowledge. Overall, dense networks negatively affect the creation of new knowledge. In addition, the analysis shows that a division of labor in academia, in the sense of interdisciplinary research, increases the productivity of researchers. It is also found that the position in a network is critical: researchers who are central tend to create more knowledge. Finally, the findings suggest that the number of ties has a positive impact on future productivity. Regarding areas of knowledge, Exact Sciences is the area in which social capital has the strongest impact on research performance; Social Sciences and Humanities, as well as Engineering, are the areas in which social capital has a lesser effect. The differences found across multiple domains of science suggest the need to consider this heterogeneity in policy design
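
The network dimensions discussed here (density, centrality, structural holes, number of ties) can be computed with standard tooling; a small illustration on an invented coauthorship graph, assuming the networkx library is available:

```python
import networkx as nx

# Illustrative coauthorship network; nodes and edges are made up.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"),
                  ("d", "e"), ("e", "f"), ("d", "f"), ("c", "e")])

density = nx.density(G)                    # dense networks: high density
centrality = nx.betweenness_centrality(G)  # central (brokering) researchers
constraint = nx.constraint(G)              # low constraint = many structural holes
ties = dict(G.degree())                    # number of direct ties per researcher

print(f"density = {density:.2f}")
print(max(centrality, key=centrality.get), "is most central")
print(min(constraint, key=constraint.get), "spans the most structural holes")
print(ties)
```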

Abstract:

This paper aims to further our understanding of how embeddedness affects the research output and impact of scientists. The analysis uses an extensive panel dataset that allows an examination of within-person variation over time. It explores the simultaneous effects of different dimensions of network embeddedness over time at the individual level, including the establishment of direct ties, the strength of these ties, as well as density, structural holes, centrality, and cross-disciplinary links. Results suggest that the network dynamics behind the generation of quality output contrast dramatically with those behind quantity. We find that the relational dimension of scientists matters for quality but not for output, while cognitive dimensions have the opposite effect, helping output while being indifferent to impact. The structural dimension of the network is the only area where there is some degree of convergence between output quantity and quality; here, we find a prevalence of brokerage over cohesion. The paper concludes by discussing implications for both network research and science policy

Abstract:

Mexico is among the countries that look to Science, Technology and Innovation as a fundamental mechanism to support competitiveness, reach higher levels of economic growth and increase the social welfare of their citizens. This paper provides a description of the main policy initiatives to foster innovation in Mexico. Afterward, some characteristics of innovation practices in private-sector businesses, and how these features have evolved over a 5-year period, are presented to assess the impact of those policy initiatives. For this purpose, results of the 2001 and 2006 national innovation surveys are used

Abstract:

The Mexican government faces significant challenges in providing resources and implementing policies to support the steps that have been taken to strengthen its science and technology capacity. The country's research and development (R&D) investment was less than 0.4% of its gross domestic product (GDP) in 2004, low in comparison with other countries. Abundant natural resources, such as oil, along with a closed and regulated economy, are some of the main factors that have prevented the government and companies from investing in R&D activities in the country. The severe financial crisis experienced by the country during the 1980s, which pushed inflation above 150%, was another factor that hampered the government's efforts to strengthen research and development activities in the fields of science and technology

Abstract:

This paper uses a unique data set of Mexican researchers to explore the determinants of research output and impact. Our findings confirm a quadratic relationship between age and the number of published papers. However, publishing peaks when researchers are approximately 53 years old, 5 to 10 years later than prior studies have shown. Overall, the results suggest that age does not have a substantial influence on research output and impact. We also find that reputation matters for the number of citations but not for publications. Results also show important heterogeneity across areas of knowledge. Interpretations of other aspects, such as gender, country of PhD, and cohort effects, are also discussed
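
The reported peak follows from the quadratic specification by elementary calculus. A hedged restatement of the standard form (not necessarily the paper's exact model):

```latex
% Standard quadratic age-productivity profile (illustrative form):
\[
  y \;=\; \beta_0 + \beta_1 a + \beta_2 a^2 + \varepsilon,
  \qquad \beta_1 > 0,\ \beta_2 < 0,
\]
% where y is the number of published papers and a the researcher's age.
% Setting dy/da = 0 gives the publishing peak
\[
  a^{*} \;=\; -\frac{\beta_1}{2\beta_2},
\]
% so a peak near age 53 corresponds to \beta_1 \approx 106\,|\beta_2|.
```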

Resumen:

El presente trabajo señala algunas de las críticas que se han realizado a la cuestión de lo social en Heidegger. Se muestra que el método por sí mismo excluye la posibilidad de hacer propuestas concretas en lo referente a lo ético o lo social, que no debe traducirse como un desentendimiento de lo humano. Por otro lado, se realiza un recorrido de algunos de los lugares centrales de la ontología fundamental en la que Heidegger aborda el tema de la alteridad

Abstract:

This work outlines some of the criticism regarding Heidegger’s social theory. It will be demonstrated that Heidegger’s method itself eliminates the possibility of specific suggestions regarding ethical and social issues, which must not be interpreted as a disregard of human issues. On the other hand, we will revisit some key places in his Fundamental Ontology, where the philosopher addresses the theme of alterity

Resumen:

En este artículo se fundamenta una idea sencilla: en la actualidad muchos líderes políticos usan el término tolerancia para calificar sus propias actitudes hacia cierto tipo de personas, prácticas y culturas. La pregunta es simple: ¿al estado tolerante se le permite hablar (moral y conceptualmente) acerca de la tolerancia? La tesis defendida por el autor es que el Estado liberal moderno no puede (por razones conceptuales) y no debe (por razones morales) hablar acerca de la tolerancia

Abstract:

In this article the author makes a simple claim: nowadays, several political leaders use the term tolerance to qualify their attitudes towards certain kinds of people, practices and cultures. The question is simple: is the tolerant state allowed (morally and conceptually) to speak about tolerance? The thesis defended by the author is that the modern liberal State cannot (for conceptual reasons) and should not (for moral reasons) talk about tolerance

Resumen:

En algunos sectores crece la convicción de que la universidad es innecesaria y que puede ser remplazada por una serie de dispositivos perfectamente alineados a los intereses de la empresa. Se multiplican los expertos que anuncian la muerte de la universidad. Conviene reflexionar sobre los rasgos que tendría ese mundo sin universidades y destacar la necesidad de un espacio libre de condicionamientos políticos y económicos

Abstract:

The conviction that the university is unnecessary and can be replaced by a series of devices perfectly aligned with business interests is growing in some sectors. The experts who announce the death of the university are multiplying. It is worth reflecting on the characteristics such a world without universities would have and highlighting the need for a space free of political and economic conditioning

Resumen:

La filosofía hegeliana tiene un carácter pedagógico que se hace presente en el concepto Bildung (educación, formación o cultivo de sí). La fenomenología del espíritu es la ciencia de la experiencia de la conciencia; la exposición del proceso de aprendizaje del espíritu en la conciencia y en la historia. El espíritu es de por sí el sujeto que se ha aprendido a sí mismo, que sabe que sabe y que no deja de desplegarse en diversas formas históricas. Así, el espíritu trabaja, se hace a sí mismo. Es pura autoactividad que se forma y se educa dialécticamente. Sostengo la afirmación de que el sistema de la ciencia de Hegel no solo es lógico o histórico sino pedagógico en el sentido enfático de que la Idea o la totalidad de lo real no puede ser sino un largo proceso formativo

Abstract:

Hegelian philosophy has a pedagogical character that is present in the concept of Bildung (education, formation or self-cultivation). The phenomenology of spirit is the science of the experience of consciousness: the exposition of the learning process of the spirit in consciousness and in history. The spirit is in itself the subject that has learned itself, that knows that it knows and that does not cease to unfold in various historical forms. Thus, the spirit works; it makes itself. It is pure self-activity that forms and educates itself dialectically. I support the assertion that Hegel's system of science is not only logical or historical but pedagogical, in the emphatic sense that the Idea, or the totality of the real, can only be a long formative process

Resumen:

La Universidad de Columbia determinó el 20 de enero de 1919 implantar el curso Contemporary Civilization, que se convirtió con el paso del tiempo en uno de los más exitosos de la educación estadounidense. El propósito del curso era comprender los problemas del mundo y sus soluciones. Solo un estudiante cultivado puede alcanzar un entendimiento profundo de sí mismo y de la sociedad, y de emitir sus propios juicios. El proyecto se inspiró en la idea de la obligación moral de ser inteligente de John Erskine. Daniel Cosío Villegas (1898-1976) impulsó la introducción del curso en México en el nuevo plan de estudios (1958) de la Facultad de Economía de Nuevo León. En esta investigación exploramos las circunstancias históricas del curso y los problemas que enfrentó Consuelo Meyer L'Epée (1918-2010) para implantarlo

Abstract:

On January 20, 1919, Columbia University decided to introduce the Contemporary Civilization course. This new academic program became over time one of the most successful in American education. The purpose of the course was to understand the world's problems and their solutions. Only a cultivated student can achieve a deep understanding of himself or herself and of society, and the ability to make his or her own judgments. The project was inspired by John Erskine's (1879-1951) idea of the moral obligation to be intelligent. Daniel Cosío Villegas (1898-1976) promoted the introduction of the Contemporary Civilization course in Mexico in the new curriculum (1958) of the Faculty of Economics of Nuevo León. In this research we explore the historical circumstances of this course and the challenges faced by Consuelo Meyer L'Epée (1918-2010) in its implementation

Resumen:

El presente artículo profundiza en las características, virtudes y ventajas del método socrático en la educación universitaria de la mano de la experiencia de los Grandes Libros en la Universidad de Chicago. Además, se reflexiona sobre la actualidad y los desafíos de tal método en tiempos de pandemia y de tecnología

Abstract:

This article delves into the characteristics, virtues and advantages of the Socratic method in higher education, drawing on the experience of the Great Books at the University of Chicago. It also reflects on the current relevance and challenges of such a method in times of pandemic and technology

Resumen:

Hegel elaboró una rigurosa filosofía del derecho fundada en la suspensión del formalismo jurídico y el subjetivismo de las preferencias individuales. Su filosofía no es positivista, pero tampoco historicista o sociológica. Se afirma como actividad científica porque se sustenta en la lógica y se demuestra como realidad efectiva. La filosofía del derecho expone la libertad a lo largo de tres momentos: el derecho abstracto, la moralidad y la eticidad. En el derecho abstracto se afirma la libertad formal del individuo, su personalidad jurídica: declaro que soy libre y tengo "derecho"; solo es efectivo cuando se expone y se prueba exteriormente. El derecho demanda el reconocimiento de la comunidad: un dejar hacer (laissez faire) y el otorgamiento de una prestación (prerrogativa) por parte del Estado. Para Hegel, la eticidad es el orden que permite la relación entre individuos libres en la esfera social, económica o jurídica. Por lo tanto, el derecho es realmente efectivo en la eticidad. En la constitución política y en las leyes, las personas exteriorizan la racionalidad práctica: la perfecta unión de las normas universales de la cultura y sus costumbres. Sólo así, el ciudadano se reconoce a sí mismo y a los demás en lo universal de las leyes

Abstract:

Hegel developed a rigorous philosophy of right founded on overcoming the legal formalism and the subjectivism of individual preferences. His philosophy is not positivist, but neither is it historicist or sociological. It asserts itself as a scientific activity because it is grounded in logic and demonstrated as effective reality. The philosophy of right expounds freedom through three moments: abstract right, morality and ethical life. In abstract right, the individual's formal freedom is affirmed, his legal personality: I declare that I am free and have a "right"; however, this is only effective when it is expressed and proven externally. Right demands the recognition of the community: a letting-be (laissez faire) and the granting of a benefit (prerogative) by the State. For Hegel, ethical life is the order that enables the relationship between free individuals in the social, economic or legal sphere. Therefore, right becomes truly effective in ethical life. In the political constitution and in the laws, people externalize practical rationality: the perfect union of the universal norms of culture and its customs. Only in this way does the citizen recognize himself and others in the universality of the laws

Resumen:

A Hegel se le acusa de ser un pensador totalitario, un abogado del "estatismo" y un "enemigo de la libertad". Se supone que no tiene un pensamiento económico y que, si logró hacer algunos esbozos al respecto, deben ser ignorados. Una exégesis rigurosa de las obras de Hegel no permite que se le atribuya el título de pensador totalitario. Al contrario, Hegel es un pensador liberal. No solo estudió la economía política de James Steuart (1707-1780) y de Adam Smith (1723-1790), sino que dedicó una parte de su reflexión a temas económicos como el "dinero”, el "sistema de necesidades" y la "sociedad civil". Para Hegel, el desarrollo de la libertad individual consolidó al "dinero" como medio de cambio universal y este, al mismo tiempo, posibilitó unas relaciones más justas entre los hombres

Abstract:

Hegel is accused of being a totalitarian thinker, an advocate of "statism" and an "enemy of freedom". It is assumed that he has no economic thought and that the few sketches he did produce on the subject should be ignored. A rigorous exegesis of Hegel's works does not allow the title of totalitarian thinker to be attributed to him. On the contrary, Hegel is a liberal thinker. He not only studied the political economy of James Steuart (1707-1780) and Adam Smith (1723-1790), but devoted a part of his reflection to economic issues such as "money", the "system of needs" and "civil society". For Hegel, the development of individual freedom consolidated "money" as a universal medium of exchange, which in turn made possible more just relations among men

Resumen:

¿Qué significado tiene el humanismo en el siglo xxi? ¿Qué papel tienen las humanidades en la universidad? ¿Qué relación tiene la universidad con la sociedad? ¿Cuáles son los presupuestos de una filosofía de la educación relevante para nuestra época? Este artículo busca responder estas preguntas a partir de las reflexiones que Martha Nussbaum ha dedicado a la filosofía educativa. El reto de la educación es el reto del hombre que se bate entre el narcisismo y la interdependencia, entre vergüenza y aceptación de la propia humanidad

Abstract:

What does humanism mean in the twenty-first century? What role do the humanities play in the university? How is the university related to society? What are the premises of a philosophy of education relevant to our times? This article seeks to answer these questions on the basis of Martha Nussbaum's reflections on educational philosophy. The challenge of education is the challenge of the human being torn between narcissism and interdependence, between shame and the acceptance of one's own humanity

Resumen:

Carlos de la Isla reflexiona sobre los retos que enfrenta la educación en el mundo contemporáneo. Su pensamiento coincide con el de otros especialistas del mundo al advertir sobre distintos peligros para las instituciones educativas, como supeditar las acciones educativas a los criterios mercantilistas. En esta entrevista, se pondera el significado de la pedagogía dialógica, el currículo oculto y la pedagogía crítica

Abstract:

Carlos de la Isla reflects on the challenges facing education in the contemporary world. His thinking coincides with that of other specialists around the world in warning of various dangers facing educational institutions, such as subordinating educational action to market criteria. In this interview, the significance of dialogic pedagogy, the hidden curriculum, and critical pedagogy is considered

Resumen:

Desde hace tiempo, se ha consolidado un nuevo paradigma educativo, centrado en el desarrollo de “competencias”, que enfatizan las actuaciones y las capacidades del estudiante “para saber hacer en un contexto”. Tal enfoque está presente en muchas reformas educativas en Europa y Latinoamérica, debido a la influencia de organismos económicos internacionales. Este artículo analiza el concepto “competencia” y su evolución, y evalúa el significado de la noción “competencia social y ciudadana” desde la educación como ejercicio de crítica y transformación social

Abstract:

For some time now, a new educational paradigm has been consolidating, centered on the development of “competencies”, which emphasize students' performances and abilities “to know how to act in a context”. This approach is present in many educational reforms in Europe and Latin America, owing to the influence of international economic organizations. This article analyzes the concept of “competency” and its evolution, and evaluates the meaning of the notion of “social and civic competency” from the perspective of education as an exercise of critique and social transformation

Resumen:

Cuando asumió la presidencia de la república en diciembre de 1940, Manuel Ávila Camacho encontró a su gobierno inmerso en la Segunda Guerra Mundial y comprendió que México no podía quedar al margen. Por eso nombró a Ezequiel Padilla al frente de la Secretaría de Relaciones Exteriores, a quien solicitó que ajustara la política exterior mexicana, sin abandonar los principios constitucionales de no intervención, respeto a la soberanía y solución pacífica de las controversias internacionales, para que el país ingresara a la Segunda Guerra Mundial. La decisión presidencial fue decisiva para hacer frente a la guerra, pero también para definir la política exterior que el país seguiría en la posguerra. Esta situación convirtió a Ezequiel Padilla en uno de los cancilleres más influyentes del continente americano; a pesar de ello, ha sido una figura poco estudiada

Abstract:

After assuming the presidency of the Republic in December 1940, President Manuel Avila Camacho found his government immersed in World War II and understood that Mexico could not remain on the sidelines. Hence, he appointed Ezequiel Padilla to head the Ministry of Foreign Affairs and asked him to adjust Mexican foreign policy, without abandoning the constitutional principles of non-intervention, respect for sovereignty and peaceful resolution of international disputes, so that the country could enter World War II. The presidential decision was decisive in facing the war, but also in defining the foreign policy that the country would follow in the post-war period. This situation made Ezequiel Padilla one of the most influential foreign ministers of the Americas, yet he has been a little-studied figure

Abstract:

Our knowledge of the top management team (TMT) and board interface in the context of major strategic decisions remains limited. Drawing upon the strategic leadership system perspective (SLSP) and the interface approach, we argue that the two groups constitute a strategic-oriented multiteam system and consider how supplementary (similarity) and complementary (interacting variety) congruence of international and functional backgrounds influence strategic decision-making. Looking at the internationalization decisions of the largest public firms in the UK, we find that complementary congruence of international backgrounds and supplementary congruence of functional experience promote the pursuit of new market entries. We extend the SLSP by showing how the cognitive TMT-board interface dynamics associated with supplementary and complementary congruence are important antecedents of strategic outcomes. Further, we find a boundary condition to the interface approach in strategic leadership research by identifying the underlying mechanisms that activate some TMT-board interfaces and not others

Abstract:

Existing research has underexplored the role of context as a source of heterogeneity in family firms' (FFs) internationalization strategies. Drawing upon institutional theory, we develop and test a mid-range theory positing that differences in the quality of the institutional context can moderate the strength of the relationship between individual- and board-level attributes and FF internationalization. Our comparison of U.S. FFs with FFs from Brazil and Mexico reveals that in emerging market FFs, individual-level attributes such as CEO international experience, CEO educational attainment, and CEO international education exhibit a stronger relationship with internationalization. Similarly, we find that board-level attributes such as board size and board independence are also more strongly related to internationalization in emerging market contexts. We contribute to the literature by identifying a source of variation in FF internationalization strategies based on context and by examining the relationship between a wide range of FF attributes and internationalization

Abstract:

Integrating institutional and effectuation theories, we examine the relationship between entrepreneurs' means and internationalization in an emerging market. Results indicate that some means, such as technical expertise or business network membership, transform into valuable internationalization resources despite difficult institutional conditions. Others, however, such as industry or international experience, are best deployed locally. Findings also indicate that means such as entrepreneurial experience and number of founders act as catalysts of internationalization, allowing for other means to transform into internationalization resources. We extend effectuation theory by showing how different means transform into internationalization resources and contribute to research at the intersection of institutional theory and international entrepreneurship by expanding our understanding of universally-enabling and context-binding internationalization resources. In so doing, we identify a boundary condition to international entrepreneurship theories that emphasize the role of individual resources during venture internationalization by revealing a context in which certain traits exhibit nonstandard relationships with internationalization

Abstract:

The rise of the modern corporation has disrupted the class structures of nation-states because, in the era of globalization, such reorganization now occurs across borders. Yet, has globalization been deep enough to facilitate the emergence of a transnational capitalist class (TCC) in which both class formation and consolidation processes are located in the transnational space itself? I contribute to our understanding of the TCC by contrasting the personal characteristics, life histories and capital endowments of members of the British corporate elite with and without transnational board appointments. The existence of the honours system in the UK allows us to compare individuals objectively in terms of their symbolic capital and to link this trait to embeddedness in the TCC. By studying 448 directors from the 100 largest firms in the UK in 2011, I find evidence of a TCC with a class consolidation process that is located within transnational space, but whose class formation dynamics are still tethered to national processes of elite production and reproduction

Abstract:

Systematic international diversification research depends on reliable measurements of the degree of internationalization (DOI) of the firm. In this paper, we argue that the inclusion of social markers of internationalization can contribute to the development of more robust conceptualizations of the DOI construct. Unlike traditional metrics of DOI, such as foreign sales over total sales or foreign assets over total assets, social-based metrics of internationalization can reveal less visible foreign resource interdependencies across the firm's entire value chain. By combining social-based metrics of DOI with traditional measures of internationalization, we uncovered three distinct dimensions of internationalization: a real one, composed of the firm's foreign sales and assets; an exposure one, represented by the extent and cultural dispersion of the firm's foreign subsidiaries; and a social one, represented by the extent of the firm's top managers' international experience and the number and cultural-zone dispersion of the firm's transnational board interlocks. Results from both an exploratory and a confirmatory factor analysis show that these dimensions are sufficiently distinctive to warrant theoretical and empirical partitioning. These findings have implications for the way researchers select and combine DOI metrics and underscore the importance of conducting a thorough theoretical and statistical assessment of DOI conceptualizations before proceeding with empirical research

Abstract:

Board interlocks between firms headquartered in different countries are increasing. We contribute to the understanding of this practice by investigating the transnational interlocks formed by the 100 largest British firms between 2011 and 2014. We explore the association between different attributes of a firm's internationalization process, namely performance, structural and attitudinal, and the extent of the firm's engagement in transnational interlocks. We posit that the value of transnational interlocks as a non-experiential source of knowledge will vary according to which of these three attributes becomes more prominent as the firm internationalizes. We do not find a significant relationship between the performance and structural attributes of internationalization, as measured by the firm's percentage of foreign sales and assets, respectively, and increased engagement in transnational interlocks. We do, however, find an inverted U-shaped relationship between the attitudinal attribute of internationalization, represented by the psychic dispersion of the firm's foreign operations, and the firm's number of transnational interlocks. This non-linear relationship reveals both a natural boundary for the firm's capacity to engage in transnational interlocks and a reduced willingness to engage in such ties once a certain degree of attitudinal internationalization has been reached

Abstract:

While previous studies have focused on the role of directors in the formation of transnational interlocks, this paper argues that firm strategy can also influence the development of these relationships. The purpose of this paper is to shed light on the practice of transnational interlocks by extending board interlocks theory from the national to the transnational context, and exploring aspects that are unique to the transnational level

Abstract:

A ring R is called left semihereditary (p.p.) if each finitely generated (principal) left ideal of R is projective. Left p.p. rings have been used to study left semihereditary rings in many instances (e.g., see [1] and [9]). Left semihereditary group rings were studied in [4], [5], and [7]. More generally, left semihereditary monoid rings were studied in [2], [5], [8], and [9]. In [9, Theorem 9], necessary and sufficient conditions were given for a monoid ring R[S] to be left semihereditary when S is a semilattice of torsion groups. For such a monoid S, all of the idempotents of S are central. In this paper, we consider the more general situation in which the monoid S is a semilattice of periodic semigroups, each having exactly one central idempotent element. Such semigroups have been studied in [6] and [10]. We show that these periodic semigroups must, in fact, be groups; thus we are able to characterize the semihereditary monoid rings in this case. We obtain this result as a corollary of a theorem on left p.p. rings. Before we present this theorem, we review the fundamental definitions that we will use. A semigroup S is periodic if, for each s in S, there exist positive integers m, n such that s^(m+n) = s^(m). A periodic semigroup S(e) with exactly one idempotent e contains a (largest) group G(e) = {s in S(e) | es = s}. A commutative semigroup E is called a semilattice if e^2 = e for each e in E. A semilattice E has a natural partial order given by e >= f if and only if ef = f. Then a semigroup S that is the union of the S(e) over e in E is called a semilattice of semigroups if E is a semilattice and, for s in S(e) and t in S(f) (e, f in E), we have st in S(ef). Further information on these semigroups can be found in [3] or [12]. For a ring R and a monoid S, we use R[S] to denote the monoid ring; [11] is a basic reference for these rings
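
For readability, the definitions quoted in the abstract can be restated in standard notation; the following is a direct transcription, with no new assumptions:

```latex
% S periodic: every element has eventually cyclic powers.
\[
  \forall s \in S\ \ \exists\, m, n \in \mathbb{Z}^{+}:\quad s^{m+n} = s^{m}.
\]
% Largest group inside a periodic semigroup S(e) with unique idempotent e:
\[
  G(e) = \{\, s \in S(e) : es = s \,\}.
\]
% E a semilattice, with its natural partial order:
\[
  e^{2} = e \ \ \forall e \in E, \qquad e \ge f \iff ef = f.
\]
% Semilattice of semigroups:
\[
  S = \bigcup_{e \in E} S(e), \qquad
  s \in S(e),\ t \in S(f) \ \Longrightarrow\ st \in S(ef).
\]
```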

Resumen:

La dimensión política del proceso de reforma de salud es un factor fundamental que no sólo determina su factibilidad, sino la forma y contenido que ésta tome. De ahí que el estudio del aspecto político de las reformas de salud sea esencial en el análisis y manejo de la factibilidad política de las mismas. El presente estudio enfoca su atención sobre la capacidad del Estado para impulsar exitosamente propuestas de reforma de salud, usando como estudios de caso Colombia y México. Se concentra específicamente en aquellos elementos que buscan incrementar la factibilidad política para formular, legislar e instrumentar propuestas de cambio. Para ello, toma como variables el contexto institucional en que se desenvuelven las iniciativas de reforma; la dinámica política de su proceso, y las características y estrategias de los equipos a cargo de dirigir el cambio (equipos de cambio). Entre los principales hallazgos que aquí se presentan destacan las claras similitudes entre las estrategias políticas usadas por los grupos encargados de la reforma de salud y aquellas aplicadas por equipos tecnocráticos similares, a cargo de las reformas económicas en estos países. Se argumenta que si bien estas estrategias resultaron efectivas en la creación de nuevos actores en el sector salud –tales como organizaciones privadas de financiamiento y provisión de servicios–, no tuvieron el mismo impacto en la transformación de los viejos actores –los servicios de los ministerios de salud y de los institutos de seguridad social–, lo que ha limitado considerablemente el avance de las reformas

Abstract:

The political dimension of health reform is a fundamental aspect that not only influences a project's feasibility, but also its form and content. Therefore, the study of the political aspects involved in the health reform process is essential to determine the political feasibility of reform. Based on the case studies of Colombia and Mexico, this study concentrates on the State's capability to promote health reform projects successfully. It specifically focuses on those elements that seek to improve the political feasibility of formulating, legislating and implementing reform proposals. The relevant variables under study are: the institutional context in which the reform initiatives develop; the political dynamic of the reform process; and the characteristics and strategies of the teams in charge of leading the reforms (change teams). The similarities between the political strategies used by the teams in charge of health reform and those of similar technocratic teams in charge of economic reform stand out as one of the study's main findings. It is argued that, although these strategies were effective in bringing about the creation of new actors in the health sector, such as private organizations for the financing and provision of health services, they did not have the same impact on the transformation of the old actors (the health ministries and the social security institutes), thereby considerably limiting the scope of the reforms

Abstract:

This paper presents a case study to identify the characteristics of an information visualization tool that facilitate its learning and adoption in the context of professional practice. We identified the principal categories that explain the most important aspects of the tool's usefulness that emerged during practical use of the system, as well as the challenges for its adoption. Our results point to the value of giving more consideration to user needs and may direct future research on the development of information visualization tools

Abstract:

Computing devices have become a primary working tool for many professional roles, among them software programming. In order to enable more productive interaction between computers and humans for programming purposes, it is important to be aware of human attention/concentration levels. In this work we report on a controlled experiment to determine whether a low-cost BCI (Brain-Computer Interface) device is capable of classifying whether a user is fully concentrated while programming, during three typical tasks: creative (writing code), systematic (documenting the code) and debugging (improving or modifying the code to make it work). This study employs EEG (electroencephalogram) signals to measure an individual's concentration levels. The chosen BCI device is NeuroSky's Mindwave, due to its market availability and low cost. The three tasks described are performed in both physical and digital form. A statistically significant difference between debugging and creative tasks is revealed in our experiments, in both the physical and digital tests. This is a good lead for subsequent experiments to focus on these two areas; systematic tasks might not yield good results due to their nature

Abstract:

Different approaches used to define how a website shows information have an impact on how users evaluate its usability. As shown in the present study, how people accomplish a search for visual content in a newspaper website is an important factor to review while designing it. In this study, 47 participants were randomly assigned to evaluate one of two different newspaper websites and asked to perform visual and written searches. The evaluation metrics were task success and task time. The participants also made an overall evaluation of the site, answering two Likert questions and an open-ended question to measure qualitative aspects. Finally, we measured overall satisfaction with a SUS questionnaire. The results show that poor performance in the search for visual content leads to lower perceived usability; this might be a main aspect to improve when defining priorities to enhance overall usability

Abstract:

The aim of this investigation is to identify and understand the relations between people's mental models and their performance and usability perception regarding a complex interactive system (Twitter). Our study includes the participation of thirty college students, each of whom was asked to perform a number of activities with Twitter and to draw graphical representations of their mental model of it. The participants had either no experience or at least a year of experience using Twitter. We identified three typical types of mental models used by participants to describe Twitter and found that performance depended mainly on the level of expertise rather than on the style of mental model defining the understanding of the system. We also found that usability perception was affected by the level of expertise

Abstract:

Media-sharing Web sites are facilitating modern versions of storytelling activities. This study investigates the use of photo-based narratives to support young parents who are geographically separated from their aging parents in sharing stories about their young children. The case analyses young Malaysian mothers living in the UK, who communicate regularly with their families back home, share experiences of living in another country, look for parenting advice, and open opportunities for sharing the life and development of their young children. Sixteen families participated in the study by providing access to their social networking and web spaces and participating in exercises for creating photo stories. We identified the characteristics of the mediating system serving to establish contact between grandparents and grandchildren, as well as the characteristics of the photo stories and the practices around sharing them

Abstract:

Advances in computer technology have made it increasingly easy for users to navigate through a virtual world. We are interested in assessing what kind of immersive virtual experience is being delivered to users through projects that aim to share virtualized aspects of the physical world. In order to achieve this, we will assess whether people's opinions on tourist-oriented places differ between a virtual visit and one in person. Our proposed five-dimensional model is designed to incorporate cultural factors while analyzing perceived differences. This research is intended to contribute to the understanding of the comparison between these two experiences, with the final objective of improving the user experience

Abstract:

Purpose – An ensemble is an intermediate unit of work between action and activity in the hierarchical framework proposed by classical activity theory. Ensembles are the mid-level of activity, offering more flexibility than objects, but more purposeful structure than actions. The paper aims to introduce the notion of ensembles to understand the way object-related activities are instantiated in practice. Design/methodology/approach – The paper presents an analysis of the practices of professional information workers in two different companies using direct and systematic observation of human behavior. It also provides an analysis and discussion of the activity theory literature and how it has been applied in areas such as human-computer interaction and computer-supported collaborative work. Findings – The authors illustrate the relevance of the notion of ensembles for activity theory and suggest some benefits of this conceptualization for analyzing human work in areas such as human-computer interaction and computer-supported collaborative work. Research limitations/implications – The notion of ensembles can be useful for the development of a computing infrastructure oriented to more effectively supporting work activities. Originality/value – The paper shows that the value of the notion of ensembles is to close a conceptual gulf not adequately addressed in activity theory, and to understand the practical aspects of the instantiation of objects over time

Abstract:

The expansion of information and communication technology (ICT) infrastructure in the developing world has considerably increased opportunities for people to connect. Today, people can better maintain long-distance relationships as well as be better informed of how their family, close friends, and emotional partners are doing. As a result, many migrants use ICTs to maintain and strengthen ties to their places of origin. For many years, the most popular means of communication for migrants was the telephone since it allows communication at relatively low rates even for international calls. Now new types of ICTs for family communication such as Internet services (e.g., instant messaging) are becoming more common. Given the economic, social, and political implications of migrants being connected to their places of origin, this socio-technical phenomenon deserves attention from academia, industry, and governments. Four different forms of ICTs are studied here: 1) hometown Web sites, 2) video conferencing, 3) call forwarding services, and 4) online TV. We analyze contributions of these services to enhance migrants' communication as well as the factors for adoption

Abstract:

This research reports on a study of the interplay between multi-tasking and collaborative work. We conducted an ethnographic study in two different companies where we observed the experiences and practices of thirty-six information workers. We observed that people continually switch between different collaborative contexts throughout their day. We refer to activities that are thematically connected as working spheres. We discovered that to multi-task and cope with the resulting fragmentation of their work, individuals constantly renew overviews of their working spheres, they strategize how to manage transitions between contexts and they maintain flexible foci among their different working spheres. We argue that system design to support collaborative work should include the notion that people are involved in multiple collaborations with contexts that change continually. System design must take into account these continual changes: people switch between local and global perspectives of their working spheres, have varying states of awareness of their different working spheres, and are continually managing transitions between contexts due to interruptions

Abstract:

Most current designs of information technology are based on the notion of supporting distinct tasks such as document production, email usage, and voice communication. In this paper we present empirical results that suggest that people organize their work in terms of much larger and thematically connected units of work. We present results of fieldwork observation of information workers in three different roles: analysts, software developers, and managers. We discovered that all of these types of workers experience a high level of discontinuity in the execution of their activities. People average about three minutes on a task and somewhat more than two minutes using any electronic tool or paper document before switching tasks. We introduce the concept of working spheres to explain the inherent way in which individuals conceptualize and organize their basic units of work. People worked in an average of ten different working spheres. Working spheres are also fragmented; people spend about 12 minutes in a working sphere before they switch to another. We argue that design of information technology needs to support people's continual switching between working spheres

Abstract:

This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts

Abstract:

This paper describes a Resource Reservation Management (RRM) mechanism that optimises bandwidth reservation in IP routers. The technique uses the original specification of the Resource Reservation Protocol (RSVP) with minimal modifications. The proposal assumes that video services are coded at multiresolution bit rates, each providing a different quality of service. The paper analyses the behaviour of the proposed RRM in an Internet network with different levels of congestion. The results show that the proposed mechanism can deliver an acceptable quality of service by dynamically adjusting the demanded bandwidth
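
A toy illustration of the adjustment idea, showing only the admission logic rather than the RSVP message flow; the layer rates and function names are invented for illustration:

```python
# Multiresolution coded rates (illustrative): each layer is a usable
# quality level, so under congestion the manager degrades gracefully.
LAYER_RATES_KBPS = [256, 512, 1024, 2048]

def grant_reservation(requested_kbps, available_kbps):
    """Grant the highest coded rate that fits both the request and the
    bandwidth currently available on the path; None if even the base
    layer does not fit."""
    limit = min(requested_kbps, available_kbps)
    feasible = [r for r in LAYER_RATES_KBPS if r <= limit]
    return max(feasible) if feasible else None

print(grant_reservation(2048, 700))    # congestion: degrade to 512 kbps
print(grant_reservation(2048, 4000))   # no congestion: full 2048 kbps
```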

Abstract:

In this paper, extending the ideas given by Deheuvels (1979) for the empirical copula, we first assume that we are sampling from the product copula and construct C_n^m, the d-sample copula of order m, which is an estimator of C^(m), the checkerboard approximation of order m (see Li et al. 1997). We then study the distribution of the number of observations in each of the boxes generated by the regular partition of order m of [0,1]^2, and give the corresponding moments and correlations. We also prove the weak convergence of the sample process to a centered Gaussian process. In fact, we use the known results on the weak convergence of the empirical process to prove the weak convergence of the sample process: we first show that the sample copula can be written as a linear functional of the empirical copula, and then observe that this functional is Hadamard differentiable, so the delta method yields the weak convergence of the sample process. Finally, we perform several simulations of the sample process at a given point of [0,1]^2 to study the sample size needed to approach the centered Gaussian limit
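
A small simulation sketch of the box counts the paper studies: sampling from the product copula (independent uniforms) and counting observations in the regular m x m partition of [0,1]^2. Under independence each box receives n/m^2 observations on average; the sample copula of order m is built from these counts.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_counts(u, m):
    """Counts per box of the regular m x m partition of [0,1]^2."""
    idx = np.minimum((u * m).astype(int), m - 1)   # box index per coordinate
    counts = np.zeros((m, m), dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    return counts

n, m = 2000, 4
u = rng.uniform(size=(n, 2))     # sampling from the product copula
counts = box_counts(u, m)
print(counts)
print(counts.mean(), "vs expected", n / m**2)   # ~ n / m^2 under independence
```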

Resumen:

Este trabajo propone un nuevo algoritmo basado en Optimización por Colonia de Hormigas (OCH) para la determinación de los niveles de inventarios de seguridad en una red logística cuando se utiliza el modelo de servicio garantizado. La red logística se modela como una serie de etapas de aprovisionamiento, de manufactura y de distribución. El objetivo es determinar el costo mínimo de colocar inventario de seguridad en cada una de las etapas, si éstas deben atender la demanda de las etapas sucesoras en un tiempo garantizado. El algoritmo propuesto es aplicado a una serie de instancias y se comparan los resultados con los obtenidos por el método estándar de programación dinámica. El algoritmo propuesto resuelve redes logísticas con 200 etapas en 120 segundos en promedio. Además, se hace un análisis de correlación entre los diferentes valores de los parámetros del algoritmo y el costo total del inventario de seguridad

Abstract:

This paper proposes a new algorithm based on Ant Colony Optimization (ACO) to determine safety stock levels in a logistics network managed under the guaranteed-service time inventory model. The logistics network is modeled as a series of supply, manufacturing and delivery stages. The aim is to determine the minimum cost of placing safety stock at each stage, given that each stage must serve the demand of its successor stages within a guaranteed service time. The proposed algorithm is applied to a set of instances, and the results are compared with those computed by the standard dynamic programming method. The proposed algorithm solves 200-stage logistics networks in about 120 seconds on average. In addition, a correlation analysis is carried out between the different values of the algorithm parameters and the total safety stock cost
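
For context, a compact version of the dynamic-programming benchmark for a purely serial line, in the spirit of the guaranteed-service model; parameters are illustrative, and neither the paper's ACO nor its general network structure is reproduced:

```python
import math

# Serial line, supply -> customer. Each stage quotes an outbound service
# time s; its net replenishment time is (inbound s + processing time - s),
# and safety stock cost grows with the square root of that time.
T = [3, 2, 4]            # processing time per stage
h = [1.0, 2.0, 4.0]      # holding cost per stage
z, sigma = 1.645, 10.0   # service factor, demand std. dev. (illustrative)

def safety_cost(h_j, tau):
    return h_j * z * sigma * math.sqrt(tau)

N = len(T)
max_s = [sum(T[:j + 1]) for j in range(N)]   # bound on quoted service times
# f[j][s] = min cost of stages 0..j when stage j quotes service time s
f = [{s: math.inf for s in range(max_s[j] + 1)} for j in range(N)]
for s in range(max_s[0] + 1):                # first stage: instant supply
    f[0][s] = safety_cost(h[0], T[0] - s)
for j in range(1, N):
    for s in range(max_s[j] + 1):
        for si in range(max_s[j - 1] + 1):   # si = service time quoted to j
            tau = si + T[j] - s
            if tau >= 0:
                f[j][s] = min(f[j][s], f[j - 1][si] + safety_cost(h[j], tau))

print(f[N - 1][0])   # final stage serves customer demand immediately (s = 0)
```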

Abstract:

We are concerned with deterministic and stochastic nonstationary discrete-time optimal control problems in infinite horizon. We show, using Gâteaux differentials, that the so-called Euler equation and a transversality condition are necessary conditions for optimality. In particular, the transversality condition is obtained in a more general form and under milder hypotheses than in previous works. Sufficient conditions are also provided. We also find closed-form solutions to several (discounted) stationary and nonstationary control problems
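
For the stationary discounted case, the necessary conditions take a familiar form; a hedged restatement (the paper's nonstationary versions are more general):

```latex
% Stationary discounted problem (illustrative form):
\[
  \max_{\{x_{t+1}\}_{t \ge 0}} \ \sum_{t=0}^{\infty} \beta^{t} F(x_t, x_{t+1}),
  \qquad 0 < \beta < 1.
\]
% Euler equation at an interior optimum:
\[
  F_2(x_{t-1}, x_t) + \beta\, F_1(x_t, x_{t+1}) = 0, \qquad t = 1, 2, \dots
\]
% Transversality condition:
\[
  \lim_{t \to \infty} \beta^{t} F_1(x_t, x_{t+1})\, x_t = 0.
\]
```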

Abstract:

The question motivating this research is the following: why did Adolfo López Mateos, as Secretary of Labor, perform so brilliantly in conciliating and resolving many of the labor conflicts of President Adolfo Ruiz Cortines's administration (1952-1958), and yet, upon reaching the Presidency of the Republic (1958-1964), pursue a carrot-and-stick ("pan y palo") policy, in contrast to his record as Secretary of Labor in the previous administration? The answer proposed is the following: "a president or a high-ranking bureaucrat, through his constitutional or meta-constitutional powers, can create a bureaucratic agency or modify its course of action; through it, he can advance his interests, both in the policy agenda and electorally"

Resumen:

Este ensayo desarrolla una explicación de clase y cultura sobre el persistente resurgimiento de la derecha radical en Francia. Su principal afirmación es que la derecha radical es mejor entendida como una tradición política continua cuyo apelativo puede ser rastreado en las modalidades y consecuencias de la modernización económica y política del país desde mediados del siglo XIX. Se analizan los valores económicos y políticos específicos propios de los miembros de esta categoría social para ayudar a explicar su prolongada atracción hacia el excluyente y autoritario discurso y programas de la derecha radical francesa

Abstract:

This paper develops a class-cultural explanation for the persistent resurgence of the Radical Right in France. Its principal claim is that the Radical Right is best understood as a continuous political tradition whose appeal can be traced to the modalities and consequences of the country’s economic and political modernization since the mid nineteenth century. The paper analyzes the specific economic and political values which members of this social category evolved in order to help explain their longstanding attraction to the exclusionary and authoritarian discourse and program of the French Radical Right

Abstract:

This article explains the victory of the Front National (FN) in the May 2014 European elections in France. Taking issue with standard academic accounts that conceive of such contests as 'second-order' elections, it argues that the FN won by harnessing voters' growing anxiety about European integration as an electoral issue. First, the article contends that, against the backdrop of worsening unemployment and social crisis, Europe assumed unprecedented salience in both national and European elections. In turn, it argues that by staking out a Europhobe position in contrast to the mainstream parties and the radical left, the FN claimed effective 'ownership' over the European issue, winning the bulk of the Eurosceptic vote to top the electoral field

Abstract:

The failure of the Los Cabos summit to satisfactorily address the European sovereign debt crisis and ominous world economic outlook, let alone agree on concrete measures to improve the oversight and functioning of the global economy, appears to confirm the diminishing effectiveness and relevance of the G20 as an organ of international governance since its inception in December 2008. While few accomplishments were achieved in the area of global governance during the Mexican presidency, acute collective action problems, made worse by the present economic crisis, paralysed the G20 in the lead-up to and during the Los Cabos summit. These collective action problems and the ensuing failure of global governance are attributable to the absence of leadership evident at both the global and European levels, which in turn testifies to the excessive dispersion of state economic and political power within the international system

Abstract:

This article seeks to account for the emergence of radical populist parties in France and Germany during the final two decades of the twentieth century and the first decade of the twenty-first. These parties, of the Far Right in France and Far Left in Germany, have attracted the support of economically and socially vulnerable groups—industrial workers and certain service-sector strata—who have broken with their traditional corporative and partisan attachments and sought out alternative bases of social and political identification. Contrary to classical liberal analyses that attribute rising unemployment and declining living standards among these groups to these countries' failure to reform their economies, or varieties of capitalism arguments which claim that the institutional specificities of the post-war French and German economies insulated them from the impacts of neoliberal modernization, the article posits that this outcome is in fact attributable to the far-reaching economic liberalization which they experienced since the 1980s in France and the 1990s in Germany. Specifically, it is argued that this process of liberalization dissolved the Fordist social contract that had ensured the inclusion of these class groups in the postwar capitalist order, triggering a structural and cultural crisis which fueled their political radicalization

Abstract:

We consider the following version of the standard problem of empirical estimates in stochastic optimization. We assume that the underlying random vectors are independent and not necessarily identically distributed but that they satisfy a "slow variation" condition in the sense of the definition given in this paper. We show that these assumptions, along with the usual restrictions (boundedness and equicontinuity) on a class of functions, allow one to use the empirical mean method to obtain a consistent sequence of estimates of the infima of the functional to be minimized. Also, we provide certain estimates of the rate of convergence
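
A minimal sketch of the empirical mean method (Python/numpy; the objective and distribution are illustrative, and we use identically distributed draws for brevity even though the paper allows non-identical, slowly varying ones):

    import numpy as np

    rng = np.random.default_rng(2)

    # Empirical mean method: replace E[f(x, xi)] by an average over observed xi's
    # and minimize over a grid of decisions x. (Illustrative quadratic objective.)
    def f(x, xi):
        return (x - xi) ** 2

    xs = np.linspace(-2.0, 2.0, 401)          # candidate decisions
    for n in (10, 100, 10000):
        xi = rng.normal(0.3, 1.0, size=n)     # independent draws
        emp = np.array([f(x, xi).mean() for x in xs])
        print(n, xs[emp.argmin()])            # approaches the true minimizer 0.3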

Abstract:

Within the CHI community there has been sustained interest in interruptions and multitasking behaviour. Research in the area falls into two broad categories: the micro world of perception and cognition; and the macro world of organisations, systems and long-term planning. Although both kinds of research have generated insights into behaviour, the data generated by the two kinds of research have been effectively incommensurable. Designing safer and more efficient interactions in interrupted and multitasking environments requires that researchers in the area attempt to bridge the gap between these worlds. This SIG aims to stimulate discussion of the tools and methods we need as a community in order to further our understanding of interruptions and multitasking

Abstract:

The standard model of knowledge, (Ω, P), consists of state space, Ω, and possibility correspondence, P. Usually, it is assumed that P satisfies all knowledge axioms (Truth Axiom, Positive Introspection Axiom, and Negative Introspection Axiom). Violating at least one of these axioms is defined as epistemic bounded rationality (EBR). If this happens, a researcher may try to look for another model, (Ω∗, P∗), which generates the initial model, (Ω, P), while satisfying all knowledge axioms. Rationalizing EBR means that the researcher finds such a model. I determine when rationalization of EBR is possible. I also investigate when a model, (Ω∗, P∗), which satisfies all knowledge axioms, generates a model, (Ω, P), which satisfies these axioms as well
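
The sketch below (Python; the state space and possibility correspondence are our toy choices) checks the three knowledge axioms for a given model (Ω, P); any False flags epistemic bounded rationality:

    from itertools import chain, combinations

    # States and a possibility correspondence P: state -> set of states deemed possible.
    OMEGA = {1, 2, 3}
    P = {1: {1, 2}, 2: {2}, 3: {2, 3}}

    def K(event):
        """Knowledge operator: states where the agent knows the event."""
        return {w for w in OMEGA if P[w] <= event}

    events = [set(s) for s in chain.from_iterable(
        combinations(sorted(OMEGA), r) for r in range(len(OMEGA) + 1))]

    truth    = all(K(E) <= E for E in events)                          # Truth Axiom
    pos_intr = all(K(E) <= K(K(E)) for E in events)                    # Positive Introspection
    neg_intr = all((OMEGA - K(E)) <= K(OMEGA - K(E)) for E in events)  # Negative Introspection
    print(truth, pos_intr, neg_intr)  # here: True, True, False -> EBR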

Abstract:

Type space is of fundamental importance in epistemic game theory. This paper shows how to build a type space if players approach the game in the way advocated by Bernheim's justification procedure. If an agent fixes a strategy profile of her opponents and ponders which of their beliefs about her set of strategies make this profile optimal, such an analysis is represented by kernels and yields disintegrable beliefs. Our construction requires that the underlying space be Polish

Abstract:

This article presents the design and construction of an omnidirectional mobile robot for collaborative applications. The kinematic model that describes the robot's omnidirectional motion is implemented, together with a control algorithm, on board the robot on a MOJO development board, which features a Spartan 6 XC6SLX9 FPGA and an ATmega32U4 microcontroller, and in an external program accessed from the Robot Operating System (ROS). The experiments carried out illustrate the good performance of the designed system
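
As a rough illustration of the kind of kinematic model involved, the following sketch (Python/numpy) computes wheel speeds for a three-wheeled omnidirectional layout; the actual robot's wheel configuration and dimensions may differ:

    import numpy as np

    # Inverse kinematics of a three-wheeled omnidirectional robot (illustrative
    # layout, not necessarily the paper's): wheels at 0, 120 and 240 degrees.
    R_WHEEL, L = 0.03, 0.15                 # wheel radius and center-to-wheel distance (m)
    angles = np.deg2rad([0.0, 120.0, 240.0])

    # Each row maps body velocities (vx, vy, omega) to one wheel's angular speed.
    J = np.array([[-np.sin(a), np.cos(a), L] for a in angles]) / R_WHEEL

    def wheel_speeds(vx, vy, omega):
        return J @ np.array([vx, vy, omega])   # rad/s for each wheel

    print(wheel_speeds(0.2, 0.0, 0.0))  # wheel speeds for motion along the robot's x axis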

Resumen:

En este estudio se reflexiona sobre recientes cambios observados en las relaciones de China con los países vecinos con los que mantiene disputas territoriales terrestres, India y Bután. Posteriormente, se caracterizará la actitud actual de China ante sus vecinos marítimos. En tercer lugar, se relacionan las actuales políticas agresivas del gobierno central contra Hong Kong y Taiwán, y la sistemática política de control de la minoría musulmana en Xinjiang. El análisis resalta las nuevas herramientas diplomáticas de China para enfrentar las amenazas a su seguridad nacional y regional. El estudio concluye con proyecciones sobre los escenarios regionales e internacionales del poderío chino

Abstract:

This study presents reflections on recent changes observed in China's relations with the neighboring countries with which it has land territorial disputes, India and Bhutan. Subsequently, China's current attitude towards its maritime neighbors is characterized. Thirdly, the central government's current aggressive policies against Hong Kong and Taiwan are related to the systematic policy of control of the Muslim minority in Xinjiang. The analysis highlights China's new diplomatic tools to deal with threats to its national and regional security. The study concludes with projections on the regional and international scenarios of Chinese power

Abstract:

As part of the restructuring of state organizations announced in March 2018, it is known that the China Coast Guard (CCG), previously controlled by the State Oceanic Administration, is coming under the administration of the People's Armed Police (PAP) from the Central Military Commission (CMC). As a paradigmatic shift from joint civilian–military control (State Council–CMC) to a purely military one, the reorganization of the CCG, only five years after the latest reshuffling, seems to reveal both the party's increasing control over the military, as outlined by the CCP Central Committee in September 2017, and the intention of the Chinese central government to provide the CCG with more flexibility and authority to act decisively in disputed waters in the East and South China Seas if needed. This article inquires into the causes, logic, and likely regional consequences of such a decision. Amid the upgrading of insular features in the Spratlys, the deployment of bombers in the Paracels, and the overall modernization of China's naval capabilities, the article also explores plausible developments in which the PAP-led CCG, irregular maritime militias, and People's Liberation Army Navy forces might coordinate efforts more effectively to safeguard self-proclaimed rights in littoral and blue-water areas in dispute

Resumen:

Mientras que ciertos elementos sentaron las bases de la clase media japonesa en la posguerra, su consolidación y los cambios sufridos en la sociedad desde la década de 1990 dieron paso a una clase media adelgazada y precaria, que ha resentido la desigualdad económica, las alteraciones del patrón laboral y las tendencias marcadas por las reformas estructurales

Abstract:

While certain elements laid the foundations of the Japanese middle class in the postwar period, its consolidation and the changes undergone in society since the 1990s gave way to a thinned and precarious middle class, which has suffered from economic inequality, alterations in the labor pattern and the tendencies marked by structural reforms

Abstract:

The research inquires into New Delhi's current approaches to regional security in Maritime Asia in general and the South China Sea in particular, from the perspective of India's Act East Policy operating in the East Asian security supercomplex. Shaped by theoretical insights from defensive realism and security studies, and based on empirical analysis of India's policy decisions from 2014 to the present, the research evaluates the reach and limitations of India's diplomatic and naval strategic policies with key Southeast Asian and extra-regional states, mainly Vietnam, the United States and Japan. While identifying the need to update India's current naval strategy to better protect freedom of navigation in the South China Sea, the analysis finds relevant incentives for closer India-China cooperative engagement, both to improve the security architecture of this maritime region and for the sake of India's own security at large

Resumen:

En este artículo se expone la evolución de la relación bilateral y la importancia de Japón para México a la luz del reciente acercamiento económico; se identifican los retos importantes que enfrenta esta relación y se ofrecen propuestas a futuro

Abstract:

The essay outlines the evolution of the bilateral relationship and Japan’s importance to Mexico amid the recent economic developments; identifies relevant challenges in the bilateral realm, and offers proposals for improvement

Resumen:

Con la reciente inestabilidad en aguas del noreste y sudeste de Asia se ha reiniciado el debate sobre la capacidad y voluntad de China para preservar la paz y estabilidad regional con países vecinos. En el conflicto por las islas Spratly se observa desde hace pocos años un patrón cada vez más obvio de internacionalización con involucramiento de actores extrarregionales, en particular Estados Unidos. En este ensayo se presenta un análisis actualizado de este diferendo de larga historia que se ha agravado aceleradamente en los últimos años, considerando los motivos económicos y geopolíticos actuales más relevantes que han contribuido a la presente crisis entre China y algunos países, en particular Vietnam y Filipinas. Asimismo, se identifican las características más visibles de la entrada de Estados Unidos a este conflicto y sus implicaciones, seguido por una evaluación sobre si este diferendo, con sus nuevas aristas internacionales, ha llegado finalmente a un punto de ebullición. El ensayo finaliza con una serie de reflexiones sobre posibles caminos a seguir a futuro en este diferendo, tomando en cuenta las diversas variables analizadas

Abstract:

Recent instability in Northeast and Southeast Asian waters has reignited the debate over China's capacity and intention to preserve peace and security with neighboring countries. In the Spratly Islands territorial conflict, a pattern of internationalization -mainly a more direct US involvement- has become increasingly manifest during the last five years. This essay offers an updated analysis of this imbroglio, a conflict with a long history that has nonetheless deteriorated rapidly in recent years, taking into account the most relevant economic and geopolitical causes contributing to the current crisis between China and some countries, mainly Vietnam and the Philippines. The most visible features and implications of US involvement in this maritime region are also identified, followed by an evaluation of whether this conflict, with its new international edges, has finally reached a boiling point. The essay concludes with some thoughts on possible future paths for this dispute, taking into account the variables analyzed

Abstract:

In 1946, the Philippines raised claims in the South China Sea over an area already known as the Spratly Islands. This claim advanced through peculiar stages, starting when Thomas Cloma allegedly discovered islands in 1946, later named Freedomland, and maturing to some extent in 1978 with the government's claim over the so-called Kalayaan Island Group. Viewing this as an oceanic expansion of the country's frontiers, this paper reviews the basis of the claim, first regarding the nature of Cloma's activities, and secondly regarding the measures the Philippine government took in reaction to Cloma's claimed discovery of an area already known in western cartography as the Spratlys. Ultimately, what is the nature of the link between the official 1978 Kalayaan Island Group claim and Cloma's private one of 1956?

Abstract:

This paper concerns the "Chinese ocean policy" with regard to the South China Sea in a transitional period, namely the late 1940s and early 1950s. Dealing with territorial conflict over the Spratly, Paracel, and Pratas Islands and Macclesfield Bank, China, as a coastal state, actively pushed what it saw as its core interests in its Southern Sea, and tried to defend them as best as it could. This paper has two aims. One is to describe the origin of the PRC's (and the ROC's) current policy vis-à-vis the Paracels and Spratlys. Both from a political and a military perspective, this policy has its origin in the post-War, 1946—1952 period, including the famous U-shaped dotted line enclosing all four archipelagoes claimed, denoting China's "traditional maritime boundary line" and her "historical waters." A review of the period 1946—1952 is necessary to properly understand the modern history and historiography of the Chinese claim. The other aim is to highlight the economic aspects of the claim, that is, a policy of maritime economic development as being developed, not at a central-planning, decision-making level, but rather at lower governmental levels. This study aims to give sufficient emphasis to the economic dimension of China's policy in the immediate post-War period

Abstract:

A study of China's defense of its "maritime frontier" in the period from 1902 to 1937, including the establishment of self-recognized sovereignty rights over the South China Sea archipelagos, provides a good illustration of how the country has dealt with relevant issues of international politics during the twentieth century. The article intends to show that throughout the period between the fall of the Qing dynasty, the consolidation of power of Chiang Kai-shek's Nationalist government, and up to just before the Pacific War, the idea of a maritime frontier, as applied to the South China Sea, was deeply subordinated to the political needs arising from the power struggle within China and to the precarious position of the country vis-à-vis world powers. Therefore, the protection of rights over the Spratly and Paracel Islands was not a priority of the Chinese government's foreign policy agenda during the first three decades of the republic. However, in contrast to the probable involvement of Sun Yat-sen in a scheme with Japanese nationals in the early 1920s, intended to yield rights for economic exploitation in the Southern China littorals and islands, the Nanjing government's defense of the maritime frontier in Guangdong province since 1928 marked the first precedent in China's self-definition as a modern oceanic nation-state pursuing her own maritime-territorial rights against world powers that had interests in the region

Resumen:

El propósito de este artículo es presentar una interpretación del ensayo de Kant Sobre un presunto derecho a mentir por filantropía, basada en los conceptos fundamentales de su propia filosofía del derecho. Por medio de esta lectura, hemos de explicar por qué muchos comentadores no han apreciado lo que verdaderamente estaba discutiéndose en su debate con el jurista francés Benjamin Constant. A nuestro entender, los dos puntos principales que Kant trata de defender en este ensayo son la naturaleza de las declaraciones jurídicas y la imposibilidad de un deber o una obligación de mentir en un contexto legal. Las tesis que trataremos de defender son, por un lado, que la validez de la posición kantiana respecto a ambos temas no depende, en sentido alguno, de su equivocado tratamiento del polémico ejemplo que discute, y, por otro lado, que ambos tópicos son de gran relevancia en aras de procurar un conocimiento adecuado de cómo las instituciones jurídicas deben impartir justicia

Abstract:

The aim of this article is to present an interpretation of Kant's essay On a Supposed Right to Lie for Philanthropy grounded on the fundamental concepts of his own philosophy of law. By means of this reading, we will explain why many commentators have failed to see what was at stake in the debate between Kant and the French jurist Benjamin Constant. In our understanding, the two main points that Kant tries to defend in this essay are the nature of juridical declarations and the impossibility of a duty or an obligation to lie in a legal context. The theses that we will try to defend are, on the one hand, that the validity of the Kantian position regarding both subjects does not depend, in any sense whatsoever, on his flawed treatment of the polemical example that he discusses, and, on the other hand, that both topics are of great relevance in order to understand adequately how juridical institutions should impart justice

Resumen:

La entrega de bienes y servicios por partidos políticos en campaña electoral es endémica en la nueva democracia mexicana y esta práctica parece estar aumentando desde 2000. A partir de información recopilada en una base de datos tipo panel de ciudadanos durante la campaña electoral de 2018, ofrecemos el estudio más detallado hasta ahora disponible sobre los intentos de compra de voto en México. Tales esfuerzos fueron practicados por casi todos los partidos, involucraron a millones de ciudadanos, incluyeron una variedad de ofertas materiales e intentaron inducir a los votantes a alterar su comportamiento electoral de innumerables maneras. No obstante, la evidencia descriptiva sugiere que el cumplimiento de las metas de las maquinarias partidistas puede haber sido bajo porque muchos beneficiarios tenían una comprensión limitada de lo que se les pedía que hicieran y no temían las represalias del partido comprador de votos. Además, la evidencia circunstancial sugiere que los esfuerzos de compra de votos fueron insuficientes para anular la ventaja del candidato ganador en las elecciones presidenciales

Abstract:

Election-season handouts of goods and services by political parties are endemic in Mexico's new democracy, and the practice appears to have been increasing since 2000. Using information from a 2018 election-season panel data set of ordinary citizens, we provide the most detailed examination yet available of vote-buying attempts in Mexico. Such efforts were practiced by nearly all parties, involved millions of citizens, included a variety of material offers, and attempted to induce voters to alter their electoral behavior in myriad ways. Nevertheless, descriptive evidence implies that compliance with political machines' wishes may have been low because many recipients had a muddled understanding of what they were asked to do and did not fear retribution from the vote-buying party. In addition, circumstantial evidence suggests that vote-buying efforts were insufficient to overturn the winning candidate's advantage in the presidential election

Abstract:

Mentoring is increasingly studied by researchers on account of its professional and personal impact on mentees. This contribution has two main objectives: first, to empirically validate the benefits of mentoring for the mentor and to test links between mentoring activities and benefits through a multidimensional analysis; second, to incorporate into the analysis two variables structuring the relationship: the formal vs. informal nature of the mentoring relationship and the gender composition of the dyad. The paper aims to discuss these issues

Abstract:

Constant power loads (CPLs) in power systems have a destabilizing effect that gives rise to significant oscillations or to network collapse, motivating the development of new methods to analyse their effect in AC and DC power systems. A sine qua non condition for this analysis is the availability of a suitable mathematical model for the CPL. In the case of DC systems power is simply the product of voltage and current, hence a CPL corresponds to a first–third quadrant hyperbola in the load's voltage–current plane. The same approach is applicable for balanced three-phase systems that, after a rotation of the coordinates to the synchronous frame, can be treated as a DC system. Modelling CPLs for single-phase (or unbalanced poly-phase) AC systems, on the other hand, is a largely unexplored area because in AC systems (active and reactive) power involves the integration over a finite window of the product of the voltage and current signals. In this paper we propose a simple dynamic model of a CPL that is suitable for the analysis of single-phase AC systems. We give conditions on the tuning gains of the model that guarantee the CPL behaviour is effectively captured
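
For intuition only, the sketch below (Python) simulates the textbook DC case, where the CPL is the static relation i = P/v behind an RL source branch and a bus capacitor; the paper's contribution, a dynamic CPL model for single-phase AC systems, is not reproduced here, and all values are illustrative:

    import numpy as np

    # Minimal DC illustration of a constant power load fed through an RL branch
    # and a bus capacitor. Values are illustrative, not taken from the paper.
    R, Lind, C, E, P = 0.5, 1e-3, 1e-3, 24.0, 100.0
    dt, steps = 1e-6, 200000
    i, v = 0.0, E                            # branch current and load voltage

    for _ in range(steps):
        di = (E - R * i - v) / Lind          # source -> inductor dynamics
        dv = (i - P / max(v, 1.0)) / C       # CPL draws i_load = P / v
        i, v = i + dt * di, v + dt * dv
    print(round(v, 2), round(i, 2))          # settles (or oscillates) depending on P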

Resumen:

Desde el siglo XIX la especialización y exportación de productos básicos en la América Latina no ha sido incompatible con la creación de una base fabril, e incluso los caminos para la industrialización dependieron del tipo de infraestructura exigida por la actividad primario-exportadora. En ello el ferrocarril fue clave no sólo como transportador de productos básicos sino también como medio de transferencia tecnológica y agente industrializador, porque los talleres ferroviarios hacia 1900 constituían las bases de industrias de ingeniería en la América Latina, lo cual es el foco de atención del presente estudio: la contribución del ferrocarril a la creación de una industria de bienes de capital, por medio de la producción de carros de carga, coches de pasajeros y locomotoras en Chile y México entre 1860 y 1950. El estudio detecta la producción de equipos rodantes en el interior de los talleres de los ferrocarriles y el tránsito hacia fábricas independientes, cuestionándose las evidencias hasta ahora disponibles que indicaban que en el largo plazo la producción de locomotoras, carros, coches, refacciones y accesorios ferroviarios fue mínima dentro de la producción industrial latinoamericana, conclusión que no consideraba que todo ferrocarril para su operación diaria contiene, en sí mismo, una capacidad industrial productora de bienes metálicos que en gran medida no entraban al mercado ni quedaron registrados en la estadística histórica.

Abstract:

Since the 19th century, the specialization in and export of basic products in Latin America has not been incompatible with the creation of manufacturing industry; moreover, the path towards industrialization depended on the kind of infrastructure demanded by primary goods-exporting activity. In such an effort, the railroad was crucial, not only as a means of transporting basic products, but also as a means of transferring technology and as an industrializing agent, the reason being that railroad workshops constituted the basis of engineering industries in Latin America by 1900. That is precisely the central issue in the present study: the contribution of the railroad to the creation of a capital goods industry, through the production of cargo cars, passenger wagons and locomotives in Chile and Mexico between the years of 1860 and 1950. This study focuses on the production of rolling stock in the railroad workshops and its transit to independent factories, and questions the evidence available until now indicating that, in the long run, the production of locomotives, cars, wagons, parts and accessories was minimal within Latin American industrial production, a conclusion that did not consider that every railroad contains in itself an industrial capacity that produces metallic goods for its everyday operation, many of which did not enter the market nor were registered in historical statistics.

Abstract:

Most of the tools for the construction of Knowledge-Based Systems (KBSs) offer a reasoning mechanism within the logical scheme, some of them combining rules with objects. From the end of the 1970s to the beginning of the 1980s, cases began to be used as an alternative way of representing and storing the experience and knowledge of organizations. This is how Case-Based Reasoning (CBR) systems originated, systems capable of solving problems using the solutions of previously solved problems stored in the Case Memory (CM). In this article, we introduce RBCShell, a tool to construct KBSs with CBR. We also present an application developed by means of RBCShell.
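
As a generic illustration of the retrieve step of a CBR cycle (not RBCShell's actual internals, which the paper describes), a nearest-neighbour lookup over a small Case Memory might look like this in Python:

    import numpy as np

    # Nearest-neighbour retrieval over a Case Memory: each case is (features, solution).
    case_memory = [
        (np.array([1.0, 0.2]), "solution A"),
        (np.array([0.1, 0.9]), "solution B"),
        (np.array([0.8, 0.8]), "solution C"),
    ]

    def retrieve(problem, k=1):
        """Return the k stored cases closest to the new problem description."""
        dists = [(np.linalg.norm(problem - f), sol) for f, sol in case_memory]
        return sorted(dists)[:k]

    print(retrieve(np.array([0.9, 0.7])))   # reuse the closest stored solution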

Abstract:

In this paper we present a simple procedure to design fractional PDμ controllers for Single-Input Single-Output Linear Time-Invariant (SISO-LTI) systems with constant time delay. More precisely, based on a geometric approach, we present a methodology not only to design a stabilizing controller but also to provide practical guidelines for designing non-fragile PDμ controllers. Finally, in order to illustrate the simplicity of the proposed approach as well as the efficiency of the PDμ controller in a realistic scenario, we consider an experimental setup consisting of controlling a teleoperated system using two Phantom Omni haptic devices
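
A hedged sketch of the control action itself (Python/numpy): the fractional derivative D^μ is approximated by a truncated Grünwald-Letnikov sum; the gains and signals are illustrative, and the paper's geometric design procedure is not reproduced:

    import numpy as np

    def gl_weights(mu, n):
        """Grunwald-Letnikov binomial weights w_k = (-1)^k C(mu, k)."""
        w = [1.0]
        for k in range(1, n):
            w.append(w[-1] * (k - 1 - mu) / k)   # recursive binomial formula
        return np.array(w)

    def pd_mu(errors, kp, kd, mu, dt):
        """u = kp*e + kd * D^mu e; errors holds the error history, most recent last."""
        w = gl_weights(mu, len(errors))
        d_mu = (w * errors[::-1]).sum() / dt**mu  # D^mu e at the current instant
        return kp * errors[-1] + kd * d_mu

    e = np.sin(np.linspace(0, 1, 100))            # toy error signal
    print(pd_mu(e, kp=2.0, kd=0.5, mu=0.7, dt=0.01))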

Abstract:

A new measure of dispersion is presented here that generalizes entropy for positive data. It is intrinsically linked to a measure of central tendency and is determined by the data through a power transformation that best symmetrizes the observations

Abstract:

This article proposes a simple statistical approach to combine nighttime light data with official national income growth figures. The suggested procedure arises from a signal-plus-noise model for official growth along with a constant elasticity relation between observed night lights and income. The methodology implemented in this paper differs from the approach based on panel data for several countries at once that uses World Bank ratings of income data quality for the countries under study to produce an estimate of true economic growth. The new approach: (a) leads to a relatively simple and robust statistical method based only on time series data pertaining to the country under study and (b) does not require the use of quality ratings of official income statistics. For illustrative purposes, some empirical applications are made for Mexico, China and Chile. The results show that during the period of study there was underestimation of economic growth for both Mexico and Chile, while official figures of China over-estimated true economic growth
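
The following sketch (Python/numpy, simulated data) conveys the signal-plus-noise flavor of such a combination: lights enter through a constant elasticity and the two growth readings are averaged with precision weights. It is a stylized, assumption-laden illustration, not the paper's estimator:

    import numpy as np

    rng = np.random.default_rng(3)
    T = 25
    true_g = rng.normal(2.5, 1.0, size=T)              # unobserved true growth (%)
    official = true_g + rng.normal(0.0, 0.5, size=T)   # official growth = signal + noise
    beta = 1.2                                          # elasticity of lights w.r.t. income
    lights_g = beta * true_g + rng.normal(0.0, 1.0, size=T)

    s2_off, s2_lgt = 0.5**2, (1.0 / beta)**2            # noise variances (income units)
    w = (1 / s2_off) / (1 / s2_off + 1 / s2_lgt)        # precision weight on official data
    est_g = w * official + (1 - w) * lights_g / beta
    print(np.round(est_g - true_g, 2).std())            # tighter than either source alone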

Abstract:

INEGI maintains the System of Composite Coincident and Leading Indicators as a support tool for timely decision-making by users of macroeconomic information. The current System was changed substantially in 2010, when the growth-cycle definition was adopted in place of the classical cycle, which required applying a method for estimating and removing trends that is appropriate for Mexican data. The System was updated in 2014, when some component variables of the composite indices were replaced. Even so, the System must be updated regularly to remain valid and useful in the changing environment of the Mexican economy. The present study focuses, first, on analyzing the adequacy of the method used to estimate the coincident and leading indicators, without departing from the analytical framework currently used at INEGI. Second, the statistical methodology employed is exploited to assign uncertainty to the cycle estimates, so that the identification of their phases becomes clearer and more objective. Finally, the technique in use at INEGI is complemented with another statistical tool, based on a Dynamic Factor Model, whose theoretical-statistical foundation is very solid and which is applicable to the Mexican economy. As a result of this work, we propose: calculating the composite indicators with a different filter from the one currently used at INEGI; including an additional variable in the leading indicator to improve its leading ability; using tolerance bands to assign uncertainty to the cycle estimates; and regularizing the updating period of the System of Composite Indicators so that it remains valid over time

Resumen:

En este trabajo se analiza la adecuación del método de estimación de los indicadores coincidente y adelantado, en el marco de análisis actual del Sistema de Indicadores Compuestos Coincidente y Adelantado del INEGI, y se asigna incertidumbre a los ciclos estimados para que sea más clara y objetiva la identificación de sus fases. La técnica se complementa con el uso de un modelo de factores dinámicos que apoya la propuesta que se hace. Como resultado de la investigación, se recomienda calcular los indicadores compuestos con un filtro diferente al que se usa en el INEGI, incluir en el indicador adelantado una variable adicional para mejorar su capacidad de adelanto, utilizar bandas de tolerancia para asignar incertidumbre a la estimación de los ciclos y regularizar el periodo de actualización del Sistema de Indicadores Compuestos para mantenerlo vigente al paso del tiempo

Abstract:

We analyze the adequacy of the estimation method of the coincident and leading indicators within the context of INEGI's current System of Composite Coincident and Leading Indicators. We propose a method to assign uncertainty to the cycle estimates in order to identify their phases more clearly and objectively. This work is complemented with the application of a Dynamic Factor Model whose results back up our proposal. As a result of this study, we recommend calculating the indicators with the aid of a different filter from the one currently in use at INEGI; we also recommend including a new component variable in the leading indicator to improve its leading ability, employing the tolerance bands here derived to appreciate the uncertainty of the cycle estimates, and regularizing the revision span of the System of Composite Indicators to keep it up to date over time

Resumen:

La intención de este trabajo es brindar al usuario de series de tiempo económicas ajustadas por estacionalidad las herramientas que le permitan usarlas de forma adecuada, para poder tomar decisiones mejor informadas. Por ello, se describe la metodología, se enfatizan algunos aspectos de carácter técnico y se brindan algunas recomendaciones que podrían mejorar la calidad del ajuste. Asimismo, se presenta un ejemplo ilustrativo del ajuste estacional de una serie relevante para México. Los resultados aportan elementos al usuario para elegir el paquete X-13ARIMA-SEATS como el mejor en la actualidad para realizar este procedimiento. Además, se resalta la importancia de comprender mejor las herramientas que dan soporte a los procesos considerados automáticos para interpretar en forma apropiada los resultados que se obtienen

Abstract:

The aim of this work is to provide the user of seasonally adjusted economic time series with tools that allow her/him to interpret those series adequately in order to make better-informed decisions. To that end, we describe the seasonal adjustment methodology, emphasize some technical aspects and make some recommendations that may help improve the quality of the adjustment. We also present an illustrative example of the seasonal adjustment of a relevant series for Mexico. The results give the user elements for choosing the X-13ARIMA-SEATS package as the best one currently available for this procedure. Furthermore, we address the need to acquire a deeper understanding of the tools underlying the so-called automatic processes, so that their output can be properly interpreted

Abstract:

We apply a filtering methodology to estimate the segmented trend of a time series and to get forecasts with the underlying unobserved-component model of the filter employed. The trend estimation technique is based on Penalized Least Squares. The application of this technique allows us to control the amount of smoothness in the trend by segments of the data range, corresponding to different regimes. The method produces a smoothing filter that mitigates the effect of outliers and transitory blips, thus capturing an adequate smooth trend behavior for the different regimes. Obtaining forecasts becomes an easy task once the filter has been applied to the series, because the unobserved-component model underlying the filter has parameters directly related to the smoothing parameter of the filter. The empirical application is useful to appreciate the appropriateness of our segmented trend approach to capture the underlying behavior of the annual rate of growth of remittances to Mexico
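
A minimal version of the building block involved, penalized least squares trend estimation, applied separately to two regimes (Python/numpy; the smoothing constants, data and the crude join at the segment boundary are our simplifications):

    import numpy as np

    # Penalized least squares (Hodrick-Prescott type) trend: minimize
    # ||y - tau||^2 + lam * ||D2 tau||^2, solved as a linear system.
    def pls_trend(y, lam):
        n = len(y)
        D2 = np.diff(np.eye(n), n=2, axis=0)      # second-difference matrix (n-2, n)
        return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

    rng = np.random.default_rng(4)
    t = np.arange(120)
    y = 0.05 * t + np.sin(t / 8.0) + rng.normal(0, 0.3, size=120)
    # A larger lam in the second half imposes a smoother trend for that regime;
    # the segmented approach in the paper handles the join far more carefully.
    trend = np.r_[pls_trend(y[:60], 100.0), pls_trend(y[60:], 6400.0)]
    print(trend[:3].round(2))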

Abstract:

We present an analysis of the Manufacturing Business Opinion Survey carried out by Mexico's national statistics agency. We describe first the survey and employ exploratory statistical analyses based on coincidences and cross-correlations. We also consider forecasting models for the indices of industrial production and the Mexican global economic activity, including opinion indicators as predictors as well as lags of the quantitative variable to be predicted, so that the net contribution of the opinion indicators can be best appreciated in a forecasting experiment. The forecasting models employed are statistically adequate in the sense that they satisfy the underlying assumptions, so that statistical inferences and conclusions are validated by the data at hand. Our results lend empirical support to the intuition that the survey provides information that anticipates the behavior of important macroeconomic variables, such as the Mexican index of global economic activity and the index of industrial production. We include data up to October 2017. These new tables show that the original conclusions remain valid even though the data were subjected in 2011 to some modifications due to a three-fold increase of the sample size and an extended coverage of economic activities

Abstract:

This paper studies the effect of autocorrelation on the smoothness of the trend of a univariate time series estimated by means of penalized least squares. An index of smoothness is deduced for the case of a time series represented by a signal-plus-noise model, where the noise follows an autoregressive process of order one. This index is useful for measuring the distortion of the amount of smoothness by incorporating the effect of autocorrelation. Different autocorrelation values are used to appreciate the numerical effect on smoothness for estimated trends of time series with different sample sizes. For comparative purposes, several graphs of two simulated time series are presented, where the estimated trend is compared with and without autocorrelation in the noise. Some findings are as follows. On the one hand, when the autocorrelation is negative (no matter how large) or positive but small, the estimated trend gets very close to the true trend; even in this case, the estimation is improved by fixing the index of smoothness according to the sample size. On the other hand, when the autocorrelation is positive and large, the simulated and estimated trends lie far away from the true trend; this situation is mitigated by fixing an appropriate index of smoothness for the estimated trend in accordance with the sample size at hand. Finally, an empirical example serves to illustrate the use of the smoothness index when estimating the trend of Mexico's quarterly GDP

Resumen:

En este trabajo se retropola el producto interno bruto estatal de México de 1980 a 1992 por gran actividad económica a partir de los datos oficiales disponibles para el público. El documento consta de seis etapas: 1) desagregación trimestral de la base de datos anual del 2003 al 2015, por estado; 2) conversión de la base de datos estatal y anual, de años base 1993 al 2008; 3) retropolación restringida de 1993 al 2002 con datos desagregados por estado que satisfacen restricciones temporales; 4) reconciliación de cifras estatales previamente retropoladas con los datos a nivel nacional; 5) retropolación restringida de 1980 a 1992 de la base ya reconciliada, por gran actividad económica, de manera que la suma de los estados produce el total nacional y, por último, 6) cambio de año base del 2008 al 2013, actualizando la información al 2016. Los resultados empíricos se validan tanto estadística como econométricamente y se ilustran con datos de la Ciudad de México

Abstract:

In this paper, we retropolate the Mexican Gross Domestic Product (GDP) by State and Economic Activity from 1980 to 1992 using all the official databases currently available. We apply six steps: i) temporal disaggregation of the state-annual database for years 2003-2015; ii) conversion of the state-annual database from base year 1993 to 2008; iii) restricted retropolation from 1993 to 2002 with the state-disaggregated database, satisfying temporal restrictions; iv) reconciliation of the retropolated quarterly data with the national database; v) restricted retropolation of the reconciled database from 1980 to 1992 by Economic Activity, so that the sum of the state data yields the national figure; and finally, vi) change of base year from 2008 to 2013, with information updated to 2016. The empirical results are verified both statistically and econometrically, and illustrated with Mexico City's data

Resumen:

En este trabajo se retropola el Producto Interno Bruto (PIB) estatal de México de 1980 a 1992, por Gran Actividad económica, a partir de los datos oficiales disponibles al público. El trabajo consta de 6 etapas: i) Desagregación trimestral de la base de datos anual de 2003 a 2015, por estado, ii) Conversión de la base de datos estatal y anual, de año base 1993 a base 2008, iii) Retropolación restringida de 1993 a 2002 con datos desagregados por estado, que satisfacen restricciones temporales, iv) Reconciliación de cifras estatales previamente retropoladas con los datos a nivel nacional, v) Retropolación restringida de 1980 a 1992, de la base ya reconciliada, por gran actividad económica, de manera que la suma de los estados produce el total nacional y, finalmente, vi) cambio de año base de 2008 a 2013, actualizando la información al año 2016. Los resultados empíricos se validan tanto estadística como econométricamente y se ilustran con datos de la Ciudad de México

Abstract:

In Mexico, the System of National Accounts is disaggregated at the State level and expressed at constant prices of the most recent base year, 2008, for the years 2003 to 2015. Another frequently used database related to the National Accounts and disaggregated by State contains a quarterly index of economic activity. Further, a yearly database is also available with State-level disaggregation and base year 1993, but it only covers the years 1993 to 2006 and employs a different classification system from that of base year 2008. In this work, we are concerned with the problem of retropolating the database of a Mexican State called Mexico City with the maximum level of disaggregation allowed by the publicly available databases. We followed a data-driven approach and combined the three databases to produce an estimated homogeneous quarterly database with base year 2008, covering the years 1993 to 2015 and disaggregated up to groups of sectors

Abstract:

This paper extends the univariate time series smoothing approach provided by penalized least squares to a multivariate setting, thus allowing for joint estimation of several time series trends. The theoretical results are valid for the general multivariate case, but particular emphasis is placed on the bivariate situation from an applied point of view. The proposal is based on a vector signal-plus-noise representation of the observed data that requires the first two sample moments and specifying only one smoothing constant. A measure of the amount of smoothness of an estimated trend is introduced so that an analyst can set in advance a desired percentage of smoothness to be achieved by the trend estimate. The required smoothing constant is determined by the chosen percentage of smoothness. Closed form expressions for the smoothed estimated vector and its variance-covariance matrix are derived from a straightforward application of generalized least squares, thus providing best linear unbiased estimates for the trends. A detailed algorithm applicable for estimating bivariate time series trends is also presented and justified. The theoretical results are supported by a simulation study and two real applications. One corresponds to Mexican and US macroeconomic data within the context of business cycle analysis, and the other one to environmental data pertaining to a monitored site in Scotland

Abstract:

We consider the problem of estimating a trend with different amounts of smoothness for segments of a time series subjected to different variability regimes. We propose using an unobserved components model to consider the existence of at least two data segments. We first fix some desired percentages of smoothness for the trend segments and deduce the corresponding smoothing parameters involved. Once the size of each segment is chosen, the smoothing formulas here derived produce trend estimates for all segments with the desired smoothness as well as their corresponding estimated variances. Empirical examples from demography and economics illustrate our proposal

Resumen:

Se presenta un análisis de la Encuesta Mensual de Opinión Empresarial que realiza el Instituto Nacional de Estadística y Geografía (INEGI), la agencia mexicana de estadística oficial. Se describe primero la encuesta y se lleva a cabo un análisis estadístico exploratorio basado en coincidencias y correlaciones cruzadas. Después se consideran modelos de pronósticos para los índices de producción industrial y de la actividad económica global; modelos que incluyen indicadores de opinión como predictores, así como retrasos de la variable cuantitativa por predecir, de manera que se puede apreciar la contribución neta de los indicadores de opinión en un experimento de predicción. Los modelos de pronósticos empleados satisfacen los supuestos subyacentes, por lo que las conclusiones pueden considerarse estadísticamente válidas. Los resultados obtenidos apoyan la intuición de que esta encuesta brinda información que anticipa el comportamiento de variables macroeconómicas relevantes para México, como es el Índice Global de Actividad Económica

Abstract:

We present an analysis of the Manufacturing Business Opinion Survey carried out by Mexico's national statistical agency. We describe first the survey and employ exploratory statistical analyses based on coincidences and cross-correlations. We also consider forecasting models for the indices of industrial production and the Mexican global economic activity, including opinion indicators as predictors as well as lags of the quantitative variable to be predicted, so that the net contribution of the opinion indicators can be best appreciated in a forecasting experiment. The forecasting models employed are statistically adequate in the sense that they satisfy the underlying assumptions, so that statistical inferences and conclusions are validated by the data at hand. Our results lend empirical support to the intuition that this survey provides information that anticipates the behavior of important macroeconomic variables, such as the Mexican index of global economic activity and the index of industrial production

Abstract:

We consider a forecasting problem that arises when an intervention is expected to occur on an economic system during the forecast horizon. The time series model employed is seen as a statistical device that serves to capture the empirical regularities of the observed data on the variables of the system without relying on a particular theoretical structure. Either the deterministic or the stochastic structure of a vector autoregressive error correction model of the system is assumed to be affected by the intervention. The information about the intervention effect is just provided by some linear restrictions imposed on the future values of the variables involved. Formulas for restricted forecasts with intervention effects and their mean squared errors are derived as a particular case of Catlin’s static updating theorem. An empirical illustration uses Mexican macroeconomic data on five variables and the restricted forecasts consider targets for years 2011–2014

Resumen:

En este trabajo se considera la estimación de tendencias y el análisis de ciclos económicos, desde la perspectiva de aplicación de métodos estadísticos a datos económicos presentados en forma de series de tiempo. El procedimiento estadístico sugerido permite fijar el porcentaje de suavidad deseado para la tendencia y está ligado con el filtro de Hodrick y Prescott. Para determinar la constante de suavizado requerida, se usa un índice de precisión relativa que formaliza el concepto de suavidad de la tendencia. El método es aplicable de manera directa a series de tiempo trimestrales, sin embargo, éste se extiende aquí también al caso de series de tiempo con periodicidad de observación distintas de la trimestral

Abstract:

This work deals with trend estimation and business cycle analysis, with emphasis on the application of statistical methods to economic time series data. The suggested statistical procedure allows one to fix a desired percentage of trend smoothness and is linked to the Hodrick-Prescott filter. An index of relative precision, stemming from a formal definition of trend smoothness, is used to choose the required smoothing constant. This method is directly applicable to quarterly series, but its use extends with ease to time series with frequencies of observation other than quarterly

Resumen:

Este trabajo presenta un análisis de la capacidad predictiva de algunos índices cíclicos para los puntos de giro de la economía mexicana. El enfoque es de ciclo de crecimiento, el cual requiere eliminar la tendencia de las series; por ello se probaron diversos métodos y se determinó que el filtro de Hodrick-Prescott aplicado dos veces es el mejor, en términos de revisiones. Después, los índices coincidentes y adelantados se estimaron con tres métodos distintos: 1) del NBER, 2) de la OCDE y 3) de Stock-Watson. Los índices coincidentes producen resultados similares y aceptables, pero los adelantados no brindan resultados satisfactorios

Abstract:

This work analyzes the predictive ability of some cyclical indices for the turning points of the Mexican economy. The growth cycle approach adopted requires working with detrended series, and so several detrending methods were tried. A double Hodrick-Prescott filter application produced the best results in terms of revisions. Then, the coincident and leading indices were estimated with three different methods: 1) NBER, 2) OECD and 3) Stock-Watson's. The resulting coincident indices produce similar and acceptable results, but the leading ones do not work as expected

Abstract:

This work presents a procedure for creating a timely estimation of Mexico’s quarterly GDP with the aid of Vector Auto-Regressive models. The estimates consider historical GDP data up to the previous quarter as well as the most recent figures available for two relevant indices of Mexican economic activity and other potential predictors of GDP. We obtain two timely estimates of the Grand Economic Activities and Total GDP. Their corresponding delays are at most 15 days and 30 days respectively from the end of the reference quarter, while the first official GDP figure is delayed 52 days. We follow a bottom-up approach that imitates the official calculation procedure applied in Mexico. Empirical validation is carried out with both in-sample simulations and in real time. The mean error of the 30-day delayed estimate of total GDP is 0.13% and its root mean square error is 0.67%. These figures compare favorably with those of no-change models
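
To convey the idea, the sketch below (Python with statsmodels, simulated data) fits a small VAR on GDP growth plus a timelier activity indicator and produces a one-step-ahead estimate; the paper's bottom-up procedure by economic activity is far more detailed, and all names and values here are illustrative:

    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(5)
    T = 80
    indicator = rng.normal(0, 1, size=T)                  # timely activity index
    gdp = 0.6 * np.roll(indicator, 1) + rng.normal(0, 0.5, size=T)
    data = np.column_stack([gdp, indicator])[1:]          # drop the roll artefact

    res = VAR(data).fit(2)                                # two lags, for illustration
    print(res.forecast(data[-res.k_ar:], steps=1))        # timely estimate, next quarter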

Resumen:

En este artículo se presentan algunos problemas que enfrentan los institutos nacionales de estadística en relación con la generación, difusión y análisis de los datos económicos oficiales de series de tiempo. Se mencionan las herramientas de análisis estadístico que pudieran aplicarse para darles solución y, aunque deberían ser aplicadas de preferencia por las propias instituciones, algunas podrían también ser empleadas por los usuarios interesados en la información. Además, se enuncian diversas circunstancias que dan origen a los problemas tratados y se ilustran de forma somera algunas aplicaciones con datos generados por el Instituto Nacional de Estadística y Geografía (INEGI) de México

Abstract:

This paper presents some problems faced by national statistical institutes with regard to the generation, dissemination and analysis of official economic time series. The statistical analysis tools that could be applied to solve those problems are outlined; these tools should preferably be applied by the institutes themselves, but some may also be employed by users interested in the information. Several circumstances that give rise to such problems are also considered, and some applications are briefly illustrated with data generated by Mexico's National Institute of Statistics and Geography (INEGI)

Abstract:

This paper considers the statistical problems of editing and imputing data of multiple time series generated by repetitive surveys. The case under study is that of the Survey of Cattle Slaughter in Mexico’s Municipal Abattoirs. The proposed procedure consists of two phases; firstly the data of each abattoir are edited to correct them for gross inconsistencies. Secondly, the missing data are imputed by means of restricted forecasting. This method uses all the historical and current information available for the abattoir, as well as multiple time series models from which efficient estimates of the missing data are obtained. Some empirical examples are shown to illustrate the usefulness of the method in practice

Abstract:

We propose to decompose a financial time series into trend plus noise by means of the exponential smoothing filter. This filter produces statistically efficient estimates of the trend that can be calculated by a straightforward application of the Kalman filter. It can also be interpreted in the context of penalized least squares, where a function involving a smoothing constant is minimized by trading off fidelity to the data against smoothness of the trend. The smoothing constant is crucial in deciding the degree of smoothness, and the problem is how to choose it objectively. We suggest a procedure that allows the user to decide at the outset the desired percentage of smoothness and derive from it the corresponding value of that constant. A definition of smoothness is first proposed, as well as an index of relative precision attributable to the smoothing element of the time series. The procedure is extended to series with different frequencies of observation, so that comparable trends can be obtained for, say, daily, weekly or intraday observations of the same variable. The theoretical results are derived from an integrated moving average model of order (1,1) underlying the statistical interpretation of the filter. Expressions for equivalent smoothing constants are derived for series generated by temporal aggregation or systematic sampling of another series. Hence, comparable trend estimates can be obtained for the same time series with different lengths, for different time series of the same length and for series with different frequencies of observation of the same variable

Abstract:

This work presents a method for estimating trends of economic time series that allows the user to fix at the outset the desired percentage of smoothness for the trend. The calculations are based on the Hodrick-Prescott (HP) filter usually employed in business cycle analysis. The situation considered here is not related to that kind of analysis, but with describing the dynamic behaviour of the series by way of a smooth curve. To apply the filter, the user has to specify a smoothing constant that determines the dynamic behaviour of the trend. A new method that formalizes the concept of trend smoothness is proposed here to choose that constant. Smoothness of the trend is measured in percentage terms with the aid of an index related to the underlying statistical model of the HP filter. Empirical illustrations are provided using data on Mexico’s GDP
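
One plausible formalization, in line with penalized least squares trend estimation (our notation; the paper's exact index may differ), defines the trend estimate and a smoothness index as

    \[
    \hat{\tau} = (I_N + \lambda K'K)^{-1} y, \qquad
    S(\lambda) = 1 - \frac{\operatorname{tr}\!\left[(I_N + \lambda K'K)^{-1}\right]}{N},
    \]

where K is the second-difference matrix of the HP penalty. S(λ) increases from 0 (no smoothing) toward 1 as λ grows, so fixing a desired percentage of smoothness in advance determines the required smoothing constant numerically.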

Abstract:

The time series smoothing problem is approached in a slightly more general form than usual. The proposed statistical solution involves an implicit adjustment to the observations at both extremes of the time series. The resulting estimated trend becomes more statistically grounded and an estimate of its sampling variability is provided. An index of smoothness is derived and proposed as a tool for choosing the smoothing constant

Abstract:

The inclusion of linear deterministic effects in a time series model is important to get an appropriate specification. Such effects may be due to calendar variation, outlying observations or interventions. This article proposes a two-step method for estimating an adjusted time series and the parameters of its linear deterministic effects simultaneously. Although the main goal when applying this method in practice might only be to estimate the adjusted series, an important byproduct is a substantial increase in efficiency in the estimates of the deterministic effects. Some theoretical examples are presented to demonstrate the intuitive appeal of this proposal. Then the methodology is applied on two real datasets. One of these applications investigates the importance of the 1995 economic crisis on Mexico’s industrial production index

Abstract:

A confidence interval was derived for the index of a power transformation that stabilizes the variance of a time series. The process starts from a model-independent procedure that minimizes a coefficient of variation to yield a point estimate of the transformation index. The confidence coefficient of the interval is calibrated through simulation
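
As a hedged illustration of the kind of coefficient-of-variation criterion such a model-independent procedure starts from (the block length h and the search grid are our assumptions, not values from the paper), a minimal sketch in Python:

    import numpy as np

    def cv_criterion(y, lam, h=4):
        # Split the (positive) series into blocks of length h and compute,
        # per block, the mean m_i and standard deviation s_i.
        n_blocks = len(y) // h
        blocks = np.asarray(y[:n_blocks * h]).reshape(n_blocks, h)
        m = blocks.mean(axis=1)
        s = blocks.std(axis=1, ddof=1)
        # Under a power transformation with index lam, the ratio
        # s_i / m_i**(1 - lam) should be roughly constant across blocks.
        ratio = s / m ** (1.0 - lam)
        return ratio.std(ddof=1) / ratio.mean()

    def choose_lambda(y, grid=np.linspace(-1.0, 2.0, 61), h=4):
        # Point estimate: the index minimizing the coefficient of variation.
        return min(grid, key=lambda lam: cv_criterion(y, lam, h))

The paper's contribution is the confidence interval around such a point estimate, with its coverage calibrated by simulation.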

Abstract:

We present a general result that allows us to combine data from two different sources of information in order to improve the efficiency of predictors within the context of multiple time series analysis. Such a result is derived from generalized least squares and is given as a combining rule that takes into account the possibility of correlation between forecasts and bias in one of them. We then specialize that result to situations in which the predictors are unbiased and uncorrelated. Afterwards we propose measuring precision shares and testing for compatibility in order for the combination to make sense. Several applications of the combining rule are presented according to the nature of the linear constraints imposed by one of the data sources. When the constraints are binding we consider the case of restricted forecasts with exact linear restrictions, deterministic changes in the model structure and partial information on some variables. When the constraints are stochastic we study forecast combinations that include expert judgments and benchmarking. Thus, the connections among different standard techniques are emphasized by the combining rule and its companion compatibility test. An empirical example illustrates the usefulness of this inferential procedure in practice
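
In the unbiased and uncorrelated special case mentioned above, the GLS combining rule reduces to the familiar precision-weighted average (standard result; notation is ours):

    \hat{y}_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1} (\Sigma_1^{-1} \hat{y}_1 + \Sigma_2^{-1} \hat{y}_2)

where \hat{y}_1 and \hat{y}_2 are the two predictors and \Sigma_1, \Sigma_2 their error covariance matrices; the bias and correlation terms of the general rule drop out.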

Abstract:

On the basis of some suitable assumptions, we show that the best linear unbiased estimator of the true mortality rates has the form of Whittaker's solution to the graduation problem. Some statistical tools are also proposed to help reduce subjectivity when graduating a dataset

Abstract:

An important tool in time series analysis is that of combining information in an optimal way. Here we establish a basic combining rule of linear predictors and show that such problems as forecast updating, missing value estimation, restricted forecasting with binding constraints, analysis of outliers and temporal disaggregation can be viewed as problems of optimal linear combination of restrictions and forecasts. A compatibility test statistic is also provided as a companion tool to check that the linear restrictions are compatible with the forecasts generated from the historical data.
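
In a standard formulation of this rule (our notation; details vary across the papers summarized here), forecasts \hat{y} with error covariance \Sigma are combined with linear restrictions C y = r via

    \hat{y}^{*} = \hat{y} + \Sigma C' (C \Sigma C')^{-1} (r - C \hat{y}), \qquad K = (r - C \hat{y})' (C \Sigma C')^{-1} (r - C \hat{y})

where \hat{y}^{*} is the restricted forecast and K is the compatibility statistic, referred to a chi-squared distribution with as many degrees of freedom as there are restrictions.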

Abstract:

A method is proposed for choosing a power transformation that allows a univariate time series to be adequately represented by a straight line, in an exploratory analysis of the data. The method is quite simple and enables the analyst to measure local and global curvature in the data. A description of the pattern followed by the data is obtained as a by-product of the method. A specific form of the coefficient of determination is suggested to discriminate among several combinations of estimates of the index of the transformation and the slope of the straight line. Some results related to the degree of differencing required to make the time series stationary are also exploited. The usefulness of the proposal is illustrated with four empirical applications: two using demographic data and the other two concerning market studies. These examples are provided in line with the spirit of an exploratory analysis, rather than as a complete or confirmatory analysis of the data.

Abstract:

A method is proposed for estimating unobserved values of multiple time series whose temporal and contemporaneous aggregates are known. The resulting estimates are obtained from a model-based procedure in which the models employed are indicated by the data alone. This procedure is empirically supported by a discrepancy measure derived here. Even though the problem can be cast into a state space formulation, the usual assumptions underlying Kalman filtering are not fulfilled and such an approach cannot be applied directly. Some simulated examples are provided to validate the method numerically and an application with real data serves to illustrate its use in practice

Abstract:

Univariate time series models make efficient use of available historical records of electricity consumption for short-term forecasting. However, the information (expectations) provided by electricity consumers in an energy-saving survey, even though qualitative, was considered to be particularly important, because the consumers' perception of the future may take into account changing economic conditions. Our approach to forecasting electricity consumption combines historical data with the expectations of consumers in an optimal manner, using the technique of restricted forecasts. The same technique can be applied in other forecasting situations in which additional information, besides the historical record of a variable, is available in the form of expectations

Abstract:

We consider the problem of estimating the effects of an intervention on a time series vector subjected to a linear constraint. Minimum variance linear and unbiased estimators are provided for two different formulations of the problem: (1) when a multivariate intervention analysis is carried out and an adjustment is needed to fulfill the restriction, and (2) when a univariate intervention analysis was performed on the aggregate series obtained from the linear constraint, prior to the multivariate analysis, and the results of both analyses are required to be made compatible with each other. A banking example that motivated this work illustrates our solution

Abstract:

Several forecasting algorithms have been proposed to forecast a cumulative variable using its partially accumulated data. Some particular cases of this problem are known in the literature as the "style goods inventory problem" or as "forecasting shipments using firm orders-to-date", among other names. Here we summarize some of the most popular techniques and propose a statistical approach to discriminate among them in an objective (data-based) way. Our basic idea is to use statistical models to produce minimum mean square error forecasts and let the data lead us to select an appropriate model to represent their behavior. We apply our proposal first to some published data showing total accumulated values with a constant level and then to two actual sets of data pertaining to the Mexican economy, showing a nonconstant level. The forecasting performance of the statistical models was evaluated by comparing their results against those obtained with algorithmic solutions. In general, the models produced better forecasts for all lead times, as indicated by the most common measures of forecasting accuracy and precision

Abstract:

A recursive procedure for temporally disaggregating a time series is proposed. In practice, it is superior to standard non-recursive procedures in several ways: (i) previously disaggregated data need not be modified, (ii) calculations become simpler and (iii) data storage requirements are minimized. The suggested procedure yields Best Linear Unbiased Estimates, given the historical record of previously disaggregated figures, concurrent data in the form of a preliminary series and an aggregated value for the current period. A test statistic is derived for validating the numerical results obtained in practice

Abstract:

A method is presented to improve the precision of timely data, which are published when final data are not yet available. Explicit statistical formulae, equivalent to Kalman filtering, are derived to combine historical with preliminary information. The application of these formulae is validated by the data, through a statistical test of compatibility between sources of information. A measure of the share of precision of each source of information is also derived. An empirical example with Mexican economic data serves to illustrate the procedure

Abstract:

This paper presents some procedures aimed at helping an applied time series analyst in the use of power transformations. Two methods are proposed for selecting a variance-stabilizing transformation and another for bias-reduction of the forecast in the original scale. Since these methods are essentially model-independent, they can be employed with practically any type of time series model. Some comparisons are made with other methods currently available and it is shown that those proposed here are either easier to apply or are more general, with a performance similar to or better than other competing procedures

Abstract:

Some time series models that account for a structural change, either in the deterministic or in the stochastic part of an ARIMA model, are presented. The structural change is assumed to occur during the forecast horizon of the series, and the only available information about this change, besides the time point of its occurrence, is provided by only one or two linear restrictions imposed on the forecasts. Formulas for calculating the variance of the restricted forecasts as well as some other statistics are derived. The methods suggested here are illustrated by means of empirical examples

Abstract:

Many economic time series are only available in temporally aggregated form. When the analysis requires disaggregated data, the analyst faces the problem of deriving these data in the most reasonable way. In this paper a data-based method is developed which produces an optimal estimator of the disaggregated series. The method requires a preliminary estimate of the series, which is adjusted to fulfil the restrictions imposed by the aggregated data. Empirical selection of the preliminary estimate is discussed and a statistic is developed for testing its adequacy. Some comparisons with other methods, as well as numerical illustrations, are presented

Abstract:

An optimal univariate forecast, based on historical and additional information about the future, is obtained in this paper. Its statistical properties, as well as some inferential procedures derived from it, are indicated. Two main situations are considered explicitly: (1) when the additional information imposes a constraint to be fulfilled exactly by the forecasts and (2) when the information is only a conjecture about the future values of the series or a forecast from an alternative model. Theoretical and empirical illustrations are provided, and a unification of the existing methods is also attempted

Abstract:

We investigate the use of power transformations when data on two quantitative variables are presented in a two-way table. Given a suitable transformation we can model the underlying continuous variables. Regressions and the correlation are obtained from the transformed grouped data. Also, by transforming back to the original scale, we obtain a smoothed version of the data

Abstract:

In this note, the resemblance between the family of inequality indices introduced by Atkinson (1970) and the Box-Cox transformation is exploited in order to provide a data-based procedure for choosing an inequality index within the family

Abstract:

Box and Cox (1964) proposed a power transformation which has proven utility for transforming ungrouped data to near normality. In this paper we extend its applicability to grouped data. Illustrative examples are presented and the asymptotic properties of the estimators are derived
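
For reference, the Box-Cox (1964) family referred to in this and the neighbouring abstracts is

    y^{(\lambda)} = \begin{cases} (y^{\lambda} - 1)/\lambda, & \lambda \neq 0, \\ \log y, & \lambda = 0, \end{cases} \qquad y > 0

with the index \lambda estimated from the data; the extension summarized above adapts this estimation to grouped observations.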

Abstract:

The power transformation suggested by Box & Cox (1964) is applied to the odds ratio to generalize the logistic model and to parameterize a certain type of lack of fit. Transformation of the design variable within the context of the dose-response problem is also considered

Abstract:

There is an increasing push by environmentalists, scholars, and some politicians in favor of a form of environmental rights referred to as "rights of nature" or "nature's rights." A milestone victory in this movement was the incorporation of rights of nature into the Ecuadorian constitution in 2008. However, there are reasons to be skeptical that these environmental rights will have the kinds of transformative effects that are anticipated by their most enthusiastic proponents. From a conceptual perspective, a number of difficulties arise when rights (or other forms of legal or moral consideration) are extended to non-human biological aggregates, such as species or ecosystems. There are two very general strategies for conceiving of the interests of such aggregates: a "bottom-up" model that grounds interest in specific aggregates (such as particular species or ecosystems), and then attempts to compare various effects on those specific aggregates; and a "top-down" model that grounds interests in the entire "biotic community." Either approach faces serious challenges. Nature's rights have also proven difficult to implement in practice. Courts in Ecuador, the country with the most experience litigating these rights, have had a difficult time using the construct of nature's rights in a non-arbitrary fashion. The shortcomings of nature's rights, however, do not mean that constitutional reform cannot be used to promote environmental goals. Recent work in comparative constitutional law indicates that organizational rights have a greater likelihood of achieving meaningful results than even quite concrete substantive rights. Protection for the role of environmental groups within civil society may, then, serve as the most effective way for constitutional reform to vindicate the interests that motivate the nature's rights movement

Abstract:

A general question in finance is whether the volatility of the price of futures contracts follows any particular trend over the contract's life. In this study, we contribute to the debate by empirically analyzing the trend of the term structure of the volatility of short-term interest rate (STIR) futures prices. Using data on the Eurodollar, Euribor, and Short-Sterling futures contracts for the period between 2000 and 2018, we model the volatility of each individual contract considering time to expiration and trading activity. Furthermore, we investigate whether these trends change according to overall economic conditions. We find that STIR futures behave differently than futures on other underlying assets and that, most of the time, STIR futures price volatility declines as the contract approaches expiration. Moreover, the relation between volatility and time to maturity depends on market conditions and trading activity, and it is non-linearly related to the observation period

Abstract:

This study investigates the relationship between volatility and contract expiration for the case of Mexican interest rate futures. Specifically, it examines the hypothesis that the volatility of futures prices should increase as contracts approach expiration (the “maturity effect”). Using panel data techniques, the study assesses the differences in volatility patterns between contracts. The results show that although the maturity effect was sometimes present, the inverse effect prevails; volatility decreases as expiration approaches. On the basis of the premises of the negative covariance hypothesis, the study provides additional criteria that explain this behavior in terms of the term structure dynamics

Abstract:

LGBTQ+ individuals may face particular labor market challenges concerning disclosure of their identity and the prevalence of homophobia. Employing an online survey in Mexico with two elicitation methods, we investigate the size of the LGBTQ+ population and homophobic sentiment across various subgroups. We find that around 5%-13% of respondents self-identify as LGBTQ+, with some variation by age and job sectors. Homophobic sentiment is more prevalent when measured indirectly and is higher among males, older and less educated workers, and in less traditional sectors. Lastly, we uncover a negative correlation between homophobia and LGBTQ+ presence in labor markets, suggesting a need for policies to address these disparities

Abstract:

This paper studies the effect of bank credit supply shocks on formal employment in Mexico using a proprietary dataset containing information on all loans extended to firms by commercial banks during 2010–2015. We find large impacts on the formal employment of small and medium firms: a positive credit shock of 1 standard deviation increases yearly employment by 1.4 percentage points. The shares of uncollateralized credit and credit received by family firms, younger firms, and firms with no previous bank relationships also increase, suggesting that credit shocks may play a more prominent role for employment creation in credit-constrained settings

Abstract:

Do electorally concerned politicians have an incentive to contain epidemics when public health interventions may have an economic cost? We revisit the first pandemic of the twenty-first century and study the electoral consequences of the 2009 H1N1 outbreak in Mexico. Leveraging detailed administrative data and a difference-in-differences approach, we document a statistically significant negative effect of local epidemic outbreaks on the electoral performance of the governing party. The effect (i) is not driven by differences in containment policies, (ii) implies that the epidemic may have shifted outcomes of close electoral races, and (iii) persists at least three years after the pandemic. Part of the negative impact on incumbent vote share can be attributed to a decrease in turnout, and the findings are also in line with voters learning about the effectiveness of government policies or incumbent competence
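
As a stylized illustration of the difference-in-differences design described above (our notation, not the paper's exact specification):

    V_{mt} = \alpha_m + \delta_t + \beta \, (\text{Outbreak}_m \times \text{Post}_t) + X_{mt}' \gamma + \varepsilon_{mt}

where V_{mt} is the governing party's vote share in municipality m at election t, \alpha_m and \delta_t are municipality and election fixed effects, and the reported negative effect corresponds to \beta < 0.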

Abstract:

Providing information is important for managing epidemics, but issues with data accuracy may hinder its effectiveness. Focusing on Covid-19 in Mexico, we ask whether delays in death reports affect individuals' beliefs and behavior. Exploiting administrative data and an online survey, we provide evidence that behavior, and consequently the evolution of the pandemic, are considerably different when death counts are presented by date reported rather than by date occurred, due to non-negligible reporting delays. We then use an equilibrium model incorporating an endogenous behavioral response to illustrate how reporting delays lead to slower individual responses, and consequently, worse epidemic outcomes

Abstract:

Could taxing sugar-sweetened beverages in areas where clean water is unavailable lead to increases in diarrheal disease? An excise tax introduced in Mexico in 2014 led to a significant 6.6 percent increase in gastrointestinal disease rates in areas lacking safe drinking water throughout the first year of the tax, with evidence of a diminishing impact in the second year. Suggestive evidence of a differential increase in the consumption of bottled water by households without access to safe water two years post-tax provides a potential explanation for this declining pattern. The costs implied by these results are small, particularly compared to tax revenues and the potential public health benefits. However, these findings inform the need for accompanying soda taxes with policy interventions that guarantee safe drinking water for vulnerable populations

Abstract:

Existing literature suggests that hospital occupancy matters for quality of care, as measured by various patient outcomes. However, estimating the causal effect of increased hospital busyness on in-hospital mortality remains an elusive task due to statistical power challenges and the difficulty in separating shocks to occupancy from changes in patient composition. Using data from a large public hospital system in Mexico, we estimate the impact of congestion on in-hospital mortality by exploiting the shock in hospitalizations induced by the 2009 H1N1 pandemic, instrumenting hospital admissions due to acute respiratory infections (ARIs) with measures of ARI cases at nearby healthcare facilities as a proxy for the size of the local outbreak. Our instrumental-variables estimates show that a 1% increase in ARI admissions in 2009 led to a 0.25% increase in non-ARI in-hospital mortality. We show that these effects are nonlinear in the size of the local outbreak, consistent with the existence of tipping points. We further show that effects are concentrated at hospitals with limited infrastructure, suggesting that supply-side policies that improve patient assignment across hospitals and strategically increase hospital capacity could mitigate some of the negative impacts. We discuss managerial implications, suggesting that up to 25%-30% of our estimated deaths at small and non-intensive-care-unit hospitals could have been averted by reallocating patients to reduce congestion
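
A stylized two-stage least squares version of the strategy described above (our notation; the paper's exact specification may differ):

    ARI_{ht} = \pi \, Z_{ht} + \mu_h + \theta_t + v_{ht} \qquad \text{(first stage)}
    \log M_{ht} = \beta \, \widehat{ARI}_{ht} + \mu_h + \theta_t + \varepsilon_{ht} \qquad \text{(second stage)}

where M_{ht} is non-ARI in-hospital mortality at hospital h in period t, ARI_{ht} denotes ARI admissions, and Z_{ht} measures ARI cases at nearby healthcare facilities; the abstract's headline estimate corresponds to an elasticity \beta of about 0.25.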

Abstract:

Abatement expenditures are not the only available tool for firms to decrease emissions. Technology choice can also indirectly affect environmental performance. We assess the impact of import competition on plants' environmental outcomes. In particular, exploiting a unique combination of Mexican plant-level and satellite imagery data, we measure the effect of tariff changes due to free-trade agreements on three main outcomes: plants' fuel use, plants' abatement expenditures, and measures of air pollution around plants' location. Our findings show that import competition induced plants in Mexico to increase energy efficiency, reduce emissions, and in turn reduce direct investment in environmental protection. Our findings suggest that the general technology upgrading effect of any policy could be an important determinant of environmental performance in developing countries and that this effect may not be captured in abatement data

Abstract:

We investigate the impact of financial access on law compliance (whether workers are registered in a mandated social security system). In contrast to previous studies that focus on firms' access to credit, we investigate workers' access to credit. Exploiting the geographic variation in financial access due to Banco Azteca's opening in Mexico in 2002, which changed poor people's access to finance almost overnight, we find that financial access increased the probability of getting formalized

Abstract:

A regression discontinuity analysis is used to test whether a sharp increase in the government transfers received by households, induced by a pension program for individuals age 70 and older in Mexico City, affects coresiding children's school enrollment. Results show that while household composition and other characteristics do not change significantly at the cutoff age for program eligibility, school enrollment increases significantly. This suggests that households may be credit constrained, as the sharp increase in government transfers is known and anticipated by individuals below the cutoff age

Abstract:

This paper exploits the sharp change in air pollutants induced by the installation of small-scale power plants throughout Mexico to measure the causal relationship between air pollution and infant mortality, and whether this relationship varies by municipality’s socio-economic conditions. The estimated elasticity for changes in infant mortality due to respiratory diseases with respect to changes in air pollution concentration ranges from 0.58 to 0.84 (more than ten times higher than the Ordinary Least Squares estimate). Weaker evidence suggests that the effect is significantly lower in municipalities with a high presence of primary healthcare facilities and larger in municipalities with a high fraction of households with low education levels

Abstract:

This paper evaluates the impact of an intervention targeted at marginalized low-performing students in public secondary schools in Mexico City. The program consisted of offering free additional math courses, taught by undergraduate students from some of the most prestigious Mexican universities, to the lowest-performing students in a set of marginalized schools in Mexico City. We exploit the information available in the transcripts of all students (treated and not treated by the program) enrolled in participating and non-participating schools. Before the implementation of the program, participating students lagged behind non-participating ones by more than half a point in their GPA (on a 10-point scale). Using a difference-in-differences approach, we find that students participating in the program experienced a larger increase in their school grades after the implementation of the program, and that the difference in grades between the two groups decreases over time. By the end of the school year (when the free extra courses had been offered, on average, for 10 weeks), participating students' grades were no longer significantly lower than non-participating students' grades. These results provide some evidence that short and low-cost interventions can have important effects on student achievement

Abstract:

This paper explores the relationship between fertility and the introduction of new laws regulating cohabitation, in a context of low fertility and high out-of-wedlock childbearing. We show that in France, while fertility and marriage rates moved closely together before 1999, since the introduction (in 1999) of the "Pacte Civil de Solidarité" (PACS), a cohabitation contract less binding than marriage, this relationship is much weaker. Surprisingly, legal unions (defined as marriage plus PACS) and fertility continue to move together after this date. We provide evidence of the relationship between the introduction of PACS and fertility, utilizing the regional variation in the number of PACS per woman (PACS intensity) and the differences in fertility before and after 1999. We show that French Departments with high PACS intensity did not show a different fertility trend before 1999 from those with low PACS intensity (excluding Metropolitan Paris). However, they did experience an increase in their fertility levels after the introduction of PACS. This suggests the need to collect better and more detailed data, in order to assess whether the recent increases in French fertility can be partially explained by the availability of PACS

Abstract:

This research uses a unique dataset that provides relatively inexpensive measures of air quality at a detailed geographic level. The analytical focus is the relationship, in Mexico, between Aerosol Optical Depth (AOD, a measure of air quality obtained from satellite imagery) and infant mortality due to respiratory diseases from January 2001 through December 2006. The results contribute to the existing literature on the relationship between air pollution and health outcomes by examining, for the first time, the relationship between these variables for the entire land area of Mexico, for most of which no ground measures of pollution concentrations exist. Substantive results suggest that changes in AOD have a significant impact on infant mortality due to respiratory diseases in municipalities in the three highest AOD quartiles in the country, providing evidence that air pollution's adverse effects, although nonlinear, are present not only in large cities, but also in lower-pollution settings that lack ground measures of pollution. Methodologically, it is argued that satellite-based imagery can be a valuable source of information for both researchers and policy makers when examining the consequences of pollution and/or the effectiveness of pollution-control mechanisms

Abstract:

The workload of Cloud data centers is constantly fluctuating causing imbalances across physical hosts that may lead to violations of service-level agreements. To mitigate workload imbalances, this work proposes a concurrent agent-based problem-solving technique supported by cooperative game theory capable of balancing workloads by means of live migration of virtual machines (VMs). Nearby agents managing physical hosts are partitioned into coalitions in which agents play coalitional games to progressively balance separate sections of a data center while considering the coalition's benefit of migrating a VM as well as its associated network overhead. Simulation results show that, in general, the proposed coalition-based load balancing mechanism outperformed a load balancing mechanism based on a hill-climbing algorithm used by a top data center vendor when considering altogether (i) the standard deviation of resource usage, (ii) the number of migrations, and (iii) the number of switch hops per migration

Abstract:

Using the minimum list of indicators for measuring the social development proposed by the United Nations, this work identifies cross-national indicators of homicide by analyzing socio-economic profiles of 202 countries. Both a correlation analysis and a decision-tree analysis indicate that countries with a relatively low homicide rate are characterized by a high life expectancy of women at birth and a very low adolescent fertility rate, while countries with a relatively high homicide rate are characterized by a low to medium life expectancy of women at birth, a high women-to-men ratio, and a high women’s share of adults with HIV/AIDS. The significance of this work stems from identifying cross-national indicators of homicide that can be used to assist policymakers in designing public policies aimed at reducing homicide rates by improving social indicators

Abstract:

Agent-based virtual simulations of social systems susceptible to corruption (e.g., police agencies) require agents capable of exhibiting corruptible behaviors to achieve realistic simulations and enable the analysis of corruption as a social problem. This paper proposes a formal belief-desire-intention framework supported by the functional event calculus and fuzzy logic for modeling corruption based on the integrity level of social agents and the influence of corrupters on them. Corruptible social agents are endowed with beliefs, desires, intentions, and corrupt-prone plans to achieve their desires. This paper also proposes a fuzzy logic system to define the level of impact of corruption-related events on the degree of belief in the truth of anti-corruption factors (e.g., the integrity of the leader of an organization). Moreover, an agent-based model of corruption supported by the proposed belief-desire-intention framework and the fuzzy logic system was devised and implemented. Results obtained from agent-based simulations are consistent with actual macro-level patterns of corruption reported in the literature. The simulation results show that (i) the bribery rate increases as more external entities attempt to bribe agents and (ii) the more anti-corruption factors agents believe to be true, the less prone to perpetrate acts of corruption

Abstract:

Rule-governed artificial agent societies consisting of autonomous members are susceptible to rule violations, which can be seen as the acts of agents exercising their autonomy. As a consequence, modeling and allowing deviance is relevant, in particular, when artificial agent societies are used as the basis for agent-based social simulation. This work proposes a belief framework for modeling social deviance in artificial agent societies by taking into account both endogenous and exogenous factors contributing to rule compliance. The objective of the belief framework is to support the simulation of social environments where agents are susceptible to adopt rule-breaking behaviors. In this work, endogenous, exogenous and hybrid decision models supported by the event calculus formalism were implemented in an agent-based simulation model. Finally, a series of simulations was conducted in order to perform a sensitivity analysis of the agent-based simulation model

Abstract:

Strategies for the prevention of police corruption, for example, bribery, commonly neglect its social dimension, in spite of the fact that police corruption has societal causes and that undertaking a reform of the police requires, to some extent, reforming society. In this paper, we built a decision tree from socioeconomic profiles of 103 countries classified according to their level of police corruption, using data from the United Nations Statistics Division and Transparency International. From the rules of the resulting decision tree, we identified and analyzed social determinants of police corruption to assist policy-makers in designing societal-level strategies to control police corruption by improving socioeconomic conditions. We found that school life expectancy, involvement of women in society, economic development, and work-related indicators are relevant to police corruption. Moreover, empirical results indicate that countries should gradually improve social indicators to reduce police corruption

Abstract:

Bag-of-tasks (BoT) applications consist of highly parallel, independent, and unordered tasks. Since BoT executions often require costly investments in computing infrastructures, Clouds offer an economical solution to BoT executions. Cloud BoT executions involve (1) allocating and deallocating heterogeneous resources with possibly different price rates from multiple Cloud providers, (2) distributing BoT execution across multiple, distributed resources, and (3) coordinating self-interested Cloud participants. This paper proposes a novel agent-based Cloud BoT execution tool (CloudAgent) supported by a 4-stage agent-based protocol capable of dynamically coordinating autonomous Cloud participants to concurrently execute BoTs in multiple Clouds in a parallel manner. CloudAgent is endowed with an autonomous agent-based resource provisioning system supported by the contract net protocol to dynamically allocate resources based on hourly cost rates from multiple Cloud providers. In addition, CloudAgent is also equipped with an agent-based resource deallocation system that autonomously and dynamically deallocates resources assigned to BoT executions. Empirical results show that CloudAgent can efficiently handle concurrent BoT executions, bear low BoT execution costs, and scale effectively

Abstract:

Cloud data centers are generally composed of heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially different specifications and fluctuating resource usages. This may cause a resource usage imbalance within servers that may result in performance degradation and violations to service level agreements. This work proposes a collaborative agent-based problem solving technique capable of balancing workloads across commodity, heterogeneous servers by making use of VM live migration. The agents are endowed with (i) migration heuristics to determine which VMs should be migrated and their destination hosts, (ii) migration policies to decide when VMs should be migrated, (iii) VM acceptance policies to determine which VMs should be hosted, and (iv) front-end load balancing heuristics. The results show that agents, through autonomous and dynamic collaboration, can efficiently balance loads in a distributed manner outperforming centralized approaches with a performance comparable to commercial solutions, namely Red Hat, while migrating fewer VMs

Abstract:

Cognitive computing is a multidisciplinary field of research aiming at devising computational models and decision-making mechanisms based on the neurobiological processes of the brain, cognitive sciences, and psychology. The objective of cognitive computational models is to endow computer systems with the faculties of knowing, thinking, and feeling. The major contributions of this survey include (i) giving insights into cognitive computing by listing and describing its definitions, related fields, and terms; (ii) classifying current research on cognitive computing according to its objectives; (iii) presenting a concise review of cognitive computing approaches; and (iv) identifying the open research issues in the area of cognitive computing

Abstract:

Load management in cloud data centers must take into account 1) hardware diversity of hosts, 2) heterogeneous user requirements, 3) volatile resource usage profiles of virtual machines (VMs), 4) fluctuating load patterns, and 5) energy consumption. This work proposes distributed problem solving techniques for load management in data centers supported by VM live migration. Collaborative agents are endowed with a load balancing protocol and an energy-aware consolidation protocol to balance and consolidate heterogeneous loads in a distributed manner while reducing energy consumption costs. Agents are provided with 1) policies for deciding when to migrate VMs, 2) a set of heuristics for selecting the VMs to be migrated, 3) a set of host selection heuristics for determining where to migrate VMs, and 4) policies for determining when to turn off/on hosts. This paper also proposes a novel load balancing heuristic that migrates the VMs causing the largest resource usage imbalance from overloaded hosts to underutilized hosts whose resource usage imbalances are reduced the most by hosting the VMs. Empirical results show that agents adopting the distributed problem solving techniques are efficient and effective in balancing data centers, consolidating heterogeneous loads, and carrying out energy-aware server consolidation
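
A minimal sketch of an imbalance-driven migration heuristic of the kind described above (the data structures, the three resource dimensions, and the overload threshold are illustrative assumptions, not the paper's implementation):

    import numpy as np

    def imbalance(usages):
        # Resource usage imbalance of a host: standard deviation across the
        # resource dimensions (e.g., CPU, memory, network) of the summed usage.
        total = np.sum(usages, axis=0) if usages else np.zeros(3)
        return float(np.std(total))

    def pick_migration(hosts, overload=0.9):
        # hosts: dict host_id -> list of per-VM usage vectors (arrays of shape (3,)),
        # expressed as fractions of the host's capacity in each dimension.
        # Returns (vm_index, source, destination) or None if no host is overloaded.
        best, choice = 0.0, None
        for src, vms in hosts.items():
            if not vms or float(np.max(np.sum(vms, axis=0))) < overload:
                continue  # migrate only away from overloaded hosts
            for i, vm in enumerate(vms):
                remaining = vms[:i] + vms[i + 1:]
                for dst, dvms in hosts.items():
                    if dst == src:
                        continue
                    # Gain: imbalance reduction at the source plus at the destination.
                    gain = (imbalance(vms) - imbalance(remaining)) + \
                           (imbalance(dvms) - imbalance(dvms + [vm]))
                    if gain > best:
                        best, choice = gain, (i, src, dst)
        return choice

In the setting summarized above, agents would apply such a selection rule locally and collaboratively, additionally weighing migration policies and energy consumption costs.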

Abstract:

Public perception of safety from crime and actual crime statistics are often mismatched. Perception of safety from crime is a social phenomenon determined and affected by (i) the mass media broadcasting news dominated by violent content, and (ii) the structural composition of the society, e.g., its socioeconomic characteristics. This paper proposes an agent-based simulation framework to analyze and study public perception of safety from crime and the effects of the mass media on safety perception. Agent-based models for (i) information sources, i.e., mass media outlets, and (ii) citizens are proposed. In addition, social interaction (and its influence on the perception of safety) is modeled by providing citizen agents with a network of acquaintances to/from which citizen agents may transmit/receive crime-related news. Experimental results show the feasibility of simulating perception of safety from crime by obtaining simulation results consistent with generally known and accepted macro-level patterns of safety perception

Abstract:

Service composition in multi-Cloud environments must coordinate self-interested participants, automate service selection, (re)configure distributed services, and deal with incomplete information about Cloud providers and their services. This work proposes an agent-based approach to compose services in multi-Cloud environments for different types of Cloud services: one-time virtualized services, e.g., processing a rendering job, persistent virtualized services, e.g., infrastructure-as-a-service scenarios, vertical services, e.g., integrating homogeneous services, and horizontal services, e.g., integrating heterogeneous services. Agents are endowed with a semi-recursive contract net protocol and service capability tables (information catalogs about Cloud participants) to compose services based on consumer requirements. Empirical results obtained from an agent-based testbed show that agents in this work can: successfully compose services to satisfy service requirements, autonomously select services based on dynamic fees, effectively cope with constantly changing consumers' service needs that trigger updates, and compose services in multiple Clouds even with incomplete information about Cloud participants

Abstract:

The effects of crime are diverse and complex, ranging from psychological and physical traumas faced by crime victims, to negative impacts on the economy of a whole nation. In this paper, an agent-based crime simulation framework to analyze crime and its causes is proposed and implemented. The agent-based simulation framework models and simulates both 1) crime events as a consequence of a set of interrelated social and individual-level crime factors, and 2) crime opportunities, i.e., combinations of circumstances that enable a person to commit a crime. The selection of crime factors and design of agent models are supported by, and based on, existing criminological literature. In addition, the simulation results are validated and compared with macrolevel crime patterns reported by various criminological research efforts

Abstract:

Cloud data centers are networked server farms commonly composed of heterogeneous servers with a wide variety of computing capacities. Virtualization technology, in Cloud data centers, has improved server utilization and server consolidation. However, virtual machines may require unbalanced levels of computing resources (e.g., a virtual machine running a compute-intensive application with low memory requirements) causing resource usage imbalances within physical servers. In this paper, an agent-based distributed approach capable of balancing different types of workloads (e.g., memory workload) by using virtual machine live migration is proposed. Agents acting as server managers are equipped with 1) a collaborative workload balancing protocol, and 2) a set of workload balancing policies (e.g., resource usage migration thresholds and virtual machine migration heuristics) to simultaneously consider both server heterogeneity and virtual machine heterogeneity. The experimental results show that policy-based workload balancing is effectively achieved despite dealing with server heterogeneity and heterogeneous workloads

Abstract:

Newman believes that it is possible, through the analysis of the phenomena of conscience, to form an image (not a concept) of God. For this reason, he considers conscience to be the starting point of religion. Some scholars have seen in these analyses an argument for the existence of God, a reading that is problematized here. Nevertheless, Newman's reflections anticipate current issues in the philosophy of religion

Abstract:

The concept of worldview was one of the most important philosophical concepts of the 19th century. Although Newman does not use the word Weltanschauung or Worldview, this article aims to show two things: first, that what the term 'worldview' names is present in the life and work of the English cardinal; and second, that the criteria or notes for the genuine development of doctrines that he proposes in his Essay on the Development of Christian Doctrine can be taken up for a rationally responsible worldview, as a significant contribution to today's philosophy of religion

Abstract:

Starting from the similarity between Newman's vital context and the present one, Newman is proposed as a precursor of the term worldview, and its various aspects are analyzed and illustrated: vital experience and its core (the fundamental principles). Newman considers it important to distinguish between a "notional" worldview that exists only in people's minds and a personal, living, and critical worldview that permeates the whole of human life

Abstract:

This article has two aims: first, to gain a deeper understanding of the spirit of revenge as a concept exclusive to Thus Spoke Zarathustra, since it refers directly to time and to the eternal recurrence of the same. Building on this understanding, Heidegger's interpretation of the spirit of revenge is reviewed and criticized, together with the question of its overcoming

Abstract:

Roberto Calasso referred to René Girard as "the hedgehog that knows only one thing, but an important one," since the entire work of the French thinker revolves around one basic idea: mimetic desire. From it, Girard derived the scapegoat mechanism and the founding murder to explain the relationship between violence and the sacred, a relationship that lies at the basis of the origin and development of cultures and societies. This allowed him to grasp the social and anthropological relevance of the Judeo-Christian revelation. His thought thus moved among literature, myth, religion, anthropology, and philosophy. The implications of his work brought him into contact with an impressive series of thinkers: Hobbes, Rousseau, Kant, Hegel, Nietzsche, Heidegger, Derrida, Lévi-Strauss, Freud, not to mention great writers such as Cervantes, Shakespeare, Flaubert, Stendhal, Proust, Dostoevsky. For all this, Michel Serres called him "the new Darwin of culture," Jean Marie Domenach "the Hegel of Christianity," and Pierre Chaunu "the Albert Einstein of the human sciences," while Paul Ricoeur remarked that his influence in the 21st century would equal that of Marx and Freud in the 19th

Abstract:

This article presents and compares the metaphysical doctrines of two philosophers belonging to the school of thought known as transcendental Thomism: Bernard Lonergan and Emerich Coreth. The concept, the method, and the starting point of metaphysics in the two authors are highlighted. Both attempt to go beyond Kant in the application of the transcendental method and to show being as the ultimate condition of possibility of every human cognitive achievement

Abstract:

Nietzsche and Heidegger share the same starting point: nihilism. From it, their critiques also coincide on fundamental concepts of the Western philosophical tradition such as knowledge, truth, the subject, and time. In turn, both propose non-metaphysical alternatives for thinking about Being, for which reason they can be considered precursors of postmodernity

Abstract:

This article presents the general features of Cardinal John Henry Newman's idea of the university, emphasizing his concept of liberal education as a middle course between confessional education and merely pragmatic education. Liberal education is free and entails the formation of the intellect, so that it seeks the truth at all times and realizes it. The article concludes by pointing out some similarities between Newman's thought and that of ITAM's General Studies Department

Abstract:

In this text, the author summarizes what he considers to be Hegel's most notable contribution. From The Phenomenology of Spirit to The Science of Logic, he traces the course of Hegelian thought up to the consummation of metaphysics: with the logic, it returns, in the absolute idea, to the pure immediacy of being

Abstract:

Paragraph 65 of Being and Time is entitled "Temporality as the Ontological Meaning of Care." The aim of this article is to provide elements for understanding each of the concepts contained in that title: care, meaning, and temporality

Abstract:

The main objective of this paper is to bring into dialogue, with a view to complementation rather than confrontation, the Habermasian theory of deliberative democracy and René Girard's mimetic theory. Despite evident differences, Girardian theory would help to raise awareness of the "mimetic interferences" that intervene in all human coexistence and in all rational argumentation, while Habermasian theory would lend sufficient real-world concreteness to the project of conversion (the gradual overcoming of the violence that creates victims) that follows from the Girardian analysis

Abstract:

It is well-known that maximizing the Shannon entropy gives rise to an exponential family of distributions. On the other hand, some Bayesian predictive distributions, derived from exponential family sampling models with standard conjugate priors on the canonical parameter, maximize a generalized entropy indexed by a parameter; as this parameter tends to infinity, the generalized entropy converges to the usual Shannon entropy, while the predictive distribution converges to its corresponding sampling model. The aim of this paper is to study this type of connection between generalized entropies based on a certain family of α-divergences and the class of predictive distributions mentioned above. We discuss two important examples in some detail, and argue that similar results must also hold for other exponential families
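
For context, the classical result referred to in the opening sentence can be stated as follows (standard notation): among densities p satisfying the moment constraints \int p(x) T_j(x)\,dx = t_j, j = 1, \dots, k, the maximizer of the Shannon entropy -\int p(x) \log p(x)\,dx has the exponential family form

    p(x) \propto \exp\Big( \sum_{j=1}^{k} \eta_j T_j(x) \Big)

where the \eta_j are Lagrange multipliers chosen so that the constraints hold.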

Abstract:

Bayesian methods for the analysis of categorical data use the same classes of models as the classical approach. However, Bayesian analyses can be more informative and may provide more natural solutions in certain situations such as those involving sparse or missing data or unidentifiable parameters. In this article, we review some of the most common Bayesian methods for categorical data. We focus on the analysis of contingency tables, but several other useful models are also discussed. For ease of exposition, we describe most of the ideas in terms of two-way contingency tables
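
As a textbook instance of the conjugate analyses reviewed there (our example, not a formula quoted from the article), for a contingency table flattened into cell counts n = (n_1, \dots, n_k) with multinomial probabilities \theta and prior \theta \sim \text{Dirichlet}(\alpha_1, \dots, \alpha_k), the posterior is

    \theta \mid n \sim \text{Dirichlet}(\alpha_1 + n_1, \ldots, \alpha_k + n_k), \qquad E[\theta_j \mid n] = \frac{\alpha_j + n_j}{\sum_i (\alpha_i + n_i)}

which smooths the observed proportions toward the prior and remains well defined for sparse tables with empty cells.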

Abstract:

This paper presents a Bayesian nonparametric approach to the analysis of two different types of nonhomogeneous mixed Poisson processes. The unknown mean function is modelled a priori as a process with independent increments and the corresponding posteriors are derived. Posterior inferences are carried out via a Gibbs sampling scheme

Abstract:

The purpose of many reliability studies is to decide the warranty length. However, a review of the classic literature on reliability reflects that it is common practice to recommend only the use of low quantiles of the estimated distribution of time to failure. We propose a methodology that takes into account the following aspects: the reliability of the product, the consumer's appreciation of the competitiveness of the warranty scheme, the effect on the image of the company when the product fails within the warranty period, and the costs that the manufacturer incurs to fulfill the warranty. The approach is based on the formulation of a utility function. Regarding the data structure, we contemplate complete sampling or censoring: type I, type II, and random. We illustrate the methodology with the determination of the warranty length of brake linings

Abstract:

In this paper, we propose a comprehensive methodology to specify prior distributions for commonly used models in reliability. The methodology is based on characteristics that are easy for the user to communicate in terms of time to failure. This information could be in the form of intervals for the mean and standard deviation, or quantiles for the failure-time distribution. The derivation of the prior distribution is done for two families of proper initial distributions, namely normal-gamma and uniform distributions. We show the implementation of the proposed method for the parameters of the normal, lognormal, extreme value, Weibull, and exponential models. Then we show the application of the procedure to two examples appearing in the reliability literature, [26] and [28]. By estimating the prior predictive density, we find that the proposed method renders consistent distributions for the different models that fulfill the required characteristics for the time to failure. This feature is particularly important in the application of the Bayesian approach to different inference problems in reliability, model selection being an important example. The method is general, and hence it may be extended to other models not mentioned in this paper
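
A minimal sketch of how elicited failure-time quantiles can be turned into parameters of one of the models named above, here the lognormal (the quantile levels and the numbers in the example are illustrative assumptions, not values from the paper):

    from math import log
    from scipy.stats import norm

    def lognormal_from_quantiles(t_lo, t_hi, p_lo=0.05, p_hi=0.95):
        # Solve P(T <= t_lo) = p_lo and P(T <= t_hi) = p_hi for a lognormal T,
        # i.e., log T ~ Normal(mu, sigma**2).
        z_lo, z_hi = norm.ppf(p_lo), norm.ppf(p_hi)
        sigma = (log(t_hi) - log(t_lo)) / (z_hi - z_lo)
        mu = log(t_lo) - z_lo * sigma
        return mu, sigma

    # Example: an expert believes 5% of units fail before 1,000 hours
    # and 95% before 20,000 hours.
    mu, sigma = lognormal_from_quantiles(1_000, 20_000)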

Resumen:

A pesar de que el neoliberalismo ha sido el sistema económico dominante y la ideología hegemónica en América Latina durante los últimos cuarenta años, son pocos los estudios que abordan su relación con la literatura latinoamericana; de allí el interés de estudiar El traductor de Salvador Benesdra. Para tal fin, el análisis se apoya en los críticos que han historiado las reformas neoliberales (Harvey, Escalante Gonzalbo), en los que han cuestionado sus fundamentos ideológicos (Ahmed, Laval y Dardot) y, también, en los que se han ocupado de la obra de Benesdra (Avaro, Vitagliano). El resultado es una lectura de cómo Benesdra, durante la década de 1990, realiza una representación del nuevo orden neoliberal en diferentes ámbitos sociales (el laboral, el afectivo, el político, el cultural) y una reflexión respecto a la crítica que articula sobre este tema. En última instancia, también se muestran los vasos comunicantes existentes entre el neoliberalismo y la literatura latinoamericana

Abstract:

The study of El traductor [The Translator] focuses the relationship between Latin Ame-rican neoliberalism and literature. Nevertheless, neoliberalism as the main economic and political hegemonic system in the region since 1940's, it has not been studied in connection with literature. In this paper, the History of neoliberalism reforms are considered where it has been criticized their ideological support. At the same time, it is exposed the patterns pre-sented in Benesdra work related to Latin American neoliberalism. The result is an approach of how the author represented the power of neoliberalism in different scopes like social fra-meworks, labour sphere, emotional scenarios, and political and cultural contexts. Additiona-lly, it is outlining Brenesda's critical view of neoliberalism

Abstract:

The objective of this article is to identify the archive on which a good number of contemporary Latin American novels are based, in order to understand the discourses from which they are constructed and which also intervene in reality. On this basis, starting from the underlying discourses that take part in literature and beyond it, the triad archive-novel-reality is proposed as a way of interpreting the Latin American present. To that end, building on the theory postulated by González Echevarría in Myth and Archive (1990), a corpus was formed of Latin American novels published in the 21st century whose writing is based on a prior discourse recorded in an archive. The starting hypothesis is that González Echevarría's theory proves productive not only for explaining the Latin American novel analyzed in his study, but also for the novel that was yet to come, that is, the novel of the present century

Abstract:

The objective of this article is to identify the archive on which six contemporary Latin American novels are based, in order to analyze the discourses from which they are built, and which also take part in reality. In this way, based on the underlying discourses that participate in and outside literature, we propose the triad archive-novel-reality as a form of interpretation of the Latin American present. To that effect, based on the theory postulated by González Echevarría in Myth and Archive (1990), we formed a corpus of Latin American novels published in the 21st century whose writing is based on a prior discourse recorded in an archive. We postulate the hypothesis that González Echevarría's theory is productive not only for explaining the corpus analyzed in his study, but also for the novel that was yet to come: that of the present century

Abstract:

Based on a reading of Volverse Palestina (2013) by Lina Meruane, Poste restante (2016) by Cynthia Rimsky and Destinos errantes (2016) by Andrea Jeftanovic, we propose a new model of the contemporary Latin American travel chronicle: the trip to the root. Our starting hypothesis is that, by narrating the journey to the writers' distant and diffuse family origins, the three texts move away from the textual families that have predominated so far and inaugurate a new genealogy. In this sense, the objective of this work is to describe their poetics by contrasting their proposals with the traditional characteristics of the genre. The result allows us to reflect on immigration in Latin America from a new perspective and confirms the malleability of a genre in constant change

Abstract:

From a reading of Volverse Palestina (2013) by Lina Meruane, Poste restante (2016) by Cynthia Rimsky and Destinos errantes (2016) by Andrea Jeftanovic, we propose a new model of the Latin American travel chronicle: the trip to the root. Our hypothesis is that, through the narration of the trip toward the distant and diffuse family origins of the three writers, the three accounts move away from the traditional textual families and inaugurate a new genealogy. In this sense, the purpose of this work is to describe their poetics and to contrast their proposals against the traditional characteristics of the genre. The result allows us to think about immigration in Latin America from a new point of view and confirms the malleability of a constantly changing genre

Abstract:

Despite having been ignored by critics, the travel account genre represents a tradition in Mexican literature that goes back to Fray Servando Teresa de Mier and his Memoirs. Naturally, the genre has had to adapt to the social and aesthetic context of each era; the same happens in the Mexican twenty-first century, in which, against all odds, the travel account enjoys vitality. The purpose of the present work is, starting from Alburquerque's (2006) and Rubio's (2011) definitions of the travel account, to identify the main accounts written in Mexico in the present century and then to study their peculiarities, with the ultimate goal of proposing a poetics of the genre in contemporary Mexican literature

Abstract:

Despite having been ignored by critics, the ‘travel account’ genre represents a tradition in Mexican literature that goes back to Fray Servando Teresa de Mier and his Memoirs. Naturally, the genre has had to adapt to the social and aesthetic context of each era. This also happens in twenty-first-century Mexico, in which, against all odds, the travel account shows vitality. The purpose of the present work is, starting from Alburquerque's (2006) and Rubio's (2011) definitions of the travel account, to identify the main accounts written in this century in Mexico, and to study their peculiarities, with the ultimate goal of proposing a poetics of the genre in contemporary Mexican literature

Abstract:

Never in its history had Germany experienced such devastation as that left in its cities by the aerial bombings at the end of the Second World War. Despite its magnitude, there are practically no sources that speak of this disaster, and its absence from German historical memory is disquieting. Thinkers such as Enzensberger (2013) and Sebald (2003) have formulated answers that explain it. Building on their reflections, this article reads two testimonies of this period, Ningún lugar adonde ir [I Had Nowhere to Go], by the Lithuanian Jonas Mekas, and El daño oculto [The Hidden Damage], by the Irishman James Stern, with the objective of probing the nature of that silence, under the hypothesis that while both authors contemplated the destruction, they also witnessed the construction of a deliberate oblivion

Abstract:

Germany had never experienced such devastation as that left in its cities by the air raids at the end of the Second World War. Despite its magnitude, there are almost no documentary sources that deal with this disaster, and its absence from German historical memory is disturbing. Thinkers such as Enzensberger (2013) and Sebald (2003) have built theories that analyze this phenomenon. Drawing on their reflections, the present paper reads two testimonies of this period, I Had Nowhere to Go, by the Lithuanian artist Jonas Mekas, and The Hidden Damage, by the Irish writer James Stern, with the purpose of understanding this silence, under the hypothesis that as both authors contemplated the destruction, they also witnessed the construction of a deliberate oblivion

Abstract:

Changes in formal institutions do not always affect economic outcomes. When an industry has specific technological features that limit a government's ability to expropriate it, or when the industry is able to call on foreign governments to enforce its de facto property rights, economic agents can easily mitigate changes in formal institutions designed to reduce these property rights. We explore the Mexican oil industry from 1911 to 1929 and demonstrate that informal rather than formal institutions were key, permitting oil companies to coordinate their responses to increases in taxes or the redefinition of their de jure property rights

Abstract:

In this article, the sensorless control problem for a large class of power converters with unknown load conductance is investigated. A reduced-order generalized parameter estimation-based observer (GPEBO) is presented to reconstruct the unmeasurable states and estimate the unknown load conductance of the system. Three notable features of this observer are as follows: finite-time convergence (FTC) is guaranteed, alertness preservation is imposed so that a possibly time-varying load can be estimated, and the required excitation condition is very weak and can be satisfied in normal operation of power converters. Then, replacing the estimated states and parameter, in a certainty-equivalent manner, in a PI passivity-based controller, a sensorless control scheme is proposed to stabilize the system with exponential convergence. By virtue of the FTC property of the GPEBO, global exponential stability of the overall closed-loop system is established. Simulation and experimental results of the proposed controller applied to the boost and Cuk converters are given to assess its effectiveness

Abstract:

Existing results mainly focus on full-information control for converters with a constant power load (CPL); that is, it is assumed that all of the states of the system are measured. There are no theoretical results on an adaptive sensorless control scheme with guaranteed stability for converters feeding a CPL. Note that sensorless control facilitates a decrease in the overall cost and fault rate as well as an increase in the reliability of the system. In this paper, the sensorless control problem for the DC-DC buck converter with unknown CPL is addressed. The main contribution is the design of a reduced-order generalized parameter estimation-based observer that simultaneously reconstructs the unmeasurable inductor current and the unknown power load of the system. Borrowing the dynamic regressor extension and mixing technique, its main idea is to transform state observation into a parameter estimation problem. Moreover, finite-time convergence of the observer is ensured. The buck converter is only regarded as an application example; in fact, the observer can be extended to a large class of converters with CPLs. Then, introducing the observed terms into an existing full-information controller, in a certainty-equivalent manner, an adaptive sensorless control scheme is obtained. Finally, the performance of the designed controller is assessed via simulation and experimental results

Abstract:

We study a two-player investment game with information externalities. Necessary and sufficient conditions for a unique symmetric switching equilibrium are provided. When public news indicates that the investment opportunity is very profitable, too many types are investing early and investments should therefore be taxed. Conversely, any positive investment tax is suboptimally high if the public information is sufficiently unfavorable

Abstract:

This monograph introduces and compares the two leading frameworks for analyzing the adoption and diffusion of innovations - the imitation and threshold models. Imitation models perceive the diffusion process as being driven primarily by communication, whether initiated by the firm or between existing and potential customers; they are particularly useful when aggregate data are available and allow the incorporation of some economic variables. By contrast, the threshold model emphasizes individual micro-economic decision making and explains the differences in the timing of adoption by heterogeneity among individuals or firms, while the dynamic processes of learning affect costs as well as the perceptions of value that drive the diffusion process. The threshold model provides a foundation for using cross-section and panel data to estimate factors that affect differences in adoption patterns, including size, wealth, education, and attitude towards risk. We show how to incorporate multiple marketing tools into both models. We find that the threshold model affords a more refined consideration of risk to optimize the choice of marketing tools, because the threshold model can explicitly incorporate various economic frameworks such as expected utility, loss aversion and disappointment models, the safety-rule approach, and real-option theory. We illustrate how to manage marketing risk reduction tools in this context, including money-back guarantees and demonstrations. Our review suggests that the two models should be treated as complementary rather than as substitutes for each other. Our analysis expands on the analysis and design of marketing tools in promoting diffusion and discusses how to enhance their relevance and effectiveness. It also provides a bridge between marketing tools and the economic analysis of diffusion
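
For readers unfamiliar with the imitation family, the canonical example is the Bass model, in which the adoption rate mixes an innovation coefficient p and an imitation coefficient q. A minimal sketch using the model's closed-form solution, with illustrative parameter values not taken from the monograph:

    import numpy as np

    m, p, q = 1_000_000, 0.03, 0.38   # market potential, innovation, imitation
    t = np.linspace(0, 15, 16)        # years since launch

    # Closed form of the Bass model: cumulative adopters N(t)
    adopters = m * (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))
    for year, n in zip(t.astype(int), adopters):
        print(year, int(n))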

Abstract:

Prospect theory has changed the way economists think about decision making under uncertainty, yet after so many years there have been few applications of the theory, and those mostly in finance. One of the barriers to applying prospect theory is that it is not designed to be applicable (Barberis, 2013). This study applies prospect theory to the selection of money-back guarantee (MBG) contracts. When consumers can choose from a menu of MBG contracts, they are essentially trading off risk against price in a way that resembles a choice among lotteries with multidimensional outcomes. Our application, which integrates reference-based utility models with elements of prospect theory and the disappointment model, helps explain the large premium attached to MBG contracts that cannot be explained within the expected utility framework. We further show that the combination of probability weighting with disappointment aversion appears to explain consumers' high valuation of MBGs better than either element measured separately. We empirically test how consumers' valuation of the MBG option is affected by MBG duration, variation in the likelihood of returns, and return conditions that affect consumers' return cost. Our approach can be applied to model choices of risk reduction mechanisms such as extended warranties, demonstrations, and sampling
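
The mechanics can be made concrete with the standard Tversky-Kahneman (1992) value and probability-weighting functions. The following sketch uses hypothetical numbers and a deliberately simplified two-outcome framing of the MBG choice; it is not the paper's estimated model:

    # Tversky-Kahneman (1992) median parameter estimates
    alpha, lam, gamma = 0.88, 2.25, 0.61

    def v(x):                      # value function: concave gains, convex losses
        return x**alpha if x >= 0 else -lam * (-x)**alpha

    def w(p):                      # inverse-S probability weighting
        return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

    p_fit, surplus, price = 0.8, 40.0, 100.0   # hypothetical purchase lottery
    # Without MBG: product fits (gain the surplus) or not (lose the price)
    no_mbg = w(p_fit) * v(surplus) + w(1 - p_fit) * v(-price)
    # With MBG: the loss on misfit is capped at an assumed small return cost
    with_mbg = w(p_fit) * v(surplus) + w(1 - p_fit) * v(-5.0)
    print(no_mbg, with_mbg)        # the gap suggests the premium an MBG can command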

Abstract:

In this article, we model money-back guarantees (MBGs) as put options. This use of option theory provides retailers with a framework to optimize the price and the return option independently and under various market conditions. This separation of product price and option value enables retailers to offer an unbundled MBG policy, that is, to allow the customer to choose whether to purchase an MBG option with the product or to buy the product without the MBG but at a lower price. The option value of having an MBG is negatively correlated with the likelihood of product fit and with the opportunity to test the product before purchase, and positively correlated with price and contract duration. Simulation of our model reveals that when customers are highly heterogeneous in their product valuation and probability of need-fit, and if return costs are low, an unbundled MBG policy is optimal. When customers have high likelihood of fit or return costs are excessive, no MBG is the best policy. When customers have small variance in product valuation, but vary greatly in likelihood of product fit, the retailer may prefer to offer a bundled MBG contract, extracting consumer surplus by charging a price close to the valuation level
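
As a rough illustration of the option framing the article proposes, a European put can stand in for the return option; the paper's specification of the underlying uncertainty may differ, and all inputs below are hypothetical:

    import numpy as np
    from scipy.stats import norm

    def put_price(S, K, T, r, sigma):
        # Black-Scholes European put: the option to "sell back" at the refund level K
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

    # Uncertain product valuation S, refund level K, 30-day return window T
    print(put_price(S=100.0, K=100.0, T=30 / 365, r=0.02, sigma=0.5))

Pricing the return option separately in this way is what allows the price of the product and the value of the MBG to be set independently, which is the unbundling idea of the article.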

Abstract:

This paper analyzes consumer, retailer and manufacturer preferences for and use of two important risk reduction tools: money-back guarantees and demonstrations. Theoretical findings from economics, marketing, consumer behavior and psychology are integrated to analyze the performance of these mechanisms under various conditions and product characteristics. The paper investigates the relationship between these two risk reduction mechanisms and reveals in which ways the two are complements or substitutes, identifying under which conditions money-back guarantees and demonstrations will be used separately, together, or not at all

Abstract:

Firms use samples to increase the sales of almost all consumable goods, including food, health, and cleaning products. Despite its importance, sampling remains one of the most under-researched areas. There are no theoretical quantitative models of sampling behavior other than the pioneering work of Jain et al. (1995), who modeled sampling as an important factor in the diffusion of new products. In this paper we characterize sampling as having two effects. The first is the change in the probability of a consumer purchasing a product immediately after having sampled the product. The second is an increase in the consumer’s cumulative goodwill formation, which results from sampling the product. This distinction differentiates our model from other models of goodwill, in which firm sales are only a function of the existing goodwill level. We determine the optimal dynamic sampling effort of a firm and examine the factors that affect the sampling decision. We find that although the sampling effort will decline over a product’s life cycle, it may continue in mature products. Another finding is that when we have a positive change in the factors that increase sampling productivity, steady-state goodwill stock and sales will increase, but equilibrium sampling can either increase or decrease. The change in the sampling level is indeterminate because, while increased sampling productivity means that firms have incentives to increase sampling, the increase in the equilibrium goodwill level indirectly reduces the marginal productivity of sampling, thus reducing the incentives to sample. We discuss managerial implications, and how the model can be used to address various circumstances

Abstract:

Among other concerns, construction planning involves the choice of construction technology, the definition of work tasks, the estimation of required resources and durations, the estimation of costs, and the preparation of a project schedule. A prototypical knowledge-intensive expert system to accomplish these tasks, CONSTRUCTION PLANEX, is described in this paper. This system generates project activity networks, cost estimates and schedules, including the definition of activities, specification of precedences, selection of appropriate technologies and estimation of durations and costs. The CONSTRUCTION PLANEX system could be useful as an automated assistant in routine planning, as a laboratory for the analysis and evaluation of planning strategies, and as a component of more extensive construction assistance systems involving design, site layout or project control. The current application of CONSTRUCTION PLANEX is to plan modular high-rise buildings, including excavation, foundation and structural construction

Abstract:

We would like to thank the commentators for their generous comments, valuable insights and helpful suggestions. We begin this response by discussing the selfishness axiom and the importance of the preferences, beliefs, and constraints framework as a way of modeling some of the proximate influences on human behavior. Next, we broaden the discussion to ultimate-level (that is, evolutionary) explanations, where we review and clarify gene-culture coevolutionary theory, and then tackle the possibility that evolutionary approaches that exclude culture might be sufficient to explain the data. Finally, we consider various methodological and epistemological concerns expressed by our commentators

Abstract:

This paper advances an “information goods” theory that explains prestige processes as an emergent product of psychological adaptations that evolved to improve the quality of information acquired via cultural transmission. Natural selection favored social learners who could evaluate potential models and copy the most successful among them. In order to improve the fidelity and comprehensiveness of such rank-biased copying, social learners further evolved dispositions to sycophantically ingratiate themselves with their chosen models, so as to gain close proximity to, and prolonged interaction with, these models. Once common, these dispositions created, at the group level, distributions of deference that new entrants may adaptively exploit to decide whom to begin copying. This generated a preference for models who seem generally “popular.” Building on social exchange theories, we argue that a wider range of phenomena associated with prestige processes can more plausibly be explained by this simple theory than by others, and we test its predictions with data from throughout the social sciences. In addition, we distinguish carefully between dominance (force or force threat) and prestige (freely conferred deference)

Abstract:

In location-based models of price competition, traditional sufficient conditions for existence and uniqueness of an equilibrium (Caplin and Nalebuff in Econometrica 59(1):25-59) are not robust for the firm that serves the right-tail of the consumers' distribution. Interestingly, as we relax these conditions, we observe only two new alternative cases. Moreover, we identify a novel, easily testable condition for uniqueness that is weaker than log-concavity and that can also apply to Mechanism Design. Thanks to this general framework, we can solve the equilibrium of general vertical differentiation models numerically and show that inequality has a U-shaped effect on profits and prices of a high-quality firm. Moreover, we prove that extreme levels of concentration can dissolve natural monopolies and restore competition, contrary to the Uniform case

Abstract:

In this paper, we empirically examine the spreads of syndicated and non-syndicated loans. We compare the spreads from tranches belonging to both types of contracts across deals of similar sizes. Our study of large corporate loans in the US market for the period from 1990 to 2013 shows that the differential between syndicated and non-syndicated spreads is, on average, 19.23 basis points. Moreover, for small and medium loans, the differences are 39.3 and 21.89 basis points, respectively. We use different methodologies and time periods and address endogeneity concerns on the decision of syndication to provide robust empirical evidence that, contrary to some previous studies, syndicated loans are not less expensive than non-syndicated loans and in most cases are significantly more expensive, particularly for small loans

Abstract:

We assume a financial market governed by a diffusion process reverting to a stochastic mean which is itself governed by an unobservable ergodic diffusion, similar to those observed in electricity and other energy markets. We develop a moment method algorithm for the estimation of the parameters of both the observable process and the unobservable stochastic mean. Our approach is contrasted with other methods for parameter estimation of partially observed diffusions, and applications to the modelling of interest rates and commodity prices are discussed
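
A moment-method fit can be illustrated on the simplest member of this model class, an Ornstein-Uhlenbeck process with a constant (rather than stochastic) mean; the paper's algorithm additionally handles the unobservable mean process. A minimal sketch:

    import numpy as np

    # Simulate dX = kappa*(theta - X) dt + sigma dW by Euler-Maruyama
    rng = np.random.default_rng(0)
    kappa, theta, sigma, dt, n = 2.0, 1.0, 0.5, 0.01, 100_000
    x = np.empty(n); x[0] = theta
    for i in range(n - 1):
        x[i + 1] = x[i] + kappa * (theta - x[i]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()

    theta_hat = x.mean()                          # stationary mean
    rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]       # lag-one autocorrelation
    kappa_hat = -np.log(rho1) / dt                # from rho1 = exp(-kappa*dt)
    sigma_hat = np.sqrt(2 * kappa_hat * x.var())  # from Var = sigma^2 / (2*kappa)
    print(theta_hat, kappa_hat, sigma_hat)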

Abstract:

In this paper the consistency and asymptotic normality of maximum-likelihood estimators for a supercritical branching diffusion model are obtained under certain conditions on its drift, variance and reproduction law. We proceed by first studying the limit behavior of the Fisher information measure and related processes, and then verifying the conditions established in Barndorff-Nielsen and Sørensen (Int Stat Rev 62:133–165, 1994). This in turn uses the martingale law of large numbers as well as the martingale central limit theorem

Abstract:

In this paper we consider a sequential trading economy with incomplete financial markets and a finite number of infinitely lived agents. We propose a specification of agents' budget sets and show that such specification features several desirable properties. We then establish the existence of an equilibrium for a regular class of economies

Abstract:

Restricted non-linear approximation is a type of N-term approximation where a measure ν on the index set (rather than the counting measure) is used to control the number of terms in the approximation. We show that embeddings for restricted non-linear approximation spaces in terms of weighted Lorentz sequence spaces are equivalent to Jackson and Bernstein type inequalities, and also to the upper and lower Temlyakov property. As applications we obtain results for wavelet bases in Triebel–Lizorkin spaces by showing the Temlyakov property in this setting. Moreover, new interpolation results for Triebel–Lizorkin and Besov spaces are obtained

Abstract:

We evaluate Mexico's economic development with recent figures and measurements of the various variables that reflect this complex concept. We study the evolution of the population's average standard of living, emphasizing the income generation of its inhabitants. We then account for the country's economic growth as the variable that best explains the population's standard of living, since it implies improving its well-being as a whole. The third topic is the state of income distribution, in order to analyze possible disparities in well-being within the country. Finally, we examine the state of poverty in Mexico as one of the variables that summarizes economic development and that results from the evolution of the preceding variables

Abstract:

Mexico's economic development will be analyzed based on recent data, measuring the diverse factors that reflect this complex phenomenon. First, we will study the evolution of the standard of living, focusing on the income-generating capabilities of the population. Then it will be argued that the best explanatory variable for the country's standard of living is economic growth, since it improves the well-being of the whole population. Consequently, the third topic addressed will be the current state of income distribution, allowing us to analyze the country's disparities in well-being. Finally, we will investigate the current state of poverty in Mexico as one of the factors summarizing economic development and resulting from the evolution of the previous factors

Abstract:

The low open unemployment rate observed in Mexico and the relatively high labour effort experienced by certain groups of people may have the same explanation: many Mexican households do not have enough assets to finance unemployment spells or non-market activities. In this study we show that most people from low-income families, especially women, participate more in the labour market if real wages decrease. We find similar results for the total labour supply equations. We also show that individual labour supply is sensitive to the structure and the economic behaviour of the household. Regarding unemployment, we find that the family helps its members both to avoid unemployment, by directly providing them with jobs and information, and to finance their unemployment spells. Finally, we find that family non-labour income has a positive impact on the accepted wages of unemployed people

Abstract:

Lightning is a natural event that can cause severe human and financial losses. This work introduces a probability risk assessment of the occurrence of cloud-to-ground (CG) lightning in urban and rural areas of Oklahoma. CG lightning, although not the most common type, is the most damaging. Previous studies have reported that urban areas experience an increase in the frequency of CG lightning events during warm months. This increase poses serious threats to urban industries and electronic systems. Lightning strikes are point processes by nature, although this property has not been exploited in previous studies. We utilize a probability model for the spatiotemporal point process of CG lightning to estimate the risk of a CG lightning strike for a particular location and time. The data are discretized into small spatiotemporal cells (voxels), and we then fit a generalized additive model with a complementary log-log link function, using the location and the day of occurrence of the strike as explanatory variables. On the basis of this model, we compared the urban and rural monthly fitted rates of CG lightning strikes. We found that the rate in the rural area is smaller than the rate in the Tulsa metropolitan area during the warm months; however, it is larger than the rate in the Oklahoma metropolitan area during May and June
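
The voxel-level regression can be sketched with a binomial GLM and a complementary log-log link; the paper fits a generalized additive model, so the linear predictor below (with hypothetical column names and simulated data) is only a stand-in:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "strike": rng.binomial(1, 0.05, n),    # 1 if a CG strike hit the voxel
        "lon": rng.uniform(-97.0, -95.0, n),
        "lat": rng.uniform(35.0, 37.0, n),
        "day": rng.integers(1, 366, n),
    })
    df["cos_day"] = np.cos(2 * np.pi * df["day"] / 365)   # seasonal terms
    df["sin_day"] = np.sin(2 * np.pi * df["day"] / 365)

    link = sm.families.links.CLogLog()         # named "cloglog" in older statsmodels
    model = smf.glm("strike ~ lon + lat + cos_day + sin_day",
                    data=df, family=sm.families.Binomial(link=link))
    print(model.fit().summary())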

Abstract:

This research aims to study the construction of the concept of supremum at the higher education level. The theoretical framework used is APOS theory (Dubinsky, 1991) on the construction of mathematical objects. We present a genetic decomposition of the concept, which models the mental constructions that students may carry out in order to understand it. To validate these possible constructions, a questionnaire was designed and administered to mathematics and physics undergraduates at a public university. Some of the answers we present show that the construction process students go through is very complex and that most of them do not construct even an action conception of the concept. The analysis of the students' answers has also been useful for learning about the difficulties students face when they try to prove that a certain number is the supremum of a given set

Abstract:

The main goal of this work is to study how university students construct the concept of supremum. The theoretical framework used in this study is APOS theory (Dubinsky, 1991), which describes the construction of mathematical objects. We present a genetic decomposition of the supremum concept that describes the mental constructions that the authors consider students should perform to understand it. To validate these possible constructions, we designed a questionnaire and administered it to mathematics and physics students at a public university. Some answers show that the students' construction process is very complex, and that most students do not construct even an action conception of this concept. The analysis of the students' answers has also been useful for identifying the difficulties that students face when they try to prove that a number is the supremum of a given set

Abstract:

Objective. To analyze the accuracy of the official figures in light of the available information and to identify opportunities for improvement. Materials and methods. We estimated vaccination coverage and dropout rates (for vaccines administered in multiple doses) of the basic schedule for children under one year of age, based on information from the dynamic cubes of the Ministry of Health for 2015 to 2017. Results. We observed variations in the monthly vaccination reports that indicate low vaccination rates, as well as high dropout rates when comparing first and third doses applied. National full-schedule coverage was estimated at 48.9 percent. Conclusion. There is no reliable information for estimating actual vaccination coverage. Official reports consistently overestimate coverage, which has created a “false sense of security”. This has become a barrier to critical analysis of the Universal Vaccination Program

Abstract:

Objective. To analyze the validity of the official vaccination figures according to the available information and to identify opportunities for improvement. Materials and methods. We estimated vaccination coverage and dropout rates (for multi-dose vaccines) of the basic schedule for children under one year of age, based on public information from the dynamic cubes of the Ministry of Health for the years 2015 to 2017. Results. We observed variations in the monthly vaccination reports, which indicate low vaccination rates, as well as high dropout rates when comparing first and third doses applied. National complete-schedule coverage was estimated at 48.9%. Conclusion. There is no reliable information to estimate actual vaccination coverage. Government documents report a constant overestimation of vaccination coverage that creates a “false sense of security”. This has become a barrier to critical analysis of the Universal Vaccination Program

Abstract:

Higher Education Institutions (HEIs) face the need to manage their resources effectively and efficiently because of the demand for quality educational services from the private sector, government, and society. To measure the technical efficiency of 40 Mexican HEIs, we applied data envelopment analysis (DEA), a methodology for empirically measuring the productive efficiency of decision-making units (DMUs). In this paper, we measure the three core functions of every HEI: teaching, research, and knowledge dissemination. We specifically applied the output-oriented BCC (primal) model, given that the CCR model does not accommodate variable returns to scale. As a result, each HEI is classified as either technically efficient or technically inefficient, and we identify potential improvements for the technically inefficient HEIs. These results can support decision making aimed at improving the performance of the HEIs
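
Each DMU's output-oriented BCC score is obtained by solving one linear program. A minimal sketch on toy data (not the 40-HEI data set of the paper), where phi = 1 indicates technical efficiency and phi > 1 measures the feasible proportional output expansion:

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[5., 8., 6., 9.],      # input 1 (e.g. faculty size) per DMU
                  [3., 7., 4., 6.]])     # input 2 (e.g. budget) per DMU
    Y = np.array([[60., 90., 70., 80.],  # output 1 (e.g. graduates) per DMU
                  [10., 25., 15., 12.]]) # output 2 (e.g. publications) per DMU
    m, n = X.shape; s = Y.shape[0]
    o = 0                                # DMU under evaluation

    # Decision vector: [phi, lambda_1, ..., lambda_n]; maximize phi
    c = np.zeros(n + 1); c[0] = -1.0
    A_ub = np.vstack([
        np.hstack([np.zeros((m, 1)), X]),   # X lambda <= x_o (no extra inputs)
        np.hstack([Y[:, [o]], -Y]),         # phi * y_o <= Y lambda
    ])
    b_ub = np.concatenate([X[:, o], np.zeros(s)])
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)   # sum(lambda) = 1 (VRS)
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=[(0, None)] * (n + 1))
    print("phi =", res.x[0])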

Abstract:

This paper presents the control during take-off, transition, and straight and level flight of a convertible aircraft, based on the gain-scheduling technique. The vehicle motion equations are derived using the Newton-Euler formulation. The lift, drag and pitch moment coefficients are modeled for angle-of-attack variations between minus one hundred eighty and one hundred eighty degrees, based on preliminary wind tunnel tests. The model of the aerodynamic effects caused by the rotor wakes on the wing and horizontal stabilizer is taken into consideration. Numerical simulations are provided to verify the effectiveness of the proposed control strategy

Abstract:

For the regular polygonal relative equilibria on S2, we show that if all the particles are outside of the equator, then they are orbitally stable in a four-dimensional invariant symplectic manifold. For stability in the full space: if the particles are close to the north or south pole, such relative equilibria are spectrally unstable; if they are close to the equator, they are orbitally stable when the number of masses is odd and spectrally unstable when the number of masses is even

Abstract:

Computer vision methodologies using machine learning techniques usually consist of the following phases: pre-processing, segmentation, feature extraction, selection of relevant variables, classification, and evaluation. In this work, a methodology for object recognition is proposed. The methodology is called PSEV-BF (pre-segmentation and enhanced variables for bird features). PSEV-BF includes two new phases compared to traditional computer vision methodologies, namely pre-segmentation and enhancement of variables. Pre-segmentation is performed using the third version of YOLO (you only look once), a convolutional neural network (CNN) architecture designed for object detection. Additionally, a simulated annealing (SA) algorithm is proposed for the selection and enhancement of relevant variables. To test PSEV-BF, the Common Objects in Context (COCO) repository was used, with images exhibiting uncontrolled environments. Finally, the APIoU metric (average precision intersection over union) is used as an evaluation benchmark to compare our methodology with standard configurations. The results show that PSEV-BF has the highest performance in all tests
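
The role simulated annealing plays in the variable-selection phase can be sketched as annealing over a binary feature mask; the objective below is a generic least-squares stand-in on synthetic data, not the APIoU-based criterion of the paper:

    import numpy as np

    rng = np.random.default_rng(2)
    n_features = 30
    X = rng.normal(size=(500, n_features))
    w_true = np.zeros(n_features); w_true[:5] = 1.0     # only 5 informative features
    y = X @ w_true + 0.1 * rng.normal(size=500)

    def score(mask):                 # higher is better: fit quality minus size penalty
        if not mask.any():
            return -np.inf
        Xs = X[:, mask]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        return -(y - Xs @ beta).var() - 0.01 * mask.sum()

    mask = rng.random(n_features) < 0.5
    best, best_score, T = mask.copy(), score(mask), 1.0
    for step in range(2000):
        cand = mask.copy()
        cand[rng.integers(n_features)] ^= True          # flip one feature in/out
        delta = score(cand) - score(mask)
        if delta > 0 or rng.random() < np.exp(delta / T):
            mask = cand                                 # accept (Metropolis rule)
        if score(mask) > best_score:
            best, best_score = mask.copy(), score(mask)
        T *= 0.995                                      # geometric cooling
    print(np.flatnonzero(best))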

Abstract:

Modeling the interplay between immune system components and cancer cells under immunotherapy is the purpose of this work. We present a simple mathematical model of the interaction between tumor cells and the immune system's effector cells. With rigorous mathematical analysis and numerical continuation, we study the generalized Hopf (GH) bifurcation, known as the Bautin bifurcation, in a model that includes the Allee effect and also presents a Hopf bifurcation. The Hopf bifurcation indicates the existence of limit cycles; the Bautin bifurcation implies two limit cycles, and consequently the model reproduces the equilibrium phase of immunoediting theory. Finally, numerical continuation is performed to support the analytical results
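
A model of this family can be simulated directly. The sketch below uses a classical Kuznetsov-type effector-tumor system with illustrative parameters; it is not the paper's exact model (which includes the Allee effect):

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, z, s=0.1, p=1.0, g=1.0, m=0.2, d=0.3, a=1.5, b=0.01, k=1.0):
        E, T = z
        dE = s + p * E * T / (g + T) - m * E * T - d * E   # effector cells
        dT = a * T * (1 - b * T) - k * E * T               # tumor cells
        return [dE, dT]

    sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 10.0], rtol=1e-8)
    print(sol.y[:, -1])   # sustained oscillation of (E, T) would indicate a limit cycle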

Abstract:

Starting from the observation that death was constructed by the media in different ways in two natural catastrophes, the tsunami in Indonesia and Hurricane Katrina in New Orleans, we elaborate a metadiscourse, built from the journalistic discourse, that points back to the philosophical and anthropological foundations preceding the construction of those representations. To this end, philosophy, the anthropology of death and semiotics are brought together at their point of convergence

Abstract:

Starting from the observation that the image of death was constructed by the media in different ways in two natural catastrophes –the cases of the tsunami in Indonesia and Hurricane Katrina in New Orleans–, we attempt to understand the construction of these narratives, so as to shed light upon the philosophical and anthropological foundations that precede these representations. We do so from a complex perspective based on the point of convergence of philosophy, anthropology and semiotics

Abstract:

We develop a dynamic political economy model in which investment in the state capacity to levy taxes and deter crime is a policy variable, and we study the evolution of state capacity when policy is chosen by an elite. We show that democratization in the sense of expansion of the elite leads to an increased investment in state capacity and to a reduction in illegal activities and has nonmonotonic effects on tax rates as it reduces the willingness of the elite to engage in particularistic spending but enhances its willingness to provide public goods. Depending on initial conditions, consensual political changes may lead either to democratization or to the entrenchment of an immovable elite

Abstract:

We analyze the effect of turnout requirements in referenda in the context of a group turnout model. We show that a participation quorum requirement may reduce the turnout so severely that it generates a “quorum paradox”: In equilibrium, the expected turnout exceeds the participation quorum only if this requirement is not imposed. Furthermore, a participation quorum does not necessarily imply a bias for the status quo. We also show that in order to induce a given expected turnout and avoid the quorum paradox, the quorum should be set at a level that is lower than half the target. Finally, we argue that a super majority requirement to overturn the status quo is never equivalent to a participation quorum

Abstract:

We model electoral competition between two parties in a winner-take-all election. Parties choose strategically first their platforms and then their campaign spending under aggregate uncertainty about voters' preferences. We use the model to examine why campaign spending in the United States has increased at the same time that politics has become more polarized. We find that a popular explanation - more accurate targeting of campaign spending - is not consistent with both trends. While accurate targeting may lead to greater spending, it also leads to less polarization. We argue that a better explanation is that voters' preferences have become more volatile from the point of view of parties at the moment of choosing policy positions. This both raises campaign spending and increases polarization. It is also consistent with the observation that voters have become less committed to the two parties

Abstract:

I analyze how an exogenous cost of entry in a risky asset market affects two endogenous variables: the degree of market participation and price volatility. I show that different entry costs generate different participation equilibria and that a multiplicity of equilibria may arise, but that the new market entrants are always more risk-averse than the rest of the participants. Every participation equilibrium is associated with a volatility of the asset price. Increased market participation leads to increased asset price volatility and higher welfare

Abstract:

Support Vector Machines (SVMs) are learning methods useful for solving supervised learning problems such as classification (SVC) and regression (SVR). SVMs are based on statistical learning theory and the minimization of structural risk [1], an enhancement over neural networks such as multi-layer perceptrons. However, their major drawback is the high computational cost of the constrained quadratic problem (QP), combined with the selection of the kernel parameters they involve. Here we discuss ε-SVRVGA, a detailed implementation of SVR that uses the non-traditional Vasconcelos Genetic Algorithm (VGA) [2] as a tool for solving the associated QP along with the tuning of the kernel parameters. This work does not explore the automatic tuning of the regularization parameter C associated with the VC dimension [1] of the SVM, which is considered an open research area. The fitting capability of ε-SVRVGA was tested with one-dimensional time series (TS) data by reconstructing their n-dimensional state space [3] and adding Gaussian noise. Results show that ε-SVRVGA is able to model the TS successfully in spite of the noisy environment, as well as to self-select kernel parameters
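
The idea of evolving kernel parameters can be sketched with a plain genetic algorithm around scikit-learn's SVR; this is far simpler than the Vasconcelos GA of the paper, and C is held fixed, as the paper does:

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    t = np.linspace(0, 8 * np.pi, 400)
    y = np.sin(t) + 0.1 * rng.normal(size=t.size)        # noisy 1-D time series

    lag = 3                                              # state-space reconstruction
    X = np.column_stack([y[i:len(y) - lag + i] for i in range(lag)])
    target = y[lag:]

    def fitness(gamma, eps):
        model = SVR(kernel="rbf", C=10.0, gamma=gamma, epsilon=eps)
        return cross_val_score(model, X, target, cv=3,
                               scoring="neg_mean_squared_error").mean()

    pop = rng.uniform([0.01, 0.001], [5.0, 0.5], size=(20, 2))   # (gamma, epsilon)
    for gen in range(30):
        scores = np.array([fitness(g, e) for g, e in pop])
        parents = pop[np.argsort(scores)[-10:]]                  # truncation selection
        # uniform crossover: gene j of each child comes from a random parent
        children = parents[rng.integers(10, size=(10, 2)), [0, 1]]
        children *= rng.lognormal(0.0, 0.1, size=children.shape) # multiplicative mutation
        pop = np.vstack([parents, np.clip(children, 1e-4, 10.0)])
    print(pop[np.argmax([fitness(g, e) for g, e in pop])])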

Abstract:

In this work, we present a mathematical model to describe the adsorption-diffusion process in fractal porous materials. The model is based on the fractal continuum approach and considers the scale-invariant properties of the surface and volume of the adsorbent particles, which are well represented by their fractal dimensions. The method of lines was used to solve the nonlinear fractal model, and the numerical predictions were compared with experimental data to determine the fractal dimensions through an optimization algorithm. The intraparticle mass flux and the dynamics of the mean square displacement as a function of the fractal dimensions were analyzed; the results suggest that these quantities can potentially be used to characterize intraparticle mass transport processes. The fractal model proved able to predict adsorption-diffusion experiments and can jointly be used to estimate the fractal parameters of porous adsorbents
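
The numerical strategy can be illustrated on the integer-dimension special case: intraparticle diffusion in a sphere, discretized in space and integrated as an ODE system (the method of lines). The fractal exponents of the paper would modify the spatial operator below:

    import numpy as np
    from scipy.integrate import solve_ivp

    N, R, D = 50, 1.0, 1e-3                  # grid points, particle radius, diffusivity
    r = np.linspace(0.0, R, N); dr = r[1] - r[0]

    def rhs(t, c):
        dc = np.zeros_like(c)
        # interior nodes: dc/dt = D * (c'' + (2/r) c'), spherical Laplacian
        dc[1:-1] = D * ((c[2:] - 2 * c[1:-1] + c[:-2]) / dr**2
                        + (2.0 / r[1:-1]) * (c[2:] - c[:-2]) / (2 * dr))
        dc[0] = D * 6 * (c[1] - c[0]) / dr**2   # symmetry condition at r = 0
        dc[-1] = 0.0                            # fixed surface concentration
        return dc

    c0 = np.zeros(N); c0[-1] = 1.0              # initially clean particle, loaded surface
    sol = solve_ivp(rhs, (0.0, 200.0), c0, method="BDF")
    uptake = 3 * np.trapz(sol.y[:, -1] * r**2, r) / R**3   # fractional uptake
    print(uptake)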

Abstract:

In this paper we analyze the diffusion and spillover effects of credit risk among banks within a banking system, using the Mexican financial system as a case study. Our chosen measure of credit risk is the ratio of non-performing loans to total loans at each bank (NPL). For this purpose we build a VAR model that identifies the composition of the variance of the NPL ratios and divide it into two parts: one explained by the coefficients of the VAR model, and another explained by the "contemporaneous error" or "innovations" of each bank in the system. The error in the structural model represents the "news" that disturbs the stable level of risk each period. Our research builds on the spillover index proposed by Diebold and Yilmaz (2009), which indicates the degree to which the aggregate risk of a system is explained by these spillover effects. This method allows us to quantify the long-run contributions of each bank's risk to the rest of the banking system through the diffusion of risk among intermediaries. In addition, the method allows us to identify the relative importance of the spillover effect by gradually increasing the prediction horizon for each bank's NPL ratio. Our estimates for the Mexican banking system between 2002 and 2013 suggest that the spillover index accounts for 15 percent of the variation in total system risk in the short run and almost 40 percent in the long run

Abstract:

We analyze the diffusion and spillover effects of credit risk among banks within a banking system, using the Mexican financial system as a case study. Our proxy to measure credit risk is the non-performing loans ratio (NPL). For this purpose we construct a VAR model to identify the composition of the variance of the NPL ratios, dividing it into two parts: one that is explained by the VAR coefficients, and the other attributed to the contemporaneous "error" or "shocks" of other banks in the system. The error in the structural model represents the "news" that disturbs the stable risk in each period. Our work builds on the spillover index proposed by Diebold and Yilmaz (2009), which indicates the degree to which the overall risk in the system is explained by spillover effects. The method allows us to measure the long-run contributions of each bank's risk to the rest of the banking system through the diffusion of risk between intermediaries. Moreover, we are able to gauge the relative importance of spillover by increasing the length of the prediction periods for each bank's NPL. Our estimations for the Mexican banking system between 2002 and 2013 suggest that the overall spillover effect index accounts for 15 percent of the aggregate risk's observed variation in the short term and almost 40 percent in the long term. The spillover effect explains 32 percent of total risk in the short term and 78 percent in the long term when we control for individual banks' characteristics, even though the total size of risk originated by news in the banks decreases relative to the model without control variables
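
The spillover index can be reproduced mechanically from a VAR forecast-error variance decomposition. A sketch on simulated data (not the actual bank NPL series), using a Cholesky-based FEVD as in Diebold and Yilmaz (2009):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(4)
    npl = pd.DataFrame(rng.normal(size=(300, 4)),       # placeholder NPL ratios
                       columns=["bank_A", "bank_B", "bank_C", "bank_D"])

    res = VAR(npl).fit(maxlags=2)
    fevd = res.fevd(10).decomp[:, -1, :]   # variance shares at horizon 10, rows sum to 1
    k = fevd.shape[0]
    # Spillover index: share of total forecast-error variance due to cross-bank shocks
    index = 100 * (fevd.sum() - np.trace(fevd)) / k
    print(f"spillover index at horizon 10: {index:.1f}%")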

Abstract:

The relationship between monetary policy and the behaviour of financial markets is commonly examined to assess the effectiveness of the actions of central banks. Our study explores, for the case of Mexico, the reaction of short-term interest rate futures to monetary policy announcements, and to what extent the change to the operational targeting of interest rates implemented by the central bank in 2004 led to changes in this reaction. The results show that in a market dominated by institutional investors trading solely for hedging purposes, the actions of the central bank are not fully incorporated into prices in advance. As a result, interest rate futures prices are adjusted on announcement dates. Furthermore, the change to an interest rate operational target modified trading behaviour, so that the changes in the volatility, volume and prices of the futures contracts on announcement dates are now more evident

Abstract:

In this paper we consider some systems of ordinary differential equations which are related to coagulation-fragmentation processes. In particular, we obtain explicit solutions {c_k(t)} of such systems which involve certain coefficients obtained by solving a suitable algebraic recurrence relation. The coefficients are derived in two relevant cases: the high-functionality limit and the Flory-Stockmayer model. The solutions thus obtained are polydisperse (that is, c_k(0) is different from zero for all k ≥ 1) and may exhibit monotonically increasing or decreasing total mass. We also solve a monodisperse case (where c_1(0) is different from zero but c_k(0) is equal to zero for all k ≥ 2) in the high-functionality limit. In contrast to the previous result, the corresponding solution is now shown to display a sol-gel transition when the total initial mass is larger than one, but not when such mass is less than or equal to one
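
For reference, the standard discrete coagulation-fragmentation system that such models specialize (with coagulation rates a_{i,j} and fragmentation rates b_{i,j}; the paper's specific coefficients follow from its recurrence relation) reads, in LaTeX:

    \frac{\mathrm{d}c_k}{\mathrm{d}t}
      = \frac{1}{2}\sum_{j=1}^{k-1}\bigl[a_{j,k-j}\,c_j c_{k-j} - b_{j,k-j}\,c_k\bigr]
        - \sum_{j\ge 1}\bigl[a_{k,j}\,c_k c_j - b_{k,j}\,c_{k+j}\bigr],
      \qquad k \ge 1,

with total mass M(t) = \sum_{k\ge 1} k\,c_k(t). A sol-gel transition corresponds to M(t) beginning to decrease at a finite gelation time, as the abstract describes for monodisperse initial mass larger than one.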

Abstract:

We consider an infinite system of reaction–diffusion equations that models aggregation of particles. Under suitable assumptions on the diffusion coefficients and aggregation rates, we show that this system can be reduced to a scalar equation, for which an explicit self-similar solution is obtained. In addition, pointwise bounds for the solutions of associated initial and initial-boundary value problems are provided

Abstract:

To optimally account for dynamic and nonlinear changes in the stock market return distribution we evaluate competing Markov regime-switching model setups for the Swiss stock market. We find that the stochastic movement is optimally tracked by time-varying first and second moments and including a memory effect. Besides the superior dynamic properties, this setup exhibits appealing economic interpretations
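
A baseline version of such a setup, two regimes with switching mean and variance, can be fitted directly in statsmodels; the memory effect and time-varying moments of the preferred specification go beyond this sketch, and the return series below is simulated:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    calm = rng.normal(0.0005, 0.005, 700)     # low-volatility regime
    crisis = rng.normal(-0.001, 0.02, 300)    # high-volatility regime
    returns = np.concatenate([calm, crisis])

    model = sm.tsa.MarkovRegression(returns, k_regimes=2, switching_variance=True)
    res = model.fit()
    print(res.summary())                      # regime means, variances, transition probs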

Abstract:

Background: To date, the burden of injury in Mexico has not been comprehensively assessed using recent advances in population health research, including those in the Global Burden of Disease Study 2017 (GBD 2017). Methods: We used GBD 2017 for burden of unintentional injury estimates, including transport injuries, for Mexico and each state in Mexico from 1990 to 2017. We examined subnational variation, age patterns, sex differences and time trends for all injury burden metrics. Results: Unintentional injury deaths in Mexico decreased from 45 363 deaths (44 662 to 46 038) in 1990 to 42 702 (41 439 to 43 745) in 2017, while age-standardised mortality rates decreased from 65.2 (64.4 to 66.1) in 1990 to 35.1 (34.1 to 36.0) per 100 000 in 2017. In terms of non-fatal outcomes, there were 3 120 211 (2 879 993 to 3 377 945) new injury cases in 1990, which increased to 5 234 214 (4 812 615 to 5 701 669) new cases of injury in 2017. We estimated 2 761 957 (2 676 267 to 2 859 777) disability-adjusted life years (DALYs) due to injuries in Mexico in 1990 compared with 2 376 952 (2 224 588 to 2 551 004) DALYs in 2017. We found subnational variation in health loss across Mexico’s states, including concentrated burden in Tabasco, Chihuahua and Zacatecas. Conclusions: In Mexico, from 1990 to 2017, mortality due to unintentional injuries has decreased, while non-fatal incident cases have increased. However, unintentional injuries continue to cause considerable mortality and morbidity, with patterns that vary by state, age, sex and year. Future research should focus on targeted interventions to decrease injury burden in high-risk populations

Abstract:

The digital paradigm began some decades ago with the introduction of the microprocessor and the manipulation of information. The progress of science and technology has been remarkable since those years, and owing to these tremendous advances the digital paradigm is in transition today. While the omnipresent importance of information for physical and living systems is not neglected, we claim that recent technological advances have introduced a qualitative and quantitative change in how we handle information, placing these processes at the forefront of human activities. In this paper, we present a renovated and updated overview of the scope of the digital paradigm. We focus on the old ideas that made it possible, but also on the new ones that will mark the road ahead

Abstract:

This article argues that deliberative theory provides an important contribution in the debate about the legitimacy of an Islamic influence within the British education system. The contribution is a timely one, in light of the tendency to view issues involving Islam and Muslims through the distorting prism of Islamophobia. The contribution of deliberative theory is developed and explained through a constructive comparison with two positions within debates about the legitimacy of religion and the accommodation of minority claims within education: ethical liberalism and ethical pluralism

Abstract:

Debate has grown about the legitimacy of Muslim faith schools within the British education system. At the same time, scepticism has developed towards multiculturalism as a normative approach for dealing with diversity. This article argues that it is worth retaining the normative impetus of multiculturalism by returning to its roots in political philosophy. In particular, we can draw on Will Kymlicka's distinction between 'internal restrictions' and 'external protections' as a way to assess the legitimacy of minority claims. Having outlined this distinction, the paper applies it to the case of Muslim schools

Abstract:

It is shown that altruism does not affect the equilibrium provision of public goods although altruism takes the form of unconditional commitment to contribute. The reason is that altruistic contributions completely crowd out selfish voluntary contributions. That is, egoists free ride on altruism. It is also shown that public goods are less likely to be provided in larger groups. The only qualification to our results is when the probability of altruism is so high that it is a dominant strategy for all egoistic players to free ride. In this case, actually, both altruism and the larger group facilitate public good provision

Abstract:

Many common bacterial pathogens have become increasingly resistant to the antibiotics used to treat them. The evidence suggests that the essential cause of the problem is the extensive and often inappropriate use of antibiotics, a practice that encourages the proliferation of resistant mutant strains of bacteria while suppressing the susceptible strains. However, it is not clear to what extent antibiotic use must be reduced to avoid or reverse an epidemic of antibiotic resistance, and how early the interventions must be made to be effective. To investigate these questions, we have developed a small system dynamics model that portrays changes over a period of years to three subsets of a bacterial population: antibiotic-susceptible, intermediately resistant, and highly resistant. The details and continuing refinement of this model are based on a case study of Streptococcus pneumoniae, a leading cause of illness and death worldwide. The paper presents the model's structure and behavior and identifies open questions for future work

Abstract:

This paper describes a test of the null hypothesis that the first K autocorrelations of a covariance stationary time series are zero in the presence of statistical dependence. The test is based on the Box-Pierce Q statistic with bootstrap-based P-values. The bootstrap is implemented using a double blocks-of-blocks procedure with prewhitening. The finite sample performance of the bootstrap Q test is investigated by simulation. In our experiments, the performance is satisfactory for samples of size n = 500. At this sample size, the differences between the empirical and nominal rejection probabilities are essentially eliminated
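
The test statistic and a single-level moving-block bootstrap can be sketched as follows; the paper's double blocks-of-blocks scheme with prewhitening is more elaborate:

    import numpy as np

    def box_pierce(x, K):
        n = len(x); xc = x - x.mean()
        acf = np.array([np.dot(xc[:-k], xc[k:]) for k in range(1, K + 1)]) / np.dot(xc, xc)
        return n * np.sum(acf**2)        # Q = n * sum of squared autocorrelations

    def block_bootstrap(x, block, rng):
        # resample overlapping blocks to preserve short-range dependence
        n = len(x)
        starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
        return np.concatenate([x[s:s + block] for s in starts])[:n]

    rng = np.random.default_rng(6)
    x = rng.standard_normal(500)         # H0: first K autocorrelations are zero
    q_obs = box_pierce(x, K=10)
    q_boot = np.array([box_pierce(block_bootstrap(x, 25, rng), 10) for _ in range(999)])
    print("bootstrap P-value:", (1 + np.sum(q_boot >= q_obs)) / 1000)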

Abstract:

Pre-electoral coalitions (PECs) may increase parties' chances of winning an election, but they may also distort electoral results and policies away from citizens' preferences. To shed light on how PECs shape post-electoral power distribution, we study the causes and consequences of PECs in Finland where elections use an open-list proportional representation system, and parties may form joint lists. We present descriptive evidence showing that PECs are more common between parties of equal size and similar ideology, and when elections are more disproportional or involve more parties. Using difference-in-differences and density discontinuity designs, we illustrate that voters punish coalescing parties and target personal votes strategically within the coalitions, and that PECs are formed with the particular purpose of influencing the distribution of power. PECs increase small parties' chances of acquiring leadership positions, lead to more dispersed seat distributions, and sometimes prevent absolute majorities. They can thus enable a broader representation of citizens' policy preferences

Abstract:

A principal wishes to persuade multiple agents to take a particular action profile. Each agent cares about both a payoff-relevant state and other agents' actions. The principal discloses information about the state to control the agents' behavior by using their strategic uncertainty. We show that for any nondegenerate prior, the principal can persuade the agents to take an action profile as a unique rationalizable outcome if that action profile satisfies a generalization of risk dominance. Moreover, this result remains true even if each of the agents is allowed to strategically choose whether to receive information from the principal or not

Abstract:

Third-party policing (TPP) refers to police efforts to persuade or coerce third parties to take some responsibility for crime control and prevention. The Yakuza Exclusion Ordinances (YEOs) of Japan aim to combat organized crime syndicates, the yakuza. Consistent with the principles of TPP, the YEOs prohibit third parties (i.e., non-yakuza individuals) from providing any benefit to the yakuza. We argue that the effectiveness of the YEOs may depend on the strategic relationship among yakuza syndicates, which choose their power strategically to gain an advantage in the competition with rival syndicates

Abstract:

An Archimedean copula is characterised by its generator. This is a real function whose inverse behaves as a survival function. We propose a semiparametric generator based on a quadratic spline. This is achieved by modelling the first derivative of a hazard rate function, in a survival analysis context, as a piecewise constant function. Convexity of our semiparametric generator is obtained by imposing some simple constraints. The induced semiparametric Archimedean copula produces Kendall’s tau association measure that covers the whole range (−1, 1). Inference on the model is done under a Bayesian approach and for some prior specifications we are able to perform an independence test. Properties of the model are illustrated with a simulation study as well as with a real dataset

Abstract:

An Archimedean copula is characterised by its generator. This is a real function whose inverse behaves as a survival function. We propose a semiparametric generator based on a quadratic spline. This is achieved by modelling the second derivative of a cumulative hazard function, in a survival analysis context, as a piecewise constant function. Convexity of our semiparametric generator is obtained by imposing some simple constraints. The induced semiparametric Archimedean copula produces Kendall's tau association measures that cover the whole range (-1, 1). Inference on the model is done under a Bayesian approach and for some prior specifications we are able to perform an independence test. Properties of the model are illustrated with a simulation study as well as with a real dataset
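
For any Archimedean generator \varphi, Kendall's tau of the induced copula follows from the standard identity \tau = 1 + 4 \int_0^1 \varphi(t)/\varphi'(t)\, dt, which is the quantity a spline-based generator can push through the whole range (-1, 1). A quick numerical check of the identity in Python, using the well-known Clayton generator as a stand-in for the spline generator proposed here:

# Kendall's tau of an Archimedean copula from its generator phi, via the
# standard identity tau = 1 + 4 * int_0^1 phi(t)/phi'(t) dt. The Clayton
# generator is used only as a stand-in for the paper's spline generator.
from scipy.integrate import quad

theta = 2.0
phi = lambda t: (t ** (-theta) - 1.0) / theta    # Clayton generator
dphi = lambda t: -t ** (-theta - 1.0)            # its derivative

integral, _ = quad(lambda t: phi(t) / dphi(t), 0.0, 1.0)
tau = 1.0 + 4.0 * integral
print(tau, theta / (theta + 2.0))                # numeric vs. closed form: both 0.5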

Abstract:

This paper studies the relationship between test scores and cognitive skills using two longitudinal data sets that track student performance in a national standardized exam in grades 6, 9, and 12 and post-secondary school outcomes in Mexico. Exploiting a large sample of twins to control for all between-family differences in school, household, and neighborhood inputs, we find that primary school test scores are a strong predictor of secondary education outcomes. Using a data set that links results in the national standardized test to later outcomes, we find that secondary school test scores predict university enrollment and hourly wages. These results indicate that, despite their limitations, large-scale student assessments can capture the skills they are meant to measure and can therefore be used to monitor student learning in developing countries

Abstract:

Half of the students in low- and middle-income countries fail to achieve minimum learning levels in core subject areas like literacy and numeracy. This learning crisis reduces productivity by close to a third in developing countries. Nobel Prize winners Duflo, Banerjee, and Kremer have produced evidence on the effectiveness of different strategies to address the learning crisis. Experimental evaluations show that teacher incentives created by linking employment contracts to performance and accountability, and face-to-face training strategies focused on specific subjects, are effective strategies to improve student learning. Randomized trials also show that complementing education systems with tutors or computer-assisted learning to make instruction more relevant to the current level of students' competences has a significant impact on learning outcomes, particularly among lagging students

Resumen:

El objetivo de este artículo es demostrar las diferencias tan marcadas que existen entre el acercamiento de Canadá y el de México en torno a la seguridad de Norteamérica; argumenta que México enfrenta una batalla cuesta arriba en el esfuerzo de ser un buen aliado en la lucha contra el terrorismo

Abstract:

This article intends to show the pronounced differences between Canada's and Mexico's approaches regarding North American security. It further argues that Mexico faces an uphill battle in its effort to support the fight against terrorism

Abstract:

This exploratory study proposes a methodology for assessing digital collaboration skills based on the students' actual behavior. Students' comments using a digital collaboration tool during a one-month-long project were manually coded. This methodology is not context specific and can be used across different domains. This assessment contrasts with self-reported measures in which students rate themselves as already possessing collaboration skills. Finally, the study explores the use of generative AI to automatically code students' comments to alleviate the labor-intensive process that coding requires and to enable scalability in coding data

Abstract:

Purpose: This research investigates the introduction of accounting practices into small family businesses, based on socioemotional wealth theory. Design/methodology/approach: A multiple-case study was conducted gathering data through interviews and documents (proprietary and public). The sample included six businesses (five Mexican and one American) from different manufacturing and service industries. Findings: It was found that, although owners control the implementation of accounting practices, others (including family employees, non-family employees and external experts) at times propose practices. The owner’s control can be relaxed, or even eliminated, as the result of proposals from some family employees. However, the degree of influence of family employees is not linked to the closeness of the family relationship, but rather to the owners’ perceived competence of the family employee, indicating an interaction between competence and experience on one side, and family ties on the other. Research limitations/implications: First, the owners chose which documentary data to provide and who was accessible for interviews, potentially biasing findings. Second, the degree of influence family employees can exert might change over time. Third, the study included a limited number of interviews, which can increase the risk of bias. Finally, all firms studied were still managed by the founder. It is possible that small family businesses that have undergone a succession process might incorporate accounting practices differently. Practical implications: Organizations promoting the implementation of managerial accounting practices should be aware that, in addition to the owner, some family employees and external experts could influence business practices. Accountants already providing accounting services to small family businesses are also a good source for proposing managerial accounting practices [...]

Abstract:

We report the results of an experiment that examines the interpretation of probability expressions included in the International Financial Reporting Standards by bicultural individuals. In particular, we examine the impact of a phenomenon known as frame switching, which occurs when the thinking processes of bicultural individuals are influenced by the language they use. We compared the interpretation of participants from two distinct cultures (Americans and Mexicans) and from participants who share both cultures (Mexican-Americans). We find, consistent with the expected effects of frame switching, that the interpretations of probability expressions by Mexican-Americans were influenced by the language in which they read the standards. The responses of Mexican-Americans reading the standards in Spanish tended toward those of Mexicans; whereas the responses of Mexican-Americans reading the standards in English tended toward the interpretations of Americans

Abstract:

This study examines the translation of International Financial Reporting Standards (IFRS) from the official English version into Spanish by Mexican professional accountants. The use of IFRS in languages other than English creates the potential for translation differences that may introduce variation in accounting outcomes when different languages are used. In particular, given the move toward principles-based standards, with the corresponding increase in the proportion of generic phrases, the consistent translation of these terms is likely to become increasingly important. Thirty-eight participants translated (from English to Spanish) a total of 47 phrases excerpted from five different IFRS. Consistent with our hypotheses, we find that translations of accounting-specific phrases have less variation in translation than generic phrases, as exhibited by greater inter-rater agreement and lower relative dispersion

Abstract:

We conducted an experiment to investigate the influence of the framing of reports, the type of decision-aid system, and the cultural background of the decision maker on the intention to investigate fraud. We compared decisions made from reports generated by automated and manual systems to explore whether automated systems exacerbated or ameliorated the framing bias. We also explored whether the cultural background of participants—Americans and Mexicans—influenced the decision. Results indicated that the influences of the type of system and of framing are culturally dependent. When the framing highlights the possibility of the results being incorrect, people take a more cautious approach and the intention to investigate fraud is lower than under the framing that highlights the probability of the results being correct. Automated systems appear to ameliorate the framing bias in the American sample and preserve it in the Mexican sample. The reason for this different impact of automated systems appears to lie in how Americans and Mexicans perceive decision-aid systems. Americans are less likely to trust automated systems and more likely to trust manual systems than Mexicans. Mexicans, on the other hand, rely more on automated systems and rate their reputation higher than Americans do

Abstract:

The situation where a certain type of seed needs to be classified into one of three different categories, according to its germination level, can be formulated as the simultaneous test of three statistical hypotheses. In this paper, the problem of assessing an optimal sample size for the simultaneous test of several hypotheses on a Bernoulli process is studied within the Bayesian framework, and a solution is obtained for a wide class of prior distributions and a logarithmic loss function

Abstract:

Cultural dimensions, as stipulated by different theoretical perspectives such as Hofstede's, are normally not considered when defining student models. These cultural dimensions consist of traits that can be attributed to students and include both cognitive and affective characteristics. Some dimensions reflect how students express affect, which may be useful for predetermining affective models. This research project hypothesizes that students' cultural dimensions may indicate affect tendencies during the use of Intelligent Tutoring Systems (ITS). The methodology consisted of determining students' cultural dimensions and cognitive achievement, and analyzing (self-reported) affective responses when each student used the ITS individually. The results suggested that there are affective behaviors associated with a Hofstede cultural dimension (the Power Distance Index). The implication of these results is that some cultural characteristics may predict students' affective behaviors when employing an ITS for mathematics. Additionally, affect models could be used to predefine affective-cognitive scaffolding

Abstract:

Consonance Analysis is a useful numerical and graphical approach for evaluating the consistency of the measurements and the panel of people involved in sensory evaluation. It makes use of several univariate and multivariate techniques, either graphical or analytical. The paper shows the implementation of this procedure in a graphical interface

Abstract:

Recent amendments to the Montreal Protocol specify a 100% reduction in methyl bromide consumption in all developed countries by the year 2005 (except for "critical uses"). In this paper we consider one possible chemical substitute to methyl bromide, methyl iodide, that until recently has received little attention as a potential alternative. We examine the viability of methyl iodide using information from studies of the economic benefits of methyl bromide to California agricultural producers and of methyl iodide effectiveness in preplant fumigation experiments, and from an assessment of the world market for iodine.

Abstract:

We use a calibrated life-cycle model to evaluate why high income households save as a group a much higher fraction of income than do low income households in US cross-section data. We find that (1) age and relatively permanent earnings differences across households together with the structure of the US social security system are sufficient to replicate this fact, (2) without social security the model economies still produce large differences in saving rates across income groups and (3) purely temporary earnings shocks of the magnitude estimated in US data alter only slightly the saving rates of high and low income households.

Abstract:

How will the distribution of welfare, consumption, and leisure across households be affected by social security reform? This paper addresses this question for social security reforms with a two-tier structure by comparing steady states under a realistic version of the current U.S. system and under the two-tier system. The first tier is a mandatory, defined-contribution pension offering a retirement annuity proportional to the value of taxes paid, whereas the second tier guarantees a minimum retirement income. Our findings, which are summarized in the Introduction, do not in general favor the implementation of pay-as-you-go versions of the two-tier system for the U.S. economy

Abstract:

This paper investigates the one-sector growth model where agents receive idiosyncratic labor endowment shocks and face a borrowing constraint. It is shown that any steady-state capital stock lies strictly above the steady state in the model without idiosyncratic shocks. In addition, the capital stock increases monotonically when it is sufficiently far below a steady state. However, near a steady state there can be non-monotonic economic dynamics

Abstract:

This paper compares the age-wealth distribution produced in life-cycle economies to the corresponding distribution in the US economy. The idea is to calibrate the model economies to match features of the US earnings distribution and then examine the wealth distribution implications of the model economies. The findings are that the calibrated model economies with earnings and lifetime uncertainty can replicate measures of both aggregate wealth and transfer wealth in the US. Furthermore, the model economies produce the US wealth Gini and a significant fraction of the wealth inequality within age groups. However, the model economies produce less than half the fraction of wealth held by the top 1 percent of US households

Abstract:

We test for the presence of market discipline in the banking sector in early-twentieth-century Mexico. Using financial data from note-issuing banks between 1900 and 1910, we examine whether bank fundamentals influenced the patterns of withdrawals and of note issue. We show that fundamentals were a strong determinant of bank withdrawals and note issue, indicating that market discipline was an important feature of the banking system in this period. This result crucially depends on correcting for selection bias generated by the exit of several banks in the 1907 crisis

Abstract:

Empirical evidence suggests that real activity, the volume of bank lending activity, and the volume of trading in equity markets are strongly positively correlated. At the same time, inflation and financial market activity are strongly negatively correlated (in the long run), as are inflation and the real rate of return on equity. Inflation and real activity are also negatively correlated in the long run, particularly for economies with relatively high rates of inflation. We present a monetary growth model in which banks and secondary capital markets play a crucial allocative function. We show that - at least under certain configurations of parameters - the predictions of the model are consistent with these and several other observations about inflation, finance and long-run real activity

Abstract:

We consider a small open economy with a costly state verification problem and binding reserve requirements. The presence of these frictions leads to the existence of two steady states with credit rationing. An increase in the money growth rate, the world interest rate or reserve requirements raises (lowers) GDP in the high (low) activity steady state. However, sufficiently large increases in money growth or the world interest rate can transform the high activity steady state from a sink to a source. The model also delivers prescriptions for restoring the stability of this steady state in such an eventuality.

Abstract:

Least-squares methods enable us to price Bermudan-style options by Monte Carlo simulation. They are based on estimating the option continuation value by least-squares. We show that the Bermudan price is maximized when this continuation value is estimated near the exercise boundary, which is equivalent to implicitly estimating the optimal exercise boundary by using the value-matching condition. Localization is the key difference with respect to global regression methods, but is fundamental for optimal exercise decisions and requires estimation of the continuation value by iterating local least-squares (because we estimate and localize the exercise boundary at the same time). In the numerical example, in agreement with this optimality, the new prices or lower bounds (i) improve upon the prices reported by other methods and (ii) are very close to the associated dual upper bounds. We also study the method's convergence
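
To fix ideas, here is a minimal least-squares Monte Carlo sketch for a Bermudan put under Black-Scholes dynamics, using one global cubic regression per exercise date. The point of the paper is precisely that localizing this regression near the exercise boundary, by iterating local least-squares, yields better exercise decisions; the sketch below omits that localization:

# Minimal Longstaff-Schwartz-style sketch for a Bermudan put under
# Black-Scholes. A single global cubic regression per date is used; the
# paper instead localizes the regression near the exercise boundary.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, m, n = 100.0, 100.0, 0.05, 0.2, 1.0, 10, 100_000
dt = T / m

Z = rng.standard_normal((n, m))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

cash = np.maximum(K - S[:, -1], 0.0)        # exercise value at maturity
for j in range(m - 2, -1, -1):              # backward induction over exercise dates
    cash *= np.exp(-r * dt)                 # discount one period
    itm = (K - S[:, j]) > 0                 # regress on in-the-money paths only
    coef = np.polyfit(S[itm, j], cash[itm], 3)
    cont = np.polyval(coef, S[itm, j])      # estimated continuation value
    exercise = (K - S[itm, j]) > cont
    cash[itm] = np.where(exercise, K - S[itm, j], cash[itm])

price = np.exp(-r * dt) * cash.mean()       # discount from the first exercise date
print(price)                                # low-biased Bermudan put estimate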

Abstract:

This paper introduces a Monte Carlo simulation method for pricing multidimensional American options based on the computation of the optimal exercise frontier. We consider Bermudan options that can be exercised at a finite number of times and compute the optimal exercise frontier recursively. We show that for every date of possible exercise, any single point of the optimal exercise frontier is a fixed point of a simple algorithm. Once the frontier is computed, we use plain vanilla Monte Carlo simulation to price the option and obtain a low-biased estimator. We illustrate the method with applications to several types of options

Abstract:

This paper introduces the application of Monte Carlo simulation technology to the valuation of securities that contain many (buying or selling) rights, of which only a limited number can be exercised per period, and that carry penalties if a minimum quantity is not exercised before maturity. These securities combine the characteristics of American options, with the additional constraint that only a few rights can be exercised per period, so that their price also depends on the number of living rights (i.e., American-Asian-style payoffs), and of forward securities. These securities provide flexibility-of-delivery options and are common in energy markets (e.g., take-or-pay or swing options) and as real options (e.g., the development of a mine). First, we derive a series of properties for the price and the optimal exercise frontier of these securities. Second, we price them by simulation, extending the Ibáñez and Zapatero (2004) method to this problem

Abstract:

This paper presents a detailed analysis of the numerical implementation of the American put option decomposition into an equivalent European option plus an early exercise premium (Kim 1990, Jacka 1991, Carr et al. 1992). It subsequently introduces a new algorithm based upon this decomposition and Richardson extrapolation. This new algorithm is based upon (a) the derivation of the correct order for the error term when applying Richardson extrapolation, which is used to control the error of the extrapolated prices, (b) an innovative adjustment of Kim’s (1990) discrete-time early exercise premiums, so that these premiums converge monotonically and it is therefore appropriate to use them in extrapolation, and (c) a quick computation of the optimal exercise frontier through Newton’s method, permitting the efficient implementation of the decomposition formula in practice. Numerical experiments show that this new algorithm is accurate, efficient, easy to implement, and competitive in comparison with other methods. Finally, it can also be applied to other American exotic securities
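
For reference, in the simplest Black-Scholes setting with no dividends, the decomposition in question takes the form

  P(S_t, t) = p(S_t, t) + \int_t^T r K e^{-r(u - t)} N\big(-d_2(S_t, B_u, u - t)\big)\, du,

where p is the European put price, B_u the optimal exercise boundary, N the standard normal distribution function, and d_2 the usual Black-Scholes argument. The algorithm described above works with a discretization of this integral term (the discrete-time early exercise premiums) and extrapolates in the number of time steps.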

Resumen:

En el proceso de secularización emprendido por la Corona española, esta se enfrentó a la difícil tarea de reducir a las órdenes regulares a la jurisdicción del Ordinario. En este contexto, tuvo lugar en la Puebla de los Ángeles, en la Nueva España, el aparatoso enfrentamiento entre el obispo Palafox y la orden ignaciana

Abstract:

During the secularization process undertaken by the Spanish Crown, the Crown faced the difficult task of subjecting the regular orders to the jurisdiction of the Ordinary. In the midst of this, a dramatic confrontation between Bishop Palafox and the Ignatian order took place in Puebla de los Angeles, New Spain

Abstract:

This paper assesses whether the program of trade liberalization undertaken by Mexico after 1985 was undermined by lack of credibility. It provides an empirical counterpart to some of the credibility issues that have been discussed elsewhere in the literature and proposes a methodology, based on the estimation of a probit model, to measure the probability of trade policy reversal due to the likelihood of occurrence of a balance of payments crisis. It is shown that the probability of trade policy reversal reduced indeed the rate of capital accumulation during the first years of the reform

Abstract:

This standard defines physical layer (PHY) and medium access control (MAC) layer specifications for Wireless Personal Area Networks (WPAN) Peer Aware Communications (PAC) optimized for peer-to-peer and infrastructure-less communications with fully distributed coordination. PAC network features include discovery of peer information without association, discovery signaling rate typically greater than 100 kb/s, discovery of PDs in the neighborhood, scalable data transmission rates, typically up to 10 Mb/s, group communications with simultaneous membership in multiple groups, relative positioning, operation in selected globally available unlicensed/licensed bands below 11 GHz, and security

Resumen:

El presente trabajo pasa revista, si bien de una manera meramente introductoria, a algunos de los temas y problemas cardinales del proyecto filosófico de Edmund Husserl. Desde la teoría del significado presentada en Investigaciones lógicas hasta La crisis de las ciencias europeas y la fenomenología trascendental, y las cuestiones de la clave metódica de la reducción, la constitución del tiempo propio de la vida de conciencia o la constitución del sentido “otro yo” (como yo pero que no soy yo)

Abstract:

The present article reviews, although in a merely introductory way, some of the cardinal issues and problems in the philosophical project of Edmund Husserl: from the theory of meaning presented in the Logical Investigations to The Crisis of European Sciences and Transcendental Phenomenology, passing through issues such as the methodical key of the reduction, the constitution of the time proper to the life of consciousness, and the constitution of the sense “another I” (like me, but that is not me)

Abstract:

We present a platform that monitors and reports the quality of the data services offered by mobile operators. The platform is based on a set of nodes and a mobile application that captures quality of service measurements and other relevant data. This information is sent to collector nodes where it is analysed to generate reports about the quality of the data services. The platform also provides information about the mobile sector, like the market share of mobile operators, smartphone manufacturers and models, and mobile technologies deployed. Since the platform has the potential to grow to a large-scale nation-wide project, the collector nodes are based on Hadoop for the storage and data analysis. We deployed a proof of concept in Mexico with a mobile App for the Android mobile operating system. The QoS servers were deployed on virtual machines on Amazon's AWS cloud. In order to test the capability of the Big Data based collector and BI nodes, we built a discrete-event simulator that generates QoS records with different random distributions, market behaviors and relevant patterns that would be interesting to observe

Abstract:

The simulation of communication networks using a continuous-state, or fluid, modeling approach has been shown to be very efficient for a wide range of performance evaluation scenarios. In this paper we present a fluid model of the RED active queue management algorithm for FluidSim, a simulation tool based on the fluid paradigm. We compare the behavior and efficiency of our model against the results provided by NS, a well-known packet-based network simulator. The proposed model captures RED's characteristics with acceptable precision, providing good accelerations in typical network evaluation configurations

Abstract:

When we analyze the performance of a packet communication network using a queuing model, it is usual to see packets as tokens and to consider that they remain in service for an amount of time proportional to their lengths (the proportionality constant being the reciprocal of the transmission speed). In this paper we show that this paradigm may misrepresent the dynamics of the system, because of its implications on the way memory is used and, in particular, freed. Simulation and analytical results illustrate our claim. Concerning the latter, we provide modified Pollaczek-Khinchine formulas in the M/GI/1 case where we take into account the usual way memory is handled in a communication node
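
As a baseline for the modification described above, the classical Pollaczek-Khinchine formula for the mean waiting time in the M/GI/1 queue reads

  W = \frac{\lambda\, \mathbb{E}[S^2]}{2\,(1 - \rho)}, \qquad \rho = \lambda\, \mathbb{E}[S] < 1,

with \lambda the arrival rate and S the service time. The paper's modified formulas adjust this baseline to reflect the way memory is actually allocated and freed in a communication node.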

Abstract:

In this paper, we present a tool for the simulation of fluid models of high-speed telecommunication networks. The aim of such a simulator is to evaluate measures which cannot be obtained through standard tools in reasonable time or through analytical approaches. We follow an event-driven approach in which events are associated with rate changes in fluid flows. We show that under some loose restrictions on the sources, this suffices to efficiently simulate the evolution in time of fairly complex models. Some examples illustrate the utilization of this approach and the gain that can be observed over standard simulation tools
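
The event-driven principle is easy to see on a single buffer: between rate-change events the fluid level evolves linearly, so the simulator only needs to process the events themselves. A toy sketch with a hypothetical rate schedule and service rate:

# Sketch of the event-driven fluid principle: between rate-change events the
# buffer content evolves linearly, so only the events need processing.
# The source rate schedule and service rate are hypothetical.
events = [(0.0, 3.0), (2.0, 0.5), (5.0, 4.0), (8.0, 1.0)]  # (time, input rate)
service = 2.0                                              # constant output rate
horizon = 10.0

buf, t_prev, rate = 0.0, 0.0, 0.0
for t, new_rate in events + [(horizon, 0.0)]:
    buf = max(0.0, buf + (rate - service) * (t - t_prev))  # linear evolution between events
    print(f"t={t:4.1f}  buffer={buf:5.2f}")
    t_prev, rate = t, new_rate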

Abstract:

In light of the conflicting findings in previous research on the effect of country-of-origin (COO) on consumer product perceptions, this paper extends previous research by testing a decomposition of the construct (country of product design (COD), assembly (COA) and parts (COP) manufacture) on Mexican and US consumers. Additionally, several moderators are tested for the first time within the framework of this model. Test results on the three products (television, athletic shoes and mountain bike) indicate that design, assembly and parts origins do have different effects on product evaluations, with the COP exhibiting the strongest influence. Furthermore, COO effects vary between US and Mexican consumers, most notably in relation to product design, introducing the issue of whether fashion and functionality of products, which may vary between societies, cause different COO effects. Finally, age exhibited a strikingly different moderating effect in the two countries

Abstract:

This study reports on the evaluation of a crime reporting and investigative interview system, i-recall. i-recall emulates a police officer conducting a cognitive interview (CI). It incorporates CI techniques to enhance witness memory retrieval and leverages natural language processing technology to support understanding of witness narratives. In a controlled user study, i-recall was compared to a non-interactive textbox computer system. Sophomore college students acted as witnesses to a videotaped staged crime and reported what they saw using one of the two alternative reporting methods. Results indicate that i-recall significantly outperformed the textbox system in one of two measures, completeness of report. On average, i-recall elicited 14 percent of the available information from witnesses while the textbox system elicited 5 percent, both with 94 percent accuracy. i-recall is a promising Internet reporting alternative. Future work will evaluate i-recall by comparing it to a human expert cognitive interviewer

Abstract:

We study stochastic choice from lists. All lists present the same set of alternatives albeit in different orders. Faced with a list, the decision maker makes her choice in two stages. In the first stage she searches through the list up to a random depth capacity k. In the second stage she chooses from the first k alternatives using a stochastic choice function over menus. We show that the underlying primitives (depth probabilities and stochastic choice function over menus) are revealed by choice from lists. We characterize the model and two of its special cases. In the first special case the decision maker deterministically chooses the best observed alternative according to a given preference. In the second, the decision maker maximizes random preferences
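
The two-stage process described above is straightforward to simulate, which also shows how list data identify the primitives. Below is a minimal sketch of the first special case (deterministic maximization of a fixed preference after a random search depth); the depth probabilities and preference values are hypothetical:

# Minimal simulation of stochastic choice from a list: draw a search depth k,
# then choose among the first k alternatives. The second stage here maximizes
# a fixed preference (the paper's first special case).
import numpy as np

rng = np.random.default_rng(0)
preference = {"a": 3, "b": 2, "c": 1}       # higher value = more preferred

def choose_from_list(lst, depth_probs):
    k = rng.choice(np.arange(1, len(lst) + 1), p=depth_probs)  # stage 1: random depth
    considered = lst[:k]                                       # stage 2: first k items
    return max(considered, key=preference.get)

depth_probs = [0.2, 0.3, 0.5]
lst = ["c", "a", "b"]
draws = [choose_from_list(lst, depth_probs) for _ in range(10_000)]
# choice frequencies reveal the depth distribution: "c" is chosen iff k = 1
print({x: draws.count(x) / len(draws) for x in set(draws)})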

Resumen:

El autor muestra la inseparable y estrechísima relación entre mundo, pensamiento y lenguaje, y apuesta a que el hombre comprenda que dicha relación es, ante todo, contemplación esperanzada, pero activa, y no afán de dominio y explotación

Abstract:

The author reveals the inseparable and close relationship between world, thinking, and language, and wagers that man will come to understand that such a relationship is, above all, hopeful yet active contemplation, and not something to be dominated and exploited

Abstract:

This paper describes the design and implementation of an alternative communication device implemented as a mobile application for tablets. The application was developed applying user-centered design techniques and allows children between 3 and 12 years old with severe language impairments to improve their communication skills with others. Several prototypes have been developed and evaluated with users. This paper summarizes the results and the advantages over other apps

Resumen:

El presente artículo describe el diseño y la implementación de tres prototipos de un dispositivo de comunicación alternativa implementado como una aplicación móvil para tabletas digitales. La aplicación fue desarrollada utilizando un diseño centrado en el usuario y permite a niños de entre 3 y 12 años con discapacidades severas de lenguaje mejorar su comunicación con otras personas. Se presentan los prototipos desarrollados y algunos resultados de las pruebas de usabilidad realizadas

Resumen:

En este artículo evaluamos empíricamente el efecto de la contaminación del aire y la variación de la temperatura sobre los riesgos de salud de la población en tres municipios de la Zona Metropolitana del Valle de México (ZMVM). Con base en la teoría de la justicia ambiental nos preguntamos si en estos municipios de la ZMVM la asociación entre la concentración de PM10 y la mortalidad depende de las disparidades socioeconómicas de la población. En esta investigación diferimos de lo que habitualmente se ha hecho en otros estudios que establecen la relación de PM10 y la mortalidad al usar un modelo de espacio de estados, en lugar del modelo de regresión de Poisson. El modelo de espacio de estados permite estimar el tamaño de la población en riesgo no observada, su tasa de riesgo, la esperanza de vida de los individuos de esa población y el efecto de los cambios en las condiciones ambientales sobre la esperanza de vida. Nuestros resultados muestran una tasa de riesgo más baja en el municipio de mayor nivel socioeconómico comparada con la tasa más alta del municipio con menor nivel socioeconómico. La menor tasa de riesgo del municipio de mayor nivel socioeconómico incrementa la esperanza de vida y la probabilidad de que sus habitantes permanezcan por más tiempo en la población en riesgo, aumentando de esta forma el tamaño de dicha población, en comparación con el municipio de menor nivel socioeconómico, cuyos habitantes muestran una menor esperanza de vida. Entonces, entre más pequeña sea la población en riesgo, más enfermos estarán sus habitantes y, por tanto, menor será el impacto sobre la mortalidad a largo plazo. Nuestro estudio examina cómo se comportan las disparidades de salud a nivel regional y podría proporcionar información para proponer iniciativas de políticas de salud pública, con el fin de mejorar las condiciones de vida entre las diferentes comunidades

Abstract:

This study explored the nature of health risks in the population of three municipalities within the Metropolitan Area of the Valley of Mexico (MAVM) by means of an empirical analysis of health effects associated with air pollution and temperature variation. Based on environmental justice theory, we asked whether, in socioeconomically unequal municipalities of the MAVM, the association between PM10 concentrations and mortality depends on socioeconomic disparities. We differ from previous studies that have established a relationship between PM10 and mortality in that we use a state-space model instead of the Poisson regression model. The state-space model allows estimating the size of the unobserved at-risk population, its hazard rate, the life expectancy of individuals in that population, and the effect of changes in environmental conditions on that life expectancy. Our results show a lower hazard rate in a wealthy municipality, as compared to a higher hazard rate in a poor one. The lower hazard rate of the wealthy municipality extends life expectancy and makes its inhabitants more likely to remain longer within the population at risk, thus increasing the size of that population, as compared to the population at risk in the poor municipality, whose members show a lower life expectancy. Thus, the smaller the at-risk population, the sicker its average member and the smaller the impact on long-term mortality. Our study examines how health disparities behave at the regional level and could provide information for public health policy initiatives aimed at improving living conditions among different communities

Abstract:

The long-term relationship between government spending and Mexico's economic growth is studied, taking into account any endogenous non-linearity that could arise, in order to verify whether Wagner's law and/or the Keynesian hypothesis holds. Threshold cointegration techniques are used, which provide information on the asymmetry in the adjustment towards the long-term equilibrium. According to our literature review, this is the first investigation that considers non-linearity in the case of Mexico, which we regard as the main contribution of the work. The results show evidence of a non-linear relationship between public spending and economic growth that validates Wagner's law; however, there are periods in which the causal relationship changes in favor of the Keynesian hypothesis, which is why it is possible that both hypotheses alternate

Abstract:

Remittance inflows have been associated with a reduction in the level and severity of poverty. They contribute to higher human capital accumulation, to improved access to formal financial sector services, to enhanced small business investment and to more entrepreneurship. Remittances also play an important role in contributing to the livelihoods of less prosperous people. Considering these facts, this paper proposes a statistical model to forecast remittance flows to Mexico in order to provide information for the design of policies that can help attract remittance inflows and use them productively. Here, we apply a statistical methodology based on the Multi-State Markov-Switching model with three different specifications. The model is applied to the trend of the time series data instead of the original observations with the aim of mitigating the effect of outliers and transitory blips. The filtering technique employed to estimate the trend allows us to control the amount of smoothness in the resulting trend. This method is also useful to take into account an implicit adjustment of the data at both extremes of the time series, thus providing better results than conventional filtering techniques such as the Hodrick-Prescott filter. Thus, the Markov-Switching approach captures more precisely the trend persistence of remittances and enhances both in-sample and out-of-sample forecast performance
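
The trend-then-switch pipeline can be sketched with standard tools, with two stand-ins to be explicit about: the Hodrick-Prescott filter below replaces the paper's filter with controllable smoothness (which the paper argues behaves better at the sample extremes), and statsmodels' two-regime MarkovRegression replaces the three specifications examined in the paper:

# Sketch of the trend-plus-Markov-switching pipeline on simulated data.
# The HP filter and the two-regime MarkovRegression are stand-ins for the
# paper's filter and its three Markov-switching specifications.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240
regime = (np.arange(n) // 60) % 2                       # two alternating growth regimes
y = np.cumsum(np.where(regime == 0, 0.2, 0.8) + 0.5 * rng.standard_normal(n))

cycle, trend = sm.tsa.filters.hpfilter(y, lamb=1600)    # extract the trend
growth = np.diff(trend)                                 # model the trend's growth rate

model = sm.tsa.MarkovRegression(growth, k_regimes=2, switching_variance=True)
res = model.fit()
print(res.summary())                                    # regime-specific mean growth rates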

Resumen:

El estudio prueba la hipótesis de no sesgo de la tasa forward de tipo de cambio para el mercado cambiario mexicano. Se utilizó un modelo no lineal de Markov con cambio de régimen en vez de un modelo de regresión lineal. El modelo identifica dos estados en el comportamiento del tipo de cambio forward: uno en el que la hipótesis nula de eficiencia se sostiene y otro en el que no. Con el modelo lineal la hipótesis se rechaza para ambas tasas forward, 30 y 90 días. Sin embargo, con el modelo de dos estados no es posible rechazar la hipótesis nula para la tasa forward de 30 días en el estado identificado como eficiente, pero se rechaza en el otro estado. En el caso de la tasa de 90 días no se distingue entre los dos estados. Por lo tanto, el modelo no lineal de Markov de dos estados es superior al modelo de regresión lineal para probar la hipótesis de no sesgo del tipo de cambio, la cual es rechazada en periodos de alta incertidumbre económica y política

Abstract:

This study tests the unbiasedness hypothesis of the forward exchange rate for the Mexican exchange market. A nonlinear Markov regime-switching model was used instead of a linear regression model. The model identifies two states in the behavior of the forward exchange rate: one in which the null hypothesis of efficiency holds and another in which it does not. With the linear model the hypothesis is rejected for both forward rates, 30 and 90 days. However, when using the two-state model it is not possible to reject the null hypothesis for the 30-day forward rate in the state identified as efficient, though it is rejected in the other state. For the 90-day forward rate, the two states cannot be distinguished. Therefore, the nonlinear two-state Markov model is superior to the traditional linear regression model for testing the unbiasedness hypothesis of the forward exchange rate, which is rejected in periods of high economic and political uncertainty

Resumen:

Un aspecto fundamental de los países en desarrollo es la existencia de un gran sector informal. En el presente trabajo se analiza el efecto que este rasgo tiene sobre la relación entre las variaciones en el desempleo y el crecimiento de la producción en el caso de México, un país que se caracteriza por la existencia de un gran sector informal. Partiendo de estudios recientes sobre el coeficiente de Okun, en primer lugar, pondremos a prueba si la relación ente los componentes cíclicos de desempleo y producción es asimétrica. A continuación, exploraremos la posibilidad de que esta relación no lineal pueda verse afectada por variaciones en el sector informal. Nuestros resultados apuntan a la existencia de una relación asimétrica entre los componentes cíclicos

Abstract:

A key aspect of developing countries is the existence of a large informal sector. In the present paper, we analyse the effect of this feature on the relationship between unemployment changes and output growth for Mexico, a country characterized by the existence of a large informal sector. Following recent studies on Okun’s coefficient, we first test whether the relationship between the cyclical components of unemployment and output is asymmetric. We then explore the possibility that this non-linear relationship may be affected by changes in the informal sector. Our results indicate that there is evidence of an asymmetric relationship between the cyclical components

Abstract:

Previous studies about the relationship between the cyclical components of Mexico’s output and unemployment suggest that it closely resembles that found in the economy of the United States of America. This would indicate that the dynamics between output and labour markets in the two economies are rather similar. However, these estimates are puzzling, for they do not correspond to the characterization of Mexico’s labour market. Using a methodology first proposed by Clark (1989), we find that the correlation between the transitory components of output and unemployment is much lower than previously thought

Resumen:

Las conclusiones de estudios anteriores sobre la relación entre los componentes cíclicos del producto y el desempleo en México permiten suponer que esta es muy similar a la existente en la economía de los Estados Unidos de América. Ello indicaría que las relaciones dinámicas entre el producto y el mercado de trabajo de las dos economías tienen muchos rasgos en común, lo que es sorprendente porque, de hecho, no concuerda con la caracterización del mercado laboral de México. La aplicación de la metodología propuesta originalmente por Clark (1989) permitió concluir que la correlación entre los componentes transitorios del producto y el desempleo es mucho menor de lo que se creía

Resumen:

En este artículo se analiza el impacto de un choque (shock) de política monetaria en las tasas de desempleo de México. A diferencia de estudios anteriores, en este se volvieron a calcular las tasas de desempleo para compararlas con las de los países de la Organización de Cooperación y Desarrollo Económicos (OCDE). La conclusión es que, ante una política monetaria restrictiva, el desempleo aumenta siguiendo el mismo patrón en forma de U invertida que se observa en otros estudios. Los resultados son robustos con respecto a diferentes supuestos sobre la naturaleza del mercado laboral mexicano

Abstract:

In this paper we analyse the effects of a monetary policy shock on Mexican unemployment rates. Unlike previous studies, this one re-estimates unemployment to produce alternative rates comparable to those of the Organization for Economic Cooperation and Development (OECD) member countries. We find that in response to tightening monetary policy, unemployment increases with a characteristic hump-shaped pattern also found in other studies. Our results are robust to different assumptions about the nature of Mexico's labour market

Abstract:

An intelligent assistant (IA) is a computer system endowed with artificial intelligence and/or machine learning techniques capable of intelligently assisting people. IAs have gained in popularity over recent years due to their usefulness, significant commercial developments, and a myriad of scientific and technological advances resulting from research efforts by the computer science community. In particular, these efforts have led to an increasingly extensive and complex state of the art of IAs, making evident the need to carry out a review in order to identify and catalogue the advances in the construction of IAs as well as to detect potential areas of further research. This paper presents a systematic review aiming to (i) describe, classify, and organize recent advances in IAs; (ii) characterize IAs' objectives, application domains, and workings; (iii) analyze how IAs have been evaluated; and (iv) identify what artificial intelligence and machine learning techniques are used to enable the intelligence of IAs. As a result of this systematic review, a taxonomy of IAs is also proposed, together with a set of potential future research directions to further improve IAs. A set of research questions was formulated to guide this systematic review and address the proposed objectives. Well-known scientific databases were searched for articles on IAs published from January 2015 to June 2021 using 172 search strings. A total of 22,554 articles were retrieved and, after applying inclusion, exclusion, and quality criteria, an overall number of 99 articles were selected, which are the basis for this systematic review

Abstract:

We prove that the closure of the numerical range of a (n+1)-periodic and (2m+1)-banded Toeplitz operator can be expressed as the closure of the convex hull of the uncountable union of numerical ranges of certain symbol matrices. In contrast to the periodic 3-banded (or tridiagonal) case, we show an example of a 2-periodic and 5-banded Toeplitz operator such that the closure of its numerical range is not equal to the numerical range of a single finite matrix
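
In symbols, the expression established here has the shape

  \overline{W(T)} = \overline{\operatorname{conv}\Big( \bigcup_{\theta \in [0, 2\pi)} W\big(A(\theta)\big) \Big)},

where T is the (n+1)-periodic, (2m+1)-banded Toeplitz operator and the A(\theta) are its finite symbol matrices. The example mentioned above shows that, unlike in the tridiagonal case, this closure cannot in general be written as the numerical range of a single finite matrix.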

Abstract:

In this paper we give conditions on a matrix which guarantee that it is similar to a centrosymmetric matrix. We use these conditions to show that some 4 × 4 and 6 × 6 Toeplitz matrices are similar to centrosymmetric matrices. Furthermore, we give conditions for a matrix to be similar to a matrix which has a centrosymmetric principal submatrix, and conditions under which a matrix can be dilated to a matrix similar to a centrosymmetric matrix

Abstract:

In this paper, we compute the closure of the numerical range of certain periodic tridiagonal operators. This is achieved by showing that the closure of the numerical range of such operators can be expressed as the closure of the convex hull of the uncountable union of numerical ranges of certain symbol matrices. For a special case, this result can be improved so that it is the convex hull of the union of the numerical ranges of only two matrices. A conjecture is stated for the general case

Abstract:

In this paper, we prove a conjecture stated by the first two authors, establishing the closure of the numerical range of a certain class of (n + 1)-periodic tridiagonal operators as the convex hull of the numerical ranges of two tridiagonal (n + 1) × (n + 1) matrices. Furthermore, when n + 1 is odd, we show that the size of such matrices simplifies to n/2 + 1

Abstract:

Payments for ecosystem services (PES) programs have been increasingly studied with a policy mix perspective. So far, the focus has been on PES' interplay with other conservation instruments and resulting environmental outcomes at meso- and macrolevels. Though PES often operate among “poor” forest-dwelling communities in the Global South, our knowledge on PES' interactions with poverty alleviation policies is scarce, especially at the microlevel. This article examines PES' interactions—in terms of joint coverage, management, and spending of revenues, and socioeconomic effects of participation—with a conditional cash transfer (CCT) program in a case study of six communities in Selva Lacandona, Chiapas, Mexico. The article builds a dual framework combining policy mix analysis with an actor-oriented approach focused on participants' microagency, and is based on in-depth, qualitative research. Results reveal widespread joint PES and CCT coverage, and patterns of specialization between different household members regarding the management and spending of program revenues. Results also show positive, multilevel policy interactions as participants combine resources to pursue individual and collective socioeconomic strategies. The article highlights the creative ways in which local stakeholders integrate individual policies within their broader livelihoods, and how coordination failures among policy-implementing institutions and deficient public services limit participants' ability to achieve sustained livelihood improvements. The article also highlights how a focus on microlevel policy interactions complements meso- and macrolevel analyses for a better understanding of PES' role in a policy mix and concludes by providing some recommendations for building implementation synergies and improving program design

Abstract:

In this work, we propose a controller based on the well-known Immersion and Invariance technique to solve the stabilization problem of a class of underactuated mechanical systems with 2 degrees of freedom (DOF). We define simple conditions under which the partial differential equations arising in Immersion and Invariance are solved. The class of systems addressed in this work includes underactuated mechanical systems with gyroscopic forces. Our approach is validated through simulation and experimental results

Abstract:

One of the most challenging issues in the adaptive control of robot manipulators with kinematic uncertainties is the requirement of the inverse Jacobian matrix in regressor form. This requirement is inevitable in the control of parallel robots, whose dynamics formulations are derived in the task space. In this paper, an adaptive controller with asymptotic trajectory tracking is proposed for parallel robots, based on a representation of the Jacobian matrix in regressor form. The main idea of this paper is the separation of the determinant and adjugate of the Jacobian matrix in order to represent them in new regressor forms. Simulation and experimental results on a 2-DOF RPR robot and a 3-DOF redundant cable-driven robot, respectively, verify the promising performance of the proposed method in practice

Abstract:

Designing control systems with bounded input is a practical consideration since realizable physical systems are limited by the saturation of actuators. Actuator saturation degrades the performance of the control system, and in extreme cases, the stability of the closed-loop system may be lost. However, actuator saturation is typically neglected in the design of control systems, with compensation being made in the form of overdesigning the actuator or by post-analyzing the resulting system to ensure acceptable performance. The bounded input control of fully actuated mechanical systems has been investigated in multiple studies, but it has not been generalized to underactuated mechanical systems. This article proposes a systematic framework for finding the upper bound of control effort in underactuated systems, based on the interconnection and damping assignment passivity-based control (IDA-PBC) approach. The proposed method also offers design variables for the control law to be tuned, considering the actuator's limit. The primary difficulty in finding the control input upper bounds is the velocity-dependent kinetic energy-related terms. Thus, the upper bound of velocity is computed using a suitable Lyapunov candidate as a function of closed-loop system parameters. The validity and application of the proposed method are investigated in detail through simulations of two benchmark systems

Abstract:

This study explores the possibility of flexibility in the use of information technology. The study examines different viewpoints and definitions of flexibility and tries to reconcile some of these different viewpoints by creating a model for flexibility in information systems. The model addresses two different types of flexibility, efficient versatility and robust responsiveness, that might effectively accommodate two types of changes, foreseen and unforeseen, respectively. Flexibility enablers for the two types of flexibility are proposed to accommodate these changes effectively. A case study examining an information system in a hypercompetitive industry is used to explore the viability of the flexibility enablers for the robust responsiveness flexibility type

Abstract:

Information Systems are needed to support the rapid development, collection and dissemination of market, product and process information in order to effectively respond to quick and unpredictable changes in business conditions (Boynton, 1993). As major changes happen in business, Information Systems have to be redesigned (Wand et al., 1999). Although Information Systems flexibility is considered a valuable asset under conditions of very fast change (Byrd and Turner, 2000; Duncan, 1995), it is difficult to measure because there is no operational construct for it. This paper addresses the problem of IS flexibility in a technological convergence environment, as part of a research project studying it in telecom operators. It starts with a discussion of various aspects related to Information Systems flexibility. The discussion leads to the necessity and convenience of having a framework to evaluate Information Systems’ flexibility. The paper concludes with the steps followed to establish such a framework, as an effort to build an operational tool

Abstract:

A fundamental property of traffic assignment is that cyclic flows from a common origin or to a common destination cannot exist in an equilibrium solution. However, cyclic flows can easily be created by the Frank-Wolfe (F-W) assignment procedure, especially during its first several iterations. The PARTAN technique—a more rapidly converging derivative of the F-W method—can also create cyclic flows during its procedure. We show in this paper that once cyclic flows become part of a combined assignment, they are difficult to correct, thus presenting one impediment to convergence. We then present modifications to the F-W and PARTAN procedures that prevent cyclic flows from being created between adjacent pairs of nodes. The avoidance of cyclic flows in test problems is shown to accelerate the convergence of both the F-W and PARTAN techniques, particularly in the first several iterations. While the impossibility of cyclic flows in a true equilibrium solution is an important property of traffic assignment, this paper shows that (1) the F-W and PARTAN procedures eventually reduce cyclic flows to zero if they occur, (2) avoiding cyclic flows can be most helpful in the early iterations of these procedures, and (3) avoiding cyclic flows in large networks is very difficult because of large computational requirements
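
The F-W loop itself is compact; the sketch below runs it on a toy network of two parallel links with BPR link costs (all numbers hypothetical). The paper's contribution, the modification that prevents cyclic flows between adjacent node pairs, is not shown here:

# Minimal Frank-Wolfe assignment on two parallel links with BPR link costs.
# This illustrates only the basic F-W loop; the paper's cyclic-flow
# prevention for general networks is not implemented.
import numpy as np
from scipy.optimize import minimize_scalar

t0 = np.array([10.0, 20.0])     # free-flow travel times (hypothetical)
cap = np.array([2.0, 4.0])      # link capacities (hypothetical)
demand = 6.0

def cost(v):                    # BPR link travel time
    return t0 * (1.0 + 0.15 * (v / cap) ** 4)

def beckmann(v):                # objective whose minimum is user equilibrium
    return np.sum(t0 * (v + 0.15 * cap * (v / cap) ** 5 / 5.0))

v = np.array([demand, 0.0])     # start from an all-or-nothing assignment
for _ in range(50):
    y = np.zeros(2)
    y[np.argmin(cost(v))] = demand                 # all-or-nothing toward cheapest link
    direction = y - v
    step = minimize_scalar(lambda a: beckmann(v + a * direction),
                           bounds=(0.0, 1.0), method="bounded").x
    v = v + step * direction

print(v, cost(v))               # at equilibrium, used links have equal travel times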

Abstract:

We introduce an autoregressive model for responses that are restricted to lie on the unit interval, with beta-distributed marginals. The model includes strict stationarity as a special case, and is based on the introduction of a series of latent random variables with a simple hierarchical specification that achieves the desired dependence while being amenable to posterior simulation schemes. We discuss the construction, study some of the main properties, and compare it with alternative models using simulated data. We finally illustrate the usage of our proposal by modelling a yearly series of unemployment rates
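
To see the target behaviour, the following sketch simulates a unit-interval series with autoregressive dependence and beta marginals. It uses a Gaussian-copula construction, not the latent-variable hierarchy proposed above, so it is purely illustrative:

# An illustrative series with beta marginals and AR dependence. This is a
# Gaussian-copula construction, NOT the paper's latent-variable model; it
# only illustrates unit-interval AR dynamics with beta marginals.
import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(0)
phi, n, a, b = 0.8, 500, 2.0, 5.0
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):                       # latent Gaussian AR(1), N(0,1) marginals
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
x = beta.ppf(norm.cdf(z), a, b)             # probability integral transform to Beta(a, b)
print(x.mean(), a / (a + b))                # sample vs. theoretical marginal mean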

Abstract:

We study competition between political parties in repeated elections with probabilistic voting. This model entails multiple equilibria, and we focus on cases where political collusion occurs. When parties hold different opinions on some policy, they may take different policy positions that do not coincide with the median voter's preferred policy platform. In contrast, when parties have a mutual understanding on a particular policy, their policy positions may converge (on some dimension) but not to the median voter's preferred policy. That is to say, parties can tacitly collude with one another, despite political competition. Collusion may collapse, for instance, after the entry of a new political party. This model rationalizes patterns in survey data from Sweden, where politicians on different sides of the political spectrum take different positions on economic policy but similar positions on refugee intake, diverging from the average voter's position, but only until the entry of a populist party

Abstract:

We examine the stochastic stability of a process of learning and evolution in a gift-giving game. Overlapping generations of players are randomly matched to play the game. They may consult information systems to learn about the past behavior of their opponents. If the value of the gift is smaller than twice the cost, then gifts are not given. If the value of the gift is more than four times the cost, then gifts are exchanged. Moreover, in the stochastically stable equilibrium, a unique information system is selected to support cooperation

Abstract:

People use media-sharing web sites to document their lives and those of their children, maintaining and strengthening social ties with people living far away. It is clear, then, that people can and like to create narratives as a form of expression. This study presents an analysis of the characteristics and types of baby stories written by young mothers. Nine mothers from Malaysia living in the UK participated in the study. The participants used a variety of media-sharing web sites to prepare and share the narratives. Most of them (seven) used photo-sharing web sites (Fotopages or Flickr), while two used text-based blogs (Blogger). Two of them also uploaded videos of their babies to content-sharing sites (YouTube). Within a period of three months, we identified 166 stories, 94 percent of them focusing on the baby. The stories cover a number of topics such as skills demonstrations, outings, domestic activities, and social events. Based on the analysis of the data and interviews with participants, we found a significant positive correlation between the type of story and the type of media used. The results also show a significant positive relationship between the type of story and the baby’s age

Resumen:

Las teorías de la inteligencia emocional tratan de identificar las competencias emocionales más significativas de cara al éxito personal y profesional, sin embargo, no se centran tanto en establecer niveles de jerarquía entre las mismas. En tal sentido, el objetivo de la investigación es realizar un estudio exploratorio sobre las competencias emocionales con el fin de aportar un modelo: el de la pirámide invertida, que permita su relación y jerarquización para poder predecir el nivel de desarrollo de algunas a partir de los niveles de otras. Para ello se realiza un Panel Delphi con 16 directivos de las grandes compañías que operan en la bolsa de Madrid (IBEX 35). Los resultados obtenidos muestran una relación jerárquica de las competencias emocionales de los altos directivos de las grandes empresas a partir de su categorización en subyacentes, básicas y ejecutivas. El estudio realizado destaca que las competencias más valoradas en las empresas son la escucha activa y la comunicación efectiva. Del modelo propuesto se infiere que para desarrollar estas competencias hay que trabajar sobre varias competencias básicas clave, como son la iniciativa, la flexibilidad, el optimismo y la empatía, características que parecen configurarse como esenciales para trabajar en un contexto de crisis como el actual

Abstract:

Emotional intelligence theories try to identify the most significant emotional competences related to personal and professional success; however, they do not focus much on establishing hierarchical levels among them. The objective of this research is to conduct an exploratory study about emotional competences and offer an inverted pyramid model that permits their interrelation and hierarchical structuring, in order to predict the level of development for some based on the levels of others. To accomplish this, a Delphi Panel was constituted with 16 directors from large companies that operate on the Madrid stock market (IBEX 35). The results show a hierarchical relationship for the emotional competences of top directors in large companies based on their categorization into underlying, basic and executive competences. The study emphasizes that the most valued competences are active listening and effective communication. Using the proposed model, it can be inferred that, to develop these competences, various basic key competences must be worked on, such as initiative, flexibility, optimism and empathy, characteristics that seem essential for working in a context of crisis such as the present one

Abstract:

This paper estimates the effect of a generous demogrant for individuals age 70 and older that started in 2001 in Mexico City on the labor supply and time use of beneficiaries and of non-elderly family members who live with them. Using a triple differences approach, I find that the program has no significant effect on the time use of eligible individuals, except for the sharp decrease in the housework participation of elderly women who live with another potential beneficiary. Individuals 60 to 69 years old change their time use only if they live with a potential beneficiary, but not because they expect to receive the demogrant themselves in a few years. In particular, men in their 60s seem to retire early if they live with someone who qualifies for the transfer. Both men and women 18 to 59 years old increase their labor supply if they live with an eligible man, but decrease it if they live with an eligible woman. These results are consistent with income in the hands of elderly women being shared more with younger family members than income in the hands of elderly men

Abstract:

To examine whether private support dampens or reinforces the impact of redistributive policies, this paper estimates the effect of an exogenous increase in the income of the elderly, caused by a recent demogrant in Mexico, on the amount of private transfers they receive. My instrumental variables strategy overcomes the endogeneity of income that typically contaminates estimates and, unlike related studies that use natural or policy experiments in reduced-form estimations, it yields evidence of a positive bias. This suggests that an unobservable characteristic is positively correlated both with income and private transfer receipt and that treating income as exogenous could lead to an underestimation of the crowding-out effect. In contrast, my preferred estimates are negative, significant and not far from minus one, implying an almost complete crowding out. My findings suggest that private transfers could neutralize changes in public transfers for the elderly

Abstract:

The potential for ubiquitous ambient technology to assist older adults in sustaining an active life raises questions about whether this can bring transformational effects for users, including effects related to modifying the perception of ageing. We aim to investigate the effects that technology-related priming has on the perception of ageing via age estimation. Sixty participants, exposed to technology-related, ageing-related and neutral stimuli, were asked to perform a priming activity and to estimate the age of a set of persons depicted in different photographs. We found that the estimations of participants in the ‘Technology’ and ‘Ageing’ groups did not differ from those of participants in the ‘Neutral’ group. The evidence suggests that exposing participants to technology concepts does not by itself alter age perception. However, previous work shows that the usage of technology can modify the perception of ageing. Therefore, we define a second, longitudinal experiment in which we will provide different devices to older adults for them to use and, through qualitative methods, study this phenomenon

Abstract:

Technology can assist older adults to maintain an active lifestyle. To better understand the effect that technology has on aging perception, we conducted two studies. In the first study, through supraliminal priming, we analyzed the effects of aging- and technology-related stimuli on age estimation. In the second study, we conducted a technological intervention with a group of elders who used four interactive devices and analyzed the effects on perceived aging. Results showed that technology-related stimuli did not affect estimated age. From the second study, we generated a sociotechnical model that explains the processes connecting technology use with successful aging. We concluded that the use of technology affects aging perception, although it depends on whether the older person has an a priori proactive attitude toward their own aging process

Abstract:

This work presents a study of a complex interactive system (Twitter) and an analysis of the relation between people’s mental models, performance and usability perceptions. Participants were asked to perform a number of activities with Twitter and to draw graphical representations of their mental model of it. We identified three typical Mental Model Styles (MMS) used to describe Twitter. With these styles in mind, we conducted two analyses, one based only on the MMS and the other based on the MMS and expertise. We found that the level of expertise had a greater impact on performance than did the mental model style defining the understanding of the system. Furthermore, we found that usability perception was affected by both the level of expertise and the mental model style

Abstract:

weRemember was designed to provide elderly people suffering from Alzheimer’s disease (AD) with relative independence at home and a new way to communicate and interact with their family. Our solution offers support for AD patients, helping them cope with the disease for longer while living at home with their family instead of moving into a nursing home. Following an iterative design approach, a number of prototypes were evaluated with potential users, and their feedback was used to enhance the family experience. During the prototype evaluation we found that the system could have a positive impact both on the relationship between the patient and the caregivers and on the patient’s home experience

Abstract:

Imprinting theory suggests that founding conditions are ‘stamped’ on organizations, and these imprinted routines often resist change. In contrast, strategic choice theory suggests that the firm can overcome organizational inertia and deliberately choose its future. Both theories offer dramatically different explanations behind an organization’s capacity for change. IPO firms provide a unique context for exploring how imprinting forces interact with strategic choice factors to address organizational capacity for change as a firm moves from private to public firm status. Juxtaposing imprinting and strategic choice perspectives, we employ fuzzy set analysis to examine the multi-level determinants of organizational capacity for change. Our cross-national data reveal three effective configurations of organizational capacity for change within IPOs, and two ineffective configurations. Our results suggest that the antecedents of organizational capacity for change in entrepreneurial threshold firms are nonlinear, interdependent, and equifinal

Abstract:

Prior studies of IPO underpricing, mostly using agency theory and single-country samples, have generally fallen short. In this study, we employ the knowledge-based view (KBV) to explore underpricing across 17 countries. We find that agency indicators are insignificant predictors, board of director knowledge limits underpricing, and external knowledge both substitutes for and complements internal board knowledge. This third finding suggests that future KBV studies should consider how internal and external knowledge states interact with each other. Our study offers new insights into the antecedents of underpricing and extends our understanding of comparative governance and the KBV of the firm

Abstract:

The motivations of armed groups are widely considered to be irrelevant for the applicability of international humanitarian law (IHL). As long as organized violence is of sufficient intensity, and armed groups have sufficient capacity to coordinate and carry out military operations, there is an armed conflict for purposes of international law. It follows that large-scale criminal organizations can, in principle, be treated legally on a par with political insurgent groups. Drug cartels in particular, if sufficiently armed and well organized, can constitute armed opposition groups in the legal sense when their confrontation with State forces is sufficiently intense. This article problematizes this interpretation. It corroborates standing legal doctrine in finding that subjective motives are not a sound basis to exclude the application of IHL, but it argues that a workable distinction can be made between the strategic logic and the organizational goals of criminal groups and those of political insurgents. Drawing on a growing body of empirical studies on the political economy of criminal violence, a strong presumption is defended against qualifying organized violence involving criminal organizations as armed conflict

Abstract:

We estimate the effect on business start-ups of a program that significantly speeds up firm registration procedures. The program was implemented in Mexico in different municipalities at different dates. Our estimates suggest that new start-ups increased by about 5% per month in eligible industries, and we present evidence supporting robustness and a causal-effect interpretation. Most of the effect is temporary, concentrated in the first 15 months after implementation. The estimated effect is much smaller than the World Bank and Mexican authorities claim it is, which suggests that the attention paid to business deregulation may be overemphasized

Abstract:

Using a newly assembled data set on procedures filed in Mexican labor tribunals, we study the determinants of final awards to workers. On average, workers recover less than 30 percent of their claim. Our strongest result is that workers receive higher percentages of their claims in settlements than in trial judgments. We also find that cases with multiple claimants against a single firm are less likely to be settled, which partially explains why workers involved in these procedures receive lower percentages of their claims. Finally, we find evidence that a worker who exaggerates his or her claim is less likely to settle

Abstract:

This article analyzes the effect of minimum wages on labor earnings in Mexico. Using data panels from the National Survey of Urban Employment (ENEU) and administrative records from the Mexican Social Security Institute (IMSS), we show that changes in the real minimum wage have a positive effect on changes in labor earnings for all wage groups. The effect is weaker for workers who earn several times the minimum wage. We also show that this effect was larger in the period from 1985 to 1993 than between 1994 and 2001, which suggests that the effect is losing strength

Abstract:

We construct a novel data set matching occupational data from separate establishments to the establishments' corporate parents, in order to study labor market links across establishments within diverse firms. We find substantial wage components common to all establishments within firms, even after netting out industry and occupation effects. However, employment changes are localized to establishments. The data suggest that internal labor markets of multiestablishment firms are linked throughout their entire organizations, but that establishment-level demand shocks do not permeate the firm

Abstract:

Most financial markets do not have rates of return that are Gaussian. Therefore, the Markowitz mean variance model produces outcomes that are not optimal. We provide a method of improving upon the Markowitz portfolio using value at risk and median as the decision-making criteria
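
A short sketch of the decision criteria named above, computed for one hypothetical portfolio on simulated fat-tailed returns; how candidate portfolios are generated and compared in the paper is not reproduced here:

```python
# Sketch: evaluate a candidate portfolio by historical value at risk and
# median return instead of mean-variance. Weights and return process are
# illustrative only.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=(1000, 3)) * 0.01   # fat-tailed returns
w = np.array([0.5, 0.3, 0.2])                           # candidate weights
port = returns @ w

var_95 = -np.quantile(port, 0.05)    # historical 95% value at risk (a loss)
median_ret = np.median(port)
print(f"VaR(95%) = {var_95:.4f}, median return = {median_ret:.4f}")
```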

Abstract:

This paper introduces a hierarchical Wireless Random Access scheme based on power control where intelligence is split among the mobile users in order to drive the outcome of the system towards an efficient point. The hierarchical game is obtained by introducing a special user who plays the role of altruistic leader whereas the other users assume the role of followers. We define the power control scheme in such a way that the leader first chooses the lowest power to transmit its packets among N available levels whereas the followers re-transmit by randomly choosing a power level picked from the N - 1 higher distinct power levels. Using a 3D Markovian model, we compute the steady state of the system and derive the average system throughput and expected packet transmission delay. Our numerical results show that the proposed scheme considerably improves the global performance of the system, avoiding the well-known throughput collapse at high loads that characterizes most random channel access mechanisms
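
The steady-state computation underlying such an analysis can be sketched generically; the toy transition matrix below stands in for the paper's 3D chain, whose states and probabilities are not reproduced:

```python
# Generic sketch: steady-state distribution of a finite Markov chain, the
# computational core behind a Markovian performance model.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],     # illustrative transition matrix
              [0.2, 0.7, 0.1],
              [0.1, 0.0, 0.9]])

# Solve pi P = pi together with sum(pi) = 1 as a least-squares system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # throughput and delay are then expectations under pi
```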

Abstract:

In this paper, we present a game analysis of the Binary Exponential Backoff (BEB), a popular bandwidth allocation mechanism used by a large number of distributed wireless technologies. A Markov chain analysis is used to obtain equilibrium retransmission probabilities and throughput. Numerical results show that when the arrival probability increases, the behavior of mobile stations (MSs) becomes more and more aggressive, resulting in a global deterioration of the system throughput. We then consider a non-cooperative game framework to study the operation and evaluate the performance of the BEB algorithm when a group of MSs compete with each other to gain access to the wireless channel. We focus our attention on the case when an MS acts selfishly by attempting to gain access to the channel using a higher retransmission probability as a means to increase its own throughput. As a means to improve the system performance, we further explore the use of two transmission mechanisms and policies. First, we introduce the use of multiple power levels (MPLs) for the data transmission. Under the proposed scheme, named MPL-BEB, the effect of the aggressive behavior, i.e., higher retransmission probabilities, is diminished since the power level is chosen randomly and independently by each and every station. Second, we introduce a disutility policy for power consumption. The resulting mechanism, named MPL-BEB with costs, is of prime interest in wireless networks composed of battery-powered nodes. Under this scheme aggressive behavior is discouraged since each retransmission translates into the depletion of the energy stored in the battery. Via the price of anarchy, our results identify a behavior similar to the well-known prisoner’s dilemma: a non-efficiency of the Nash equilibrium is observed for all schemes (BEB, MPL-BEB, MPL-BEB with costs) under heavy traffic, with a notable outperformance of MPL-BEB with costs over both MPL-BEB and BEB

Abstract:

We present a team analysis of a slotted random wireless channel access mechanism. Under the proposed scheme, denoted wireless random access mechanism with multiple power levels (MPL-WRA), each mobile station contends for a transmission opportunity following the principles of a slotted access mechanism incorporating a random transmitting power value selected among various available power levels. In this way, a capture effect may be produced, allowing the packet to be decoded whenever the signal-to-interference-plus-noise ratio is higher than a given threshold. In order to analyze the performance and optimization of the proposed setup, we build a Markovian model integrating the wireless access mechanism supplemented by the use of multiple power levels in an attractive and simple cross-layer fashion. We follow a team problem approach allowing us to fine-tune the design parameters of the overall system configuration. Through an extensive numerical analysis, our main results set the basis for the socially optimal system configuration of the proposed mechanism, taking into account the physical constraints of using multiple power levels and the actual practical implementation of a slotted access mechanism. We end the paper with concluding remarks and future research directions, including guidelines for the actual implementation of our proposal
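
A minimal sketch of the capture effect the scheme relies on, with illustrative power levels and threshold (not the paper's values):

```python
# Sketch of the capture effect: a packet is decoded if its
# signal-to-interference-plus-noise ratio (SINR) exceeds a threshold.

def captured(powers, idx, noise=1e-3, threshold=3.0):
    """True if transmission `idx` survives the collision."""
    interference = sum(p for k, p in enumerate(powers) if k != idx)
    sinr = powers[idx] / (interference + noise)
    return sinr > threshold

# Three simultaneous transmissions at randomly chosen power levels:
powers = [1.0, 4.0, 16.0]
print([captured(powers, k) for k in range(3)])   # only the strongest may win
```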

Resumen:

En 1950 México entró en un periodo de despegue económico y creció rápidamente durante más de 30 años. El crecimiento se detuvo durante las crisis de 1982-1995, a pesar de importantes reformas, incluyendo la liberación del comercio exterior y la inversión extranjera. Desde entonces, el crecimiento ha sido modesto. Analizamos la historia económica de México desde 1877 hasta 2010. Concluimos que el crecimiento en el periodo 1950-1981 estuvo impulsado por la urbanización, la industrialización y la educación, y que México habría crecido incluso a un ritmo más acelerado si el comercio y la inversión se hubieran liberado antes. Si México pretende reanudar su rápido crecimiento —de manera que pueda acercarse a los niveles de ingreso de los Estados Unidos— necesita aún más reformas

Abstract:

In 1950 Mexico entered an economic takeoff and grew rapidly for more than 30 years. Growth stopped during the crises of 1982-1995, despite major reforms, including liberalization of foreign trade and investment. Since then growth has been modest. We analyze the economic history of Mexico 1877-2010. We conclude that the growth 1950-1981 was driven by urbanization, industrialization, and education and that Mexico would have grown even more rapidly if trade and investment had been liberalized sooner. If Mexico is to resume rapid growth —so that it can approach U.S. levels of income— it needs further reforms

Abstract:

This paper investigates how volatile the general price level can be in an equilibrium where all uncertainty is extrinsic. The government operates a lump-sum redistribution policy using fiat money. An approach to modeling asset market segmentation is introduced in which this tax policy determines how volatile the price level can be, which in turn determines the volatility of consumption. The paper characterizes (i) the set of general price levels consistent with the existence of competitive equilibrium and (ii) the resulting set of equilibrium allocations. The results demonstrate how redistribution policies that are fixed in nominal terms can have a destabilizing effect on an economy, and show how to evaluate the amount of volatility that a particular policy may induce

Abstract:

This paper examines the impact monetary redistribution policies have on the amount of sunspot-induced volatility in an economy. A dynamic model of segmented asset markets is presented in which the tax-transfer policy determines, in a continuous way, the influence sunspots can have on the general price level and on consumption. If the policy leads to a transfer of resources across segmentation lines, there exist equilibria in which sunspots affect consumption. The primary result is that there is an efficiency cost of taxation: larger transfers lead to larger fluctuations in consumption. The paper also shows that, in many cases, improvements in asset markets that decrease consumption volatility simultaneously increase price-level volatility.

Resumen:

Se reportan resultados de un seminario multidisciplinario que aborda el reconocimiento y construcción de la hepatitis C como problema de salud en México. La prevalencia es de 1.4%. La incidencia se estima en 19 300 nuevos casos por año. Al disminuir los casos relacionados a transfusión, aumentan en importancia la transmisión nosocomial y uso de drogas vía intravenosa o intranasal. Es necesario construir nuevos contenidos para las representaciones sociales y percepción de riesgo. El tratamiento basado en respuesta por PCR-ARN ha modificado los esquemas de manejo, situación importante al considerar políticas de tratamiento. Jurídicamente existe poca incidencia legislativa en el tema. La distribución de competencias entre el Gobierno Federal y las entidades federativas en materia de salubridad general se encuentra establecida en el artículo 13 de la Ley General de Salud (México). Es necesario definir una estrategia que adquiera el carácter de política pública de alcance nacional

Abstract:

We report the results of a multidisciplinary seminar approaching the recognition and construction of hepatitis C as a health issue in Mexico. Its prevalence is 1.4% and its incidence is estimated at 19,300 new cases per year. As transfusion decreases as a risk factor, the relevance of nosocomial transmission and of intravenous or intranasal drug use increases. It is necessary to develop new contents for the social representation and risk perception of the disease. Response-guided treatment based on PCR-RNA has modified the treatment schemes, a very important issue when considering policies for management. Legislation about hepatitis C in the country is limited. The distribution of responsibilities between the Federal Government and the federative entities regarding general health matters is established in Article 13 of Mexico's General Health Law. It is necessary to advance towards the development of a national-level public health policy for hepatitis C

Abstract:

In this paper a nonlinear control design for power balancing in networked microgrids using energy storage devices is presented. Each microgrid is considered to be interfaced to the distribution feeder through a solid-state transformer (SST). The internal duty-cycle-based controllers of each SST ensure stable regulation of power commands during normal operation. But a problem arises when a sudden change in load or generation occurs in any microgrid in a completely unpredicted way in between the time instants at which the SSTs receive their power setpoints. In such a case, the energy storage unit in that microgrid must produce or absorb the deficit power. The challenge lies in designing a suitable regulator for this purpose owing to the nonlinearity of the battery model and its coupling with the nonlinear SST dynamics. We design an input-output linearization based controller, and show that it guarantees closed-loop stability for either the autonomous operation of a SST or co-operative operation between SSTs, in which each SST assists a given SST whose storage capacity is insufficient to serve the unpredicted load. The design is verified using the IEEE 34-bus distribution system with nine SST-driven microgrids
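
As a rough illustration of input-output linearization, the sketch below applies it to a simple scalar system x' = f(x) + g(x)u; the functions f and g, the gain k, and the setpoint are all hypothetical stand-ins for the SST/battery dynamics, which are far more involved:

```python
# Generic input-output linearization sketch for x' = f(x) + g(x) u, y = x.
# f, g and all gains are illustrative, not the paper's SST/battery model.
import numpy as np

f = lambda x: -x + x**3          # nonlinear drift (illustrative)
g = lambda x: 2.0 + np.sin(x)    # nonzero input gain (illustrative)

def u_lin(x, x_ref, k=5.0):
    """Cancel the nonlinearity and impose stable linear error dynamics."""
    v = -k * (x - x_ref)             # desired linear behavior: y' = v
    return (v - f(x)) / g(x)         # exact cancellation since g(x) != 0

# Simple Euler simulation toward a setpoint x_ref = 0.5:
x, dt = 0.0, 1e-3
for _ in range(5000):
    x += dt * (f(x) + g(x) * u_lin(x, 0.5))
print(x)    # converges to ~0.5
```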

Abstract:

Smooth transition autoregressive (STAR) models are widely used to capture nonlinearities in univariate and multivariate time series. The existence of a stationary solution is typically assumed, implicitly or explicitly. In this paper, we describe conditions for the stationarity and ergodicity of vector STAR models. The key condition is that the joint spectral radius of certain matrices is below 1; it is not sufficient to assume that the separate spectral radii are below 1. Our result makes it possible to use recently introduced toolboxes from computational mathematics to verify the stationarity and ergodicity of vector STAR models
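
A small numerical illustration of why the joint spectral radius, rather than the separate spectral radii, is the right object: the two matrices below each have spectral radius 0, yet their product certifies a joint spectral radius of at least 2:

```python
# Separate spectral radii below 1 do not imply joint spectral radius below 1:
# both matrices are nilpotent (spectral radius 0), but their product has
# spectral radius 4, so the JSR is at least sqrt(4) = 2.
import numpy as np

A1 = np.array([[0.0, 2.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [2.0, 0.0]])

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(A1), rho(A2))            # 0.0 0.0
print(rho(A1 @ A2) ** 0.5)         # 2.0, a lower bound on the JSR
```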

Abstract:

This paper considers parametric model adequacy tests for nonlinear multivariate dynamic models. It is shown that commonly used Kolmogorov-type tests take into account neither the cross-sectional nor the time-dependence structure, and a test based on multi-parameter empirical processes is proposed that overcomes these problems. The tests are applied to a nonlinear LSTAR-type model of joint movements of UK output growth and interest rate spreads. A simulation experiment illustrates the properties of the tests in finite samples. Asymptotic properties of the test statistics under the null of correct specification and under local alternatives, and a justification of a parametric bootstrap to obtain critical values, are provided

Abstract:

This paper proposes new specification tests for conditional models with discrete responses, which are key to applying efficient maximum likelihood methods, obtaining consistent estimates of partial effects, and getting appropriate predictions of the probability of future events. In particular, we test static and dynamic ordered choice model specifications and can cover infinite-support distributions, e.g. for count data. The traditional approach to specification testing of discrete response models is based on probability integral transforms of jittered discrete data, which leads to continuous uniform i.i.d. series under the true conditional distribution. Standard specification testing techniques for continuous variables can then be applied to the transformed series, but the extra randomness from the jitters affects the power properties of these methods. We investigate in this paper an alternative transformation based only on the original discrete data that avoids any randomization. We analyze the asymptotic properties of goodness-of-fit tests based on this new transformation and explore the finite-sample properties of a bootstrap algorithm to approximate the critical values of test statistics, which are model and parameter dependent. We show analytically and in simulations that our approach dominates the methods based on randomization in terms of power. We apply the new tests to models of the monetary policy conducted by the Federal Reserve
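
For reference, the traditional jittered transform mentioned above can be sketched as follows for a Poisson model; under the true distribution the transformed series is uniform (the paper's randomization-free alternative is not reproduced here):

```python
# Jittered probability integral transform for discrete data: for y with cdf F,
# u = F(y-1) + V * (F(y) - F(y-1)) with V ~ U(0,1) is uniform under the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam = 3.0
y = rng.poisson(lam, size=10_000)
V = rng.uniform(size=y.size)
u = stats.poisson.cdf(y - 1, lam) + V * stats.poisson.pmf(y, lam)

print(stats.kstest(u, "uniform").pvalue)   # large p-value: u looks uniform
```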

Abstract:

The paper analyzes the sources of productivity of the different subsectors and firms of the Mexican manufacturing sector applying both parametric and nonparametric methods. The main conclusion is that the firms with foreign capital, either from the United States and Canada or the rest of the world, have higher productivity. Nevertheless, the firms with capital from the rest of the world generate more externalities in the sense that they have additional effects of increasing the productivity of other firms in the same subsector. The effects of foreign capital on productivity become more evident after the trade liberalization

Abstract:

This paper studies the effects of import-price shocks on measured output and productivity in a standard small open economy model and quantifies such effects in the case of the Korean crisis of 1997–1998. I argue that it is the price of imported goods relative to the price of domestic goods but not the terms of trade that determine measured output and productivity. The simulated results show that shocks to the price of imports account for about half of the output deviation (from trend), one third of the TFP deviation and two thirds of the labor deviation in 1998. For the quantitative results, the extent to which the usage of imported goods is distorted is critical and substantially larger than tariffs because of significantly sizable non-tariff distortions

Abstract:

To support the value assessment of technically feasible smart grid initiatives, several valuation methods exist. To determine whether those methods address all concerns relevant for smart grid valuation, we carry out a literature analysis aiming at (1) identifying existing valuation methods and the steps they propose, (2) identifying important valuation considerations, and (3) confronting these considerations with artifacts proposed by the existing valuation methods to identify open issues, requirements, and remaining challenges. Based on the conducted analysis we identify, among others, the following main deficiencies: (1) only a limited scope of concerns relevant to valuation is covered; in particular, a systematic consideration of stakeholders, value exchange scenarios, and the IT infrastructure is lacking; and (2) there is a lack of instruments dedicated to fostering the accessibility of valuation, in terms of establishing a shared understanding, communicating results, or actively involving different stakeholders in the process. Based on the findings we suggest the application of conceptual modeling as an instrument to address the identified deficiencies. Therefore, we reflect on the role that current modeling approaches can play in smart grid valuation. This paper is part of a larger project whose ultimate goal is to develop a model-based method for multi-perspective valuation of smart grid initiatives. The purpose of this paper is to establish a foundation for the realization of the envisioned method. The design of the model-based valuation method itself, and its application and evaluation, are subjects of future work

Abstract:

Organizations increasingly have to cope with the digital transformation, which is ubiquitous in today’s society. Strategic analysis is an important first step towards the success of digital transformation initiatives, whereby all the elements (e.g., business processes and IT infrastructure) that are required to achieve the transformation can be aligned to the strategic goals and decisions. In this paper, we work towards a modeling method to perform model-based strategic analysis. We explicitly account for information technology (IT) infrastructure because of its key role for digital transformation. Specifically, (1) based on a conducted study on business scholar literature and existing work in conceptual modeling, a set of requirements is first identified; (2) then, we propose a modeling method that integrates, among others, goal modeling, strategic modeling, and IT infrastructure modeling. The method exploits, among others, three previously designed domain specific modeling languages in the Multi-Perspective Enterprise Modeling (MEMO) family: GoalML, SAML and ITML; (3) we illustrate the use of the modeling method in terms of a digital transformation initiative in the electricity sector; and finally, (4) we evaluate the proposed modeling method by comparing it with the conventional SWOT analysis and reflecting upon the fulfillment of the identified requirements

Abstract:

The competitive hospitality industry requires effective external and internal brand management. Since service employees bring the brand to life, insight regarding their motivational drivers is important. Given a multigenerational hospitality workforce, individual motivations will likely differ and therefore inform attitudes and behavior differently. Adopting work values as a motivational lens, and drawing on generational theory, this study surveys 303 hospitality employees to understand how generational collective memories (i.e., formative referents) inform individuals' work values. Further, it examines how generational work values differentially influence employees' perceived brand fit and brand citizenship behavior. The results suggest that an individual's collective memories from their formative years influence their work values, with altruistic, social and intrinsic work values having a positive impact on employee brand attitude and behavior, while extrinsic and leisure work values have no significant impact. Generational differences are evident, but not always in a manner that is consistent with previous literature

Abstract:

The hypercomplex 2D analytic signal has been proposed by several authors, with applications in color image processing. The analytic signal enables the extraction of local features from images. It has the fundamental property of splitting the identity, meaning that it separates the qualitative and quantitative information of an image in the form of the local phase and the local amplitude. The extension of the analytic signal from 1D to 2D in the linear canonical transform domain, covering also intrinsic 2D structures, has been proposed. We use this improved concept in an envelope detector. The quaternion Fourier transform plays a vital role in the representation of multidimensional signals. The quaternion linear canonical transform (QLCT) is a well-known generalization of the quaternion Fourier transform. Some valuable properties of the two-sided QLCT are studied. Different approaches to the 2D quaternion Hilbert transforms are proposed that allow the calculation of the associated analytic signals, which can suppress the negative frequency components in the QLCT domains. As an application, examples of envelope detection demonstrate the effectiveness of our approach

Abstract:

In the present paper, we generalize the linear canonical transform (LCT) to quaternion-valued signals, known as the quaternionic LCT (QLCT). Using the properties of the LCT, we establish an uncertainty principle for the two-sided QLCT. This uncertainty principle prescribes a lower bound on the product of the effective widths of quaternion-valued signals in the spatial and frequency domains. It is shown that only a Gaussian quaternionic signal minimizes the uncertainty

Abstract:

We analyze factors that lead to IMF approval of financial arrangements. We account for both economic variables that induce a country to seek an IMF arrangement ('demand-side' factors) and macroeconomic policy commitments that the IMF considers when deciding whether to approve it ('supply-side' factors). Using a pooled sample of annual observations for 91 developing countries over 1973-1991, we obtain maximum likelihood bivariate and univariate probit estimates of the approval of an IMF arrangement for a given country in a given year. A number of demand-side and supply-side variables are statistically significant determinants of the approval of an IMF arrangement

Abstract:

In this paper we propose a new algorithm that estimates on-line the parameters of a classical vector linear regression equation Y = Ωθ, where Y ∈ ℝ^n and Ω ∈ ℝ^(n×q) are bounded, measurable signals and θ ∈ ℝ^q is a constant vector of unknown parameters, even when the regressor Ω is not persistently exciting. Moreover, the convergence of the new parameter estimator is global and exponential, and is established for both continuous-time and discrete-time implementations. As an illustrative example, we consider the problem of parameter estimation of a linear time-invariant system when the input signal is not sufficiently exciting, which is known to be a necessary and sufficient condition for the solution of the problem with standard gradient or least-squares adaptation algorithms
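
For contrast, a discrete-time version of the standard gradient estimator mentioned at the end can be sketched as below; the regressor, gain and parameter values are illustrative, and the paper's new estimator is not reproduced:

```python
# Discrete-time gradient estimator for Y_t = Omega_t theta (the standard
# baseline, which requires persistent excitation to converge; the paper's
# contribution removes that requirement and is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])            # unknown parameters
theta_hat = np.zeros(2)
gamma = 0.1                              # adaptation gain

for t in range(2000):
    Omega = np.array([[np.sin(0.1 * t), np.cos(0.1 * t)]])  # 1x2 regressor
    Y = Omega @ theta
    theta_hat += gamma * (Omega.T @ (Y - Omega @ theta_hat)).ravel()

print(theta_hat)   # close to [1, -2]: this regressor IS persistently exciting
```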

Abstract:

The term "aesthetic experience" corresponds to the inner state of a person exposed to the form and content of artistic objects. Quantifying and interpreting the aesthetic experience of people in different contexts can contribute towards (a) creating context and (b) better understanding people's affective reactions to different aesthetic stimuli. Focusing on different types of artistic content, such as movies, music, literature, urban art, ancient artwork, and modern interactive technology, the goal of this workshop is to enhance the interdisciplinary collaboration among researchers coming from the following domains: affective computing, aesthetics, human-robot/computer interaction, digital archaeology and art, culture, addictive games

Abstract:

We axiomatize a model of satisficing which features random thresholds and the possibility of choice abstention. Given a menu, the decision maker first randomly draws a threshold. Next, using a list order, she searches the menu for alternatives which are at least as good as the threshold. She chooses the first such alternative she finds, and if no such alternative exists, she abstains. Since the threshold is random, so is the resulting behavior. We characterize this model using two simple axioms. In general the revelation of the model’s primitives is incomplete. We characterize a specialization of the model for which the underlying preference and list ordering are uniquely identified by choice frequencies. We also show that our model is a special Random Utility Model
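
The choice procedure described can be simulated directly; in the sketch below the menu, utilities, list order and threshold distribution are all illustrative:

```python
# Direct simulation of the described model: draw a random threshold, search
# the menu in list order for the first alternative weakly better than the
# threshold, and abstain if none exists.
import numpy as np

rng = np.random.default_rng(0)
utility = {"a": 3.0, "b": 1.0, "c": 2.0}   # encodes the preference
list_order = ["b", "c", "a"]               # fixed search order

def choose(menu):
    threshold = rng.uniform(0.0, 4.0)      # random threshold
    for x in list_order:
        if x in menu and utility[x] >= threshold:
            return x
    return None                            # abstention

draws = [choose({"a", "c"}) for _ in range(10_000)]
for outcome in ("c", "a", None):
    print(outcome, draws.count(outcome) / len(draws))   # choice frequencies
```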

Abstract:

In this article, we prove two versions of the Lyapunov center theorem for symmetric potentials. We consider a second order autonomous system in the presence of symmetries of a compact Lie group Γ. We look for non-stationary periodic solutions of this system in a neighborhood of a Γ-orbit of critical points of the Γ-invariant potential U. Our results generalize those of [13, 14]. As a topological tool, we use an infinite-dimensional generalization of the equivariant Conley index due to Izydorek, see [9]

Abstract:

This paper analyzes how an enforcement mechanism that resembles a court affects firm finance. The court is described by two parameters that correspond to enforcement costs and the amount of creditor/debtor protection. We provide a theoretical and quantitative characterization of the effect of these enforcement parameters on the contract loan rate, the default probability and welfare. We analyze agents' incentive to default and pursue bankruptcy and show that when the constraints that govern these decisions bind, the enforcement parameters can have a sharply non-linear effect on finance. We also compute the welfare losses of "poor institutions" and show that they are non-trivial. The results provide guidance on when models which abstract from enforcement provide good approximations and when they do not

Abstract:

In this paper, we consider a dynamic game with imperfect information between a borrower and lender who must write a contract to produce a consumption good. In order to analyze the game, we introduce the concept of a coalitional perfect Bayesian Nash equilibrium (cPBNE). We prove that equilibria exist and are efficient in a precise sense, and that deterministic contracts that resemble debt are optimal for a general class of economies. The cPBNE solution concept captures both the non-cooperative aspect of firm liquidation and the cooperative aspect of renegotiation

Abstract:

The promise of experimental approaches to help reduce poverty depends on the impact they have on policy and implementation. That makes it important to consider how institutional mechanisms and government structures translate findings into policy, or fail to. We look at each stage of the policy cycle and discuss how experimental findings could change the use of evidence in policymaking for poverty reduction. We also discuss the role of experimental evidence in the design of a broader evaluation system. We use the experience of Mexico to illustrate our argument, from the influential evaluation of the Progresa cash transfer program to the establishment of the national evaluation council. We then conclude with some implications for governments in developing countries

Abstract:

Latin America led the world in introducing individual retirement accounts intended to complement or replace defined benefit state-sponsored, pay-as-you-go systems. After Chile implemented the first system in 1981, a number of other Latin American countries incorporated privately managed individual accounts as part of their retirement income systems beginning in the 1990s. This article examines the subsequent “reform of the reform” of these pension systems, with a focus on the recent overhaul of the Chilean system and major reforms in Mexico, Peru, and Colombia. The authors analyze key elements of pension reform in the region relating to individual accounts: system coverage, fees, competition, investment, the impact of gender on benefits, financial education, voluntary savings, and payouts

Abstract:

We develop a general framework for forcing with coherent adequate sets on H(λ) as side conditions, where λ≥ω2 is a cardinal of uncountable cofinality. We describe a class of forcing posets which we call coherent adequate type forcings. The main theorem of the paper is that any coherent adequate type forcing preserves CH. We show that there exists a forcing poset for adding a club subset of ω2 with finite conditions while preserving CH, solving a problem of Friedman [Forcing with finite conditions, in Set Theory: Centre de Recerca Matemática, Barcelona, 2003–2004, Trends in Mathematics (Birkhäuser-Verlag, 2006), pp. 285-295]

Abstract:

After the enactment of the Constitutional Reform in Telecommunications in Mexico, followed a few months later by the Federal Telecommunications and Broadcasting Law, one of the concepts that has stirred much controversy within the Mexican telecommunications industry is that of a “preponderant economic agent”. The controversy originates from the fact that a declaration of preponderance is given not for single services but for a whole industry, and from the two-sided definition of preponderance: it can be based on a market share larger than 50% of the subscribers, or on more than 50% of the traffic in the networks. Since preponderance will likely trigger special regulations for preponderant agents, these could be interested in strategies that help them shed this characteristic. In this paper we present a method to quantify and compare the traffic handled by the operators, we develop a model for this purpose, and we draw conclusions based on various hypotheses. Preliminary results show that there exist situations in which an agent that is preponderant by number of subscribers could get rid of some portion of its customer base so as to lose its preponderance status, and yet would remain preponderant if preponderance were based on volume of traffic
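
A purely illustrative arithmetic example of the phenomenon (all figures are hypothetical, not drawn from the paper's model or from market data):

```python
# Hypothetical numbers: an operator sheds low-usage subscribers, dropping
# below 50% of subscribers while keeping more than 50% of total traffic.
subs_op, subs_rest = 55.0, 45.0                        # millions of subscribers
traffic_per_sub_op, traffic_per_sub_rest = 1.2, 0.8    # relative traffic per user

shed = 8.0                                # low-usage subscribers released
subs_op_new = subs_op - shed
subs_share = subs_op_new / (subs_op_new + subs_rest + shed)

traffic_op = subs_op_new * traffic_per_sub_op          # shed users were low-usage
traffic_rest = (subs_rest + shed) * traffic_per_sub_rest
traffic_share = traffic_op / (traffic_op + traffic_rest)

print(f"subscriber share: {subs_share:.1%}")    # 47.0% -> no longer preponderant
print(f"traffic share:    {traffic_share:.1%}") # 57.1% -> still preponderant
```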

Abstract:

The development of the Mexican telecommunications market has followed a pattern similar to what has been observed in many other countries. Until several years ago, telecommunications services had been offered in monopolistic environments, but advances in technology and changes in economic policies have brought about dramatic modifications of the regulatory framework. The liberalization process in the Mexican telecommunications sector started in 1989. The intention of the government to privatize the monopoly that operated the public telephone network and offered telephone services, to grant licenses for the operation of mobile services, and to make the necessary changes in the regulatory framework to implement these actions became clear upon the publication of the National Telecommunications Development Plan in 1989. These ideas led to the enactment of the new Federal Telecommunications Law in June 1995. The law constitutes the basis for the entrance of new competitors into the telecommunications market. Just one year after the law was enacted, it is already possible to identify characteristics of a liberalized telecommunications sector in Mexico: new licenses for installing and operating telecommunications networks have already been granted, a regulatory authority has been formed, and new services based upon new networks have recently been offered. This report focuses on the evolution of this sector in Mexico during the last six years. After presenting a historical perspective, which analyzes Mexican telecommunications prior to 1990, the regulatory framework is described, highlighting the most important aspects of the law as well as of the additional regulations issued by the Ministry of Communications and Transport to complement the Law since its enactment. This is followed by a characterization of several segments of the telecommunications market for the years after the privatization of the monopoly

Abstract:

We discuss an algorithm which allows us to find the algebraic expression of a dependent variable as a function of an arbitrary number of independent ones, where the data may have arisen from experiments. The possibility of such approximation is proved starting from the Universal Approximation Theorem (UAT). As opposed to the neural network (NN) approach with which it is frequently associated, the relationship between the independent variables is explicit, thus resolving the "black box" characteristic of NNs. It implies the use of a nonlinear function such as the logistic 1/(1 + e^(-x)). Thus, any function is expressible as a combination of a set of logistics. We show that a close polynomial approximation of the logistic is possible by using only a constant and monomials of odd degree. Hence, an upper bound (D) on the degree of the polynomial may be found. Furthermore, we may calculate the form of the model resulting from D. We discuss how to determine the best such set by using a genetic algorithm leading to the best L∞-L2 approximation. It allows us to find the best approximation polynomial consisting of a fixed number of monomials, yielding the degrees of the variables in every monomial and the associated coefficients. Furthermore, we trained a multi-layered perceptron network to determine the most adequate number of such monomials for a set of arbitrary data. We discuss how to analyze the explicit relationship between the variables by using a well known experimental database. We show that our method yields better quantitative and qualitative measures than those of the human experts
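
The claim about odd-degree monomials can be checked directly: since the logistic minus 1/2 is an odd function, a constant plus odd monomials fits it well, as in this small least-squares sketch (the interval and degrees are illustrative):

```python
# Fit the logistic with a constant plus odd-degree monomials:
# sigma(x) - 1/2 is odd, so the basis {1, x, x^3, x^5} suffices for a
# close approximation on a bounded interval.
import numpy as np

x = np.linspace(-4, 4, 401)
sigma = 1.0 / (1.0 + np.exp(-x))

basis = np.column_stack([np.ones_like(x), x, x**3, x**5])
coef, *_ = np.linalg.lstsq(basis, sigma, rcond=None)
approx = basis @ coef

print(coef)                              # the constant term is ~0.5
print(np.max(np.abs(approx - sigma)))    # small maximum error on [-4, 4]
```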

Abstract:

We discuss numerical algorithms (based on metric spaces) to explore data bases (DB) even though such DBs may not be metric. The following issues are treated: 1. Determination of the minimum equivalent sample, 2. The encoding of categorical variables and 3. Data analysis. We illustrate the methodology with an experimental mixed DB consisting of 29,267 tuples; 15 variables of which 9 are categorical and 6 are numerical. Firstly, we show that information preservation is possible with a (possibly) much smaller sample. Secondly, we approximate the best possible encoding of the 9 categorical variables applying a statistical algorithm which extracts the code after testing an appropriate number of alternatives for each instance of the variables. To do this we solve two technical issues, namely: a) How to determine that the attributes are already normal and b) How to find the best regressive function of an encoded attribute as a function of another. Thirdly, with the transformed DB (now purely numerical) it is possible to find the regressive approximation errors of any attribute relative to another. Hence, we find those attributes which are closer to one another within a predefined threshold (85%). We argue that such variables define a cluster. Now we may use algorithms such as Multi-Layer Perceptron Networks (MLPN) and/or Kohonen maps (SOM). The classification of a target attribute “salary” is achieved with a MLPN. It is shown to yield better results than most traditional conceptual analysis algorithms. Later, by training a SOM for the inferred clusters, we disclose the characteristics of every cluster. Finally, we restore the DB original values and get visual representations of the variables in each cluster, thus ending the mining process

Abstract:

A method which allows us to find a polynomial model of an unknown function from a set of tuples with n independent variables and m tuples is presented. A plausible model of an explicit approximation is found. The number of monomials of the model is determined by a previously trained neural network (NNt). The degrees of the monomials are bounded from the Universal Approximation Theorem yielding a reduced polynomial form. The coefficients of the model are found by exploring the space of possible monomials with a genetic algorithm (GA). The polynomial defined by every training pair is approximated by the Ascent Algorithm which yields the coefficients of best fit with the L∞ norm. The L2 error corresponding to these coefficients is calculated and becomes the fitness function of the GA. A detailed example for a well known experimental data set is presented. It is shown that the model derived from the method yields a classification error of <7% for the cross-validation data set. A detailed description of the NNt is included. The approximation to 46 datasets tackled with our method is discussed. In all these cases the approximation accuracy was better than 94%. The tool described herein yields general models with explanatory characteristics not found in other methods

Abstract:

The exploitation of large databases implies the investment of expensive resources both in terms of the storage and processing time. The correct assessment of the data implies that pre-processing steps be taken before its analysis. The transformation of categorical data by adequately encoding every instance of categorical variables is needed. Encoding must be implemented that preserves the actual patterns while avoiding the introduction of non-existing ones. The authors discuss CESAMO, an algorithm which allows us to statistically identify the pattern preserving codes. The resulting database is more economical and may encompass mixed databases. Thus, they obtain an optimal transformed representation that is considerably more compact without impairing its informational content. For the equivalence of the original (FD) and reduced data set (RD), they apply an algorithm that relies on a multivariate regression algorithm (AA). Through the combined application of CESAMO and AA, the equivalent behavior of both FD and RD may be guaranteed with a high degree of statistical certainty

Abstract:

Structured data bases may include both numerical and non-numerical attributes (categorical or CA). Databases which include CAs are called “mixed” databases (MD). Metric clustering algorithms are ineffectual when presented with MDs because, in such algorithms, the similarity between the objects is determined by measuring the differences between them, in accordance with some predefined metric. Nevertheless, the information contained in the CAs of MDs is fundamental to understanding and identifying the patterns therein. A practical alternative is to encode the instances of the CAs numerically. To do this we must consider the fact that there is a limited subset of codes which will preserve the patterns in the MD. To identify such pattern-preserving codes (PPC) we appeal to a statistical methodology. It is possible to statistically identify a set of PPCs by selectively sampling a bounded number of codes (corresponding to the different instances of the CAs) and demanding that the method set the size of the sample dynamically. Two issues have to be considered for this method to be defined in practice: (a) how to set the size of the sample and (b) how to define the adequateness of the codes. In this paper we discuss the method and present a case study wherein the appropriateness of the method is illustrated

Abstract:

Structured Data Bases which include both numerical and categorical attributes (Mixed Databases or MD) ought to be adequately pre-processed so that machine learning algorithms may be applied to their analysis and further processing. Of primordial importance is that the instances of all the categorical attributes be encoded so that the patterns embedded in the MD be preserved. We discuss CESAMO, an algorithm that achieves this by statistically sampling the space of possible codes. CESAMO’s implementation requires the determination of the moment when the codes distribute normally. It also requires the approximation of an encoded attribute as a function of other attributes such that the best code assignment may be identified. The MD’s categorical attributes are thusly mapped into purely numerical ones. The resulting numerical database (ND) is then accessible to supervised and non-supervised learning algorithms. We discuss CESAMO, normality assessment and functional approximation. A case study of the US census database is described. Data is made strictly numerical using CESAMO. Neural Networks and Self-Organized Maps are then applied. Our results are compared to classical analysis. We show that CESAMO’s application yields better results

Abstract:

Multi-layered perceptron networks (MLP) have been proven to be universal approximators. However, to take advantage of this theoretical result, we must determine the smallest number of units in the hidden layer (H). Two basic theoretically established requirements are that an adequate activation function be selected and a proper training algorithm be applied. We must also guarantee that (a) the training data comply with the demands of the universal approximation theorem (UAT) and (b) the amount of information present in the training data be determined. We discuss how to preprocess the data in order to meet such demands. Once this is done, a closed formula to determine H may be applied. Knowing H implies that any unknown function associated with the training data may, in practice, be arbitrarily closely approximated by a MLP. We take advantage of previous work where a complexity regularization approach tried to minimize the RMS training error. In that work, an algebraic expression for H is attempted by sequential trial-and-error. In contrast, here we find a closed formula H = f(mO, N), where mO is the number of units in the input layer and N is the effective size of the training data. The algebraic expression we derive stems from statistically determined lower bounds of H in a range of interest of the (mO, N) pairs. The resulting sequence of 4250 triples (H, mO, N) is replaced by a single 12-term bivariate polynomial. To determine its 12 coefficients and the degrees of the 12 associated terms, a genetic algorithm was applied. The validity of the resulting formula is tested by determining the architecture of twelve MLPs for as many problems and verifying that the RMS error is minimal when using it to determine H
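
The general shape of such a closed formula, as described above, is a 12-term bivariate polynomial; the specific coefficients and exponents found by the genetic algorithm are not reproduced here:

```latex
% General form of the closed formula: a 12-term bivariate polynomial in
% m_O (input units) and N (effective training-set size). The coefficients
% c_i and exponents (p_i, q_i) are those found by the genetic algorithm
% and are not reproduced here.
H = f(m_O, N) = \sum_{i=1}^{12} c_i \, m_O^{\,p_i} N^{\,q_i}
```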

Abstract:

The exploitation of large databases frequently implies the investment of large and, usually, expensive resources, both in terms of storage and of the processing time required. It is possible to obtain equivalent reduced data sets where the statistical information of the original data is preserved while dispensing with redundant constituents. Therefore, the physical embodiment of the relevant features of the database is more economical. The author proposes a method by which we may obtain an optimal transformed representation of the original data which is, in general, considerably more compact than the original without impairing its informational content. To certify the equivalence of the original data set (FD) and the reduced one (RD), the author applies an algorithm which relies on a genetic algorithm (GA) and a multivariate regression algorithm (AA). Through the combined application of the GA and AA, the equivalent behavior of both FD and RD may be guaranteed with a high degree of statistical certainty

Abstract:

The latest AI techniques are usually computer intensive, as opposed to the traditional ones which rely on the consistency of the logic principles on which they are based. In contrast, many algorithms of computational intelligence (CI) are meta-heuristic, i.e. methods where the particular selection of parameters defines the details and characteristics of the heuristic proper. In this paper we discuss a method which allows us to ascertain, with high statistical significance, the relative performance of several meta-heuristics. To achieve our goal we must find a statistical goodness-of-fit (gof) test which allows us to determine the moment when a sample becomes normal. Most statistical gof tests are designed to reject the null hypothesis, i.e. to conclude that the samples do NOT fit the hypothesized distribution. In this case we wish to determine when the sample IS normal. Using a Monte Carlo simulation we are able to find a practical gof test to this effect. We discuss the methodology and describe its application to the analysis of three case studies: training of neural networks, genetic algorithms and unsupervised clustering
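
A sketch of the Monte Carlo idea, under assumptions: the Anderson-Darling statistic is used as the gof statistic (not necessarily the paper's choice), and its acceptance threshold is calibrated by simulating samples that are truly normal.

```python
# Calibrate, by Monte Carlo, a criterion for ACCEPTING normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def calibrated_threshold(n, trials=10_000, quantile=0.95):
    """Quantile of the Anderson-Darling statistic under true normality."""
    sims = [stats.anderson(rng.standard_normal(n), dist='norm').statistic
            for _ in range(trials)]
    return float(np.quantile(sims, quantile))

def is_normal(sample, threshold):
    return stats.anderson(sample, dist='norm').statistic < threshold

batch_means = rng.standard_normal(36)    # stand-in for observed batch means
print(is_normal(batch_means, calibrated_threshold(len(batch_means))))
```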

Resumen:

We discuss a methodology by which it is possible to identify the position of a mobile robot (R) in a controlled environment (A). To achieve this it is necessary to obtain a model of the world (M) consisting of a sequence of images (I) which include each of the possible visual frames (V) of the robot at each of the possible locations (U) within the environment. Each image is made up of P pixels. In total, then, M consists of T = P × U pixels. To identify its position at time t, R must capture V(t) and compare it against each of the images in U. The number of comparisons is very large even for M's of ordinary sizes. It is not necessary, however, to compare every pixel in every one of the V frames. It is possible to determine a subset of T such that the number of matches (X) in each comparison is maximized while the number of pixels to be compared (Y) is minimized. Finding the optimal values of X and Y constitutes a multi-objective optimization problem

Abstract:

In this paper we discuss a method that allows us to identify the location of a mobile robot (R) in a controlled environment (A). To achieve this it is necessary to obtain a model of the world (M). The model consists of a sequence of images (I), which includes each of the possible visual frames (V) of the robot at each possible location (U) within the environment. Each image consists of P pixels. The overall number of pixels in M is T = P × U. To identify the position of R at time t, R has to take V(t) and compare it against each image in U. It is apparent that a large number of comparisons may need to be performed even for M's of ordinary sizes. However, it is not necessary to compare each pixel on each one of the V frames. It is possible to determine a subset of T such that the number of matches (X) on each comparison is maximized while the number of pixels to compare (Y) is minimal. Finding the optimal values for X and Y is a multi-objective optimization problem. In this work we propose an alternative method and use it to solve a problem for a specific A to illustrate it

Abstract:

Most practical databases consist not only of numerical variables but also of categorical ones. In such mixed databases it is not possible to apply metric clustering algorithms. A natural goal, therefore, is to replace categorical instances (i.e. the values in a category) by valid numerical codes. In this paper we address the problem of finding an appropriate numerical encoding scheme such that the patterns in a mixed database are preserved. Every instance (say t) of every category (say c) will be replaced by an adequate numerical code. Identifying these codes implies the solution of two problems. First, we must be able to preserve the structure present in all combinations of the relations between the purported numerical variables. In a database with n attributes we achieve this by iteratively finding a multivariate expression f_i of attribute i as a function of the remaining n-1 attributes, for i = 1, ..., n. This is done by training a three-layered perceptron network which takes the role of f_i. The associated maximum absolute approximation error is e_i and the global approximation error is e_K = max(e_i); min(e_K) reflects the global adequacy of the codes. Our aim is to find the codes which minimize e_K. Second, the best set of codes has to be identified out of a very large set of combinations. To solve this problem we appeal to a genetic algorithm (GA). Every individual of the GA will be a set of c·t binary numbers. For every such individual the fitness corresponds to e_K, and the GA will minimize it. The evolutionary optimization yields an accurate approximation of the c·t numbers which minimize e_K for all tuples in the dataset, i.e. those which best preserve the information (and patterns) present in the data. We test the validity of our hypothesis on an experimental database
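
A minimal sketch of the fitness evaluation described above, assuming scikit-learn's MLPRegressor as the three-layered perceptron; network sizes and iteration counts are arbitrary choices.

```python
# Fitness of one candidate code assignment: e_K = max_i e_i.
import numpy as np
from sklearn.neural_network import MLPRegressor

def global_error(data):
    """data: (rows, n) array obtained after applying the candidate codes.
    e_i is the max absolute error of a three-layered perceptron that
    approximates attribute i from the remaining n-1 attributes."""
    n = data.shape[1]
    errors = []
    for i in range(n):
        X, y = np.delete(data, i, axis=1), data[:, i]
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                           random_state=0)
        net.fit(X, y)
        errors.append(np.max(np.abs(net.predict(X) - y)))
    return max(errors)        # the GA minimizes this value over the codes
```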

Abstract:

At present, very large volumes of information are being regularly produced in the world. Most of this information is unstructured, lacking the properties usually expected from, for instance, relational databases. One of the more interesting issues in computer science is how, if at all, we may achieve data mining on such unstructured data. Its analysis has typically been attempted by devising schemes to identify patterns and trends through means such as statistical pattern learning. The basic problem of this approach is that the user has to decide, a priori, the model of the patterns and, furthermore, the way in which they are to be found in the data. This is true regardless of the kind of data, be it textual, musical, financial or otherwise. In this paper we explore an alternative paradigm in which raw data is categorized by analyzing a large corpus from which a set of categories and the different instances in each category are determined, resulting in a structured database. Then each of the instances is mapped into a numerical value which preserves the underlying patterns. This is done using a genetic algorithm and a set of multi-layer perceptron networks. Every categorical instance is then replaced by the adequate numerical code. The resulting numerical database may be tackled with the usual clustering algorithms. We hypothesize that any unstructured data set may be approached in this fashion. In this work we exemplify with a textual database: we apply our method to characterize texts by different authors and present experimental evidence that the resulting databases yield clustering results which permit authorship identification from raw textual data

Abstract:

Computer networks are usually balanced by appealing to personal experience and heuristics, without taking advantage of the behavioral patterns embedded in their operation. In this work we report the application of tools of computational intelligence to find such patterns and take advantage of them to improve the network's performance. The traditional traffic flow of a computer network is improved by the concatenated use of the following tools: a) applying intelligent agents, b) forecasting the traffic flow of the network via multi-layer perceptrons (MLPs) and c) optimizing the forecasted network's parameters with a genetic algorithm. We discuss the implementation and experimentally show that every consecutive new tool introduced improves the behavior of the network. This incremental improvement can be explained from the characterization of the network's dynamics as a set of emerging patterns in time

Abstract:

We discuss an algorithm which allows us to find the algebraic expression of a dependent variable as a function of an arbitrary number of independent ones, where the data are arbitrary, i.e. they may have arisen from experimental measurements. The possibility of such an approximation is proved starting from the universal approximation theorem (UAT). As opposed to the neural network (NN) approach with which it is frequently associated, the relationship between the independent variables is explicit, thus resolving the "black box" characteristic of NNs. The UAT implies the use of a nonlinear activation function such as the logistic 1/(1 + e^(-x)). Thus, any function is expressible as a combination of a set of logistics. We show that a close polynomial approximation of the logistic is possible using only a constant and monomials of odd degree. Hence, an upper bound (D) on the degree of the polynomial may be found. Furthermore, we may calculate the form of the model resulting from D. We discuss how to determine the best such set by using a genetic algorithm leading to the best L∞ - L2 approximation. It allows us to find the best approximation polynomial given a selected fixed number of coefficients. It then finds the best combination of coefficients and their values. We present some experimental results
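
The claim about the logistic is easy to check numerically; a sketch, with D = 7 taken as an assumed degree bound:

```python
# Least-squares fit of the logistic by a constant plus odd-degree monomials.
import numpy as np

D = 7                                        # assumed upper bound on degree
x = np.linspace(-4, 4, 401)
logistic = 1.0 / (1.0 + np.exp(-x))

powers = [0] + list(range(1, D + 1, 2))      # basis {1, x, x^3, x^5, x^7}
A = np.column_stack([x ** p for p in powers])
coef, *_ = np.linalg.lstsq(A, logistic, rcond=None)

print(dict(zip(powers, np.round(coef, 4))))
print("max abs error:", float(np.max(np.abs(A @ coef - logistic))))
```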

Abstract:

When designing neural networks (NNs) one has to consider the ease of determining the best architecture under the selected paradigm. One possible choice is the so-called multi-layer perceptron network (MLP). MLPs have been theoretically proven to be universal approximators. However, a central issue is that the architecture of an MLP is, in general, not known and has to be determined heuristically. In the past, several such approaches have been taken, but none has been shown to be applicable in general, while others depend on complex parameter selection and fine-tuning. In this paper we present a method which allows us to determine the said architecture from basic theoretical considerations: namely, the information content of the sample and the number of variables. From these we derive a closed analytic formulation. We discuss the theory behind our formula and illustrate its application by solving a set of problems (both for classification and regression) from the University of California at Irvine (UCI) database repository

Abstract:

Given the present need for Customer Relationship Management and the increased growth of the size of databases, many new approaches to large database clustering and processing have been attempted. In this work we propose a methodology based on the idea that statistically proven search space reduction is possible in practice. Following a previous methodology, two clustering models are generated: one corresponding to the full data set and another pertaining to the sampled data set. The resulting empirical distributions were mathematically tested by applying an algorithmic verification

Abstract:

The optimization of complex systems in which time is one of the variables has been attempted in the past, but its inherent mathematical complexity makes it hard to tackle with standard methods. In this paper we solve this problem by appealing to two tools of computational intelligence: a) genetic algorithms (GAs) and b) artificial neural networks (NNs). We assume that there is a set of data whose intrinsic information is enough to reflect the behavior of the system. We solved the problem by, first, designing a system capable of predicting selected variables from a multivariate environment. For each one of the variables we trained an NN such that the variable at time t+k is expressed as a non-linear combination of a subset of the variables at time t. Having found the forecasted variables we proceeded to optimize their combination such that its cost function is minimized. In our case, the function to minimize expresses the cost of operation of an economic system related to the physical distribution of coins and bills. The cost of transporting, insuring, storing, distributing, etc. such currency is large enough to warrant the time invested in this study. We discuss the methods, the algorithms used and the results obtained in experiments to date
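
A compact sketch of the two stages on synthetic data; scikit-learn perceptrons stand in for the forecasting networks, a random search stands in for the genetic algorithm, and the cost function is a stand-in rather than the currency-distribution cost.

```python
# Stage 1: forecast each variable at t+k; stage 2: optimize the cost.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_t, X_tk = rng.random((500, 5)), rng.random((500, 5))   # synthetic data

forecasters = []
for j in range(X_tk.shape[1]):                 # one network per variable
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                       random_state=j)
    net.fit(X_t, X_tk[:, j])
    forecasters.append(net)

def forecast(state):
    return np.array([f.predict(state.reshape(1, -1))[0] for f in forecasters])

def cost(state):                               # illustrative cost function
    return float(np.sum(forecast(state) ** 2))

best = min((rng.random(5) for _ in range(1000)), key=cost)
print(cost(best))
```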

Abstract:

Genetic Algorithms (GAs) have long been recognized as powerful tools for the optimization of complex problems where traditional techniques do not apply. However, although the convergence of elitist GAs to a global optimum has been mathematically proven, the number of iterations remains a case-by-case parameter. We address the problem of determining the best GA out of a family of structurally different evolutionary algorithms by solving a large set of unconstrained functions. We selected 4 structurally different genetic algorithms and a non-evolutionary one (NEA). A schemata analysis was conducted, further supporting our claims. As the problems become more demanding, the GAs significantly and consistently outperform the NEA. A particular breed of GA (the Eclectic GA) is superior to all others, in all cases

Abstract:

Genetic Algorithms (GAs) have long been recognized as powerful tools for the optimization of complex problems where traditional techniques do not apply. In [1] we reported the superior behavior, out of 4 evolutionary algorithms and a hill climber, of a particular breed: the so-called Eclectic Genetic Algorithm (EGA). EGA was tested against a set (TS) consisting of a large number of selected problems, most of which have been used in previous works as an experimental testbed. However, the conclusions of the said benchmark are restricted to the functions in TS. In this work we extend the previous results to a much larger set (U) consisting of ξ ≈ Σ_{i=1}^{31} (2^64)^i ≈ 11 × 10^50 unconstrained functions. Randomly selected functions in U were minimized for 800 generations each; the minima were averaged in batches of 36 each, yielding X̄_j for the j-th batch. This process was repeated until the X̄_j's displayed a Gaussian distribution with parameters μ_X and σ_X. From these, the parameters μ and σ describing the probabilistic behavior of each of the algorithms over U were calculated with 95% reliability. We give a sketch of the proof of the convergence of an elitist GA to the global optimum of any given function. We describe the methods to: a) generate the functions; b) calculate μ and σ for U and c) evaluate the relative efficiency of all algorithms in our study. EGA's behavior was the best of all algorithms

Abstract:

An open problem in robotics is the one dealing with the way a mobile robot locates itself inside a specific area. The problem itself is vital for the robot to correctly achieve its goals. There are several ways to approach this problem, for example, robot localization using landmarks [4], [5], calculation of the robot's position based on the distance it has covered [6], [7], etc. Many of these solutions imply the use of active sensors in the robot to calculate a distance or notice a landmark. However, there is a solution which has not been explored and is the main topic of this paper. In essence, the solution we tested has to do with the possibility that the robot can determine its own position at any time using only a single sensor and a reduced database. This database contains all the information needed to match what the robot is sensing with its spatial position. In order for the method to be practically implementable we reduced the number of necessary matches by defining a subset of the original database images. There are two issues to solve: a) the number of elements in every subset of the matching images and b) the absolute positions of each of these elements. Once these are determined, the matching process is very fast and ensures the adequate identification of the robot's position without errors. On the one hand, we seek the largest subset so that position identification is accurate. On the other, we wish this subset to be as small as possible so that the online processing is as fast as possible. These conditions constitute a multi-objective optimization problem. To solve it we used a Multi-Objective Genetic Algorithm (MOGA) which minimizes the number of pixels required by the robot to identify an image. To test the validity of this approach we also solved this problem using a statistical methodology which solves problem (a), and a random-mutation hill climber to solve problem (b)
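
The two competing objectives are easy to state in code; a sketch on synthetic binary frames (the data, sizes and scoring are stand-ins):

```python
# One MOGA individual: a pixel mask scored on the two competing objectives.
import numpy as np

rng = np.random.default_rng(1)
images = rng.integers(0, 2, size=(50, 1024))   # 50 binary frames, P = 1024

def objectives(mask, query, database):
    """mask: boolean vector selecting the pixels to compare."""
    matches = np.sum(database[:, mask] == query[mask], axis=1)
    x = int(np.max(matches))    # matches of the best candidate (maximize)
    y = int(np.sum(mask))       # number of pixels compared (minimize)
    return x, y

mask = rng.random(1024) < 0.1   # a candidate subset of the pixels
print(objectives(mask, images[7], images))
```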

Abstract:

The problem of finding clusters in arbitrary sets of data has been attempted using different approaches. In most cases, the use of metrics in order to determine the adequateness of the said clusters is assumed. That is, the criteria yielding a measure of quality of the clusters depend on the distance between the elements of each cluster. Typically, one considers a cluster to be adequately characterized if the elements within a cluster are close to one another while, simultaneously, they appear to be far from those of different clusters. This intuitive approach fails if the variables of the elements of a cluster are not amenable to distance measurements, i.e., if the vectors of such elements cannot be quantified. This case arises frequently in real world applications where several variables correspond to categories. The usual tendency is to assign arbitrary numbers to every category: to encode the categories. This, however, may result in spurious patterns: relationships between the variables which are not really there at the offset. It is evident that there is no truly valid assignment which may ensure a universally valid numerical value for this kind of variables. But there is a strategy which guarantees that the encoding will, in general, not bias the results. In this paper we explore such a strategy. We discuss the theoretical foundations of our approach and prove that this is the best strategy in terms of the statistical behaviour of the sampled data. We also show that, when applied to a complex real world problem, it allows us to generalize soft computing methods to find the number and characteristics of a set of clusters

Abstract:

The problem of finding clusters in arbitrary sets of data has been attempted using different approaches. In most cases, the use of metrics in order to determine the adequateness of the said clusters is assumed. That is, the criteria yielding a measure of quality of the clusters depend on the distance between the elements of each cluster. Typically, one considers a cluster to be adequately characterized if the elements within a cluster are close to one another while, simultaneously, they appear to be far from those of different clusters. This intuitive approach fails if the variables of the elements of a cluster are not amenable to distance measurements, i.e., if the vectors of such elements cannot be quantified. This case arises frequently in real world applications where several variables (if not most of them) correspond to categories. The usual tendency is to assign arbitrary numbers to every category: to encode the categories. This, however, may result in spurious patterns: relationships between the variables which are not really there at the offset. It is evident that there is no truly valid assignment which may ensure a universally valid numerical value for this kind of variables. But there is a strategy which guarantees that the encoding will, in general, not bias the results. In this paper we explore such a strategy. We discuss the theoretical foundations of our approach and prove that this is the best strategy in terms of the statistical behavior of the sampled data. We also show that, when applied to a complex real world problem, it allows us to generalize soft computing methods to find the number and characteristics of a set of clusters. We contrast the characteristics of the clusters obtained from the automated method with those of the experts

Abstract:

In data clustering, the more traditional algorithms are based on similarity criteria which depend on a metric distance. This fact imposes important constraints on the shape of the clusters found. These shapes generally are hyperspherical in the metric's space, due to the fact that each element in a cluster lies within a radial distance relative to a given center. In this paper we propose a clustering algorithm that does not depend on simple distance metrics and, therefore, allows us to find clusters with arbitrary shapes in n-dimensional space. Our proposal is based on some concepts stemming from Shannon's information theory and evolutionary computation. Here each cluster consists of a subset of the data where entropy is minimized. This is a highly non-linear and usually non-convex optimization problem which disallows the use of traditional optimization techniques. To solve it we apply a rugged genetic algorithm (the so-called Vasconcelos' GA). In order to test the efficiency of our proposal we artificially created several sets of data with known properties in a three-dimensional space. The results of applying our algorithm show that it is able to find highly irregular clusters that traditional algorithms cannot. Some previous work is based on algorithms relying on similar approaches (such as ENCLUS and CLIQUE). The differences between such approaches and ours are also discussed
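
A sketch of the entropy-based fitness, assuming a coarse histogram as the entropy estimator (the binning is an illustrative choice, not the paper's):

```python
# Fitness for the GA: total entropy over the candidate clusters (minimize).
import numpy as np

def cluster_entropy(points, bins=8):
    """Shannon entropy of one cluster via a coarse histogram estimate."""
    hist, _ = np.histogramdd(points, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def fitness(labels, data, k):
    """labels: cluster index per row of data; lower total entropy is better."""
    return sum(cluster_entropy(data[labels == c])
               for c in range(k) if np.any(labels == c))
```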

Abstract:

The unsupervised learning process of identifying data clusters in the large databases in common use nowadays requires an extremely costly computational effort. The analysis of a large volume of data makes it impossible to handle it in the computer's main storage. In this paper we propose a methodology (henceforth referred to as "FDM", for fast data mining) to determine the optimal sample from a database according to the relevant information in the data, based on concepts drawn from the statistical theory of communication and L-infinity approximation theory. The methodology achieves significant data reduction on real databases and yields cluster models equivalent to those resulting from the original database. Data reduction is accomplished through the determination of the adequate number of instances required to preserve the information present in the population. Then, special effort is put into the validation of the obtained sample distribution through the application of classical nonparametric statistical tests and other tests based on the minimization of the approximation error of polynomial models
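
A sketch of the validation step; the two-sample Kolmogorov-Smirnov test stands in for the battery of nonparametric tests mentioned in the abstract.

```python
# Accept the reduced set only if every attribute matches the full database.
from scipy import stats

def sample_is_equivalent(full, reduced, alpha=0.05):
    """full, reduced: (rows, attributes) numeric arrays."""
    return all(stats.ks_2samp(full[:, j], reduced[:, j]).pvalue >= alpha
               for j in range(full.shape[1]))
```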

Abstract:

Given the present need for Customer Relationship Management and the increased growth of the size of databases, many new approaches to large database clustering and processing have been attempted. In this work, we propose a methodology based on the idea that statistically proven search space reduction is possible in practice. Two clustering models are generated: one corresponding to the full data set and another pertaining to the sampled data set. The resulting empirical distributions were mathematically tested to verify a tight, statistically significant non-linear approximation

Abstract:

Clustering is the task which allows us to identify groups, distributions or patterns over a set of data. To achieve it we must first adopt a measure of likeness which allows us to group elements of similar characteristics into two or more such clusters. Presently there are several clustering algorithms which appeal to various metrics, among which the ones based on Minkowski's and Mahalanobis' distances stand out. In general this approach yields hyperspherical forms, which ignores the fact that some clusters may be better represented by irregular N-dimensional bodies. In this paper we use a formula originally developed by Johan Gielis which has been called the “superformula”. It allows the generation of N-dimensional bodies of arbitrary shape by modifying certain parameters. This approach allows us to represent a cluster as the set of data contained in a given body without resorting to a distance metric. Therefore, we replace the idea of nearness by one of membership. To determine the most adequate values of the parameters in the superformula we applied a genetic algorithm (GA): the so-called Vasconcelos' GA
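
The membership test is straightforward once the superformula is written down; a 2-D sketch with illustrative parameter values (the paper evolves them with Vasconcelos' GA and works in N dimensions):

```python
# Cluster membership via Gielis' superformula in the plane.
import numpy as np

def superformula_radius(phi, m, a, b, n1, n2, n3):
    """Boundary radius of the Gielis body at angle phi."""
    term = (np.abs(np.cos(m * phi / 4) / a) ** n2
            + np.abs(np.sin(m * phi / 4) / b) ** n3)
    return term ** (-1.0 / n1)

def belongs(point, center, params):
    """A point belongs to the cluster if it lies inside the body."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    r, phi = np.hypot(dx, dy), np.arctan2(dy, dx)
    return bool(r <= superformula_radius(phi, *params))

print(belongs((1.2, 0.4), (0.0, 0.0), (6, 1.0, 1.0, 0.5, 1.0, 1.0)))
```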

Resumen:

Given the present need to improve customer relations and the ever-increasing size of corporate databases, new approaches have been developed for the clustering (unsupervised learning) of large databases. This article describes a methodology based on the idea that, in practice, a reduced search space may be used provided the sampling distributions are adequately characterized. Two clustering models are generated: one corresponding to the original data and another to sampled subsets. The resulting empirical distributions are mathematically verified to attest to a non-linear approximation of high statistical significance. Finally, as a case study, we analyze the application to a company which derives competitive advantages from a better characterization of its customer profiles

Abstract:

We discuss how explicit algebraic expressions modeling a complex phenomenon via an adequate set of data can be derived from the application of Genetic Multivariate Polynomials (GMPs), on the one hand, and Support Vector Machines (SVMs) on the other. A polynomial expression is derived in GMPs in a natural way, whereas in SVMs a polynomial kernel is employed to derive a similar one. In any particular problem, an evolutionarily determined sample of monomials is required in GMP expressions while, on the other hand, a large number of monomials is implicit in the SVM approach. We carry out experiments to compare the modeling characteristics and accuracy obtained from the application of both methods

Abstract:

We present an algorithm which is able to adjust a set of data with unknown characteristics for approximation and classification. The algorithm is based on the simple notion that a sufficiently large set of examples will adequately embody the essence of a numerically expressible phenomenon, on the one hand, and, on the other, that it is possible to synthesize such essence via a polynomial of the form F(v_1, ..., v_n) = Σ_{i_1=0}^{d_1} ... Σ_{i_n=0}^{d_n} μ_{i_1...i_n} v_1^{i_1} ... v_n^{i_n}, where each μ_{i_1...i_n} may only take the values 0 or 1 depending on whether the corresponding monomial is adequate. In order to determine the adequateness of each monomial we resort to a genetic algorithm which minimizes the fitting error of the candidate polynomials. We analyze a collection of selected data sets for approximation and classification and find the best approximation polynomial. We compare our results with those stemming from multi-layer perceptron networks trained with the well known backpropagation algorithm (BMLPs). We show that our Genetic Multivariate Polynomials (GMPs) compare favorably with the corresponding BMLPs without the "black box" characteristic of the latter, a frequently cited disadvantage. We also discuss the minimization (genetic) algorithm (GA)
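
A sketch of the genome/fitness pairing: each bit of an individual switches one candidate monomial on or off, and the score is the fitting error of the resulting polynomial (a least-squares fit stands in for the full procedure).

```python
# Binary genome over candidate monomials; fitness = max abs fitting error.
import numpy as np
from itertools import product

def monomial_matrix(V, degrees):
    """Columns: all monomials v_1^i1 ... v_n^in with 0 <= i_j <= d_j."""
    exps = list(product(*[range(d + 1) for d in degrees]))
    cols = [np.prod(V ** np.array(e), axis=1) for e in exps]
    return np.column_stack(cols), exps

def fitness(genome, M, y):
    """genome: 0/1 vector selecting monomials (the mu coefficients)."""
    active = M[:, genome.astype(bool)]
    if active.shape[1] == 0:
        return np.inf
    coef, *_ = np.linalg.lstsq(active, y, rcond=None)
    return float(np.max(np.abs(active @ coef - y)))
```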

Abstract:

We describe a methodology to train Support Vector Machines (SVM) where the regularization parameter (C) is determined automatically via an efficient Genetic Algorithm in order to solve multiple category classification problems. We call the kind of SVMs where C is determined automatically from the application of a GA a "Genetic SVM" or GSVM. In order to test the performance of our GSVM, we solved a representative set of problems by applying one-versus-one majority voting and one-versus-all winner-takes-all strategies. In all of these the algorithm displayed very good performance. The relevance of the problem, the algorithm, the experiments and the results obtained are discussed
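
A sketch of the automatic choice of C, with a crude generational loop standing in for the efficient GA and cross-validated accuracy as the fitness; the dataset and all numeric settings are illustrative.

```python
# Evolve log10(C) for an SVM classifier by cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def score(log_c):
    return cross_val_score(SVC(C=10.0 ** log_c), X, y, cv=5).mean()

pop = list(rng.uniform(-3, 3, size=10))        # genomes: log10(C)
for _ in range(15):                            # crude evolutionary loop
    pop.sort(key=score, reverse=True)
    parents = pop[:5]
    pop = parents + [p + rng.normal(0, 0.3) for p in parents]

best = max(pop, key=score)
print("selected C:", 10.0 ** best)
```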

Abstract:

The superheated steam temperature dynamics are statically nonlinear. The steam enthalpy exhibits nonlinear dependence on both steam pressure and temperature. Also, the heat transfer process in superheaters and attemperators is strongly nonlinear, and it has reportedly been very difficult to find a synthetic expression (model) for it. The so-called Genetic Multivariate Polynomials (GMPs) solve this problem by finding the coefficients of a multivariate polynomial for an arbitrary set of data. Although this regression problem has been tackled with success using neural networks (NNs), the 'black box' characteristic of such models is frequently cited as a major drawback. Despite the restrictions of a polynomial basis, GMPs compete favorably with the NNs without the mentioned limitation. Therefore, a practical tool is proposed for temperature modeling over a wide range of real plant operation and for its static parameter estimation. Based on advanced simulation tools, the polynomial expression of enthalpy (over a wide range) and the empirical heat transfer equations in superheaters allow us to turn the static parameter estimation from a distributed into a lumped parameter system

Abstract:

A method to represent arbitrary sequences (strings) is discussed. We emphasize the application of the method to the analysis of the similarity of sets of proteins expressed as sequences of amino acids. We define a pattern of arbitrary structure called a metasymbol. An implementation of a detailed representation is discussed. We show that a protein may be expressed as a collection of metasymbols in a way such that the underlying structural similarities are easier to identify

Abstract:

This paper presents a genetic algorithm (GA) methodology for training a support vector machine (SVM). The SVM method may be viewed as a quadratic optimization problem with linear constraints, where the objective is to minimize the error learning rate and the Vapnik-Chervonenkis (VC) dimension in order to get an Optimal Separating Hyperplane (OSH) that classifies two sets of data. A SVM is a very good tool for classification problems which displays an excellent generalization ability. In order to test our method we solve the XOR problem, a canonical nonlinearly separable problem. We used a genetic algorithm (GA) called Vasconcelos' GA (VGA). The genome was selected to solve the dual SVM problem, where each individual corresponds to a Lagrange multiplier. Our interest lay in finding the "best" value of C (the so-called "regularization" parameter); C reflects a tradeoff between the performance of the trained SVM and its allowed level of misclassification. We solved the problem in two ways: (a) We provided C, as is traditional in the normal treatment of the problem; (b) We implemented a complementary approach, wherein C is also included in the genome. In case (b) VGA finds C's value freeing the user from having to find it from heuristics. We report an exact solution for case (a) and, importantly, encouraging results in which the error in the solution for case (b) is practically zero
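
A sketch of the genome/fitness pairing for case (b): the Lagrange multipliers and C travel together in the genome, and the fitness is the dual objective penalized by the equality constraint. The kernel and penalty weight are illustrative choices, not necessarily the paper's.

```python
# Dual-SVM fitness for the XOR problem; genome = [alpha_1..alpha_4, C].
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)
K = (1.0 + X @ X.T) ** 2          # polynomial kernel separates XOR

def dual_fitness(alpha, C, penalty=10.0):
    """Dual objective with a penalty enforcing sum(alpha_i * y_i) = 0."""
    if np.any(alpha < 0) or np.any(alpha > C):
        return -np.inf            # box constraint 0 <= alpha_i <= C
    obj = alpha.sum() - 0.5 * np.sum(np.outer(alpha * y, alpha * y) * K)
    return obj - penalty * abs(float(np.dot(alpha, y)))

genome = np.array([1.0, 1.0, 1.0, 1.0, 2.0])     # one GA individual
print(dual_fitness(genome[:4], C=genome[4]))
```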

Abstract:

In this paper we describe a methodology to train Support Vector Machines (SVM) where the regularization parameter (C) is determined automatically via an efficient Genetic Algorithm (Vasconcelos’ GA or VGA) in order to solve classification problems. We call the kind of SVMs where C is determined automatically from the application of a GA a “Genetic SVM” or GSVM. In order to test the performance of our GSVM, we solved a representative set of problems. In all of these the algorithm displayed a very good performance. The relevance of the problem, the algorithm, the experiments and the results obtained are discussed

Abstract:

One of the basic problems of applied mathematics is to find a synthetic expression (model) which captures the essence of a system given a (necessarily) finite sample which reflects selected characteristics. When the model considers several independent variables its mathematical treatment may become burdensome or even downright impossible from a practical standpoint. In this paper we explore the utilization of an efficient genetic algorithm to select the "best" subset of multivariate monomials out of a full polynomial of the form F(v_1, ..., v_n) = Σ_{i_1=0}^{g_1} ... Σ_{i_n=0}^{g_n} c_{i_1...i_n} v_1^{i_1} ... v_n^{i_n} (where g_i denotes the maximum desired degree for the i-th independent variable). This regression problem has been tackled with success using neural networks (NN). However, the "black box" characteristic of such models is frequently cited as a major drawback. We show that it is possible to find a polynomial model for an arbitrary set of data. From selected practical cases we argue that, despite the restrictions of a polynomial basis, our Genetic Multivariate Polynomials (GMP) compete with the NN approach without the mentioned limitation. We show how to treat constrained functions as unconstrained ones using GMPs

Abstract:

The analysis of data sets of unknown characteristics usually demands that subsets (or clusters) of the data be identified in such a way that the members of any one such cluster display common (in some sense) characteristics. In order to do this we must determine a) the number of clusters, b) the clusters themselves and c) the labeling of every element in the data set such that each element belongs uniquely to one of the clusters. In a previous work [1] we discussed an algorithm which allowed us to solve (b) and (c) (assuming that (a) is given). Further, we also showed that the so-called labeling problem may be solved by minimizing an adequate measure of distance. The metrics discussed there, however, relied on a homogeneous distribution of the samples. In this paper we discuss several metrics as applied to self-organizing maps (SOMs) which make the said consideration unnecessary and, therefore, generalize our past method. Furthermore, the new metrics improve on our previous results. We also discuss the minimization (genetic) algorithm (GA) and offer some results derived from its application

Abstract:

In this paper we describe a heuristic approach to the problem of identifying a pattern embedded within a figure, from a predefined set of patterns, via the utilization of a genetic algorithm (GA). By applying this GA we are able to recognize a set of simple figures independently of scale, translation and rotation. We discuss the fact that this GA is, purportedly, the best among a set of alternatives, a fact which was previously proven by appealing to statistical techniques. We describe the general process and the special type of genetic algorithm utilized, report some results obtained from a test set, and comment on them. We also point out some possible extensions and future directions

Abstract:

Most modern lossless data compression techniques used today are based on dictionaries. If some string of data being compressed matches a portion previously seen, then such a string is included in the dictionary and a reference to it is included every time it appears. A possible generalization of this scheme is to consider not only strings made of consecutive symbols, but more general patterns with gaps between their symbols. The main problems with this approach are the complexity of pattern discovery algorithms and the complexity of the selection of a good subset of patterns. In this paper we address the latter problem. We demonstrate that this problem is NP-complete and we provide some preliminary results about heuristics that point to its solution

Abstract:

The problem of correctly diagnosing different types of ailments has long been tackled with different artificial intelligence techniques. Both heuristic and statistically based algorithms have been discussed in the past. In this paper we establish a comparison between one heuristic algorithm based on partial precedence and majority decision rules and two types of statistical ones: multi-layer perceptrons (MLPs) and self-organizing maps (SOMs), when applied to the automated diagnosis and treatment of cleft lip and palate. We show that although all three methods perform reasonably well (with efficiency ratios better than 0.9), the neural networks achieve their goals with a considerably diminished set of data without detriment to their performance. Furthermore, we are able to tackle an enlarged set and still retain the high yields with the use of MLPs and SOMs

Abstract:

A priori determination of the sex of a human individual before gestation is a desirable goal in some cases. To achieve this, it is necessary to perform the separation of sperm cells containing either X or Y chromosomes. As is well known, male sex depends on the presence of chromosome Y. Once this separation is achieved in principle, we need to determine, with a high degree of accuracy, whether the sperm cells of interest contain the desired X or Y chromosomes. If we are able to obtain certain simple measurements regarding the sperm cells under consideration we will be able to control the fertilization process reliably. In this paper we report a method which allows for non-invasive verification of the characteristics of the separated sperm. We determined a set of easily measurable characteristics. From a sample drawn from previously cropped sperm we trained a neural network with a genetic algorithm. The trained network was able to perform a posteriori classification with an error much smaller than 1%. This percentage of efficiency is better than the ones reported in centers of assisted fertilization

Abstract:

Several lossless data compression schemes have been proposed over the past years. Since Shannon developed information theory in his seminal paper, however, the problem of data compression has hinged (even though not always explicitly) on the consideration of an ergodic source. In dealing with such sources one has to cope with the problem of defining a priori the minimum-sized symbol. The designer, therefore, is faced with the necessity of choosing beforehand the characteristics of the basic underlying element with which he or she is to attempt data compression. In this paper we address the problem of finding the characteristics of the basic symbols to consider in information treatment without assuming the form of such symbols in the data source. In so doing we expect to achieve a pseudo-ergodic behavior of the source. Then we are able to exploit the characteristics of such sources. Finding the basic elements (which we call “metasymbols”) is a complex (NP-complete) optimization task. Therefore, to find the metasymbols we make use of a non-traditional genetic algorithm (VGA) which has been shown to have excellent performance. In this paper we discuss the problem, the proposed methodology, some of the results obtained so far, and point to future lines of research

Abstract:

We discuss a set of metrics which aim to facilitate the formation of symbol groups from a pseudo-ergodic information source. An optimal codification (such as Huffman codes [1]) can then be applied to the symbols, as for zero-memory sources, where compression tends to the theoretical limit set by the entropy. These metrics can be used as a fitness measure for the individuals in the Vasconcelos genetic algorithm, as an alternative to exhaustive search

Abstract:

The solution of Systems of Simultaneous Non-Linear Equations (SNLE) remains a complex and as yet not closed problem. Although analytical methods to tackle such problems do exist, they are limited in scope and, in general, demand certain prior knowledge of the functions under study. In this paper we propose a novel method to numerically solve such systems by using a Genetic Algorithm (GA). In order to show the generality of the method we first prove a theorem which allows us to equate the SNLE problem to that of minimizing a linear combination of the equations to be solved, subject to a set of constraints. Then we describe a rugged GA (the so-called Vasconcelos GA or VGA) which has been proved to perform optimally in a controlled but unbounded problem space. Next, we show how the VGA may be efficiently utilized to adapt its behavior (which is inherently adequate to solve unconstrained minimization problems) to a constrained solution space. Finally, we offer some examples of systems of non-linear equations which were solved using the proposed methodology
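
The reduction is easy to illustrate: a solution of the system is exactly a global minimizer (with value zero) of a combination of the residuals. In the sketch below a stock evolutionary optimizer stands in for the VGA, and the system itself is a made-up example.

```python
# Solve F(x) = 0 by minimizing a combination of residuals.
import numpy as np
from scipy.optimize import differential_evolution

def residual(x):
    f1 = x[0] ** 2 + x[1] ** 2 - 4.0     # example equation: a circle
    f2 = np.exp(x[0]) + x[1] - 1.0       # example equation: exp curve
    return abs(f1) + abs(f2)             # zero exactly at a solution

result = differential_evolution(residual, bounds=[(-3, 3), (-3, 3)],
                                seed=0, tol=1e-10)
print(result.x, residual(result.x))
```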

Abstract:

We discuss alternative norms to train neural networks (NNs). We focus on the so-called multilayer perceptrons (MLPs). To achieve this we rely on a genetic algorithm called the Eclectic GA (EGA). By using the EGA we avoid the drawbacks of the standard training algorithm for this sort of NN: the backpropagation algorithm. We define four measures of distance: a) the mean exponential error (MEE), b) the mean absolute error (MAE), c) the maximum square error (MSE) and d) the maximum (supremum) absolute error (SAE). We analyze the behavior of an MLP NN on two kinds of problems: classification and forecasting. We discuss the results of applying an EGA to train the NNs and show that the alternative norms yield better results than the traditional RMS norm
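
The four norms, written as error functions the EGA can minimize over the network weights; the exact form of the exponential measure is an assumed detail.

```python
# The four training norms applied to a vector of output errors.
import numpy as np

def mee(e): return float(np.mean(np.exp(np.abs(e))))  # mean exponential error
def mae(e): return float(np.mean(np.abs(e)))          # mean absolute error
def mse(e): return float(np.max(e ** 2))              # maximum square error
def sae(e): return float(np.max(np.abs(e)))           # supremum absolute error

errors = np.array([0.10, -0.40, 0.25, -0.05])         # target minus output
print(mee(errors), mae(errors), mse(errors), sae(errors))
```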

Abstract:

In this paper we report some results related to the use of genetic algorithms as learning machines (GLMs) via a new regeneration/crossover model. In the past, GAs have been applied as learning systems in traditional classifier systems. An alternative approach to learning is one where GAs are treated as computing devices whose genetic structure encodes some sort of automaton. That such an approach is theoretically possible results directly from Turing's thesis. We discuss a new model where the standard regeneration mechanism via biased individual survival and random crossover is replaced by a deterministic crossover scheme; furthermore, standard crossover is replaced by ring crossover. We statistically establish that such a genetic process results, indeed, in systems which learn. We also report on the way crossover probabilities affect the learning convergence of such systems. After a statistical analysis, we found that GLMs, after being trained, show predictive behavior. Finally, we conclude from a statistical analysis that the probability that a GLM shows predictive behavior is better than 0.97, proving conclusively their viability as trainable learning systems

Abstract:

Different aspects defining the nature of software engineering work have been analyzed by empirical studies conducted in the last 30 years. However, in recent years, many changes have occurred in the context of software development that impact the way people collaborate, communicate with each other, manage the development process and search for information to create solutions and solve problems. For instance, the generalized adoption of asynchronous and synchronous communication technologies as well as the adoption of quality models to evaluate the work being conducted are some aspects that define modern software development scenarios. Despite this new context, much of the research in the collaborative aspects of software design is based on research that does not reflect these new work environments. Thus, a more up-to-date understanding of the nature of software engineering work with regard to collaboration, information seeking and communication is necessary. The goal of this paper is to present findings of an observational study to understand those aspects. We found that our informants spend 45% of their time collaborating with their colleagues; information seeking consumes 31.9% of developers' time; and low usage of software process tools is observed (9.35%). Our results also indicate a low usage of e-mail as a communication tool (~1% of the total time spent on collaborative activities), and that software developers spend 15% of their total communication time looking for information that helps them to be aware of their colleagues' work, share knowledge, and manage dependencies between their activities. Our results can be used to inform the design of collaborative software development tools as well as to improve team management practices

Abstract:

A refracted Lévy process is a Lévy process whose dynamics change by subtracting off a fixed linear drift (of suitable size) whenever the aggregate process is above a pre-specified level. More precisely, whenever it exists, a refracted Lévy process is described by the unique strong solution to the stochastic differential equation dU_t = -δ·1{U_t > b} dt + dX_t, t ≥ 0, where X = (X_t, t ≥ 0) is a Lévy process with law P, and b, δ ∈ R are such that the resulting process U may visit the half line (b, ∞) with positive probability. In this paper, we consider the case that X is spectrally negative and establish a number of identities for the following functionals: ∫_0^∞ 1{U_t < b} dt, ∫_0^{κ_c^+} 1{U_t > b} dt, ∫_0^{κ_a^-} 1{U_t > b} dt, ∫_0^{κ_c^+ ∧ κ_a^-} 1{U_t > b} dt, where κ_c^+ = inf{t ≥ 0 : U_t > c} and κ_a^- = inf{t ≥ 0 : U_t < a} for a < b < c. Our identities extend recent results of Landriault et al. (Stoch Process Appl 121:2629-2641, 2011) and bear relevance to Parisian-type financial instruments and insurance scenarios

Abstract:

We analyse the behaviour of supercritical super-Brownian motion with a barrier through the pathwise backbone embedding of Berestycki, Kyprianou and Murillo-Salas (2011). In particular, by considering existing results for branching Brownian motion due to Harris and Kyprianou (2006) and Maillard (2011), we obtain, with relative ease, conclusions regarding the growth in the right-most point in the support, analytical properties of the associated one-sided Fisher-Kolmogorov-Petrovskii-Piscounov wave equation, as well as the distribution of mass on the exit measure associated with the barrier

Abstract:

This paper considers two extensions of the Uzawa-Lucas framework. In our first extension, physical capital is included as an input of the educational sector. In our second extension, leisure considerations are assumed to play a positive role in agents' welfare. The case with physical capital in the production of education features (under standard conditions) a unique steady state, and thus the dynamics are qualitatively similar to those of the original Uzawa-Lucas framework. In our second model with leisure there could be a multiplicity of steady states with different rates of growth. What determines the chosen rate of growth here is the initial relative amount of physical and human capital. Our results therefore illustrate that in a world without externalities different long-term growth rates could coexist

Resumen:

This article proposes a regional perspective on the feminist struggle for the right to abortion. It begins with a reminder of how motherhood unfolds for Latin American women in contexts of poverty and marginalization, and of the deadly consequences of illegal abortion. It then offers an overview of the political tension between some Latin American governments and feminists, especially owing to the intervention of the Catholic Church hierarchy. It discusses some paradigmatic cases that broadly depict both the Vatican's sensationalist strategy and the feminist responses in networks and in regional and international arenas. The right to abortion is argued for as a matter of social justice, a question of public health and a democratic aspiration. The theoretical debate on the legal treatment of sexual difference is also mentioned

Abstract:

This article, which offers a regional overview of the feminist struggle for abortion rights in Latin America, begins by reminding the reader of the context, characterized by poverty and marginalization, in which the region's women become mothers, as well as the deadly consequences of illegal abortion. It subsequently outlines the political tension between some state governments and feminists, particularly the friction that results from interference by the Catholic church hierarchy. The article outlines a few paradigmatic cases that exemplify the Vatican's sensationalist strategy as well as feminist responses by means of networks and taking advantage of regional and international arenas. It argues that abortion rights are a question of social justice and public health and form part of aspirations for democracy. It also makes mention of the theoretical debate on how differences between the sexes are handled by legal systems

Abstract:

This paper studies the effect of self-regulation on the leniency of cinema age restrictions using cross-country variation in the classifications applied to 1,922 movies released in 31 countries between 2002 and 2011. Our data show that restrictive classifications reduce box office revenues, particularly for movies with wide box office appeal. These data also show that self-regulated ratings agencies display greater leniency than state-regulated agencies when classifying movies with wide appeal. However, consistent with theoretical models of self-regulation, the degree of leniency is small because it is not costly for governments to intervene and regulate ratings themselves

Abstract:

One explanation for the comparatively lower quality of movie sequels is selection bias, known in personnel economics as the Peter principle. Only abnormally successful movies are selected for a sequel. Another explanation is a deterministic depreciation in quality due to the decline in the novelty of the sequel's characters and storyline. Both explanations predict that, relative to the original, the sequel's performance will revert towards the mean. We develop a structural model to decompose the two explanations, and estimate its parameters using detailed data on 306 franchise films and 2,823 non-franchise films between 1995 and 2014. Parameter estimates provide evidence of selection bias for action & adventure and horror movies, and evidence of a deterministic decline in quality for comedies

Abstract:

In the Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a publicly known deck. Alice and Bob must then communicate their cards to each other without Cath learning the holder of any single card. Solutions in the literature provide weak security, where Alice and Bob's exchanges do not allow Cath to know with certainty who holds each card that is not hers, or perfect security, where Cath learns no probabilistic information about who holds any given card. We propose an intermediate notion, which we call ε-strong security, where the probabilities perceived by Cath may only change by a factor of ε. We then show that strategies based on affine or projective geometries yield ε-strong security for arbitrarily small ε and appropriately chosen values of a, b, c

Resumen:

This paper reflects on the concept of conforming interpretation (interpretación conforme) in the context of the new model of constitutional review in Mexico, in the wake of an important case recently decided by the Supreme Court of Justice of the Nation, the so-called "Asperger case", related to the disability model adopted in the Federal District. The author argues that the Court erred both in choosing the tool of conforming interpretation and in its implementation, and holds that the case could not (and should not) have been decided with it. The author seeks to draw attention to the abuse of this argumentative tool in the resolution of important human rights cases, noting that the pursuit of a legitimate aim does not justify carelessness in the choice of the framework of analysis

Abstract:

The paper reflects on the concept of interpretation in conformity with the Constitution and the treaties in the context of the new parameters of judicial review in Mexico, in view of a recent Supreme Court ruling, the so-called "Asperger case", which deals with the disability model adopted by Mexico City legislation. The author argues that the Court erred in choosing the aforementioned interpretive tool. He warns about the abuse of this argumentative tool in the resolution of human rights cases and argues that legitimate purposes do not justify neglecting the selection of appropriate analytical frameworks

Abstract:

In this work we introduce a planar restricted four-body problem where a massless particle moves under the gravitational influence of three bodies following the figure-eight choreography, and explore some symmetric periodic orbits of this system, which turns out to be nonautonomous. We use reversing symmetries to study, both theoretically and numerically, a certain type of symmetric periodic orbits of this system. The symmetric periodic orbits (initial conditions) were computed by solving boundary-value problems

Abstract:

We argue that education's effect on political participation in developing democracies depends on the strength of democratic institutions. Education increases awareness of, and interest in, politics, which help citizens to prevent democratic erosion through increased political participation. We examine Senegal, a stable but developing democracy where presidential over-reach threatened to weaken democracy. For causal identification, we use a difference-in-differences strategy that exploits variation in the intensity of a major school reform and citizens' ages during reform implementation. Results indicate that schooling increases interest in politics and support for democratic institutions, but no increase in political participation in the aggregate. Education increases political participation primarily when democracy is threatened, when support for democratic institutions among educated individuals is also greater

Abstract:

This study outlines and tests a high commitment model of human resource (HR) practices and its association with outcomes through a path including employee perceptions and attitudes, thereby seeking a new way of opening the so-called 'black box' between human resource management (HRM) and performance

Abstract:

Group safety climate is a leading indicator of safety performance in high reliability organizations. Zohar and Luria (2005) developed a Group Safety Climate scale (ZGSC) and found it to have a single factor. Method: The ZGSC scale was used as a basis in this study, with the researchers rewording almost half of the items on the scale, changing the referents from the leader to the group, and trying to validate a two-factor scale. The sample was composed of 566 employees in 50 groups from a Spanish nuclear power plant. Item analysis, reliability, correlations, aggregation indexes and CFA were performed. Results: Results revealed that the construct was shared by each unit, and our reworded Group Safety Climate (GSC) scale showed a one-factor structure and correlated with organizational safety climate, formalized procedures, safety behavior, and time pressure. Impact on Industry: This validation of the one-factor structure of the Zohar and Luria (2005) scale could strengthen and spread the scale and measure group safety climate more effectively

Abstract:

This paper aims to examine how different ways in which a charitable organization communicates successes (highlighting individual or collective achievement) can influence potential future donors, and to determine whether the effectiveness of the communication strategy is contingent on the cultural context

Abstract:

Nicolas Sarkozy's presidency presented a mixed record on the issues of Muslim immigration and integration. On the one hand, his administration took novel and constructive steps to advance the integration of Muslim immigrants into French society, notably through the granting of unprecedented official recognition and institutional representation to Islam in the country. On the other, by placing the immigration issue at the centre of his 2012 re-election strategy, he overshadowed and undermined the effectiveness of these integrative policies. Given the country's worsening economic outlook and rising unemployment, immigration is therefore likely to remain as salient and difficult an issue under the new Hollande administration as it was under Sarkozy's

Abstract:

Tobacco regulation efforts have been criticized by some academic economists for failing to provide adequate welfare-analytic justification. This paper attempts to address these criticisms. Unlike previous research that has discussed second-hand smoke and health care financing externalities, this paper develops the logic for identifying the much larger market failures attributable to the failure of smokers to fully internalize the costs of their addictive behavior. The focus is on teen addiction as a form of "intrapersonal" externality and observed adult consumption behavior consistent with partial myopia. The importance of peer effects, in the consideration of welfare impacts, is also emphasized.

Abstract:

As Hondius (2004, p. xi) observes, "Tax treatment is the barometer of the high or low esteem in which States hold the voluntary sector." At first blush it seems that most nations around the globe, including those discussed in this volume, hold their voluntary or nonprofit sectors in relatively high esteem, as evidenced by the near universality of their exemption from the payment of income taxes and the provision of fiscal incentives to promote donations. But in this theme, the devil is in the details: how many organizations are eligible for deductible donations? How beneficial are the fiscal incentives to donors? What impact do fiscal incentives have on philanthropy? And, more generally, what impact does a favorable fiscal environment have on the development of a nation’s nonprofit sector?

Abstract:

The aim of this paper is to examine the factors underlying philanthropic behavior in Mexico. In particular, we analyze the influence of social capital on two types of behavior: giving and volunteering. This research is based upon groundbreaking national public opinion surveys conducted in 2005 and 2008, the first of their kind in Mexico to focus on donations and volunteerism. We find that membership in associations (an important component of social capital) is strongly and positively associated with secular giving and volunteering. We also tested the role of three other aspects of social capital: participation in informal personal networks, a belief in the norm of reciprocity, and interpersonal trust, and our findings show that the former two have a consistently significant effect on our dependent variables but interpersonal trust does not. We discuss the implications of this for a society where trust in others is comparatively low. Differences between Mexico and the USA, for example, highlight the importance of context in philanthropic behavior. Mexicans' religiosity also stands out as an important variable, particularly when it comes to understanding religious forms of giving and volunteering in the country. The practical significance of our findings for the promotion of philanthropy is that Mexican nonprofits must compensate for being in a low-trust culture by encouraging membership and a sense of group belonging

Abstract:

We study the optimal contractual arrangement for raising the debt-to-equity ratio in oil, gas and mining project finance deals. We investigate the impact of the optimal contractual relationship between counterparties on the soundness of projects, differing in output price volatility and country risk. Key findings are: first, the existence of EPC sponsors and off-takers generally raises the debt-to-equity ratio; in particular, EPC sponsors and off-taking sponsors jointly mitigate the credit risk caused by country risk. Second, off-taking and EPC contracts jointly help mitigate the credit risk caused by country risk, rather than by price volatility. Indeed, the contractual structure raises the debt-to-equity ratio

Abstract:

In this letter, we present the following optimal multi-agent pathfinding (MAPF) algorithms: Hierarchical Composition Conflict-Based Search, Parallel Hierarchical Composition Conflict-Based Search, and Dynamic Parallel Hierarchical Composition Conflict-Based Search. MAPF is the task of finding an optimal set of valid path plans for a set of agents such that no agents collide with present obstacles or each other. The presented algorithms are an extension of Conflict-Based Search (CBS), where the MAPF problem is solved by composing and merging smaller, more easily manageable subproblems. Using the information from these subproblems, the presented algorithms can more efficiently find an optimal solution. Our three presented algorithms demonstrate improved performance for optimally solving MAPF problems consisting of many agents in crowded areas while examining fewer states when compared with CBS and its variant Improved Conflict-Based Search
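
For readers unfamiliar with the baseline these algorithms extend, the following is a minimal Python sketch of the high-level Conflict-Based Search loop, assuming a vertex-conflict-only model and a user-supplied low-level planner `low_level` (e.g., space-time A*); both assumptions are illustrative and not details of the letter.

```python
import heapq
import itertools

def find_conflict(paths):
    """Return (i, j, vertex, t) for the first vertex conflict, or None."""
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        seen = {}
        for i, path in enumerate(paths):
            v = path[min(t, len(path) - 1)]  # agents wait at their goals
            if v in seen:
                return seen[v], i, v, t
            seen[v] = i
    return None

def cbs(agents, low_level):
    """Baseline CBS: low_level(agent, constraints) returns a shortest path
    honoring constraints {(agent, vertex, time)}, or None if infeasible."""
    tie = itertools.count()                    # heap tie-breaker
    paths = [low_level(a, frozenset()) for a in agents]
    open_list = [(sum(map(len, paths)), next(tie), frozenset(), paths)]
    while open_list:
        _, _, constraints, paths = heapq.heappop(open_list)
        conflict = find_conflict(paths)
        if conflict is None:
            return paths                       # conflict-free, sum-of-costs optimal
        i, j, v, t = conflict
        for agent in (i, j):                   # branch: forbid v at time t
            child = constraints | {(agent, v, t)}
            new_paths = list(paths)
            new_paths[agent] = low_level(agents[agent], child)
            if new_paths[agent] is not None:
                heapq.heappush(open_list, (sum(map(len, new_paths)),
                                           next(tie), child, new_paths))
    return None
```

The letter's contribution sits on top of this loop: it composes and merges the solutions of smaller subproblems instead of always replanning over the full constraint tree.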

Abstract:

We study the impact of economic policy uncertainty on the term structure of nominal interest rates. In a general equilibrium model populated by an uncertainty-averse agent, we show that political uncertainty not only affects the yield curve and the corresponding volatility term structure, but that bond risk premia also carry a premium for political uncertainty. Our model simultaneously captures both the shape of the yield curve and the hump shape of yield volatilities, a stylized feature that is hard to match with a theoretical model. Our model gives rise to a set of testable predictions for which we find strong support in the data: higher policy uncertainty leads to a significant decline in yield levels and increases bond yield volatilities. Moreover, policy uncertainty predicts future short rates and has an ambiguous effect on term premia. Finally, short (long) maturity bond risk premia respond negatively (positively) to increases in policy uncertainty

Abstract:

A natural generalization of the Hénon map of the plane is a quadratic diffeomorphism that has a quadratic inverse. We study the case when these maps are volume preserving, which generalizes the family of symplectic quadratic maps studied by Moser. In this paper we obtain a characterization of these maps for dimension four and less. In addition, we use Moser's result to construct a subfamily of these maps in n dimensions

Resumen:

México, como muchos otros países, proporciona a su sector agrícola protección respecto a la competencia internacional. Las consideraciones relativas a la distribución del ingreso y las preocupaciones acerca de los problemas de transición una vez que comience la liberación explican en buena parte la persistencia de tales políticas. Este trabajo presenta cuatro postulados: i) pueden esperarse de la liberación ganancias considerables en eficiencia; ii) los efectos de la protección en la distribución del ingreso son de hecho regresivos; iii) pueden elaborarse programas de ajuste bien encaminados que ayuden y no que demoren el ajuste, una vez que se haya hecho un análisis cuidadoso del efecto preciso en la distribución, y iv) una liberación amplia y bilateral por México y su principal mercado de exportación, los Estados Unidos, reduciría de manera significativa cualesquiera problemas de ajuste. Esto último se presenta en el contexto de una estimación del efecto potencial del Tratado de Libre Comercio (TLC) en la agricultura

Abstract:

Mexico, like many other countries, provides its agricultural sector with substantial protection from international competition. Income distributional considerations and concerns about transitional problems once liberalization starts explain much of the persistence of such policies. This paper argues four points: (i) substantial efficiency gains can be expected from liberalization; (ii) the income distributional effects of protection are in fact regressive; (iii) well targeted adjustment programs that help rather than delay adjustment can be designed once a careful analysis of the precise distributional impact has been made; and (iv) a comprehensive, two-sided liberalization by Mexico and its main export market, the US, would significantly reduce any adjustment problems. The latter point is made in the context of an assessment of the potential impact of the Free Trade Agreement (FTA) on agriculture

Abstract:

This paper discusses the relationship between foreign trade and employment in a small open economy, and carries out some empirical work using Mexican data. It is argued that employment multipliers are not stable if intermediate inputs are imported. Actual employment multipliers will be given by the relationship between effective demand and installed capacity in each sector, and will depend strongly on whether quotas or tariffs are in operation. It is also found that Mexican exports are capital intensive relative to its imports

Abstract:

We document the evolution of labor markets of five Latin American countries during the COVID-19 pandemic, with emphasis on informal employment. We show, for most countries, a slump in aggregate employment, mirrored by a fall in labor participation, and a decline in the informality rate. The latter is unprecedented since informality had cushioned the decline in overall employment in previous recessions. Using a business cycle model with a rich labor market structure, we recover the shocks that rationalize the pandemic recession, showing that labor supply shocks and informal productivity shocks are essential to account for the employment and output loss and the decline in the share of informal workers

Abstract:

We ask how labor regulation and informality affect macroeconomic volatility and the propagation of shocks in emerging economies. For this, we propose a small open economy business cycle model with frictional labor markets, endogenous labor participation, and an informal sector. Our own calculations from the ENOE national household survey reveal that these three margins are important to account for the labor market dynamics in Mexico. The model is calibrated to the Mexican economy, in particular to business cycle moments for employment and informality. We show that interest rate shocks, which specifically affect job creation in the formal sector, are key to obtaining a countercyclical informality rate. In our model, the presence of an informal sector might help to mitigate the impact of stringent labor regulation on employment and consumption fluctuations. In that sense, it adds flexibility to the economy in its adjustment to shocks, but at the cost of lower productivity and excess TFP and output volatility. Reducing the burden of labor regulation on the formal sector might achieve the goal of reducing output volatility while at the same time improving the efficiency of resource allocation

Abstract:

Most consumer-level low-cost unmanned aerial vehicles (UAVs) have limited battery power and long charging time. Thus, they cannot accomplish some practical tasks such as providing service to cover an area for an extended time, also known as persistent covering. Algorithmic approaches are limited mostly due to the computational complexity and intractability of the problem. Approximation algorithms that assume unlimited energy have been considered to segment a large area into smaller cells that require periodic visits under the latency constraints. In this letter, we explore geometric and topological properties that allow us to significantly reduce the size of the optimization problem. Consequently, the proposed method can efficiently determine the minimum number of UAVs needed and schedule their routes to cover an area persistently. We experimentally demonstrate that the proposed algorithm has better performance than the baseline

Abstract:

The brain's extraordinary computational power to represent and interpret complex natural environments is essentially determined by the topology and geometry of the brain's architectures. We present a framework to construct cortical networks which borrows from probabilistic roadmap methods developed for robotic motion planning. We abstract the network as a large-scale directed graph, and use L-systems and statistical data to "grow" neurons that are morphologically indistinguishable from real neurons. We detect connections (synapses) between neurons using geometric proximity tests

Abstract:

The objective of this study was to investigate the effect of four levels of molasses on the chemical composition, in vitro digestibility, methane production and fatty acid profile of canola silages. A canola (Brassica napus var. Monty) crop was established in a small-scale agricultural farm and harvested 148 days after sowing. Four levels of molasses were tested with respect to the fresh weight (1.5 kg); these were 1% (CS-1), 2% (CS-2), 3% (CS-3) and 4% (CS-4) molasses, and 0% molasses (CS-0) was included as a control. A total of 45 microsilages were prepared using PVC pipes (4 in. in diameter × 20 cm in length), and the forage was compressed using a manual press. The effects of the control and treatments were tested using the general linear model Y_ij = μ + T_i + E_ij. Linolenic acid (C18:3n3), palmitic acid (C16:0) and linoleic acid methyl ester (C18:2n6c) accounted for 30%, 21% and 10.5% of total fatty acids, respectively; the fermentation parameters and in vitro methane production were not affected (P > 0.05) by the treatments; in vitro digestibility decreased significantly (P < 0.05) as the level of molasses increased. It was concluded that CS-4 improved the DM content by 9% and showed a high content of linolenic acid methyl ester. The gross energy of canola silages could favour the oleic acid methyl ester

Resumen:

María Zambrano intenta una hermenéutica de la cultura dirigida hacia todas las dimensiones de la construcción social del ser del hombre, en particular, del occidental. La filosofía será el hilo conductor, pero sin desvincularla de lo sagrado, sino haciendo énfasis en ello por medio de la interpretación de símbolos, mitos y poesía

Abstract:

María Zambrano attempts a hermeneutics of culture directed at every dimension of the social construction of the human being, particularly Western man. Philosophy serves as the connecting thread, without being detached from the sacred; rather, the sacred is emphasized through the interpretation of symbols, myths, and poetry

Resumen:

La obra del antropólogo francés tuvo un impacto importante sobre el pensador mexicano quien, en su libro El nuevo festín de Esopo, subraya sus coincidencias y divergencias, en torno al lenguaje, signo de signos; el mito y sus significados; y las metáforas, con sus analogías en música y poesía

Abstract:

The French anthropologist's work had a major impact on the Mexican thinker who, in his book El nuevo festín de Esopo, underscores their affinities and divergences regarding language, the sign of signs; myth and its meanings; and metaphor, with its analogies in music and poetry

Resumen:

Se busca establecer tanto una aproximación a lo que se denomina sociedad del conocimiento como a sus implicaciones éticas, a partir del análisis de sus contenidos, generación, relaciones y, en particular, su impacto sobre las prácticas científicas y éticas

Abstract:

In this paper, an approximation to what is called the knowledge society, as well as its ethical implications, is examined on the basis of an analysis of its contents, generation, and relations and, in particular, its impact on scientific and ethical practices

Abstract:

This paper uses a general equilibrium approach to analyze the consequences of import barriers and voluntary export restraints with respect to international prices and domestic prices under the assumption of perfect competition in all markets. It shows that the final outcome with respect to domestic prices depends on the way in which governments allocate the revenue arising from the restrictions. The equivalence result, obtained by using partial equilibrium, is only one of the possible outcomes

Abstract:

This paper develops three models for the determination of foreign exchange futures prices under fixed exchange rates and expectations of devaluation. These models show that certain characteristics of futures prices behavior that have been used as proof of inefficiency may be present even if the market is efficient. They also have several implications that are consistent with the observed behavior of Mexican peso futures

Abstract:

This paper proposes an approach that solves the robot localization problem by using a conditional state-transition Hidden Markov Model (HMM). Through the use of Self-Organizing Maps (SOMs), a Tolerant Observation Model (TOM) is built, while odometer-dependent transition probabilities are used for building an Odometer-Dependent Motion Model (ODMM). By using the Viterbi algorithm and establishing a trigger value when evaluating the state-transition updates, the presented approach can easily handle Position Tracking (PT), Global Localization (GL) and Robot Kidnapping (RK) with an ease of implementation difficult to achieve in most state-of-the-art localization algorithms. Also, an optimization is presented that allows the algorithm to run on standard microprocessors in real time, without the need for huge probability grid maps
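
A schematic Viterbi-style update with the trigger value mentioned above might look as follows; `trans_prob`, `obs_prob` and `TRIGGER` are illustrative stand-ins for the paper's ODMM, SOM-based TOM and tuned threshold, not its actual interfaces.

```python
import numpy as np

TRIGGER = 1e-3  # illustrative acceptance threshold for state-transition updates

def viterbi_step(delta, states, odometry, observation, trans_prob, obs_prob):
    """One recursion: delta[j] = max_i delta[i] * P(j | i, odometry) * P(z | j)."""
    new_delta = np.zeros_like(delta)
    for j, s_j in enumerate(states):
        scores = delta * np.array([trans_prob(s_i, s_j, odometry) for s_i in states])
        best = scores.max()
        if best >= TRIGGER:           # only propagate sufficiently likely transitions
            new_delta[j] = best * obs_prob(observation, s_j)
    if new_delta.sum() == 0.0:        # kidnapped-robot fallback: reinitialize
        new_delta[:] = 1.0 / len(states)
    return new_delta / new_delta.sum()
```

Thresholding the update is what lets one recursion cover all three regimes: position tracking (a peaked `delta`), global localization (a flat `delta`) and kidnapping (the fallback branch).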

Abstract:

We use a modification of the parameterization method to study invariant manifolds for difference equations. We establish existence, regularity, and smooth dependence on parameters and study several singular limits, even if the difference equations do not define a dynamical system. This method also leads to efficient algorithms that we present with their implementations. The manifolds that we consider include not only the classical strong stable and unstable manifolds but also manifolds associated with nonresonant spaces. When the difference equations are the Euler–Lagrange equations of a discrete variational problem, we have sharper results. Note that, if the Legendre condition fails, the Euler–Lagrange equations cannot be treated as a dynamical system. If the Legendre condition becomes singular, the dynamical system may be singular while the difference equation remains regular. We present numerical applications to several examples in the physics literature: the Frenkel–Kontorova model with long-range interactions and the Heisenberg model of spin chains with a perturbation. We also present extensions to finitely differentiable difference equations
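
For orientation, the method replaces the search for an invariant manifold by a functional equation for its embedding; the notation below is generic, not taken from the paper.

```latex
% For a map F, one seeks an embedding K and inner dynamics R with
F \circ K = K \circ R .
% For a second-order difference equation E(x_{n-1}, x_n, x_{n+1}) = 0
% (e.g., a discrete Euler--Lagrange equation), a one-dimensional stable
% manifold with multiplier \lambda is sought as a solution of
E\bigl(K(\lambda^{-1}\theta),\, K(\theta),\, K(\lambda\theta)\bigr) = 0 .
```

The second form makes sense even when the equation cannot be rewritten as a dynamical system, which is the situation highlighted above when the Legendre condition fails.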

Abstract:

This paper aims to show that sustainable behavior by firms may be impaired by regulatory restrictions. We challenge the assumption that regulation aimed at curbing greenhouse gas (GHG) emissions, in the form of a target to meet the country's GHG emissions commitments, will promote sustainable corporations. We argue that, in fact, such regulation may impair sustainability practices because it creates unintended consequences. This paper tackles the efficiency of the chosen institutional framework through the analytical lenses of fit, scale, and interplay; we then use a system dynamics approach to represent how regulation in the arenas of energy efficiency and GHG emissions reduction may hold back competitive business outcomes and corporate sustainability schemes. We exemplify and simulate a single regulation scheme, a clean energy target for firms, and find that under such a scheme the system is dominated by negative feedback processes, yielding poorer outcomes than firms would achieve were they not subject to the restrictions imposed by the regulation

Abstract:

This study uses empirical information to demonstrate the analysis of a corporate sustainability model and presents five leading Mexican companies as illustrative examples of sustainable, long-term firms whose strategic plans incorporate three views of sustainability: market-industry, resource-based, and institutional-based. By considering all three domains, companies better position themselves to adapt to the restrictions imposed by the economic, social, and environmental systems. Competitive success requires a constant awareness of the conditions under which the company may lose or generate value, and a company's competitiveness reflects its long-term performance and relationships within the industry and with competitors. Sustainable companies demonstrate successful long-term performance amid the restrictions imposed by economic, social, and environmental systems by developing a strategy that sustainably generates and captures value into the future. Sustainable practices are central to a company’s business model and survival because a strategy of targeted, enduring actions affords competitive advantages

Abstract:

In this note we analyze efficiency improvements over the Gaussian maximum likelihood (ML) estimator for frequency-domain minimum distance (MD) estimation of causal and invertible autoregressive moving average (ARMA) models. The analysis complements Velasco and Lobato (2017), where optimal MD estimation, which employs information in higher-order moments, is studied for the general, possibly noncausal or noninvertible, case. We consider MD estimation that combines in two manners the information contained in second, third, and fourth moments. We show that for both MD estimators efficiency improvements over the Gaussian ML estimator occur when the distribution of the innovations is platykurtic. In addition, we show that asymmetry alone is not associated with efficiency improvements

Abstract:

This article compares the asymptotic power properties of the Wald, the Lagrange Multiplier and the Likelihood Ratio test for fractional unit roots. The paper shows that there is an asymptotic inequality between the three tests that holds under fixed alternatives

Abstract:

In this article we introduce efficient Wald tests for testing the null hypothesis of the unit root against the alternative of the fractional unit root. In a local alternative framework, the proposed tests are locally asymptotically equivalent to the optimal Robinson Lagrange multiplier tests. Our results contrast with the tests for fractional unit roots introduced by Dolado, Gonzalo, and Mayoral, which are inefficient. In the presence of short-range serial correlation, we propose a simple and efficient two-step test that avoids the estimation of a nonlinear regression model. In addition, the first-order asymptotic properties of the proposed tests are not affected by the preestimation of short or long memory parameters

Abstract:

This article analyzes the fractional Dickey-Fuller (FDF) test for unit roots recently introduced by Dolado, Gonzalo and Mayoral (2002, Econometrica 70, 1963–2006) within a more general setup. These authors motivate their test with a particular analogy with the Dickey-Fuller test, whereas we interpret the FDF test as a class of tests indexed by an auxiliary parameter, which can be chosen to maximize the power of the test. Within this framework, we investigate optimality aspects of the FDF test and show that the version of the test proposed by these authors is not optimal. For the white noise case, we derive simple optimal FDF tests based on consistent estimators of the true degree of integration. For the serial correlation case, optimal augmented FDF (AFDF) tests are difficult to implement since they depend on the short-term component. Hence, we propose a feasible procedure that automatically optimizes a prewhitened version of the AFDF test and avoids this problem
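
Schematically, and hedging on the exact specification used in the article, the FDF testing regression has the form

```latex
\Delta y_t = \phi\,\Delta^{d} y_{t-1} + \varepsilon_t ,\qquad
H_0:\ \phi = 0\ \ (y_t \sim I(1))
\quad\text{vs.}\quad
H_1:\ \phi < 0\ \ (y_t \sim I(d)) ,
```

with the auxiliary parameter d indexing the class of tests; the optimality analysis concerns how d should be chosen, for instance via a consistent estimator of the true degree of integration.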

Abstract:

This paper considers testing for normality for correlated data. The proposed test procedure employs the skewness-kurtosis test statistic, but studentized by standard error estimators that are consistent under serial dependence of the observations. The standard error estimators are sample versions of the asymptotic quantities that do not incorporate any downweighting, and, hence, no smoothing parameter is needed. Therefore, the main feature of our proposed test is its simplicity, because it does not require the selection of any user-chosen parameter such as a smoothing number or the order of an approximating model
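
A plausible rendering of the statistic, hedged because the exact normalizations are in the paper: with sample autocovariances γ̂(j), the skewness and excess-kurtosis moments are studentized by unweighted sums of powered autocovariances,

```latex
G = \frac{n\,\hat{\mu}_3^{\,2}}{6\sum_{|j|<n}\hat{\gamma}(j)^3}
  + \frac{n\,\bigl(\hat{\mu}_4 - 3\hat{\gamma}(0)^2\bigr)^2}{24\sum_{|j|<n}\hat{\gamma}(j)^4}
  \;\xrightarrow{\;d\;}\; \chi^2_2 \quad\text{under the Gaussian null},
```

and the absence of kernel weights or truncation in the denominators is precisely the "no smoothing parameter" feature the abstract emphasizes.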

Abstract:

This article considers consistent testing of the null hypothesis that the conditional mean of an economic time series is linear in past values. Two specific tests are discussed, the Cramér-von Mises and the Kolmogorov-Smirnov tests. The particular feature of the proposed tests is that the bootstrap is used to estimate the nonstandard asymptotic distributions of the test statistics considered. The tests are justified theoretically by asymptotics, and their finite-sample behaviors are studied by means of Monte Carlo experiments. The tests are applied to five U.S. monthly series, and evidence of nonlinearity is found for the first difference of the logarithm of personal income and for the first difference of the unemployment rate. No evidence of nonlinearity is found for the first difference of the logarithm of the U.S. dollar/Japanese yen exchange rate, for the first difference of the 3-month T-bill interest rate, and for the first difference of the logarithm of the M2 money stock. Contrary to typically used tests, the proposed testing procedures are robust to the presence of conditional heteroscedasticity. This may explain the results for the exchange rate and the interest rate

Abstract:

The problem addressed in this paper is to test the null hypothesis that a time series process is uncorrelated up to lag K in the presence of statistical dependence. We propose an extension of the Box-Pierce Q-test that is asymptotically distributed as chi-square when the null is true for a very general class of dependent processes that includes non-martingale difference sequences. The test is based on a consistent estimator of the asymptotic covariance matrix of the sample autocorrelations under the null. The finite sample performance of this extension is investigated in a Monte Carlo study
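
In symbols, writing ρ̂_K for the vector of the first K sample autocorrelations, the extension replaces the identity weighting of the classical statistic by a consistent covariance estimate:

```latex
Q_K = n \sum_{k=1}^{K} \hat{\rho}(k)^2
\qquad\longrightarrow\qquad
\tilde{Q}_K = n\,\hat{\rho}_K'\,\hat{V}^{-1}\hat{\rho}_K
\;\xrightarrow{\;d\;}\; \chi^2_K \quad\text{under } H_0 ,
```

where V̂ consistently estimates the asymptotic covariance matrix of the sample autocorrelations under the null; taking V̂ = I_K recovers the Box-Pierce statistic, which is valid only under independence-type conditions.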

Abstract:

This article investigates the finite-sample performance of a modified Box-Pierce Q statistic (Q*) for testing that financial time series are uncorrelated without assuming statistical independence. The finite-sample rejection probabilities of the Q* test under the null and its power are examined in experiments using time series generated by an MA(1) process where the errors are generated by a GARCH(1,1) model and by a long memory stochastic volatility model. The tests are applied to daily currency returns

Abstract:

An analysis is presented of a new testing procedure for the null hypothesis that a stochastic process is uncorrelated when the process is possibly dependent. Unlike with existing procedures, the user does not need to choose any arbitrary number to implement the proposed test. The asymptotic null distribution of the proposed test statistic is not standard, but it is tabulated by means of simulations. The test is compared with two alternative test procedures that require selection of user-chosen numbers on the basis of asymptotic local power and finite sample behavior. Although the asymptotic local power of the proposed test is lower than those corresponding to the alternative tests, in a Monte Carlo study I show that in small samples the test typically better controls the type I error and that the loss of power is not substantial

Abstract:

This article examines consistent estimation of the long-memory parameters of stock-market trading volume and volatility. The analysis is carried out in the frequency domain by tapering the data instead of detrending them. The main theoretical contribution of the article is to prove a central limit theorem for a multivariate two-step estimator of the memory parameters of a nonstationary vector process. Using robust semiparametric procedures, the long-memory properties of trading volume for the 30 stocks in the Dow Jones Industrial Average index are analyzed. Two empirical results are found. First, there is strong evidence that stock-market trading volume exhibits long memory. Second, although it is found that volatility and volume exhibit the same degree of long memory for most of the stocks, there is no evidence that both processes share the same long-memory component.

Abstract:

This paper analyzes a two-step estimator of the long memory parameters of a vector process. The objective function considered is a semiparametric version of the multivariate Gaussian likelihood function in the frequency domain. In our context, semiparametric refers to the fact that only periodogram ordinates evaluated in a degenerating neighborhood of zero frequency are employed in the estimation procedure. Asymptotic normality is established under mild conditions that do not include Gaussianity. Furthermore, the simplicity of the form of the covariance matrix of the estimates facilitates statistical inference. We include an application of these estimates to exchange rate data

Abstract:

There is frequently interest in testing that a scalar or vector time series is I(0), possibly after first-differencing or other detrending, while the I(0) assumption is also taken for granted in autocorrelation-consistent variance estimation. We propose a test for I(0) against fractional alternatives. The test is nonparametric, and indeed makes no assumptions on spectral behaviour away from zero frequency. It seems likely to have good efficiency against fractional alternatives, relative to other nonparametric tests. The test is given large sample justification, subjected to a Monte Carlo analysis of finite sample behaviour, and applied to various empirical data series

Abstract:

We test for the presence of long memory in daily stock returns and their squares using a robust semiparametric procedure of Lobato and Robinson. Spurious results can be produced by nonstationarity and aggregation. We address these problems by analyzing subperiods of returns and using individual stocks. The test results show no evidence of long memory in the returns. By contrast, there is strong evidence in the squared returns

Abstract:

Several aspects of inference with long memory series in a multivariate framework are examined. The main result of this paper is to prove the consistency of the averaged cross-periodogram evaluated in a degenerating neighbourhood of zero frequency. We also illustrate several applications of that result and consider some specification issues

Resumen:

Diversos artículos han encontrado evidencia de memoria larga en los cuadrados de los rendimientos de varias series financieras. Esto ha sido interpretado como indicativo de que el proceso de volatilidad de estos rendimientos es persistente. En este artículo analizamos cómo esa evidencia puede ser explicada por la presencia de un componente estacional de memoria larga en los rendimientos. Además analizamos estimadores semiparamétricos de modelos de memoria larga en el dominio de la frecuencia y aplicamos el estimador de pseudo máxima verosimilitud de Robinson (1995a) al caso de memoria larga estacional. Por último, estas técnicas son empleadas en la serie de tipos de cambio libra-marco

Abstract:

Several studies have reported evidence of long memory in the squared returns of financial series. This evidence has been usually interpreted as an indicator of persistence in the volatility process of the returns. In this study we examine how that evidence can be explained by a seasonal long memory component in the returns. Also, we review semiparametric estimates of long memory models in the frequency domain and extend Robinson's (1995a) pseudo maximum likelihood approach to the seasonal long memory case. Finally, using these tools we analyze the British pound-Deutsche mark exchange rate series

Abstract:

This paper discusses estimates of the parameter H ∈ (1/2, 1) which governs the shape of the spectral density near zero frequency of a long memory time series. The estimates are semiparametric in the sense that the spectral density is parameterized only within a neighborhood of zero frequency. The estimates are based on averages of the periodogram over a band consisting of m equally-spaced frequencies which decays slowly to zero as sample size increases. Robinson (1994a) proposed such an estimate of H which is consistent under very mild conditions. We describe the limiting distributional behavior of the estimate and also provide Monte Carlo information on its finite-sample distribution. We also give an expression for the asymptotic mean squared error of the estimate. In addition to depending on the bandwidth number m, the estimate depends on an additional user-chosen number q, but we show that for H ∈ (1/2, 3/4) there exists an optimal q for each H, and we tabulate this
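
A sketch of the construction, with notation chosen here for illustration: given the periodogram I(λ) at Fourier frequencies λ_j = 2πj/n, the averaged periodogram and the implied estimate of H are

```latex
f(\lambda) \sim C\,\lambda^{1-2H}\ (\lambda \to 0^{+}),\qquad
\hat{F}(\lambda) = \frac{2\pi}{n} \sum_{0 < \lambda_j \le \lambda} I(\lambda_j),\qquad
\hat{H} = 1 - \frac{\log\bigl(\hat{F}(q\lambda_m)/\hat{F}(\lambda_m)\bigr)}{2\log q},
```

where m is the bandwidth number and q ∈ (0, 1) the additional user-chosen number referred to above; the formula follows because F(λ) ≈ Cλ^{2-2H}/(2-2H) near zero, so the ratio F(qλ)/F(λ) identifies 2 - 2H.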

Resumen:

En este artículo defiendo una visión cognitivista moderada de las creencias religiosas. Las creencias religiosas se interpretan como "creencias cosmovisivas", que explico como indispensables para nuestra práctica cotidiana y científica; sin embargo, mi lectura es distinta de las lecturas no cognitivistas de la "creencia cosmovisiva" que aparecen ocasionalmente en la literatura

Abstract:

In this paper, I defend a moderately cognitive account of religious beliefs. Religious beliefs are interpreted as "worldview beliefs", which I explicate as being indispensable to our everyday and scientific practice; my reading is nonetheless distinct from non-cognitivist readings of "worldview belief" which occasionally appear in the literature

Abstract:

We develop a general theory of implicit generating forms for volume-preserving diffeomorphisms on a manifold. Our results generalize the classical formulas for generating functions of symplectic twist maps and examples of Carroll for volume-preserving maps on Rn

Abstract:

We study exact, volume-preserving diffeomorphisms that have heteroclinic connections between a pair of normally hyperbolic invariant manifolds. We develop a general theory of lobes, showing that the lobe volume is given by an integral of a generating form over the primary intersection, a subset of the heteroclinic orbits. Our definition reproduces the classical action formula in the planar, twist map case. For perturbations from a heteroclinic connection, the lobe volume is shown to reduce, to lowest order, to a suitable integral of a Melnikov function

Abstract:

We study perturbations of diffeomorphisms that have a saddle connection between a pair of normally hyperbolic invariant manifolds. We develop a first order deformation calculus for invariant manifolds and show that a generalized Melnikov function or Melnikov displacement can be written in a canonical way. This function is defined to be a section of the normal bundle of the saddle connection. We show how our definition reproduces the classical methods of Poincaré and Melnikov and specializes to methods previously used for exact symplectic and volume-preserving maps. We use the method to detect the transverse intersection of stable and unstable manifolds and relate this intersection to the set of zeros of the Melnikov displacement

Abstract:

We construct a family of integrable volume-preserving maps in R3 with a two-dimensional heteroclinic connection of spherical shape between two fixed points of saddle-focus type. In other contexts, such structures are called Hill’s spherical vortices or spheromaks. We study the splitting of the separatrix under volume-preserving perturbations using a discrete version of the Melnikov method. First, we establish several properties under general perturbations. For instance, we bound the topological complexity of the primary heteroclinic set in terms of the degree of some polynomial perturbations. We also give a sufficient condition for the splitting of the separatrix under some entire perturbations. A broad range of polynomial perturbations verify this sufficient condition. Finally, we describe the shape and bifurcations of the primary heteroclinic set for a specific perturbation

Abstract:

We study a two-parameter family of standard maps: the so-called two-harmonic family. In particular, we study the areas of lobes formed by the stable and unstable manifolds. Variational methods are used to find heteroclinic orbits and their action. A specific pair of heteroclinic orbits is used to define a difference in action function and to study bifurcations in the stable and unstable manifolds. Using this idea, two phenomena are studied: the change of orientation of lobes and tangential intersections of stable and unstable manifolds

Abstract:

We show how the shadowing property can be used in connection to rotation sets. We review the concept of periodic chains and the shadowing rotation property, and study a class of diffeomorphisms with invariant sets that have such property. In particular, we consider invariant sets that arise from homoclinic and heteroclinic connections for twist maps in higher dimensions. As a consequence, we can show the existence of a family of twist maps, each one with an open set of rotation vectors that are realized by points that are close to a fixed point

Abstract:

We study the relation between the Jacobian conjecture and the so-called jolt map representation. We define jolt maps as any map that is symplectic-conjugate to a shear map. In particular, we study a family of homogeneous-symplectic maps and conjecture that all homogeneous-symplectic maps are jolt maps. We prove the result in the plane

Abstract:

We develop a Melnikov method for volume-preserving maps that have normally hyperbolic invariant sets with codimension-one invariant manifolds. The Melnikov function is shown to be related to the flux of the perturbation through the unperturbed invariant surface. As an example, we compute the Melnikov function for a perturbation of a three-dimensional map that has a heteroclinic connection between a pair of invariant circles. The intersection curves of the manifolds are shown to undergo bifurcations in homology

Abstract:

Under take-it-or-leave-it offers, dynamic equilibria in the discrete time random matching model of money are a "translation" of dynamic equilibria in the standard overlapping generations model. This formalizes earlier conjectures about the equivalence of dynamic behavior in the two models and implies the indeterminacy of dynamic equilibria in the random matching model. As in the overlapping generations model, the indeterminacy disappears if an arbitrarily small utility to holding money is introduced. We introduce a different pricing mechanism, one that puts into sharp focus that agents are forward-looking when they interact

Abstract:

Explicit formulae are given for the saddle connection of an integrable family of standard maps studied by Y. Suris [Funct. Anal. Appl. 23 (1989), 74–76]. When the map is perturbed this connection is destroyed, and we use a discrete version of Melnikov's method to give an explicit formula for the first order approximation of the area of the lobes of the resultant turnstile. These results are compared with computations of the lobe area.

Abstract:

We study families of volume preserving diffeomorphisms in R^3 that have a pair of hyperbolic fixed points with intersecting codimension one stable and unstable manifolds. Our goal is to elucidate the topology of the intersections and how it changes with the parameters of the system. We show that the "primary intersection" of the stable and unstable manifolds is generically a neat submanifold of a "fundamental domain." We compute the intersections perturbatively using a codimension one Melnikov function. Numerical experiments show various bifurcations in the homotopy class of the primary intersections.

Abstract:

In the private values single object auction model, we construct a satisfactory mechanism: a symmetric, dominant strategy incentive compatible, and budget-balanced mechanism. The mechanism converges to efficiency at an exponential rate. It allocates the object to the highest valued agent with more than 99% probability provided there are at least 14 agents. It is also ex-post individually rational. We show that our mechanism is optimal in a restricted class of satisfactory ranking mechanisms. Since achieving efficiency through a dominant strategy incentive compatible and budget-balanced mechanism is impossible in this model, our results illustrate the limits of this impossibility

Resumen:

En este documento se presenta una propuesta para un levantamiento estratégico de encuestas electorales en los distintos estados de los Estados Unidos de América (EUA). Dicha propuesta está basada en un modelo probabilístico para estimar la probabilidad de ganar las elecciones presidenciales en los EUA

Abstract:

This paper presents a proposal for the strategic deployment of electoral surveys across the different states of the United States (US). The proposal is based on a probabilistic model for estimating the probability of winning the presidential elections in the US
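
To make the idea concrete, here is a toy Monte Carlo reading of such a state-level probabilistic model; the states, win probabilities and vote counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey-based win probabilities and electoral votes per state
states = {
    "Florida": (29, 0.48),
    "Pennsylvania": (20, 0.55),
    "Ohio": (18, 0.45),
    # ... the remaining states would enter in the same way
}
safe_votes = 233  # votes assumed already decided (illustrative)

def win_probability(n_sims=100_000):
    """Simulate state outcomes and estimate P(total electoral votes >= 270)."""
    votes = np.full(n_sims, safe_votes)
    for ev, p in states.values():
        votes += ev * (rng.random(n_sims) < p)
    return (votes >= 270).mean()

print(f"Estimated P(win) = {win_probability():.3f}")
```

A strategic survey plan can then be read off such a model, for example by directing interviews to the states whose win probabilities move the overall estimate the most.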

Abstract:

Self-weighted two-stage sampling designs are popular in practice as they simplify fieldwork. It is common in practice to compute variance estimates only from the first sampling stage, neglecting the second stage. This omission may induce a bias in variance estimation, especially in situations where there is low variability between clusters or when sampling fractions are non-negligible. We propose a design-consistent jackknife variance estimator that takes account of all stages via deletion of clusters and observations within clusters. The proposed jackknife can be used for a wide class of point estimators. It does not need joint-inclusion probabilities and naturally includes finite population corrections. A simulation study shows that the proposed estimator can be more accurate than standard jackknives (Rao, Wu and Yue, 1992) for self-weighted two-stage sampling designs
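
For context, below is a minimal Python sketch of the standard delete-one-cluster jackknife that serves as the point of comparison; the estimator proposed in the paper additionally deletes observations within clusters and carries finite population corrections, both omitted here.

```python
import numpy as np

def jackknife_cluster_variance(clusters, estimator):
    """Delete-one-cluster jackknife for a smooth estimator of pooled data.

    clusters: list of per-cluster data arrays; estimator: callable on an array."""
    n = len(clusters)
    theta_full = estimator(np.concatenate(clusters))
    replicates = np.array([
        estimator(np.concatenate(clusters[:i] + clusters[i + 1:]))
        for i in range(n)
    ])
    return (n - 1) / n * np.sum((replicates - theta_full) ** 2)

# Toy self-weighted two-stage sample: 50 clusters of 10 observations each
rng = np.random.default_rng(1)
toy = [rng.normal(loc=rng.normal(), size=10) for _ in range(50)]
print(jackknife_cluster_variance(toy, np.mean))
```

Deleting only whole clusters treats each cluster total as fixed; the paper's proposal also deletes within-cluster observations so that second-stage variability is reflected in the variance estimate.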

Abstract:

We propose a new replicate variance estimator suitable for differentiable functions of estimated totals. The proposed variance estimator is defined for any unequal-probability without-replacement sampling design, it naturally includes finite population corrections and it allows two-stage sampling. We show its design-consistency and its close relationship with linearization variance estimators. When estimating a total, the proposed estimator reduces to the Horvitz–Thompson variance estimator. Simulations suggest that the proposed variance estimator is more stable than its replicate competitors

Résumé:

Nous proposons un nouvel estimateur de variance rééchantillonné adapté à des fonctions dérivables de totaux estimés. L’estimateur de variance proposé est défini pour tous les plans d’échantillonnage à probabilités inégales sans remise. Il comprend naturellement les corrections de population finie et il peut s’appliquer à l’échantillonnage à deux degrés. Nous montrons sa convergence asymptotique sous le plan d’échantillonnage et sa relation avec les estimateurs linéarisés de variance. Lors de l’estimation d’un total, l’estimateur proposé se réduit à l’estimateur de variance de Horvitz-Thompson. Des simulations suggèrent que l’estimateur de variance proposé est plus stable que ses concurrents

Resumen:

En una época caracterizada por el derrumbamiento de los referentes tradicionales, el debilitamiento de las religiones institucionalizadas y la aniquilación de los sentidos fuertes de los conceptos de verdad y bien, surgen los llamados movimientos de la Nueva Era como alternativa para la necesidad de sostén y sentido del hombre contemporáneo. La reivindicación del pensamiento crítico se torna urgente como principal salvaguarda individual y colectiva frente a lo que pudiera tratarse, en algunos casos, de fenómenos de mercado, consumo y charlatanería espiritual

Abstract:

In an era characterized by the collapse of traditional points of reference, the weakening of institutionalized religions and the annihilation of the strong senses of the concepts of truth and good, the so-called New Age movements emerge as an alternative response to contemporary man's need for support and meaning. Reclaiming critical thinking becomes urgent as the main individual and collective safeguard against what may, in some cases, be phenomena of marketing, consumerism, and spiritual quackery

Abstract:

The search for educational resources is a constant activity of college students during their learning process. Search engines help to solve this problem, but they are normally developed for commercial purposes. The personalization of educational resources, as reported in the literature, seeks to provide students with materials that fulfill their academic needs. The objective of this study is to deploy a web platform with a search engine in order to explore the activities users carry out while browsing references to the educational sources used in their learning process. This engine is a tool that makes it possible to follow users' behavior and to build a personalization proposal. A first working prototype was designed and developed for searching the educational materials of a college subject called "Applied programming". The platform was then evaluated through a usability survey applied to 80 users, which included 8 specific tasks, followed by a satisfaction survey. The evaluation captured users' opinions while performing the activities of the web system: 70% considered the platform simple and easy to use, and 63% mentioned they were able to perform the tasks. In their feedback, users stressed the importance of continuing to develop this web platform, giving comments for improving its execution, content and interface

Resumen:

Este trabajo presenta un análisis de la reciente reforma al marco jurídico de la industria petrolera en México, desde una perspectiva distinta: la organización corporativa y su relación con el desempeño de cualquier empresa, sea estatal o privada. En este sentido, el fin último de nuestro esfuerzo es generar nuevas y distintas dudas en el lector acerca de esta industria y su organización. Estamos seguros de que, detrás de cualquiera de esas preguntas, existen soluciones imaginativas a los problemas que aquejan a nuestra industria petrolera. El trabajo está dividido en cinco capítulos: el primero aborda el tema de la organización corporativa en general; el segundo y el tercero describen dos casos de reestructuración en países con diversas culturas, realidades políticas y circunstancias históricas; el cuarto desarrolla el tema de la reciente reforma petrolera en México y evalúa el modelo de organización que deriva de ella; el quinto presenta una breve conclusión

Abstract:

This chapter presents an analysis of the recent reform in the legal framework of the Mexican oil industry, but from a different perspective: that of corporate organization and its relation with the performance of any corporation, be it government owned or private. In this sense, the ultimate purpose of this chapter is to generate new and distinct questions in the reader regarding the industry and its organization. We are certain that, behind these questions, there are imaginative solutions to the problems suffered by the Mexican energy industry. This chapter is divided into five parts: the first addresses the issue of corporate organization in general; the second and third describe two cases of restructuring in countries with distinct cultural, political and historical features and circumstances; the fourth develops the topic of the recent oil reform in Mexico and evaluates the organizational model that derives from it; the fifth contains a brief conclusion

Resumen:

El autor muestra la inseparable y estrechísima relación entre mundo, pensamiento y lenguaje, y apuesta a que el hombre comprenda que dicha relación es, ante todo, contemplación esperanzada, pero activa, y no afán de dominio y explotación

Abstract:

The author reveals the inseparable and extremely close relationship between world, thought, and language, and wagers that man will come to understand that such a relationship is, above all, hopeful yet active contemplation, and not a drive for domination and exploitation

Resumen:

El objeto de este estudio es analizar el complejo diálogo que se produjo entre la poesía de posguerra española de uno y otro lado del Atlántico. Una reflexión crítica sobre la posible comunicación que se generó por medio de la poesía del exilio español y la poesía peninsular de posguerra nos ofrece una muestra de una afinidad mayor a la que supone la crítica. De esta forma, se intenta matizar el tópico de la incomunicación de las dos Españas; y se formulan los posibles itinerarios, rutas e intercambios poéticos que se produjeron entre la revista del exilio español en México titulada Las Españas (1946-1956) y distintas antologías publicadas en la península. En este orden de ideas, se estudian las poéticas de escritores como Gabriel Celaya, Victoriano Crémer, León Felipe, Juan Rejano, entre otros

Abstract:

The purpose of this study is to analyze the complex dialogue that took place in Spanish postwar poetry on either side of the Atlantic. A critical reflection on the possible communication generated through the poetry of the Spanish exile and peninsular postwar poetry reveals a greater affinity than criticism has assumed. In this way, we attempt to nuance the commonplace of the lack of communication between the two Spains, and we trace the possible itineraries, routes and poetic exchanges that took place between the magazine of the Spanish exile in Mexico entitled Las Españas (1946-1956) and various anthologies published in the Peninsula. Along these lines, we study the poetics of writers such as Gabriel Celaya, Victoriano Crémer, León Felipe and Juan Rejano, among others

Resumen:

La condición de exiliado entraña la vivencia de un viaje -aun interior-, pero ¿cómo esta condición del exilio modifica la experiencia del viajero que lo sufre? Si bien es cierto que exiliarse supone el viaje como elemento intrínseco del acto mismo de abandonar la patria, la relación entre exilio y viaje es más complicada de lo que parecería y no en todos los casos sigue siendo el mismo patrón. Analizamos los diarios de viaje escritos a bordo de barcos como el Sinaia, el Ipanema y el Mexique para contrastarlos con la poética de dos autores exiliados: Pedro Garfias y Juan Rejano. Ambos ejemplifican la noción problemática que se establece entre viaje y exilio en relación con la posibilidad de una poética oblicua del viaje, y sus realizaciones en torno a metáforas como la memoria, el mar o el silencio

Abstract:

The condition of exile involves the experience of a journey, even an interior one, but how does this condition modify the experience of the traveler who undergoes it? While it is true that going into exile implies the journey as an intrinsic element of the very act of leaving one's homeland, the relationship between exile and journey is more complicated than it would seem and does not follow the same pattern in every case. We analyze the travel diaries written aboard ships such as the Sinaia, the Ipanema and the Mexique, and contrast them with the poetics of two exiled authors: Pedro Garfias and Juan Rejano. Both exemplify the problematic relation established between journey and exile with regard to the possibility of an oblique poetics of travel, and its realizations around such metaphors as memory, the sea or silence

Abstract:

University students participating in a course face the challenge of choosing educational resources when the suggested ones are not suitable. Educational trends encourage the use and development of technologies to improve learning processes. New technological proposals suggest the development of adaptive and/or recommendation (personalization) capabilities according to the user's specific needs. The proposed methodology focuses on the development of a web repository for exploring the recommendation of educational resources and evaluating its implementation. Two algorithms were implemented based on the publication of educational resource references. The first relates to the reference search engine, while the second focuses on analyzing user interactions with the references using an unsupervised model. The study concludes with an examination of students' perceptions as the web repository progresses. The web prototype was used for activities designed throughout an academic cycle with 115 students from the institution. Students perceived progress in retrieving references on the web platform as they advanced through the academic cycle, providing feedback on performance, content, and visualization improvements
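
A schematic pairing of the two algorithms described above, with invented data throughout: a TF-IDF search over reference metadata and an unsupervised (k-means) grouping of user-interaction counts that could drive per-group recommendations.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (1) Reference search engine over published metadata (illustrative corpus)
references = [
    "Introduction to applied programming in Python",
    "Data structures and algorithms course notes",
    "Numerical methods reference for engineers",
]
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(references)

def search(query, top_k=2):
    """Rank references by cosine similarity to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), index).ravel()
    return sorted(zip(scores, references), reverse=True)[:top_k]

# (2) Unsupervised model over user interactions (rows: users, cols: clicks)
interactions = [[5, 1, 0], [4, 2, 0], [0, 1, 6]]
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(interactions)

print(search("applied programming"))
print(groups)  # cluster labels per user, usable for group-level personalization
```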

Abstract:

Women's participation in the labour market in Central America, Panama, and the Dominican Republic (CAPADOM) is low by international standards. Increasing their participation is a goal of many policymakers who want to improve women's access to quality employment. This study uses data from CAPADOM to assess whether gender equality in the law increases women's participation in the labour force and, if that is the case, the extent to which this boosts GDP per capita. To do so, the authors use a panel VAR model. The results show that CAPADOM could increase female labour participation rate by 6 percentage points (pp) and GDP per capita by 1 pp by introducing gender-related legal changes such as equal pay for equal work, paid parental leave, and allowing women to do all the same jobs as men

Abstract:

With remittances reaching historic highs in many Latin American countries, this paper evaluates the existence of Dutch disease in that region on a country-by-country basis. To do so, we employ heterogeneous panel data models with cross-sectional dependence to estimate the determinants of the real exchange rate and calculate the effect of net remittance flows in the region by country. In this context, various countries' future economic development must address this potential loss of competitiveness

Abstract:

In this paper, an event-oriented modeling methodology for agent-based smart transportation systems supported by the n-LNS Petri net formalism is proposed. The methodology allows modeling and incorporating capabilities of smart transportation systems as well as modeling smart transportation systems as an event-oriented system of systems. The proposed framework makes use of three hierarchical levels of abstraction to describe smart transportation systems: (i) the highest level models a transportation network; (ii) the middle level models agents (e.g., cars); and (iii) the lowest level models individual agent behaviors (e.g., driving behaviors). In doing so, the components of an agent-based smart transportation system and their concurrent interrelationships are formally defined, which enables the quantitative evaluation of smart transportation systems by performing discrete-event simulations

Abstract:

A frequent problem in artificial intelligence is the one associated with so-called supervised learning: the need to find an expression of a dependent variable as a function of several independent ones. There are several algorithms that allow us to find a solution to bivariate problems. However, the true challenge arises when the number of independent variables is large. Relatively new tools have been developed to tackle this kind of problem. Thus, multi-layer perceptron networks (MLPs) may be seen as multivariate approximation algorithms. However, a commonly cited disadvantage of MLPs is that they remain a "black-box" kind of method: they do not yield an explicit closed expression for the solution. Rather, we are left with the need of expressing it via the architecture of the MLP and the values of the trained connections. In this paper we explore three methods that allow us to express the solution to multivariate problems in closed form: a) the Fast Ascent (FA), b) Levenberg-Marquardt (LM) and c) Powell's Dog-Leg (PDL) algorithms. These yield closed expressions when presented with multiple-independent-variable problems. In this paper we discuss and compare these three methods and their possible application to pattern recognition in mobile robot environments and artificial intelligence in general
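
As a concrete illustration of the closed-form idea using option (b), the sketch below fits an explicit multivariate ansatz by Levenberg-Marquardt via SciPy; the ansatz and data are invented for the example and are not from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))              # independent variables
y = 1.5 + 2.0 * X[:, 0] - 0.7 * X[:, 1] * X[:, 2]  # hidden target function

def model(c, X):
    """Closed-form ansatz: linear terms plus one interaction term."""
    return (c[0] + c[1] * X[:, 0] + c[2] * X[:, 1] + c[3] * X[:, 2]
            + c[4] * X[:, 1] * X[:, 2])

fit = least_squares(lambda c: model(c, X) - y, x0=np.zeros(5), method="lm")
print(np.round(fit.x, 3))  # recovered coefficients of the closed expression
```

Unlike a trained MLP, the result is the explicit expression itself: the five fitted coefficients of the ansatz.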

Abstract:

With the advent of high-throughput sequencing of immunoglobulin genes (Ig-Seq), the understanding of antibody repertoires and their dynamics among individuals and populations has become an exciting area of research. There is an increasing number of computational tools that aid in every step of immune repertoire characterization. However, since not all tools function identically, every pipeline has its unique rationale and capabilities, creating a rich blend of useful features that may appear intimidating to newcomer laboratories wishing to plunge into immune repertoire analysis to expand and improve their research; hence, each pipeline's strengths and differences may not be immediately evident. In this review we provide a practical and organized list of the current set of computational tools, focusing on their most attractive features and differences, in order to carry out the characterization of antibody repertoires, so that the reader can better decide on a strategic approach for the experimental design and on computational pathways for the analysis of immune repertoires

Abstract:

The authors propose a novel decentralised mixed algebraic and dynamic state observation method for multi-machine power systems with unknown inputs and equipped with phasor measurement units (PMUs). More specifically, they prove that for the third-order flux-decay model of a synchronous generator, the local PMU measurements provide enough information to reconstruct algebraically the load angle and the quadrature-axis internal voltage. Due to the algebraic structure, a high numerical efficiency is achieved, which makes the method applicable to large-scale power systems. Also, they prove that the relative shaft speed can be globally estimated by combining a classical immersion and invariance observer with the recently introduced dynamic regressor extension and mixing (DREM) parameter estimator. This adaptive observer ensures global convergence under weak excitation assumptions that are verified in applications. The proposed method requires neither the measurement of exogenous input signals, such as the field voltage and the mechanical torque, nor knowledge of the mechanical subsystem parameters
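
The following is a minimal, discrete-time sketch of the DREM idea for a generic linear regression, not the authors' power-system observer: the regressor is "extended" with a delayed copy and "mixed" through the adjugate so that each parameter can be estimated through its own scalar regression. All signals and gains here are invented for illustration:

    # DREM sketch for y_k = phi_k^T theta with two unknown parameters.
    import numpy as np

    theta = np.array([2.0, -1.0])        # true parameters (unknown in practice)
    theta_hat = np.zeros(2)              # estimates
    gamma = 1.0                          # adaptation gain
    phi_prev = y_prev = None

    for k in range(2000):
        phi = np.array([np.sin(0.1 * k), np.cos(0.07 * k)])   # regressor
        y = float(phi @ theta)                                # measurement
        if phi_prev is not None:
            Phi = np.array([phi, phi_prev])                   # extension: delayed copy
            Y = np.array([y, y_prev])
            Delta = Phi[0, 0] * Phi[1, 1] - Phi[0, 1] * Phi[1, 0]   # det(Phi)
            adj = np.array([[Phi[1, 1], -Phi[0, 1]],
                            [-Phi[1, 0], Phi[0, 0]]])          # adjugate of Phi
            z = adj @ Y                                        # mixing: z_i = Delta*theta_i
            # decoupled, normalized gradient update for each parameter
            theta_hat += gamma * Delta * (z - Delta * theta_hat) / (1.0 + Delta**2)
        phi_prev, y_prev = phi, y

    print(theta_hat)   # approaches [2.0, -1.0] while Delta stays persistently nonzero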

Abstract:

In 2013, the Chinese president, Xi Jinping, announced an ambitious new agenda for Chinese foreign policy: the Silk Road Economic Belt. Heralded as a gift from China to the world, his envisioned program aims to resurrect the historical Silk Road, which linked East and West by land and sea. The historical Silk Road was built to facilitate trade, but it grew to have a distinct wider political and cultural impact. In this article, we argue that this political and cultural impact is currently being threatened by what we call the 'Bamiyazation phenomenon'-an iconoclastic movement with transnational dimensions that particularly affects routes along the Silk Road. We contend that China should adopt a more holistic approach to the development of the twenty-first century Silk Road. To this end, we suggest that China should spearhead the defence and protection of the world's cultural heritage through implementing and promoting a specified 'crime against common cultural heritage' in order to counteract the effects of the Bamiyazation phenomenon and to support the development of China's new international identity as a power that, in Xi Jinping's own words, 'adopt[s] a right approach with some important principles'

Abstract:

SPEX Left LU is a software package for exactly solving unsymmetric sparse linear systems. As a component of the sparse exact (SPEX) software package, SPEX Left LU can be applied to any input matrix, A, whose entries are integral, rational, or decimal, and provides a solution to the system Ax=b which is either exact or accurate to user-specified precision. SPEX Left LU preorders the matrix A with a user-specified fill-reducing ordering and computes a left-looking LU factorization with the special property that each operation used to compute the L and U matrices is integral. Notable additional applications of this package include benchmarking the stability and accuracy of state-of-the-art linear solvers and determining whether matrices that appear singular in double precision are indeed singular. Computationally, this article evaluates the impact of several novel pivoting schemes in exact arithmetic, benchmarks the exact iterative solvers within LinBox, and benchmarks the accuracy of MATLAB sparse backslash. Most importantly, it is shown that SPEX Left LU outperforms the exact iterative solvers in run time on easy instances and in stability, as the iterative solvers fail on a sizeable subset of the tested (both easy and hard) instances. The SPEX Left LU package is written in ANSI C, comes with a MATLAB interface, and is distributed via GitHub, as a component of the SPEX software package, and as a component of SuiteSparse
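
To make the notion of an exact solution concrete, the toy sketch below solves a dense system over the rationals with Python's fractions module. This only illustrates what "exact" means; SPEX Left LU itself keeps every intermediate operation integral and exploits sparsity, which this sketch does not:

    # Exact Gaussian elimination over the rationals (dense toy, no roundoff).
    from fractions import Fraction

    def exact_solve(A, b):
        """A, b hold ints/Fractions; raises StopIteration if A is singular."""
        n = len(A)
        M = [[Fraction(x) for x in row] + [Fraction(bi)]
             for row, bi in zip(A, b)]
        for k in range(n):
            piv = next(i for i in range(k, n) if M[i][k] != 0)   # partial pivot
            M[k], M[piv] = M[piv], M[k]
            for i in range(k + 1, n):
                f = M[i][k] / M[k][k]
                for j in range(k, n + 1):
                    M[i][j] -= f * M[k][j]
        x = [Fraction(0)] * n
        for i in reversed(range(n)):
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

    print(exact_solve([[2, 1], [1, 3]], [3, 5]))   # [Fraction(4, 5), Fraction(7, 5)]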

Abstract:

The roundoff-error-free (REF) LU factorization, along with the REF forward and backward substitution algorithms, allows a rational system of linear equations to be solved exactly and efficiently. The REF LU factorization framework has two key properties: all operations are integral, and the size of each entry is bounded polynomially, a bound that rational-arithmetic Gaussian elimination achieves only via computationally expensive greatest common divisor operations. This paper develops a sparse version of REF LU, termed the Sparse Left-looking Integer-Preserving (SLIP) LU factorization, which exploits sparsity while maintaining the integrality of all operations. In addition, this paper derives a tighter polynomial bound on the size of entries in L and U and shows that the time complexity of SLIP LU is proportional to the cost of the arithmetic work performed. Last, SLIP LU is shown to significantly outperform a modern full-precision rational-arithmetic LU factorization approach on a set of real-world instances. In all, SLIP LU is a framework to efficiently and exactly solve sparse linear systems
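
A minimal sketch of the fraction-free idea, in the spirit of Bareiss elimination: all intermediate entries remain integers and every division is exact. The actual REF/SLIP LU algorithms are left-looking and sparse, so this dense toy only conveys the integrality property:

    # Fraction-free (Bareiss-style) elimination of an integer matrix.
    # Assumes nonzero leading pivots (no pivoting, for brevity).
    def bareiss(A):
        n = len(A)
        prev = 1                               # previous pivot, starts at 1
        for k in range(n - 1):
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    # integer division is exact by Sylvester's identity
                    A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
                A[i][k] = 0
            prev = A[k][k]
        return A                               # A[n-1][n-1] == det of the original

    M = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
    print(bareiss(M)[2][2])                    # determinant: -1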

Abstract:

Guanxi refers to a Chinese system of doing business on the basis of personal relationships, and it is representative of the way that business is done throughout much of the non-western world. In this paper we first evaluate guanxi from an ethical perspective, and then attempt to shed light on the sources of its economic advantages and disadvantages through the use of a simple mathematical model. Finally, we point out how eastern and western business practices may already be converging toward systems based on more complete models of trust to deal with the conditions of progress coupled with uncertainty that form our new economic reality

Abstract:

In this paper, we analyze the research productivity of faculty entrepreneurs at 15 research institutes using a novel database combining faculty characteristics, licensing information, and journal publication records. We address two related research questions. First, are faculty entrepreneurs more productive researchers (‘‘star scientists’’) compared to their colleagues? Second, does the productivity of faculty entrepreneurs change after they found a firm? We find that faculty entrepreneurs in general are more productive researchers than control groups. We use multiple performance criteria in our analysis: differences in mean publication rate, skewness of publication rate, and impact of publications (journal citation rate). These findings bring together previous work on star scientists by Zucker, Darby, and Brewer [Zucker, L. G., Darby M. R., & Brewer M. B. (1998). The American Economic Review, 88, 290–306.] and tacit knowledge among university entrepreneurs by Shane [Shane, S. (2002). Management Science, 48, 122–137.] and Lowe [Lowe, R. A. (2001). In G. Libecap (Ed.) Entrepreneurial Inputs and Outcomes. Amsterdam: JAI Press, Lowe, R.A. (2006). Journal of Technology Transfer, 31(4), 412–429]. Finally, we find that faculty entrepreneurs’ productivity not only is greater than their peers but also does not decrease following the formation of a firm

Abstract:

Runoff systems allow for a reversal of the first-round result: the most voted candidate in the first round may end up losing the election in the second. But do voters take advantage of this opportunity? Or does winning the first round increase the probability of winning the second? We investigate this question with data from presidential elections since 1945, as well as subnational elections in Latin America. Using a regression discontinuity design, we find that being the most voted candidate in the first round has a substantial positive effect on the probability of winning the second round in mayoral races (especially in Brazil), but in presidential and gubernatorial elections the effect is negative, though not statistically significant at conventional levels. The positive effect in municipal races is much stronger when the top-two placed candidates are ideologically close (and thus harder for voters to distinguish) but weakens considerably and becomes insignificant when the election is polarized. We attribute these differences to the disparate informational environments prevailing in local versus higher-level races
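
A hedged sketch of the regression discontinuity logic on hypothetical data (column names invented): local linear fits on each side of the zero first-round-margin cutoff, with the discontinuity at the cutoff as the estimate. Real applications would select the bandwidth with a data-driven procedure rather than fixing it:

    # RD estimate: jump in the runoff-win probability at a zero first-round margin.
    import numpy as np
    import statsmodels.api as sm

    def rd_estimate(df, run="first_round_margin", out="won_runoff", h=0.05):
        """df: pandas DataFrame with hypothetical columns; h: fixed bandwidth."""
        d = df[df[run].abs() <= h]                    # observations near the cutoff at 0
        above = (d[run] > 0).astype(float)            # first-round winner indicator
        X = sm.add_constant(np.column_stack([above, d[run], above * d[run]]))
        fit = sm.OLS(d[out], X).fit()
        return float(np.asarray(fit.params)[1])       # estimated jump at the cutoff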

Abstract:

We claim that the overall effect of district magnitude on female representation is ambiguous because district magnitude increases both (a) party magnitude, which promotes the election of women, and (b) the number of lists getting seats, which hampers it, as marginal lists are usually headed by men. For identification, we exploit the fact that the Argentine Chamber of Deputies and the Buenos Aires legislature elect half of their members every two years, and thus some districts have varying magnitudes in concurrent and midterm elections. We find a positive but weak effect of district magnitude on female representation, which can be decomposed into a positive effect driven by party magnitude and a negative one channeled through the number of lists getting seats. We find similar results in a sample of seven Latin American countries

Abstract:

How do elections and the economy affect authoritarian survival? Distinguishing among (a) nonelection periods in autocracies that do not hold competitive elections, (b) election periods in autocracies that hold regular elections, and (c) nonelection periods in such autocracies, I argue that bad economic performance makes authoritarian regimes especially likely to break down in election years, but the anticipation of competitive elections should dissuade citizens and elites from engaging in antiregime behavior in nonelection periods, bolstering short-term survival. Thus, compared to regimes that do not hold competitive elections, electoral autocracies should be more vulnerable to bad economic performance in election periods but more resilient to it in nonelection years. A study of 258 authoritarian regimes between 1948 and 2011 confirms these expectations. I also find that the effect is driven by competitive elections for the executive office, and elections-related breakdowns are more likely to result in democratization

Abstract:

How does district magnitude affect electoral outcomes? This paper addresses this question by exploiting a combination of two natural experiments in Argentina between 1985 and 2015. Argentine provinces elect half of their congressional delegation every two years, and thus districts with an odd number of representatives have varying magnitudes in different election years. Furthermore, whether a province elects more representatives in midterm or concurrent years was decided by lottery in 1983. The results indicate that district magnitude (a) increases electoral support for small parties, (b) increases the (effective) number of parties getting seats, and (c) reduces electoral disproportionality. The last two results are driven by the mechanical rather than the psychological effect of electoral rules

Resumen:

¿Cómo puede el oficialismo introducir la reelección ejecutiva cuando no tiene la posibilidad de imponer sus preferencias de manera unilateral? Interpretando una reforma constitucional como un juego de negociación entre un ejecutivo sin posibilidad de reelegirse y la oposición, en este artículo sostenemos que una reforma que incluya la reelección ejecutiva será más probable cuando (a) el partido de gobierno pueda cambiar la constitución unilateralmente o (b) la oposición sea pesimista sobre su desempeño electoral en el futuro; además, (c) este segundo efecto debería ser especialmente relevante cuando un único partido opositor pueda vetar una reforma constitucional, porque ello impide al ejecutivo adoptar una estrategia de "dividir para reinar." Para evaluar este argumento, examinamos la introducción de la reelección ejecutiva en las provincias argentinas entre 1983 y 2017. En línea con las expectativas, los resultados muestran que la probabilidad de iniciar una reforma institucional es máxima cuando el partido oficialista controla una supermayoría de bancas en la legislatura provincial, pero disminuye abruptamente cuando un partido opositor puede vetar una reforma, y este partido espera tener un buen desempeño en la próxima elección para gobernador

Abstract:

How do incumbents manage to relax term limits when they cannot impose their preferences unilaterally? Interpreting constitutional reforms as a bargaining game between a term-limited executive and the opposition, we argue that reforms involving term limits should be more likely when (a) the incumbent party can change the constitution unilaterally, or (b) the opposition is pessimistic about its future electoral prospects; moreover, (c) this second effect should be stronger when a single opposition party has veto power over a reform because this precludes the executive from playing a “divide-and-rule” strategy. We examine these claims with data from the Argentine provinces between 1983 and 2017. In line with expectations, the results show that the probability of initiating a reform is highest when the executive's party controls a supermajority of seats, but falls sharply when a single opposition party has veto power over a reform and this party expects to do well in the next executive election

Abstract:

Can subnational elections contribute to democratization? In autocracies that hold competitive elections at multiple levels of government, subnational executive offices provide opposition parties with access to resources, increase their visibility among voters, and let them gain experience in government. This allows opposition parties to use subnational executives as springboards from which to increase their electoral support in future races, and it predicts that their electoral support should follow a diffusion process; that is, a party's electoral performance in municipality m at time t should be better if that party has governed some of m's neighbors since t - 1. I evaluate this claim with data from municipal-level elections in Mexico between 1984 and 2000. Consistent with the fact that the Partido Accion Nacional (National Action Party, PAN) followed an explicit strategy of party-building from below but the Partido de la Revolucion Democratica (Party of the Democratic Revolution, PRD) did not, the results indicate that diffusion effects contributed to the growth of the former but not the latter

Abstract:

How do electoral opportunities affect politicians' career strategies? Do politicians behave strategically in response to the opportunities provided by the electoral calendar? We argue that in a legislature that combines nonstatic ambition with a staggered electoral calendar, different kinds of politicians will have dissimilar preferences towards running in concurrent or midterm elections. More specifically, politicians with no previous executive experience should strategically run in midterm legislative elections in order to increase their visibility among voters, while more experienced politicians should opt for concurrent elections. We support these claims with data from the Argentine Chamber of Deputies between 1983 and 2007

Resumen:

En este artículo se realiza un estudio comparativo entre la Cámara de Diputados de México y la de diferentes países latinoamericanos. Iniciando con el propio funcionamiento de la Comisión de Gobierno de la cámara, el autor quiere explicar la manera como diferentes países conciben su gobernabilidad parlamentaria. Para ello, el articulista ilustra su análisis con ejemplos de algunos congresos latinoamericanos de los cuales concluye proponiendo tres tipos fundamentales: a) congresos en los que la Mesa Directiva asume tareas de gobierno; b) congresos en los que la Mesa Directiva forma parte de la Comisión de Gobierno; y c) congresos en los que la Mesa Directiva y la Comisión de Gobiernos son órganos independientes entre sí

Abstract:

A comparative study between the Mexican Chamber of Deputies and the congresses of other Latin American countries is offered in this article. Starting with the way the chamber's Government Commission itself functions, the author tries to explain how different countries conceive their parliamentary governability. To that effect, he illustrates his analysis with examples from several Latin American congresses, which lead him to propose three main types: a) congresses in which the Board of Chairmen takes over government tasks; b) congresses in which the Board forms part of the Government Commission; and c) congresses in which the Board and the Government Commission are mutually independent bodies

Resumen:

El artículo analiza las elecciones estatales y locales celebradas en México en 1995. El autor destaca cuatro factores presentes en estos comicios. En primer lugar, se celebraron en un contexto de crisis económica. En segundo lugar, el PRI cosechó los peores resultados electorales de su historia. En tercer lugar, los procesos electorales (salvo quizás el de Yucatán) fueron limpios y transparentes. Por último, la competitividad del mercado electoral aumentó al concentrarse el voto opositor en una de las opciones de la centro-derecha representada por el Partido Acción Nacional (PAN), y al perder presencia el partido opositor de centro-izquierda Partido de la Revolución Democrática (PRD). Así, pues, tendió a fortalecerse el esquema bipartito PRI-PAN y a debilitarse el tripartidismo PRD-PRI-PAN. El autor concluye que 1995 fue un año de aceleración para el proceso de transición a la democracia en México

Abstract:

This article analyzes the state and local elections held in Mexico in 1995. The author highlights four main features of these elections. First, they took place in a context of economic crisis. Second, the PRI obtained the worst electoral results in its history. Third, the elections (except perhaps in Yucatán) were clean and transparent. Finally, electoral competitiveness increased as the opposition vote concentrated on the centre-right Partido Acción Nacional (PAN) while the centre-left Partido de la Revolución Democrática (PRD) lost influence; the PRI-PAN two-party pattern thus tended to strengthen and the PRD-PRI-PAN three-party pattern to weaken. The author concludes that 1995 was a year in which Mexico's transition to democracy accelerated

Abstract:

In the last few years considerable attention has been paid to the role of the prolate spheroidal wave functions (PSWFs) in many practical signal and image processing problems. The PSWFs and their applications to wave phenomena modeling, fluid dynamics and filter design played a key role in this development. It is pointed out in this paper that the operator W arising in the Helmholtz equation after the prolate spheroidal change of variables is the sum of three operators, Sε,α, Sη,β and TΦ, each of which acts on functions of one variable: two of them are modified Sturm-Liouville operators and the other one is, up to a variable coefficient, the Chebyshev operator. We believe that this fact reflects the essence of the separation of variables method in this case. We show that there exists a theory of functions with quaternionic values and of three real variables which is determined by the Moisil-Theodorescu-type operator with quaternionic variable coefficients, and that it is intimately related to the modified Sturm-Liouville operators and to the Chebyshev operator (we call it in this way, since its solutions are related to the classical Chebyshev polynomials). We address all the above and explore some basic facts of the arising quaternionic function theory. We further establish analogues of the basic integral formulae of complex analysis such as those of Borel-Pompeiu, Cauchy, and so on, for this version of quaternionic function theory. We conclude the paper by explaining the connections between the null-solutions of the modified Sturm-Liouville operators and of the Chebyshev operator, on one hand, and the quaternionic hyperholomorphic and anti-hyperholomorphic functions on the other
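
Stated in display form, the decomposition asserted above reads (notation as in the abstract):

    \[
        W = S_{\varepsilon,\alpha} + S_{\eta,\beta} + T_{\Phi},
    \]

where the two S terms are the modified Sturm-Liouville operators and T_Φ is, up to a variable coefficient, the Chebyshev operator, each acting on functions of a single variable.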

Resumen:

Un tema de recurrente discusión en la filosofía del derecho actual es el relacionado con el fenómeno profundo y persistente de los desacuerdos, enfatizado por R. Dworkin. Uno de los más brillantes y sugerentes abordajes al respecto es el del iusfilósofo italiano Giovanni Ratti, quien articula una refinada respuesta al desafío dworkineano. En el presente trabajo analizaremos la propuesta de Ratti y le presentaremos dos argumentos críticos, mostrando que aunque la vía sustantiva trazada por Ratti es provechosa, está comprometida en algunos aspectos por inconsistencias internas. Brevemente: por un lado, la taxonomía de tipos de desacuerdos en el derecho que Ratti delinea contiene clases que, en contra de su intención, no pueden ser netamente separadas, sino que se presuponen o colapsan entre sí. Por otro lado, no es del todo certero que pueda identificarse a las fuentes del derecho de un modo completamente preinterpretativo

Abstract:

A recurrent issue in today's legal philosophy is the profound and persistent phenomenon of disagreement, emphasized by R. Dworkin. One of the most brilliant and suggestive developments in this respect is that of the Italian legal philosopher Giovanni Ratti, who articulates a refined response to the Dworkinean challenge. In this paper we analyze Ratti's proposal and advance two criticisms against it, showing that even though the substantive line of reasoning drawn by Ratti is fruitful, it is compromised in some respects by internal inconsistencies. Briefly: on the one hand, Ratti's taxonomy of kinds of legal disagreements contains classes that, contrary to his intention, cannot be neatly separated but instead presuppose each other or collapse into one another. On the other hand, it is not fully accurate to hold that the sources of law can be identified in a completely pre-interpretative way

Abstract:

Digital convergence is becoming increasingly important to both network operators and end users. Network operators look to Next Generation Network (NGN) solutions to meet the ever growing user demand and to the IP Multimedia Subsystem (IMS) as a catalyzer for attractive and innovative services which aim at improving user Quality of Experience (QoE). IMS, presented as the first working implementation of a Service Delivery Platform (SDP), takes great strides to ease migration to NGN as well as help the network become transparent to the end user. Location Based Services (LBS) also play an important role in QoE by supplying relevant information on the user’s whereabouts directly enhancing their day-to-day activities. The combination of LBS within IMS holds great promise for future services where service delivery mimics a network responding directly to user activities. This helps dissolve the difference between the network and the User Equipment (UE), thus increasing user adoption through a transparent network and service delivery. This paper presents two personalized LBS implemented within an IMS test bed acting as a proof of concept for providing improved QoE on an IMS infrastructure as well as demonstrating that service creation is relatively easy to achieve in IMS

Abstract:

Network operators are in the process of adopting the IP Multimedia Subsystem (IMS) as their main underlying infrastructure which will in turn be used as a Service Delivery Platform (SDP). This process requires the migration of current services to their IMS counterparts, as well as a compatibility layer enabling successful communication between IMS and other network infrastructures. Any and all functionality within IMS is normally available through an Application Server (AS). This paper presents an implementation of a Public Switched Telephone Network (PSTN) Gateway which is able to successfully interface with an IMS test bed. Consequently, it also allows calls to be placed between IMS and other Voice over IP (VoIP) providers. The gateway is enabled as an AS with the help of Asterisk

Abstract:

This paper explores a nonlinear, adaptive controller aimed at increasing the stability margin of a direct-current (DC), small-scale, electrical network containing an unknown constant power load. Due to its negative incremental impedance, this load reduces the effective damping of the network, which may lead to voltage oscillations and even to voltage collapse. To overcome this drawback, we consider the incorporation of a controlled DC-DC power converter in parallel with the constant power load. The design of the control law for the converter is particularly challenging due to the existence of unmeasured states and unknown parameters. We propose a standard input-output linearization stage, to which a suitably tailored adaptive observer is added. The good performance of the controller is validated through experiments on a small-scale network
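
To see why a constant power load erodes damping, the sketch below simulates a toy DC source-line-bus network feeding a CPL (the load current is P/v, whose linearization acts as a negative incremental resistance). Parameters are invented for illustration and chosen so that the high-voltage equilibrium loses damping; the paper's controlled shunt converter is not modeled here:

    # Toy DC network with a constant power load (CPL).
    from scipy.integrate import solve_ivp

    E, R, L, C, P = 24.0, 0.1, 1e-3, 1e-3, 100.0    # source, line R/L, bus C, CPL power

    def net(t, s):
        i, v = s
        di = (E - R * i - v) / L                    # line current dynamics
        dv = (i - P / max(v, 1.0)) / C              # CPL draws P/v (guarded near v = 0)
        return [di, dv]

    # start near the high-voltage equilibrium; with these values the CPL's
    # negative incremental impedance makes the equilibrium undamped
    sol = solve_ivp(net, [0.0, 0.05], [P / E, E], max_step=1e-5)
    print(sol.y[1, -1])                             # bus voltage drifts/oscillates away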

Abstract:

Typical models to estimate effectiveness factors include the sphere, cylinder, slab and strong-internal-diffusion-limitation models. However, a reliable model for complex, irregular catalyst shapes is still lacking. In this work a new approach is therefore proposed that makes simultaneous use of the 1D and GC models to approximate a 3D problem and determine the effectiveness factor of a tri-lobular catalyst in a hydrodesulfurization (HDS) reaction involving diesel. The restrictive internal diffusion phenomena were examined at the cross-section of the catalyst, including the effects of its shape. Simulation of the HDS reaction considered a small-scale reactor operating under industrial conditions. The effects of the intrinsic kinetics, under strong internal diffusion limitations, were calculated through well-established computational algorithms. Mathematical simulations of the reaction effectiveness for a given diesel feedstock are described completely for a tri-lobular particle, accounting for the molecular diffusion of the organic sulfur compound, pore size, shape and radial transport across the section of the catalyst, and the results were validated against models and data published in the literature
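
For reference, the classical 1D benchmark that such models generalize is the first-order reaction in a spherical pellet, where the effectiveness factor follows in closed form from the Thiele modulus; the sketch below evaluates it. Parameter values are invented for illustration, and the paper's tri-lobe geometry requires the combined 1D/GC treatment instead:

    # Effectiveness factor for a first-order reaction in a spherical pellet:
    # eta = (3/phi^2) * (phi*coth(phi) - 1), with Thiele modulus phi = R*sqrt(k/De).
    import numpy as np

    def effectiveness_sphere(R, k, De):
        phi = R * np.sqrt(k / De)                  # Thiele modulus
        return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

    # strong diffusion limitation: eta ~ 3/phi for large phi
    print(effectiveness_sphere(R=2e-3, k=5.0, De=1e-9))   # ~0.02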

Abstract:

Why do drug trafficking organizations (DTOs) sometimes prey on the communities in which they operate but sometimes provide assistance to these communities? What explains their strategies of extortion and co-optation toward civil society? Using new survey data from Mexico, including list experiments to elicit responses about potentially illegal behavior, this article measures the prevalence of extortion and assistance among DTOs. In support of our theory, these data show that territorial contestation among rival organizations produces more extortion and, in contrast, DTOs provide more assistance when they have monopoly control over a turf. The article uncovers other factors that also shape DTOs’ strategies toward the population, including the degree of collaboration with the state, leadership stability and DTO organization, and the value and logistics of the local criminal enterprise

Abstract:

Among presidents' lesser known legislative powers is urgency authority. Seven Latin American presidents wield it: the constitutional power to impose on lawmakers a short deadline to discuss and vote selected bills. This power is similar to the fast-track authority that Congress grants periodically to the US president. We claim that the key consequence of urgency authority is procedural: urgency prevents amendments during floor consideration. By using fast-track authority, presidents can protect bills and committee agreements, in essence becoming a single-member Rules Committee with the ability to impose closed rules on the floor. A formal model generates hypotheses that we test with original data from Chile between 1998 and 2014. Results confirm that preference overlap between the president and committee chairs drives the use of fast-track authority systematically. Patterns in Chile are reminiscent of restrictive rule usage in the United States

Abstract:

We extend the estimation of the components of partisan bias (i.e., the undue advantage conferred to some party in the conversion of votes into legislative seats) to single-member district systems in the presence of multiple parties. Extant methods to estimate the contributions to partisan bias from malapportionment, boundary delimitations, and turnout are limited to two-party competition. In order to assess the spatial dimension of multi-party elections, we propose an empirical procedure combining three existing approaches: a separation method (Grofman et al., 1997), a multi-party estimation method (King, 1990), and Monte Carlo simulations of national elections (Linzer, 2012). We apply the proposed method to the study of recent national lower chamber elections in Mexico. The analysis uncovers systematic turnout-based bias in favor of the former hegemonic ruling party that has been offset by district geography substantively helping one or both of the other major parties

Resumen:

La teoría del cartel procedimental de Cox y McCubbins predice conflicto mínimo o incluso nulo en el seno del partido mayoritario en votaciones plenarias. Esto es consecuencia del control férreo de la agenda legislativa que ejercen miembros clave del partido. El estudio de las votaciones nominales de la IVa Legislatura de la Asamblea Legislativa del Distrito Federal (2006–09) descubre escisiones frecuentes del partido mayoritario en la sesión plenaria. La estimación de puntos ideales de los diputados a partir de sus votaciones nominales revela dos líneas de segmentación en la Asamblea. Una contrapone izquierda y derecha clásicas en temas socioeconómicos. La otra se asocia con los nombramientos de las autoridades electorales de la ciudad. Si el PRD de centro-izquierda se mostró unido en la primera dimensión, la presencia de dos facciones enfrentadas es manifiesta en la segunda. El trabajo especula sobre la necesidad de teorizar acerca de instrumentos de control de la agenda de las etapas más tardías del proceso legislativo

Abstract:

Cox and McCubbins' procedural cartel theory expects majority party conflict in final passage votes to be reduced to a minimum. This is a consequence of agenda control by key party members. I inspect roll call voting in the 4th Legislature of Mexico City's Assembly (2006-09) uncovering a frequent majority party split in the plenary. Ideal point estimation with roll call data reveals two lines of Assembly cleavage, one the classic left-right divide on economic issues, the other mostly related to appointments of officers at different levels. While the left-leaning PRD majority showed cohesion in the first dimension, the presence of two distinct factions is manifest in the second. The paper speculates about the need for more theory on late-stage agenda control instruments in the legislative process

Abstract:

We investigate bill passage by party factions in Uruguay and show that joining coalition cabinets earns policy influence. The policy advantage of coalition is therefore not collected by the president alone: partners acquire clout in lawmaking and use it to pass bills of their own and to strike deals with outside factions. Analysis of all bills initiated between 1985 and 2005 reveals that the odds of passing a bill sponsored alone by a majority cabinet faction were about .5, up from about .15 otherwise. Contingent upon the cabinet status of the factions involved, the odds of co-sponsored bills conform well to the patterns expected under the view that policy rewards are a fundamental part of the politics of coalition in presidentialism

Abstract:

Mexican congressional elections 1979–2009 are examined to determine whether gubernatorial candidates have coattails that help candidates on the same ticket get elected to higher office, and how the advent of democracy changed this. The analysis distinguishes the rates at which gubernatorial votes transfer to congressional races from the vote thresholds that gubernatorial candidates must exceed to help, rather than hinder, copartisans. Regression estimates reveal that state parties transferred, on average, 49% of their gubernatorial success to congressional candidates in a concurrent race since 1979 and 69% since 1997. Thresholds indicate that it is easier for the PAN and the left to gain from coattails than for the PRI, but the difference shrank with democracy. Presidential coattails, examined for reference, are shorter on average than gubernatorial ones. Local forces thus appear to have moved Mexican congressional campaigns and elections as much as national forces since at least 1979, raising questions about the relevance of federalism in developing nations

Resumen:

2007 fue el primer año de la presidencia de Felipe Calderón, del Partido Acción Nacional. El nuevo gobierno hizo hincapié en la lucha contra la delincuencia organizada, usando al ejército en vez de la policía civil. Destacan también negociaciones entre presidente y partidos para cambiar tres normas de importancia: la ley de pensiones de servidores públicos (que subsana un enorme déficit en las finanzas del gobierno); la creación de nuevos impuestos (que aumentan levemente, pero por primera vez en décadas, la capacidad recaudatoria del gobierno); y una reforma electoral (como respuesta a la crisis postelectoral de 2006). Repasamos también las elecciones locales que se llevaron a cabo en 14 entidades, en las que el Partido Revolucionario Institucional consiguió recuperar algunas posiciones perdidas en los últimos años

Abstract:

2007 was the first year of Felipe Calderón, of the National Action Party, as president. The new government made the fight against organized crime a priority, relying on the army instead of civil police forces for the task. Three important changes to statutes were successfully negotiated between the president and the parties: the civil servants' pension law (closing a huge deficit in public finances); the creation of new taxes (increasing slightly, but for the first time in decades, the government's revenue capacity); and an electoral reform (in response to the post-election crisis of 2006). We also review the local elections held in 14 states, in which the Revolutionary Institutional Party won back some positions it had lost in the recent past

Abstract:

This article describes a conceptual frame of reference for heterogeneous distributed information systems. First the concept of integration is discussed, and then a model based on desirable levels and degrees of integration is presented. Finally, the robustness of the framework is illustrated by applying it to the problems of a large organization: PEMEX

Resumen:

Se analiza la asociación de corto plazo entre emisiones de PM10 y la mortalidad diaria por causas cardiovasculares, respiratorias y cardiorrespiratorias en siete municipios del Área Metropolitana del Valle de México (2001-2013) mediante el uso de un modelo de regresión Poisson semiparamétrico e incorporando splines cúbicos naturales para la temperatura. Los resultados muestran evidencia de que la evaluación de la estacionalidad, junto con la variabilidad de la temperatura, es importante para comprender la relación entre la contaminación del aire y los eventos de mortalidad. Además, nuestros hallazgos apoyan el umbral de PM10 propuesto por la Organización Mundial de la Salud dentro de los municipios evaluados. Se pudo identificar la asociación entre los efectos de la contaminación del aire y la mortalidad. Por último, demostramos que existen diferencias geográficas que modelan la relación entre los contaminantes atmosféricos y la mortalidad para los modelos con y sin rezagos. Nuestros hallazgos sugieren la necesidad de impulsar políticas de salud pública que consideren la dinámica y la variabilidad geográfica de los contaminantes para mitigar sus efectos nocivos sobre la salud y realizar un mejor manejo del riesgo de mortalidad

Abstract:

We utilize a time-series semi-parametric Poisson regression approach, incorporating natural cubic splines for temperature, to study the short-term associations between PM10 and daily mortality due to cardiovascular, respiratory, and cardiorespiratory events for seven municipalities in the Mexico City Metropolitan Area (2001-2013). Our results demonstrate that assessing seasonality, along with temperature variability, is vital to understanding the relationship between air pollution and mortality events. Additionally, our findings support the World Health Organization's morbidity and mortality threshold for PM10 within the assessed municipalities. We were able to identify associations between different meteorological seasons and the effects of air pollution on mortality. Lastly, we demonstrate that geographical differences modulate the relationship between air pollutants and mortality in models with and without distributed lags. Our findings highlight the need for policy-driven approaches that consider the dynamics of meteorological influences and geographic variability in order to mitigate the future deleterious health impacts of air pollutants and to better manage mortality risk
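
A minimal sketch of this kind of model with statsmodels and patsy's natural cubic spline basis cr(); the data frame and its column names are hypothetical, and the actual specification (spline degrees of freedom, lag structure, seasonality terms) would follow the paper, not this toy:

    # Semi-parametric Poisson regression: daily deaths on PM10 with smooth
    # temperature and long-term trend terms plus day-of-week effects.
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def fit_pm10_model(data):
        """data: daily rows with hypothetical columns 'deaths', 'pm10', 'temp', 'time', 'dow'."""
        model = smf.glm(
            "deaths ~ pm10 + cr(temp, df=4) + cr(time, df=8) + C(dow)",
            data=data,
            family=sm.families.Poisson(),
        ).fit()
        return model.params["pm10"]    # log relative mortality rate per unit PM10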

Abstract:

We provide closed-form solutions for European option values when the dynamics of both the short rate and volatility of the underlying price process are modulated by a continuous-time Markov chain with a finite number of “economic states”. Extensions involving dividends, currencies and cost of carry are further explored
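
As a point of reference, with a single economic state the short rate and volatility are constant and the valuation reduces to the Black-Scholes formula, computed below; the paper's closed forms generalize this by combining such terms over the Markov chain's state paths (a simplified description, not the authors' exact formula):

    # Black-Scholes European call: the one-state special case.
    from math import exp, log, sqrt
    from statistics import NormalDist

    def bs_call(S, K, T, r, sigma):
        N = NormalDist().cdf
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    print(bs_call(S=100, K=100, T=1.0, r=0.03, sigma=0.2))   # ~9.4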

Abstract:

When groups of schools within a single district run their admission processes independently of one another, the resulting match is often inefficient: many children are left unmatched and seats are left unfilled. We study the problem of re-matching students to take advantage of these empty seats in a context where there are priorities to respect. We propose an iterative way in which each group may independently match and re-match students to its schools. The advantages of this process are that every iteration leads to a Pareto improvement and a reduction in waste while maintaining respect of the priorities. Furthermore, it reaches a non-wasteful match within a finite number of iterations. While iterating may be costly, as it involves asking for inputs from the children, there are significant gains from the first few iterations. We show this analytically for certain stylized problems and computationally for a few others

Abstract:

We examine how impact evaluations (IEs) and associated syntheses contribute to evidence generation in low- and middle-income (LMIC) countries. We interviewed over 50 individuals from relevant organisations and five LMIC countries and drew on data from reports and repositories. The number of development-oriented IEs has shown sustained growth, but the tracking of their use is too often weak. We conclude that IEs are an important part of a wider evidence ecosystem and that their evidence needs to be better used (and tracked). Good practice should be promoted more systematically

Resumen:

El presente escrito surge como una reflexión sobre el papel que juega hoy día la educación frente a los problemas sociales y ecológicos que enfrentamos y señala la paradoja educativa en que nos encontramos, donde los centros de enseñanza intentan resolver los problemas sociales y ecológicos que la educación y los egresados de los propios sistemas educativos han ocasionado. A través de retomar la esencia y finalidad de la educación y del habitar, se propone desarrollar la cultura del saber, la cultura estética y la cultura ética para lograr una educación centrada en fortalecer los fines propios del ser humano y hacer un mundo más habitable. En este escrito se aborda la importancia de desarrollar una cultura del saber, que es la que puede contribuir a solucionar los problemas ecológicos a los que nos enfrentamos a causa de una tecnología que ha perdido su esencia de ser amor a la verdad

Abstract:

This paper arises as a reflection on the role that education plays today in the face of the social and ecological problems that confront us, and points out the educational paradox in which we find ourselves: schools try to solve the social and ecological problems that education, and the graduates of the educational systems themselves, have caused. By returning to the essence and purpose of education and of dwelling, it proposes developing a culture of knowledge, an aesthetic culture and an ethical culture in order to achieve an education focused on strengthening the proper ends of the human being and making the world more habitable. The paper addresses the importance of developing a culture of knowledge, which is what can contribute to solving the ecological problems we face because of a technology that has lost its essence as love of truth

Resumen:

Se narran cronológicamente las fases por las que pasa el diseño de la Escuela Bauhaus, los principios estéticos que proponían algunos de sus representantes y las transformaciones que hicieron en nuestra forma de habitar, para reflexionar sobre sus aportes a nuestra vida cotidiana. Se subrayan las ideas de su fundador para mejorar la calidad de vida de los ciudadanos por el diseño arquitectónico y de mobiliario

Abstract:

This article chronologically narrates the phases through which the design of the Bauhaus School passed, the aesthetic principles proposed by some of its representatives, and the transformations they brought to our way of dwelling, in order to reflect on their contributions to our daily life. The ideas of its founder for improving citizens' quality of life through architectural and furniture design are emphasized

Resumen:

En este texto se busca ahondar en la noción del habitar para comprender qué es habitar la ciudad. A lo largo de este artículo se advierte que más allá de ocupar un espacio, habitar es vivirlo de forma creativa, simbólica y libre, y señala como elementos fundamentales del habitar el "cuidado", el "amparo", el "arraigo" y el "encuentro". Estos elementos permiten comprender que habitar es un ethos y las ciudades son la manifestación de nuestra forma de expresar nuestros deseos e intereses por nosotros mismos, por los demás y por las cosas que nos rodean, por esto, se habita la ciudad cuando se hace ciudadanía, y se encarnan nuestras relaciones sociopolíticas y económicas, que se reflejan en las formas arquitectónicas y urbanas de las ciudades

Abstract:

This text seeks to clarify the notion of dwelling in order to understand what it is, and what it means, to inhabit the city. Throughout the article it is observed that, beyond occupying a space, to dwell is to live it in a creative, symbolic and free manner, and "care", "protection", "rootedness" and "encounter" are pointed out as fundamental elements of dwelling. These elements allow us to understand that dwelling is an ethos and that cities are the manifestation of the way we express our desires and interests for ourselves, for others and for the things that surround us; for this reason, the city is inhabited when citizenship is exercised and our socio-political and economic relations are embodied, relations that are reflected in the architectural and urban forms of cities

Resumen:

El presente artículo explica en qué consiste la pérdida de la experiencia de vida en la ciudad moderna y cosmopolita, las críticas a su lenguaje que aísla a los ciudadanos, anula su sentimiento de vida y cómo la hermenéutica busca restaurar la narrativa y la vida de las ciudades a través de recuperar una visión antropológica que explique cómo se relacionan las personas con el espacio y viven la ciudad desde una experiencia libre y creativa que funda lugares de encuentro que devuelve la vida a las ciudades y las hace lugares narrativos de la vida

Abstract:

This article explains what the loss of the experience of life in the modern, cosmopolitan city consists of, reviews the criticisms of a language that isolates citizens and nullifies their feeling of life, and shows how hermeneutics seeks to restore the narrative and the life of cities by recovering an anthropological vision that explains how people relate to space and live the city through a free and creative experience, one that founds meeting places, brings cities back to life and makes them narrative places of life

Resumen:

En este artículo se presentan algunas razones por las que es importante considerar la belleza en las ciudades como un derecho, centrado en el principio de que la belleza está vinculada con el habitar y, por tanto, contribuye a la dignidad y la calidad de vida a los ciudadanos. Además, se explica cómo podría entenderse este derecho a la belleza de modo que también ayude a coordinar una relación social entre sus habitantes y sus distintas preferencias estéticas

Abstract:

This article presents some reasons why it is important to consider beauty in cities as a right, based on the principle that beauty is linked to dwelling and therefore contributes to the dignity and quality of life of citizens. Additionally, it explains how this right to beauty could be understood in a way that also helps coordinate a social relationship between a city's inhabitants and their different aesthetic preferences

Resumen:

Este artículo trata de la importancia de repensar y recuperar el concepto de habitar como principio esencial de la Arquitectura y del Urbanismo. Mediante dos ejemplos de intervenciones realizadas en viviendas del Centro Histórico de Mérida, busca incitar a una reflexión sobre el sentido e importancia de la conservación de este espacio, lo que implica recuperar los hábitos de vida y costumbres previos a los planes de Modernización económica y social de Mérida, planes que generaron una transformación en la forma de vida e intereses de sus ciudadanos, y con ello, transformaron la traza de la ciudad de Mérida y el abandono y deterioro del Centro Histórico

Abstract:

This article deals with the importance of rethinking and recovering the concept of dwelling as an essential principle of Architecture and Urbanism. Through two examples of interventions carried out in houses of the Historic Center of Mérida, it seeks to prompt reflection on the meaning and importance of conserving this space, which implies recovering the habits and customs of daily life that existed before Mérida's plans for economic and social Modernization, plans that transformed the way of life and the interests of its citizens and, with that, transformed the layout of the city of Mérida and led to the abandonment and deterioration of the Historic Center

Resumen:

Esta reflexión plantea la necesidad e importancia de investigar de forma más profunda una pedagogía del gusto a través del desarrollo autónomo de la autonomía, con el interés de que, mediante la contemplación de las formas bellas, pueda acercarse el individuo al desarrollo de su persona y a la vivencia de la belleza como un retorno a casa, algo que es posible si el hombre tiene la disposición de ser acogido por la belleza, pues para poder habitar en ella es necesario antes preparar en nosotros el lugar donde habite. Cuando el hombre desarrolla su libertad expresada como autonomía del gusto, puede dar verdadera acogida a la belleza, y ella a cambio, se manifestará como lo más íntimo a su ser y le entregará el mundo que contempla como su morada

Abstract:

This reflection raises the need for, and the importance of, investigating more deeply a pedagogy of taste through the development of autonomy, in the hope that, through the contemplation of beautiful forms, the individual may approach the development of his person and experience beauty as a return home. This is possible if man is disposed to be received by beauty, for in order to dwell in beauty it is first necessary to prepare within ourselves the place where it may dwell. When man develops his freedom, expressed as autonomy of taste, he can truly welcome beauty, and beauty, in turn, will manifest itself as what is most intimate to his being and will hand over to him the world he contemplates as his dwelling

Resumo:

Esta reflexão suscita a necessidade e importância de investigar, de forma mais profunda, uma pedagogia do gosto através do desenvolvimento autônomo da autonomia, com o interesse de que, mediante a contemplação do belo, o indivíduo possa aproximar-se do desenvolvimento de sua pessoa e da vivência da beleza como um retorno a casa, algo que é possível se o homem tiver a disposição de ser acolhido pela beleza, pois, para poder habitar nela, é necessário antes preparar em nós o lugar onde habita. Quando o homem desenvolve a sua liberdade, expressa como autonomía do gosto, pode dar verdadeira acolhida à beleza, e ela, por outro lado, se manifestará como o mais íntimo ao seu ser e lhe entregará o mundo que contempla como sua morada

Abstract:

This paper analyses the aesthetic thought of Plato and its relationship with the formation of man, showing that his aesthetic conception goes far beyond the sphere of art. It shows that in Plato, the lover of beauty, aesthetic reflection moves within the spheres of morality, science and metaphysics, so that the aesthetic attribute of beauty is not the exclusive field of art. That is, beauty pervades the whole of Plato's philosophy, from his metaphysics and cosmology, through his philosophy of nature, to all human activities, such as science, politics and ethics. First, the paper seeks to understand the place of both aesthetics and ethics within metaphysics, showing that the beautiful and the good are not to be equated with the moral good but with the metaphysical good, understood as the perfection of nature. Second, and in line with the previous step, it considers Plato's ethics in the light of the metaphysics of beauty, showing that Eros is the mediator between the sensible and the intelligible planes, between the moral and the immoral, and as such is what makes us aspire to, and leads us toward, the Good and the Beautiful. Third, it defends the thesis that Plato's aesthetics assumes not only a metaphysical but also an existential stance, pointing to the relevance of the Greek Paideia, in the sense that beauty forms part of the ideal of man's formation in classical Greece

Resumo:

Este artigo analisa o pensamento estético de Platão e sua relação com a formação do homem, mostrando que sua concepção estética vai muito além da esfera da arte. Trata-se de mostrar que, em Platão, enquanto é amante da beleza, a reflexão estética transita na esfera da moral, da ciência e da metafísica, não sendo, por conseguinte, o atributo estético da beleza campo exclusivo da arte. Ou seja, a beleza permeia toda a filosofia de Platão desde sua metafísica e cosmologia, passando por sua filosofia da natureza, até chegar a todas as atividades humanas, tais como: a ciência, a política, a ética. Então, num primeiro momento, busca-se compreender o lugar tanto da estética quanto da ética na metafísica, mostrando que o belo e o bom não supõem uma igualdade com o bem moral, senão com o bem metafísico, esse entendido como perfeição da natureza. Num segundo momento, em sintonia com o passo anterior, trata-se de pensar a ética de Platão à luz da metafísica da beleza, mostrando que Eros é o mediador entre o plano sensível e o plano inteligível, entre o moral e o imoral e, enquanto tal, é quem faz aspirar e conduzir ao Bem e à Beleza. Num terceiro momento, evidencia-se a tese de que a estética de Platão assume uma postura não apenas metafísica, mas também existencial, reportando, por conseguinte, à relevância da Paidéia grega no sentido de que a beleza forma parte do ideal de formação do homem na Grécia clássica

Resumen:

El texto busca explicar cuáles son los presupuestos metafísicos, base del itinerario espiritual de Albert Einstein, desde la aceptación del empiriocriticismo, hasta la ruptura con Mach y el acercamiento a una metafísica, con el fin de mostrar que detrás de toda teoría física existen presupuestos filosóficos

Abstract:

This article endeavors to define the metaphysical assumptions underlying Albert Einstein's spiritual journey, from his acceptance of Empirio-Criticism, through his falling-out with Mach, to his eventual approach to metaphysics, in order to show that philosophical assumptions lie behind every physical theory

Resumen:

El objetivo de este trabajo es presentar un método para encontrar la estrategia de pagos que maximiza la probabilidad de cerrar una negociación en escenarios cooperativos de utilidad transferible con fines comerciales. Este procedimiento utiliza una herramienta de simulación llamada JDSIM para muestrear el Core del juego y generar funciones empíricas de probabilidad acumuladas de los pagos de cada actor con el fin de establecer la negociación que maximiza el producto de estas funciones. Este método se compara con otros dos ya establecidos en la literatura, llamados Shapley Value (SV) y el centroide del Core, a través de treinta escenarios aleatorios y un caso práctico en los que se observa que este siempre ofrece soluciones únicas, implementables, y proporciona una justificación racional, a diferencia de los ya mencionados. Hay que considerar que este trabajo únicamente pretende resolver negociaciones en las que la distribución de los pagos permite a todos los actores unirse en una sola coalición, sin embargo, el paradigma presentado se puede adaptar a subcoaliciones de menor tamaño para extender su alcance. Esta solución propone una mejor alternativa a las que se encuentran en la literatura y representa un avance importante en áreas de negociación y teoría de juegos

Abstract:

This work presents a method to find the payment strategy that maximizes the probability of closing a negotiation in cooperative scenarios of transferable utility for commercial purposes. The procedure uses a simulation tool called JDSIM to sample the core of the game and generate empirical cumulative probability functions of the payments of each actor, in order to establish the negotiation that maximizes the product of these functions. The method is compared with two other methods already established in the literature, the Shapley Value (SV) and the centroid of the core, across thirty random scenarios and a practical case; unlike those methods, the proposed one always offers unique, implementable solutions and provides a rational justification. It should be borne in mind that this work is only intended to resolve negotiations in which the distribution of payments allows all actors to unite in a single coalition; however, the paradigm presented can be adapted to smaller sub-coalitions to extend its scope. This solution offers a better alternative to those found in the literature and represents an important advance in the areas of negotiation and game theory

Abstract:

This work presents a methodology for finding the strategy that maximizes the probability of closing a negotiation in cooperative transferable-utility scenarios for commercial purposes. The methodology yields strategies superior to methods previously established in the literature, such as the Shapley Value (SV) and the core, because, unlike these, our proposal always offers an implementable solution and justifies it by maximizing the probability of successfully closing the negotiation. Our procedure maps the core of the game using a multidimensional simulation tool called JDSIM, which generates a representative sample of the core. From this information, the empirical cumulative probability functions of each actor's payoffs are obtained and the negotiation that maximizes the product of these functions is determined. This new method proposes a strategy whose probability of successfully closing a negotiation always equals or exceeds that of the strategies proposed by SV and the centroid of the core
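
A hedged sketch of the overall procedure for a toy 3-player TU game: rejection-sample the core (a generic stand-in for the JDSIM sampler), build each player's empirical payoff CDF, and select the sampled allocation that maximizes the product of the CDFs. The characteristic function below is invented for illustration:

    # Core sampling and product-of-empirical-CDFs selection for a 3-player game.
    import numpy as np

    v = {frozenset([1]): 0, frozenset([2]): 0, frozenset([3]): 0,
         frozenset([1, 2]): 4, frozenset([1, 3]): 3, frozenset([2, 3]): 2,
         frozenset([1, 2, 3]): 10}                 # hypothetical characteristic function

    def in_core(x):
        pays = dict(zip([1, 2, 3], x))
        return all(sum(pays[i] for i in S) >= val
                   for S, val in v.items() if len(S) < 3)

    rng = np.random.default_rng(7)
    pts = v[frozenset([1, 2, 3])] * rng.dirichlet([1, 1, 1], 20000)  # efficient allocations
    core = pts[np.array([in_core(x) for x in pts])]                  # rejection sampling

    def ecdf(col, q):
        s = np.sort(col)
        return np.searchsorted(s, q, side="right") / len(s)

    score = (ecdf(core[:, 0], core[:, 0]) * ecdf(core[:, 1], core[:, 1])
             * ecdf(core[:, 2], core[:, 2]))
    print(core[score.argmax()])    # allocation maximizing the product of the CDFs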

Abstract:

The Hicksian definition of complementarity and substitutability may not apply in contexts in which agents are not utility maximisers or where price or income variations, whether implicit or explicit, are not available. We look for tools to identify complementarity and substitutability satisfying the following criteria: they are behavioural (based only on observable choice data); model-free (valid whether the agent is rational or not) and they do not rely on price or income variation. We uncover a conflict between properties that it is arguably reasonable for a complementarity notion to possess. We discuss three different possible resolutions of the conflict

Abstract:

This text reflects on how one may speak of an idea of man in the law, as well as on the changes this idea of man has undergone in the main stages of its historical development

Abstract:

In this article, the author discusses how we might talk about the notion of "man" in the law and considers the changes this notion has undergone through the different stages of its historical development

Abstract:

The article addresses the scope of the grave ideological tensions of the second half of the nineteenth century and the first half of the twentieth, which notably shaped value frameworks and created the real social events of the Russian environment. It was an era of confrontation between religious and secular thought. The latter is crucial, since it describes the most dominant manifestation of the secular thought of the period, namely Russian populism (Narodnism). The article focuses on the revolutionary branch of Russian populism, whose quintessence is P. N. Tkachev. Approaching the main conceptual pillars of Tkachev's works provides the basis for an analysis of the possible impact and influence of radical Russian populism on the actions of one of the most controversial figures not only in Russian history but also in the history of humanity as such: Vladimir Ilich Lenin. Possible frameworks and questions concerning the relationship between Russian populism and Lenin are also outlined. The aim is to stimulate new critical, historical-philosophical studies of Russian populism, as well as of the genuine legacy of Lenin. Nevertheless, any ideologizing or excusing of Leninism is rejected

Abstract:

The paper is set within the scope of the dramatic ideological tensions of the second half of the 19th century and the beginning of the 20th century, which notably formed the value frameworks and created the real social events of the Russian environment. It was a time when the mutual confrontation between religiously and secularly oriented thinking took place. The latter is crucial, since it describes the most dominant manifestation of the secular thinking of the period, i.e. Russian Narodnism. The paper focuses on the revolutionary branch of Narodnism, the quintessence of which is P. N. Tkachev. Approaching the main conceptual pillars of Tkachev's works provides the foundations for an analysis of the possible impact and influence of radical Narodnism on the actions of one of the most controversial figures not only in Russian history but also in human history as such: Vladimir Ilich Lenin. The paper also outlines possible frameworks and issues concerning the mutual relationship between Russian Narodnism and Lenin. The aim is to stimulate new critical, historical-philosophical studies of Russian Narodnism as well as of the genuine legacy of Lenin. Nonetheless, I reject any ideologizing or excusing of Leninism

Abstract:

Only in the mid-1990s did Brazil join the world-wide trend of states having diaspora engagement policies and implement specific programmes to address the needs and claims of its citizens abroad. In contrast to other Latin American countries, it did so following a low-visibility, technical approach led by consular offices, including very cautious attempts to organize emigrants and regulate their gradual access to migration policy-making. It also tied these outreach efforts to the building up of a global role in international affairs. This article analyses the politics and impact of these processes on foreign policy management, with special emphasis on the implications of these changes in terms of adapting policy instruments to a new notion of citizenship beyond borders and innovative techniques to manage populations abroad. It also investigates these issues in one major destination for Brazilians abroad: London, where Brazilians have lately become the largest Latin American community, but have faced serious obstacles to improving their resources for organization and mobilization. The findings suggest some discrepancies and tensions among officials' views and between policy design and actual results, thus illustrating a gap between foreign policy goals and implementation capacity at both the global and local levels. Thus, regarding the practice of foreign policy-making, the article provides novel information about recent institutional changes in state bureaucracies and the uncertainties and uneven impact of policy implementation. It also casts some doubts on Brazil's overall capacity to carry out a global strategy in this realm

Abstract:

This article addresses the problem of redundancy and reliability allocation in the operational dimensioning of an automated production system. The aim of this research is to improve the global reliability of the system by allocating alternative components (redundancies) that are associated in parallel with each original component. By considering a complex componential approach that simultaneously evaluates the interrelations among sub-systems, conflicting goals, and variables of different natures, a solution for the problem is proposed through a multi-objective formulation that joins a multi-objective elitist genetic algorithm with a high-level simulation environment also known as simulation optimization framework
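
As background for the redundancy-allocation objective, here is a minimal sketch of the series-parallel reliability arithmetic such formulations typically optimize. This is my own illustration; the paper's genetic algorithm and simulation environment are not reproduced.

```python
from math import prod

# A subsystem with parallel redundancies works if at least one of its
# components works; a series system works only if every subsystem works.

def subsystem_reliability(component_reliabilities):
    # 1 minus the probability that all parallel components fail.
    return 1.0 - prod(1.0 - r for r in component_reliabilities)

def system_reliability(subsystems):
    return prod(subsystem_reliability(s) for s in subsystems)

# Adding a 0.85-reliable redundancy to a 0.90 component raises that
# subsystem from 0.90 to 1 - 0.10 * 0.15 = 0.985.
print(system_reliability([[0.90, 0.85], [0.95]]))  # ~0.936
```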

Abstract:

This article analyzes the problems that digital evidence faces in the law of the United States. In the absence of specific regulation for this type of evidence, judges have applied the rules designed for physical evidence. While this has proved useful in many cases, it has generated more than one problem that U.S. scholarship has criticized. From this perspective, it must be borne in mind that not all the rules and procedures designed for physical evidence can simply be applied to the seizure of digital evidence

Abstract:

This article focuses on the problems faced by digital evidence in U.S. law. As there are no specific regulations for this type of evidence, judges have applied norms contemplated for physical evidence. Although this has proved useful in many cases, more than one problem has arisen and been criticized by U.S. doctrine. From this perspective, it is important to bear in mind that not all the rules and procedures contemplated for physical evidence can readily be applied to the confiscation of a piece of digital evidence

Abstract:

This study focuses on the analysis of the concept of public order as a ground for annulment of an award in international commercial arbitration. The study begins with a description of the main features of arbitration in Chile in its two versions, domestic and international. It then identifies and explains the three guiding principles of international arbitration that distinguish it from domestic arbitration, namely full respect for party autonomy, the arbitrator's complete freedom to conduct the proceedings, and limited intervention by the local judiciary. The second section of the study is devoted to the concept of public order as a ground for annulment of international awards, describing the elements that both scholars and courts in various jurisdictions have identified in their effort to delimit the meaning of this concept

Abstract:

This research focuses on analyzing the concept of public order as a ground for annulment of an award in the case of international commercial arbitration. The study begins with a description of the main features of arbitration in Chile in its two versions: national and international. Subsequently, the three guiding principles of international arbitration that distinguish it from domestic arbitration are identified and explained, namely, full respect for the autonomy of the parties, the arbitrator's complete freedom to develop the relevant procedure, and the limited intervention of the local judiciary. The second section of the research is devoted to studying the concept of public order as a ground for annulment of international awards. This section describes the elements that both the doctrine and the courts in various jurisdictions have identified in their effort to define the meaning of the concept

Abstract:

Although the paths of interim relief in English law and in Civil-law countries have not been uniform, in recent decades a process of unification can be discerned. In light of the challenges they currently face, no significant discrepancies are observed between the two forms of protection. If we analyze the problems they aim to solve, the measures adopted, and the effects these measures produce in the different proceedings, we find no significant divergences. Even in the specific field of interim injunctions, the requirements for granting them do not differ from the familiar fumus, periculum, and security of continental interim relief

Abstract:

The development of provisional relief in English law historically followed a different path from that of continental European law. Nevertheless, over the last thirty years these differences have been drastically attenuated. The advance and importance of provisional relief in English law today is similar to that in continental European countries; excessive delay in the resolution of disputes has led English courts, like their continental counterparts, to make increasing use of provisional remedies and to introduce a wide variety of such tools in recent decades. The so-called freezing injunction, to which this article is devoted, although it entered English legal life only in 1975, is currently the most widely applied provisional measure in the English judicial machinery

Abstract:

We present data from detailed observation of 24 information workers that shows that they experience work fragmentation as common practice. We consider that work fragmentation has two components: length of time spent in an activity, and frequency of interruptions. We examined work fragmentation along three dimensions: effect of collocation, type of interruption, and resumption of work. We found work to be highly fragmented: people average little time in working spheres before switching and 57% of their working spheres are interrupted. Collocated people work longer before switching but have more interruptions. Most internal interruptions are due to personal work whereas most external interruptions are due to central work. Though most interrupted work is resumed on the same day, more than two intervening activities occur before it is. We discuss implications for technology design: how our results can be used to support people to maintain continuity within a larger framework of their working spheres

Abstract:

This article studies the main determinants of foreign portfolio investment in Mexico by means of a causality analysis using econometric techniques. A suitable country-risk indicator is introduced, and unit-root tests are performed on the model's variables. Cointegration tests are then carried out, and the vector-autoregression methodology with error correction is applied. Finally, the variance decomposition and the impulse-response function of the proposed econometric model are studied

Abstract:

In this paper, we study the main determinants of foreign portfolio investment in Mexico by means of a causality analysis with econometric techniques. We introduce a suitable country-risk indicator and test for unit roots in the variables of the model. Moreover, we carry out cointegration tests in vector autoregressive models with error correction. Finally, we study the variance decomposition and the impulse-response function of the proposed vector autoregressive model
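
A hedged sketch of this econometric workflow using Python's statsmodels (my reconstruction, not the authors' code; `df` is an assumed DataFrame holding the model's variables, e.g. portfolio flows and the country-risk indicator):

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

def workflow(df: pd.DataFrame):
    data = df.dropna()

    # 1. Unit-root (ADF) test on each variable.
    for col in data.columns:
        stat, pvalue = adfuller(data[col])[:2]
        print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")

    # 2. Johansen cointegration test (trace statistics in .lr1).
    joh = coint_johansen(data, det_order=0, k_ar_diff=1)
    print("trace statistics:", joh.lr1)

    # 3. VECM estimation, then impulse-response analysis; the variance
    #    decomposition can be read off the same fitted system.
    res = VECM(data, k_ar_diff=1, coint_rank=1).fit()
    irf = res.irf(10)  # impulse-response functions over 10 periods
    return res, irf
```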

Abstract:

In this work, we present the solution of the equations that govern reactant transport in a well-mixed system that contains particles where diffusion and first-order reaction occur. The transport equations are coupled by an interfacial boundary condition that includes mass transfer resistance. The statement of the problem allows arbitrary time-dependent feed functions. The evaluation of the solution obtained by the Laplace method requires the solution of an eigenvalue problem. We discuss the evaluation of the solution, and typical results for three different feed functions (step, pulse, and oscillatory) are presented. The resulting equations are able to show the effect of internal and external mass transfer limitations on the particle and fluid concentrations and on kinetic experimental results
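
As a hedged illustration of the kind of coupled system meant here (assuming spherical particles; the paper's exact notation and geometry may differ), the particle and fluid balances take the form:

```latex
% Particle (diffusion + first-order reaction), with film resistance at r = R:
\frac{\partial c_p}{\partial t}
  = D\left(\frac{\partial^2 c_p}{\partial r^2}
  + \frac{2}{r}\,\frac{\partial c_p}{\partial r}\right) - k\,c_p,
\qquad
D\,\frac{\partial c_p}{\partial r}\Big|_{r=R} = k_f\bigl(c_f - c_p\big|_{r=R}\bigr)

% Well-mixed fluid, fed by an arbitrary time-dependent function c_in(t):
V_f\,\frac{dc_f}{dt} = Q\bigl(c_{\mathrm{in}}(t) - c_f\bigr)
  - N_p\,4\pi R^2\,k_f\bigl(c_f - c_p\big|_{r=R}\bigr)
```

Here the film coefficient k_f carries the external mass-transfer resistance, while D and k govern internal diffusion and reaction.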

Abstract:

It might seem that the constitutional protection of the right to abortion was revoked at a stroke, but that was not the case. Little by little, over almost fifty years, conservatives carried out an effective strategy to capture the judiciary. As a consequence, the United States could now experience a counter-revolution that eliminates the liberties and rights of several groups

Abstract:

We investigate empirically the extent of misreporting in a poverty alleviation program in which self-reported information, followed by a household visit, is used to determine eligibility. In the model we propose and estimate, underreporting may be due to a deception motive, and overreporting to an embarrassment motive. We find that underreporting of goods and desirable home characteristics is widespread, and that overreporting is common with respect to goods linked to social status. Larger program benefits encourage underreporting and discourage overreporting. We also estimate the costs of lying and embarrassment for different goods, and show that the embarrassment cost for lacking a good is proportional to the percentage of households who own the good

Abstract:

We investigate the hypothesis that conditioning transfers to poor families on school attendance leads to a reallocation of household resources which enhances the human capital of the next generation, via the effect of the conditionality on the shadow price of human capital and (possibly) via the effect of the transfers on household bargaining. We introduce a model to study the effects of conditional transfers on intra-household allocations, and provide suggestive evidence of the importance of price effects using data from a conditional transfer program in Mexico

Abstract:

We model a two-alternative election in which voters may acquire information about which is the best alternative for all voters. Voters differ in their cost of acquiring information. We show that as the number of voters increases, the fraction of voters who acquire information declines to zero. However, if the support of the cost distribution is not bounded away from zero, there is an equilibrium with some information acquisition for arbitrarily large electorates. This equilibrium dominates in terms of welfare any equilibrium without information acquisition - even though generally there is too little information acquisition with respect to an optimal strategy profile

Abstract:

In an influential article, Alesina and Drazen [1991. Why are Stabilizations Delayed? American Economic Review 81, 1170-1188] model delay of stabilization as the result of a struggle between political groups supporting reform plans with different distributional implications. In this paper we show that ex ante asymmetries in the costs of delay for the groups will reduce the probability of conflict and will lead to a shorter expected delay. Accurate common information about the cost of delay may lead to no delay at all. In an asymmetric conflict, a wider divergence in the distributional implications of reform will reduce the probability of conflict but will lead to a longer expected delay. We motivate the asymmetric model of delay by reference to Latin American episodes of the 1980s

Abstract:

In this article I survey the literature on the political microeconomics of electoral participation, strategic voting, and voter information, with emphasis on recent advances. I also briefly discuss some implications of this literature for democratic consolidation in Latin America

Abstract:

In this article I survey the political economy literature on electoral participation, strategic voting, and voter information, with some emphasis on recent contributions. In addition, I briefly discuss the relevance of this literature for the consolidation of democracy in Latin America

Abstract:

We analyze an election in which voters are uncertain about which of two alternatives is better for them. Voters can acquire some costly information about the alternatives. In agreement with Downs's rational ignorance hypothesis, individual investment in political information declines to zero as the number of voters increases. However, if the marginal cost of information is near zero for nearly irrelevant information, there is a sequence of equilibria such that the election outcome is likely to correspond to the interests of the majority for arbitrarily large numbers of voters. Thus, “rationally ignorant” voters are consistent with a well-informed electorate

Abstract:

This article considers the welfare implications of transfers to poor families that are conditional on school attendance and other forms of investment in children's human capital. Family decisions are assumed to be the result of (generalized) Nash bargaining between the two parents. We show that, as long as bequests are zero, conditional transfers are better for children than unconditional transfers. The mother's welfare may also be improved by conditional transfers. Thus, conditioning transfers to bequest-constrained families has potentially desirable intergenerational and intragenerational welfare effects. Conditioning transfers to unconstrained families makes every family member worse off

Abstract:

This paper derives a necessary condition for unanimous voting to converge to the perfect information outcome when voters are only imperfectly informed about the alternatives. Under some continuity assumptions, the condition is also sufficient for the existence of a sequence of equilibria that exhibits convergence. The requirement is equivalent to that found by Milgrom [1979, Econometrica 47, 679-688] for information aggregation in single-prize auctions. An example illustrates that convergence may be reasonably fast for small committees. However, if voters have common preferences, unanimity is not the optimal voting rule. Unanimity rule makes sense only as a way to ensure minority views are respected. Journal of Economic Literature Classification Numbers: D72, D82, D44

Abstract:

We develop a spatial model of competition between two policy-motivated parties. The parties know which policies are desirable for voters, while voters do not. The announced positions of the parties serve as signals to the voters concerning the parties' private information. In all separating equilibria, when the left-wing party attains power, the policies it implements are to the right of the policies implemented by the right-wing party when it attains power. Intuitively, when right-wing policies become more attractive, the left party moves toward the right in order to be assured of winning, while the right-wing party stays put in a radical stance

Abstract:

This paper compares two voting methods commonly used in presidential elections: simple plurality voting and plurality runoff. In a situation in which a group of voters have common interests but do not agree on which candidate to support due to private information, information aggregation requires them to split their support between their favorite candidates. However, if a group of voters split their support, they increase the probability that the winner of the election is not one of their favorite candidates. In a model with three candidates, due to this tension between information aggregation and the need for coordination, plurality runoff leads to higher expected utility for the majority than simple plurality voting if the information held by voters about the candidates is not very accurate

Abstract:

This paper studies a situation in which parties are better informed than voters about the optimal policies for voters. We show that voters are able to infer the parties' information by observing their electoral positions, even if parties have policy preferences which differ substantially from the median voter's. Unlike previous work that reaches opposite conclusions, we assume that voters have some private information of their own. If the information available to voters is biased, parties' attempts to influence voters' beliefs will result in less than full convergence even if parties know with certainty the optimal policy for the median voter

Abstract:

Diagnostic and contact tracing apps are a needed weapon to contain contagion during a pandemic. We study how the content of the messages used to promote the apps influences adoption by running a survey experiment on approximately 23,000 Mexican adults. Respondents were randomly assigned to one of three different prompts, or a control condition, before stating their willingness to adopt a diagnostic app and contact tracing app. The prompt emphasizing government efforts to ensure data privacy, which has been one of the most common strategies, reduced willingness to adopt the apps by about 4 pp and 3 pp, respectively. An effective app promotion policy must understand individuals’ reservations and be wary of unintended reactions to naïve reassurances

Abstract:

While effective preventive measures against COVID-19 are now widely known, many individuals fail to adopt them. This article provides experimental evidence about one potentially important driver of compliance with social distancing: social norms. We asked each of 23,000 survey respondents in Mexico to predict how a fictional person would behave when faced with the choice about whether or not to attend a friend’s birthday gathering. Every respondent was randomly assigned to one of four social norms conditions. Expecting that other people would attend the gathering and/or believing that other people approved of attending the gathering both increased the predicted probability that the fictional character would attend the gathering by 25%, in comparison with a scenario where other people were not expected to attend nor to approve of attending. Our results speak to the potential effects of communication campaigns and media coverage of compliance with, and normative views about, COVID-19 preventive measures. They also suggest that policies aimed at modifying social norms or making existing ones salient could impact compliance

Abstract:

The purpose of this chapter is to introduce a new research field within experimental jurisprudence-which we refer to as experimental longtermist jurisprudence-aimed at informing legal-philosophical, doctrinal, and policy debates relating to the long-term future, and in particular the interests of future generations. Historically, the interests of future generations have been neglected by legal systems, which fail to grant them democratic representation in the legislature, standing to bring forth a lawsuit in the judiciary, and serious consideration in cost-benefit analyses in the executive. Recently, this neglect has increasingly been called into question based on normative philosophical arguments relating to the value of future generations; empirical evidence relating to the severity of the risks faced by future generations relative to those faced by previous generations; and legal reforms attempting to codify the interests of future generations via constitutional provisions. Experimental longtermist jurisprudence uses methods from experimental psychology to help identify the source of this disconnect, and in turn to help determine the appropriate level and form of legal protection for future generations. In this chapter we (a) provide an overview of the substantive and methodological underpinnings of experimental longtermist jurisprudence; (b) introduce three research programs within experimental longtermist jurisprudence; and (c) discuss the normative implications of each of these three research programs

Abstract:

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting 1) general legal protection, 2) legal personhood, and 3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest of any group surveyed on, and rated the desired level of protection for sentient AI as lower than all groups other than corporations. We further investigated and observed political differences in responses; liberals were more likely to endorse legal protection and personhood for sentient AI than conservatives. Taken together, these results suggest that laypeople are not by-and-large in favor of granting legal protection to AI, and that the ordinary conception of legal status, similar to codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous literature regarding political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration, as well

Abstract:

This study examines the economic motivations behind the Law for the Transparency, Prevention and Combat of Improper Practices in Advertising Contracting, published in 2021. The analysis draws on economic theories of moral hazard in contracting and of networked platforms. The general functioning of the advertising industry is described, and the structure of the industry in Mexico is documented. The main prediction of the economics of platform networks is that firms seek the mix of prices on each side of the market that optimizes their results, and may charge little, or even below cost, to media or advertisers that help them expand their network. Typically, they serve publishers and advertisers. The advertising law establishes absolute prohibitions on a firm serving both publishers and advertisers and on brokering advertising space, and assigns to the Federal Economic Competition Commission (COFECE) the function of enforcing the law. However, the conducts targeted by the initiative that led to the law are not competition matters, nor are they generally improper. The absolute prohibitions established, if applied, would affect not only traditional agencies but would force the break-up of the large digital platforms that are properly intermediaries in advertising markets

Abstract:

This research studies the economic motivations of the Law for the Transparency, Prevention and Combat of Improper Practices in Advertising Contracting, published in 2021. The analysis is based on economic theories relating to moral hazard in contracting and networked platforms. It describes the general functioning of the advertising industry and documents the structure of the industry in Mexico. The main prediction of the economics of platform networks is that companies look for the price mix on each side of the market that optimizes their results, and can charge little, or even below cost, to media or advertisers that help them expand their network. Typically, they serve publishers and advertisers. The advertising law establishes absolute prohibitions on a company serving both publishers and advertisers and on brokering advertising space, and assigns to the Federal Economic Competition Commission (COFECE) the function of enforcing the law. However, the conducts targeted by the initiative that led to the law are not competition matters, nor are they per se illegal. The absolute prohibitions that are established, if applied, would affect not only traditional agencies but would force the break-up of the large digital platforms that are properly intermediaries in advertising markets

Abstract:

The economic value of increases in the length of life is estimated for a broad set of countries, using age-group data on consumption, earnings, leisure, and mortality. The estimates are sensitive to the intertemporal-substitution and minimum-consumption parameters, as well as to the interest rate. A scenario of a 1/10,000 improvement in survival probability across the lifespan yields gains in the value of statistical life (VSL) at the beginning of life of around USD 500 for the richest countries, USD 200 for middle-income countries, and USD 30 for the poorest. Benchmark income elasticities are generally below 1.0, except for low-income countries at older ages. The income elasticity of the VSL is computed directly, unlike the previous literature, which assumes values in order to compute the VSL in less developed countries from measurements in richer countries

Abstract:

The economic value of increases in the length of life is estimated for a large set of countries using age-specific data on consumption, earnings, leisure, and mortality. Estimates are sensitive to parameters on intertemporal substitution and minimum consumption, and to the interest rate. A scenario of a 1/10,000 improvement in survival probabilities across life results in Value of Statistical Life (VSL) gains at the beginning of life of around USD 500 for the wealthier countries, USD 200 for middle-income countries, and USD 30 for the poorest. Benchmark income elasticities are in general below 1, except for low-income countries at older ages. The income elasticity of VSL is calculated directly, and not, as in previous literature, imputed for less developed countries from measurements in wealthier countries
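
In symbols, the lifecycle object behind such estimates (a standard formulation in this literature; the notation is mine, not necessarily the paper's) is the value of remaining life at age a:

```latex
V(a) \;=\; \sum_{t \ge a} \frac{S(t)}{S(a)}\,\frac{v(t)}{(1+r)^{\,t-a}}
```

where S(t) is survival to age t, r is the interest rate, and v(t) is the value of a life-year built from age-specific consumption and leisure; the VSL gain from a uniform 1/10,000 survival improvement is then the corresponding marginal change in V(0).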

Abstract:

The minimum wage in Mexico is low. The cause is a certain monetary view that causally links the minimum wage to inflation, motivating a policy in which the minimum wage rises as much as, or less than, prices in general. There is no convincing evidence that such causality exists. On the other hand, a large increase in the minimum wage would affect the employment of low-skilled women and young people in the poorest states. Given the observed low level of the minimum wage, there is room for an increase with a small effect on employment, and information-economics arguments suggest that it could be beneficial. The institutional challenge is to define an adjustment mechanism that neither feeds into inflation nor produces new bureaucracies. An adjustment rule tied to wages in general, unrelated to the fiscal calendar, is proposed

Abstract:

The minimum wage is low in Mexico. The cause is a monetary view that causally links the minimum wage to inflation, encouraging a policy in which the minimum wage increases as much as or less than prices in general. There is no convincing evidence that such a causal link exists. On the other hand, a large minimum-wage rise would affect the employment of low-skill women and youth in the poorest states. In view of the observed low level of the minimum wage, there is a margin for an increase with little effect on employment, and there are information-economics arguments suggesting that it could be beneficial. The institutional challenge is to define an adjustment mechanism that neither feeds into inflation nor produces new bureaucracies. We propose an adjustment rule tied to wages in general, unrelated to the fiscal calendar

Abstract:

Objective. To describe the allocation and purchasing mechanisms of the Seguro Popular, how they operate, and the controls applied to them; and to discuss incentive schemes that improve overall performance, strengthen primary care, and improve access to specialty hospitals. Materials and methods. The 2014 reforms to the General Health Law are assessed to understand their intent, which is to strengthen state systems and the relationship with the federal authority. Options are discussed so that allocation mechanisms better encourage primary care and access to specialty treatments, moving toward stronger guarantees of access to health services. Conclusions. To turn the State Social Protection in Health Regimes into agents for the expansion of services, the programmatic approach must be overcome in order to achieve a more effective relationship between the Federation and the States

Abstract:

Objective. To describe the allocation and purchasing mechanisms of the Seguro Popular program, the way they operate, and how controls are applied; and to discuss incentive schemes that can improve performance in general, strengthen primary care, and improve access to specialty hospitals. Materials and methods. The 2014 reforms to the General Health Law are evaluated to understand their intent, which is to strengthen state systems and the relationship with the federal authority. Options are discussed for allocation mechanisms that better encourage primary care and access to specialty treatments, moving toward stronger guarantees of access to health services. Conclusions. To turn the state schemes of social protection in health into agents for the expansion of services, the programmatic approach must be overcome to achieve a more effective relationship between the Federation and the States

Abstract:

We assess gains in longevity based on individuals' willingness to pay, and compute the social value of these gains. We apply the Murphy and Topel (2003, 2006) model to Mexico for the first time, estimating the value of a year of life and the value of remaining life at each age. We then use the estimates in three cases to illustrate how they can inform public-policy decisions: valuing gains in mortality rates, the cost of deaths from obesity, and a cost-benefit analysis of lives saved through treatment of children with leukemia

Abstract:

This paper evaluates the gains in longevity based on individuals’ willingness to pay, and measures the social values of these gains. We apply the Murphy and Topel (2003, 2006) model for Mexico for the first time, which allows us to estimate the value of a life year and the value of remaining years of life at each age. We also apply the estimates to three cases to exemplify how these calculations can be used in decisions concerning public policy: value of gains in mortality rates, cost of deaths associated with obesity, and a cost-benefit analysis of lives saved through treatment for children with leukemia

Abstract:

This study proposes an improvement to the Insight Centre for Data Analytics algorithm, which identifies the most relevant topics in a corpus of tweets and allows the construction of search rules for those topics in order to build a corpus of tweets for analysis. The improved algorithm yields gains above 14% in Purity and other metrics, with an execution time of about 10% of that of Latent Dirichlet Allocation (LDA)
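
For reference, the Purity metric cited above can be computed as follows (a standard definition, sketched here with invented toy labels):

```python
import numpy as np

def purity(cluster_ids, gold_labels):
    """Purity = (1/N) * sum over clusters of the cluster's majority-label count."""
    cluster_ids = np.asarray(cluster_ids)
    gold_labels = np.asarray(gold_labels)
    total = 0
    for c in np.unique(cluster_ids):
        members = gold_labels[cluster_ids == c]
        _, counts = np.unique(members, return_counts=True)
        total += counts.max()  # size of the most common gold label in cluster c
    return total / len(gold_labels)

print(purity([0, 0, 1, 1, 1], ["a", "a", "a", "b", "b"]))  # 0.8
```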

Abstract:

The defense of human freedom is one of the most problematic tasks within Leibniz's system, mainly because the Leibnizian concept of God has characteristics that appear contradictory with any kind of human freedom. I will attempt to show how, in order to dissolve the contradictions that arise, Leibniz resorts to his own concept of creation, the only standpoint from which the requirements of the Leibnizian definition of freedom can be met. I will also attempt to show that this resort is present in the Monadology and, therefore, belonged to what Leibniz considered one of the most finished versions of his own system

Abstract:

Defending human freedom is one of the most challenging tasks in the Leibnizian system due to the fact that his concept of God has characteristics that are apparently in contradiction with any notion of human freedom. In this article, I will show how, in order to eliminate those contradictions, Leibniz resorts to his own idea of creation, the only point of view meeting the requirements of his definition of freedom. Furthermore, I will demonstrate how the latter idea is present in the Monadology and thus belongs to what Leibniz himself considered one of the most complete versions of his own system

Abstract:

Philosophy has never been a popular science. It usually appears as the opposite of common sense and, therefore, as an activity that, however interesting, is incapable of responding to the circumstances of "real life". This effect is more intense with some authors and some topics. The Leibnizian solution to the problem of freedom reaches the highest level of perceived inapplicability. This paper attempts to determine whether, as the ordinary person suspects, the work of the philosopher of Hanover on this matter is absolutely incompatible with the everyday perception of freedom

Abstract:

In this paper we present the results of a survey of research studies on the learning of multivariable calculus in a wide geographic spectrum. The goal of this study is to describe what research results tell us about students' learning of calculus of two-variable functions, and how they inform its teaching. In spite of the diversity of cultures and theoretical approaches, results obtained are coherent and similar in terms of students' learning, and of teaching strategies designed to help students understand two-variable calculus deeply. Results show the need to introduce students to the geometry of three-dimensional space and vectors, the importance of the use of graphics and geometrical representations, and of thorough work on functions before introducing other calculus topics. The reviewed research deals in different ways with the idea of generalization concerning how students' knowledge of one-variable calculus influences their understanding of multivariable calculus. Some research-based teaching strategies that have been experimentally tested with good results are included. Results discussed include the following: basic aspects of functions of two variables, limits, differential calculus, and integral calculus. The survey shows that there is still a need for research using different theoretical perspectives, on the transition from one-variable calculus to two-variable and multivariable calculus

Abstract:

In this paper we report on a study of how students understand some of the most fundamental ideas of Riemann integrals of functions of two variables. We apply Action-Process-Object-Schema (APOS) Theory to pose a preliminary genetic decomposition (GD), conjecturing mental constructions that students would need to relate Riemann sums to integrals of functions of two variables over rectangles. The genetic decomposition is informed by the researchers’ classroom experience, findings of a previous study that applied semiotic representation theory, and by a study on integrals of functions of one variable. We pay particular attention to the case of an integral of a continuous function over a rectangle and the simplest partition possible, that consisting only of the rectangle itself. We then explore students’ geometrical understanding of the relation between the single term f(a,b) ΔxΔy, where (a,b) is a point on the rectangle, and the double integral over the rectangle. We tested the GD through interviews with 10 students who had just finished a lecture-based multivariable calculus course. The findings underscore the importance of each of the mental constructions described in the genetic decomposition and suggest that students have difficulty with some mental constructions that may commonly be assumed to be obvious during instruction
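
To make the mathematical object concrete, here is a small illustrative computation (my own example, not part of the study's instruments): a Riemann sum over a rectangle, including the trivial one-cell partition discussed above, where the sum reduces to the single term f(a,b)ΔxΔy.

```python
import numpy as np

def riemann_sum(f, x0, x1, y0, y1, nx, ny):
    """Lower-left Riemann sum of f over [x0,x1] x [y0,y1] with an nx-by-ny grid."""
    xs = np.linspace(x0, x1, nx, endpoint=False)  # lower-left sample points
    ys = np.linspace(y0, y1, ny, endpoint=False)
    dx, dy = (x1 - x0) / nx, (y1 - y0) / ny
    X, Y = np.meshgrid(xs, ys)
    return float(np.sum(f(X, Y)) * dx * dy)

f = lambda x, y: x * y
print(riemann_sum(f, 0, 1, 0, 1, 1, 1))      # trivial partition: f(0,0)*1*1 = 0
print(riemann_sum(f, 0, 1, 0, 1, 200, 200))  # ~0.2475, near the exact integral 1/4
```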

Abstract:

This article reports on an Action-Process-Object-Schema Theory (APOS) based study consisting of three research cycles on student learning of the basic idea of a two-variable function and its graphical representation. Each of the three research cycles used semi-structured interviews with students to test a conjecture about mental constructions (genetic decomposition) students may use to understand functions of two variables, develop supporting classroom activities based on interview results, and successively improve the conjecture. The article brings together for the first time findings already reported in the literature from the first two research cycles, and the results of the third and final cycle. The final results show that students who were assigned special activities based on the research findings of the first two cycles were more likely to exhibit behavior consistent with a Process conception of function of two variables. An important contribution of the article is that it shows how different APOS research cycles may be used to successively improve students’ understanding of a mathematical notion. Also, the description of findings from the three research cycles provides a potentially useful guide to improve student learning of functions of two variables

Abstract:

APOS Theory is applied to study student understanding of directional derivatives of functions of two variables. A conjecture of mental constructions that students may do in order to come to understand the idea of a directional derivative is proposed and is tested by conducting semi-structured interviews with 26 students. The conjectured mental construction of directional derivative is largely based on the notion of slope. The interviews explored the specific conjectured constructions that students were able to do, the ones they had difficulty doing, as well as unexpected mental constructions that students seemed to do. The results of the empirical study suggest specific mental constructions that play a key role in the development of student understanding, common student difficulties typically overlooked in instruction, and ways to improve student understanding of this multivariable calculus topic. A refined version of the genetic decomposition for this concept is presented
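
For reference, the slope-based notion underlying this genetic decomposition is the textbook definition (stated here for completeness, not quoted from the study):

```latex
D_{\mathbf{u}} f(a,b)
  \;=\; \lim_{h \to 0} \frac{f(a + h u_1,\; b + h u_2) - f(a,b)}{h}
  \;=\; \nabla f(a,b) \cdot \mathbf{u},
\qquad \lVert \mathbf{u} \rVert = 1
```

that is, the slope of the surface z = f(x,y) above the line through (a,b) in the direction of the unit vector u.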

Abstract:

APOS theory is applied to study student understanding of the differential calculus of functions of two variables, meaning by that, the concepts of partial derivative, tangent plane, the differential, directional derivative, and their interrelationship. A genetic decomposition largely based on the idea of a directional slope in three dimensions is proposed and tested by conducting semi-structured interviews with 26 students who had just taken a course in multivariable calculus. The interviews explored the mental constructions of the genetic decomposition they can do or have difficulty doing. Results give evidence of those mental constructions that seem to play an important role in the understanding of these important concepts

Abstract:

Action-Process-Object-Schema (APOS) Theory is applied to study student understanding of directional derivatives of functions of two variables. A conjecture of the main mental constructions that students may do in order to come to understand directional derivatives is proposed and is tested by conducting semi-structured interviews with 26 students who had just taken multivariable calculus. The interviews explored the specific constructions of the genetic decomposition that students are able to do and also the ones they have difficulty doing. The conjecture, called a genetic decomposition, is largely based on the elementary notion of slope and on a development of the concept of tangent plane. The results of the empirical study suggest the importance of constructing coordinations of plane, tangent plane, and vertical change processes in order for students to conceptually understand directional derivatives

Abstract:

In a series of previous studies, the authors have described specific mental constructions that students need to develop, and which help explain widely observed difficulties in their graphical analysis of functions of two variables. This new study, which applies Action-Process-Object-Schema theory and Semiotic Representation Theory, is based on semi-structured interviews with 15 students. It results in new observations on student graphical understanding of two-variable functions. The effect of research findings in designing a set of activities to help students carry out the specific constructions found to be needed is briefly discussed

Abstract:

This is a study of the didactical organization of a research-based set of activities, designed using APOS theory, to help university students make the constructions needed to understand and graph two-variable functions that previous studies found to be lacking. The model of the "moments of study" of the Anthropological Theory of Didactics is applied to analyze the activities in terms of their institutional viability

Abstract:

In this study we analyze students’ understanding of two-variable functions; in particular we consider their understanding of domain, the possibly arbitrary nature of function assignment, uniqueness of function image, and range. We use APOS theory and semiotic representation theory as a theoretical framework to analyze data obtained from interviews with thirteen students who had taken a multivariable calculus course. Results show that few students were able to construct an object conception of function of two variables. Most students showed difficulties finding domains of functions, in particular when they were restricted to a specific region in the xy plane. They also showed that they had not fully coordinated their R^3, set, and one-variable function schemata. We conclude from the analysis that many of the interviewed students’ notion of function can be considered pre-Bourbaki

Abstract:

The study of student understanding of multivariable functions is of fundamental importance given their role in mathematics and its applications. The present study analyses students' understanding of these functions, focusing on recognition of domain and range of functions given in different representational registers, as well as on uniqueness of function value. APOS and semiotic representation theory are used as the theoretical framework. The present study includes results of the analysis of interviews with 13 students. The analysis focuses on students' constructions after a multivariable calculus course, and on the difficulties they face when addressing tasks related to this concept

Abstract:

We study the stability of average optimal control of general discrete-time Markov processes. Under certain ergodicity and Lipschitz conditions, the stability index is bounded by a constant times the Prokhorov distance between the distributions of the random vectors determining the original and the perturbed control processes
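
In symbols, the stated bound has the following shape (my paraphrase; the paper's constants and exact metric conventions may differ): if Δ denotes the stability index, then

```latex
\Delta \;\le\; C \,\pi\bigl(\mu, \tilde{\mu}\bigr)
```

where π is the Prokhorov metric and μ, μ̃ are the distributions of the random vectors determining the original and the perturbed control processes.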

Abstract:

Public streams of geo-located social media information have provided researchers with a rich source of information from which to draw patterns of urban-scale human mobility. However, most of the literature relies on assumptions about the spatial distribution of these data (e.g., by considering only a uniform grid division of space). In this work the authors present a method that does not rely on such assumptions. They followed the social media activity of 24,135 Twitter users from Mexico City over a period of seven months (June 2013 - February 2014). The authors' method clusters users' geolocations into a 19-zone, data-driven division of Mexico City. These results can be interpreted from a graph-theoretic perspective by representing each division as a node and the edges between them as the number of people traveling between locations. Graph centrality reveals the city's key infrastructural points. The authors argue that without these gateways, mobility would either be radically transformed or the city would break apart
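
The graph reading described above can be sketched as follows (zone names and trip counts are invented for illustration; the 19 zones and flows of the study are not reproduced):

```python
import networkx as nx

# Zones are nodes, inter-zone travelers are edge weights, and centrality
# flags gateway zones.
trips = [("centro", "sur", 120), ("centro", "norte", 95),
         ("centro", "oriente", 60), ("sur", "oriente", 40),
         ("norte", "poniente", 30)]

G = nx.Graph()
for a, b, travelers in trips:
    # Betweenness treats edge weights as distances, so use inverse flow:
    # heavily travelled links count as "short".
    G.add_edge(a, b, distance=1.0 / travelers)

centrality = nx.betweenness_centrality(G, weight="distance")
print(max(centrality, key=centrality.get))  # -> 'centro', the main gateway
```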

Abstract:

In this paper, we improve the named-entity recognition (NER) capabilities of an already existing text-based dialog system (TDS) in Spanish. Our solution is twofold: first, we developed a hidden Markov model part-of-speech (POS) tagger trained with frequencies from over 120 million words; second, we obtained 2,283 real-world conversations from the interactions between users and a TDS. All interactions occurred through a natural-language text-based chat interface. The TDS was designed to help users decide which product from a well-defined catalog best suited their needs. The conversations were manually tagged using the classical Penn Treebank tag set, with the addition of an ENTITY tag for all words relating to a brand or product. The proposed system uses a hybrid approach to NER: first it looks up each word in a previously defined catalog; if the word is not found, it uses the tagger to tag it with its appropriate POS tag. When tested on an independent conversation set, our solution presented higher accuracy and higher recall compared with a current development from the industry
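
The hybrid lookup-then-tag step can be sketched as follows (a toy reconstruction: the catalog entries and the fallback tagger below are placeholders, not the system's actual resources):

```python
CATALOG = {"galaxy", "iphone", "xbox"}  # hypothetical brand/product catalog

def fallback_pos_tag(word):
    # Stand-in for the HMM POS tagger trained on the 120M+ word corpus.
    return "NN"

def tag_token(word):
    # 1) catalog lookup; 2) fall back to the POS tagger when not found.
    if word.lower() in CATALOG:
        return (word, "ENTITY")
    return (word, fallback_pos_tag(word))

print([tag_token(w) for w in "quiero un iphone barato".split()])
```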

Abstract:

Sociological studies on transnational migration are often based on surveys or interviews, an expensive and time-consuming approach. On the other hand, the pervasiveness of mobile phones and location-aware social networks has introduced new ways to understand human mobility patterns at a national or global scale. In this work, we leverage geo-located information obtained from Twitter to understand transnational migration patterns between two border cities (San Diego, USA and Tijuana, Mexico). We obtained 10.9 million geo-located tweets from December 2013 to January 2015. Our method infers human mobility by inspecting tweet submissions and users' home locations. Our results depict a transnational community structure that exhibits the formation of a functional metropolitan area physically transcending international borders. These results show the potential for re-analysing sociological phenomena from a technology-based empirical perspective
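
A minimal sketch of the inference step, under the assumption (mine, for illustration) that a user's home is their most frequent tweeting zone and that mobility can be read off consecutive zone changes:

```python
from collections import Counter

def home_zone(tweet_zones):
    """Most frequent zone among a user's chronologically ordered tweets."""
    return Counter(tweet_zones).most_common(1)[0][0]

def cross_zone_trips(tweet_zones):
    """Count transitions between different zones in consecutive tweets."""
    return sum(1 for a, b in zip(tweet_zones, tweet_zones[1:]) if a != b)

zones = ["tijuana", "tijuana", "san_diego", "tijuana", "san_diego"]
print(home_zone(zones), cross_zone_trips(zones))  # tijuana 3
```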

Abstract:

This paper presents a study of the usage of Twitter within the context of urban activity. We retrieved a set of tweets submitted by users located in Mexico City. Tweets were labeled as either positive or negative mood using a sentiment-analyzer implementation. By calculating the average mood, we were able to run a Mann-Whitney U test to evaluate differences in the calculated mood per day of week. We found that all days of the week had significantly different medians, with Sunday being the most positive day and Thursday the most negative. Additionally, we study the locations of the tweets as indicators of important events and landmarks around the city

Abstract:

We propose a statistical study of sentiment produced in an urban environment by collecting tweets submitted in a certain timeframe. Each tweet was processed using our own sentiment classifier and assigned either a positive or a negative label. By calculating the average mood, we were able to run a Mann-Whitney U test to evaluate differences in the calculated mood per day of week. We found that all days of the week had significantly different medians. We also found positive correlations between Mondays and the rest of the week
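
The test itself can be run with SciPy; the sentiment scores below are simulated stand-ins, since the original data are not reproduced here:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Compare the distribution of per-tweet sentiment scores between two days.
rng = np.random.default_rng(1)
sunday = rng.normal(0.2, 1.0, 300)     # stand-in sentiment scores
thursday = rng.normal(-0.1, 1.0, 300)

stat, p = mannwhitneyu(sunday, thursday, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.4f}")  # a small p suggests different medians
```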

Abstract:

The feeling of destiny has marked history. However, the historiography of emotions has not devoted sufficient attention to it. In classical times it was taken into account in decision making. Later we find rich systematic reflections: Machiavelli, Montaigne, Smith, Tolstoy, Weber, Geertz, and Adorno, among others. After studying these testimonies, we analyze the case of Gandhi and his inner voice. Revitalizing the discussion of the feeling of destiny contributes to understanding phenomena such as populism. It is not possible to do politics with reasons alone: emotions must be taken into account and, among them, enthusiasm

Abstract:

The "feeling of destiny" has marked history. However, the historiography of emotions has not devoted enough attention to it. In classical times it was considered for decision making. Later we find rich systematic reflections: Machiavelli, Montaigne, Smith, Tolstoy, Weber, Geertz, Adorno... After studying these testimonies, we analyze the case of Gandhi and his "inner voice". Revitalizing the discussion about the feeling of destiny contributes to understanding phenomena such as populism. It is not possible to do politics only with reasons: emotions must be taken into account; among them, the enthusiasm

Abstract:

Drawing on the methodological proposals of the Studies of the Imaginary, this work seeks to show that some of the main plots devised by Philip K. Dick have a Gnostic substratum. To do so, we first mention some biographical data showing Dick's interest in Gnosticism; second, we identify identical mythemes in both corpora ("this world is a hoax"; "this world is an iron prison"; "the initiate has a double"; etc.). This research thereby contributes to the scholarly debate on the reception and reuse of ancient myths in science fiction literature

Abstract:

Based on the methodological proposals of the Studies of the Imaginary, this work aims to show that some of the main plots devised by Philip K. Dick have a Gnostic substratum. To do this, we first mention some biographical data that show Dick's interest in Gnosticism; secondly, we identify identical mythemes between both corpora ("this world is a hoax"; "this world is an iron prison"; "the initiate has a double"; etc.). This research thereby contributes to the scholarly debate about the reception and reuse of ancient myths in science fiction literature

Abstract:

The recent publication of the book Homo religiosus. Sociología y antropología de las religiones provides an occasion to review the foundational ideas of some of the main schools of religious studies. Today, anyone interested in studying the religious phenomenon must be aware of the origin and meaning of concepts such as "mana", "numinous", "total social fact", "structure", "collective unconscious", "archetype", and "eternal return", among others, and this book will be of great help

Abstract:

The recent publication of the book Homo religiosus. Sociología y antropología de las religiones serves as an excuse to review the foundational ideas of some of the major schools of religion studies. Nowadays, the scholar interested in studying the phenomenon of the religious must be aware of the origin and use of concepts such as "mana", "numinous", "total social fact", "structure", "collective unconscious", "archetype", "Eternal return", and so on. This book will be of great help to achieve this

Abstract:

This research analyzes the mythical precedents of what is undoubtedly one of the main plots of science fiction: time travel. Taking as a starting point the works of Joseph Campbell and Gilbert Durand's method of convergence, we find that some of the mythemes of universal mythology are constantly repeated in science fiction. Our hypothesis is that this is precisely why time travel is so successful with the public. In other words, science fiction transmits essential teachings formerly propagated by tradition. We focus on three of them: 1) the time of the ordinary world is false; 2) consequently, time is relative; 3) ultimately, time does not exist (what exists is eternity)

Abstract:

This research analyzes the mythical precedents of what is undoubtedly one of the main plots of science fiction: time travel. Taking as a starting point the works of Joseph Campbell and the method of convergence of Gilbert Durand, we find that science fiction constantly repeats some of the mythemes of universal mythology. Our hypothesis is that time travel is so successful among the public precisely for this reason: science fiction conveys essential teachings previously propagated by tradition. We present three of them: 1) the time of the ordinary world is false; 2) consequently, time is relative; 3) ultimately, time does not exist (what exists is eternity)

Abstract:

The design of a simple, clear, solid, and attractive infographic involves prior work of research, prioritization, and design. In this paper we explore the teaching possibilities of using infographics in writing courses, and not only because of their relevance as visual tools that include discontinuous texts: the skills developed in designing advertisements, data visualizations, diagrams, conference posters, etc., are transferable to the writing of continuous texts (essays, opinion pieces, research papers, etc.). Infographics, moreover, not only "refer" but also "symbolize"

Resumen:

A partir de la “mitodología” y la fenomenología de las religiones, analizo el cuento “Blancanieves y los siete enanos” e identifico elementos que sugieren un origen lunar e iniciático de él; además, encuentro semejanzas entre Blancanieves y algunas otras sacerdotisas de la luna

Abstract:

With the “mythodology” and phenomenology of religions, I analyze the story of Snow White and the Seven Dwarfs and identify elements that suggest a lunar and initiatory origin; moreover, I find close similarities between Snow White and some other priestesses of the moon

Resumen:

En 2008 publicamos una tesis en la que aplicábamos el método de clasificación de símbolos propuesto por Gilbert Durand al corpus órfico. Desde la antropología, la hermenéutica, el psicoanálisis y la fenomenología de las religiones, abordamos la barahúnda de símbolos y obtuvimos resultados interesantes. Según el mencionado "método de convergencia", las cosmovisiones humanas son tripartitas: los símbolos de un pueblo tienen que ver con (a) el instinto de "caminar erecto", (b) el de nutrirse (mamar) o (c) el reproducirse (el sexo). Platón, en el Timeo, explica también la estructura del mundo a partir de tres símbolos, a los que designa, entre otras formas, como padre, madre e hijo. En esta investigación nos embarcamos en el análisis de la palabra erikepaios. Partiendo de los estudios más recientes de Alberto Bernabé y sus colaboradores, y retomando ciertas interpretaciones de la filosofía griega (Marcel Detienne, Peter Kingsley, Walter Burkert, etcétera), concluimos que, efectivamente, en los textos órficos el protagonista es el muerto, pero lo es, sin embargo, por estar vivo: por ser el renacido, el iniciado, el niño santo

Abstract:

In my PhD thesis (published in 2008), I applied the method of classification of symbols proposed by Gilbert Durand to the Orphic corpus. Drawing on anthropology, hermeneutics, psychoanalysis and the phenomenology of religions, I tackled the hubbub of symbols and obtained interesting results. Durand holds that human worldviews are tripartite: the symbols of a people have to do with (a) the instinct of "walking upright", (b) the instinct of nourishment (suckling) or (c) reproduction (sex). Plato, in the Timaeus, also explains the structure of the world from three symbols (father, mother and son). In this paper I analyze the word ERIKEPAIOS in order to find its original meaning. I base my study on the most recent work by Alberto Bernabé and his collaborators, and on certain interpretations of Greek philosophy (Marcel Detienne, Peter Kingsley, Walter Burkert, etc.). I conclude that, indeed, in the Orphic texts the protagonist is the dead individual; however, he is the main character because he is alive, as the reborn, the initiated, the holy child

Resumen:

Los documentos jurídicos y administrativos son difíciles de entender. ¿Hay forma de corregirlos? Una de las técnicas principales en aras de la claridad es la de hacer preguntas. La fundamentación de nuestra tesis la encontramos en las teorías más relevantes de la pragmalingüística, en algunos textos clásicos y en bibliografía reciente sobre lenguaje claro y literacidad en español. Es posible y necesario fundamentar las propuestas del lenguaje claro en función de los efectos cognitivos que producen en el receptor del mensaje. Aquí, pues, dotamos de bases científicas al movimiento y contribuimos a atacar el escepticismo con el que se recibe

Abstract:

Legal and administrative documents are difficult to understand; is there a proper way to correct them? One of the main techniques for the sake of clarity is to ask questions. The basis of our thesis is found in the most relevant theories of pragmatics, in some classic texts, and in recent bibliography on plain language and literacy in Spanish. It is possible and necessary to ground plain-language proposals in the cognitive effects they produce in the receiver of the message. Here, then, we provide the movement with scientific foundations and help counter the scepticism with which it is received

Resumen:

A pesar de que muchas veces hemos oído hablar sobre orfismo, una pátina de confusión empaña nuestros conocimientos. Parte de ese desconcierto es inevitable: para adentrarse en el misterio, no hay camino (no hay método). Sin embargo, ¿es posible encontrar un bastón que nos ayude a caminar en la noche oscura del mito? Aquí, presentamos una somera y rigurosa introducción al orfismo y un par de hipótesis: el alma lograba escapar de la cadena de las reencarnaciones por el décalage que órficos y pitagóricos veían en las razones musicales, astronómicas y matemáticas; su destino final era su origen: la vasija de las almas o pensamiento cósmico

Abstract:

Although we may have heard of Orphism, a patina of confusion obscures our understanding of it. Part of that bewilderment is unavoidable, since there is no established path or method to explore the mystery. Yet is it possible to find a cane to aid us in walking through the dark night of myth? In this article, a brief yet rigorous introduction to Orphism is presented along with a couple of conjectures: namely, that the soul was able to escape from the chain of reincarnations by means of the décalage which Orphics and Pythagoreans attributed to musical, astronomical, and mathematical proportions, and that its final destination was its origin: the jar of souls or cosmic thought

Abstract:

We initiate the study of the forward and backward shifts on the Lipschitz space of an undirected tree, L, and on the little Lipschitz space of an undirected tree, L_0. We determine that the forward shift is bounded both on L and on L_0 and, when the tree is leafless, it is an isometry; we also calculate its spectrum. For the backward shift, we determine when it is bounded on L and on L_0, we find the norm when the tree is homogeneous, we calculate the spectrum for the case when the tree is homogeneous, and we determine, for a general tree, when it is hypercyclic

Abstract:

High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral
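
As a quick, concrete illustration of the statistic described above, the following is a minimal NumPy sketch of the (biased) sample distance correlation in the sense of Székely, Rizzo and Bakirov; the quadratic toy data are an assumption of this sketch and do not come from the COMBO-17 database:

import numpy as np

def distance_correlation(x, y):
    # Biased sample distance correlation of two 1-D samples.
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    a = np.abs(x - x.T)                     # pairwise distance matrices
    b = np.abs(y - y.T)
    # double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                  # squared sample distance covariance
    denom = np.sqrt(np.sqrt((A * A).mean() * (B * B).mean()))
    return 0.0 if denom == 0 else np.sqrt(dcov2) / denom

# A nonlinear (quadratic) association: Pearson is near zero, dCor is not
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x**2 + 0.1 * rng.normal(size=2000)
print(np.corrcoef(x, y)[0, 1])              # close to 0
print(distance_correlation(x, y))           # clearly positive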

Abstract:

A model is presented to describe the energization of charged particles in planetary magnetospheres. The model is based on the stochastic acceleration produced by a random electric field that is induced by the magnetic field fluctuations measured within the magnetospheres. The stochastic behavior of the electric field is simulated through a Monte Carlo method. We solve the equation of motion for a single charged particle, comprising the stochastic acceleration due to the random electric field, the Lorentz acceleration (containing the local magnetic field and the corotational electric field) and the planetary gravitational acceleration, under several initial conditions. The initial conditions include the ion species and the velocity distribution of the particles, which depends on the sources they come from (solar wind, ionospheres, rings and satellites). We applied this model to Saturn's inner magnetosphere using a sample of particles (H+, H2O+, N+, O+ and OH+) initially located on Saturn's north pole, above the C-Ring, on the south pole of Enceladus, on the north pole of Dione and above the E-Ring. The results show that the particles tend to increase their energy with time, reaching several eV in a few seconds, and the largest energization is observed far from the planet. We can distinguish three main energization regions within Saturn's inner magnetosphere: minimum (Saturn's ionosphere), intermediate (Dione) and high-energy (Enceladus and the E-Ring). The resulting energy spectrum follows a power-law distribution (>1 keV) and a logistic, exponential-decay or asymmetric sigmoidal distribution (<1 keV)
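
The following schematic sketch illustrates this kind of Monte Carlo energization for a single proton, assuming a uniform background magnetic field and an electric field redrawn at random on every step; the amplitudes, the time step and the omission of the corotational and gravitational terms are simplifications made here for illustration, not the paper's actual parameters:

import numpy as np

rng = np.random.default_rng(1)
q_over_m = 9.58e7                        # proton charge-to-mass ratio, C/kg
B = np.array([0.0, 0.0, 2.0e-8])         # assumed uniform background field, T
E_rms = 1.0e-6                           # assumed rms fluctuating E-field, V/m
dt = 1.0e-3                              # time step, s
v = np.array([1.0e3, 0.0, 0.0])          # initial velocity, m/s (illustrative)

for step in range(200_000):              # one random "hitting event" per step
    E = E_rms * rng.standard_normal(3)           # stochastic electric field
    a = q_over_m * (E + np.cross(v, B))          # Lorentz acceleration
    v = v + a * dt                               # explicit Euler push

energy_eV = 0.5 * 1.67e-27 * (v @ v) / 1.602e-19
print(f"final kinetic energy ~ {energy_eV:.3f} eV")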

Abstract:

Voyager and Cassini plasma probe observations suggest that there are at least three fundamentally different plasma regimes at Saturn: the hot outer magnetosphere, the extended plasma sheet, and the inner plasma torus. At the outer regions of the inner torus, some ions have been accelerated to energies of the order of 43 keV. Protons are the dominant species outside about 9 Rs, whereas inside, the plasma consists primarily of a corotating comet-like mix of water-derived ions with roughly 3% N+. Over the A and B rings, an ionosphere, dominated by O2+ and O+, can be observed. The energies of magnetospheric particles range from hundreds of keV to several MeV. Possible explanations for the observed high-energy population of particles involve the release of magnetic energy, which heats the ion component of the plasma and then accelerates electrons to energies of some MeV. In this work we develop a model that calculates the acceleration of charged particles in Saturn's magnetosphere. We propose that the stochastic electric field associated with the observed magnetic field fluctuations is responsible for such acceleration. A random electric field is derived from the fluctuating magnetic field, via a Monte Carlo simulation, and is then applied to the momentum equation of charged particles seeded in the magnetosphere. Taking different initial conditions, such as the source of charged particles and the distribution function of their velocities, we find that particles injected with very low energies (0.103 eV to 558.35 eV) can be strongly accelerated to reach much higher energies (8.79 eV to 9.189 keV) as a result of 200,000 hitting events. The components of the final velocity in the direction perpendicular to the corotation velocity (z-axis) show a bimodal distribution

Abstract:

Voyager’s plasma probe observations suggest that there are at least three fundamentally different plasma regimes at Saturn: the hot outer magnetosphere, the extended plasma sheet, and the inner plasma torus. At the outer regions of the inner torus, some ions have been accelerated to energies of the order of 43 keV. We develop a model that calculates the acceleration of charged particles in Saturn’s magnetosphere. We propose that the stochastic electric field associated with the observed magnetic field fluctuations is responsible for such acceleration. A random electric field is derived from the fluctuating magnetic field, via a Monte Carlo simulation, and is then applied to the momentum equation of charged particles seeded in the magnetosphere. Taking different initial conditions, such as the source of charged particles and the distribution function of their velocities, we find that particles injected with very low energies ranging from 0.129 eV to 5.659 keV can be strongly accelerated to reach much higher energies ranging from 22.220 eV to 9.711 keV as a result of 125,000 hitting events (the latter are used in the numerical code to produce the particle acceleration over a predetermined distance)

Abstract:

This paper proposes an analytical method to characterize the behavior of multiple critical roots of a retarded system with two delays. Expressing the related characteristic function locally as a Weierstrass polynomial, we derive several results to analyze the stability behavior of such characteristic roots with respect to small variations in the delay parameters. The proposed results are illustrated through several numerical examples
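
For orientation, the local factorization on which such an approach typically rests can be sketched as follows (the notation is assumed here, not taken from the paper): if s_0 is a characteristic root of multiplicity m at the nominal delay pair (\tau_1^*, \tau_2^*), the Weierstrass preparation theorem gives, locally,

\[
  \Delta(s;\tau_1,\tau_2) = W(s;\tau_1,\tau_2)\, g(s;\tau_1,\tau_2), \qquad
  W(s;\tau_1,\tau_2) = (s-s_0)^m + \sum_{k=0}^{m-1} a_k(\tau_1,\tau_2)\,(s-s_0)^k,
\]

where g is analytic and nonvanishing near the critical point and the coefficients a_k are analytic in the delays and vanish at (\tau_1^*, \tau_2^*); the splitting of the m critical roots under small delay variations can then be read off from the polynomial factor W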

Abstract:

Side-channel analysis has become a relevant tool for analyzing cryptographic devices. Here, an adversary looks for information leakage from emanation sources such as power consumption, thus obtaining sensitive information with lower effort than the mathematical approach. In this manuscript, a distinguisher based on the Mahalanobis distance is applied. Instead of inverting the sample covariance matrix directly, which is unreliable given the under-sampling in the tested datasets, a shrinkage estimate is computed, yielding efficient Mahalanobis distance implementations. The approach is evaluated using standardized tests such as stability and success probability, computed on unmasked public traces obtained from typical implementations of the Advanced Encryption Standard with a 128-bit key. We show that, in terms of detection probability, this technique is more efficient than Pearson correlation when few traces are available
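
A minimal sketch of the covariance step described above, assuming shrinkage of the sample covariance toward a scaled identity (one standard choice of target; the authors' exact shrinkage intensity and trace sets are not reproduced here) before forming Mahalanobis scores:

import numpy as np

def shrunk_precision(X, alpha=0.1):
    # Shrink the sample covariance toward a scaled identity, then invert.
    # Keeps the estimate well conditioned when traces are scarce.
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    S_shrunk = (1.0 - alpha) * S + alpha * mu * np.eye(S.shape[0])
    return np.linalg.inv(S_shrunk)

def mahalanobis_score(trace, template_mean, precision):
    # Squared Mahalanobis distance of one trace to a template mean.
    d = trace - template_mean
    return float(d @ precision @ d)

# Illustrative use: 50 traces of 200 time samples each (under-sampled,
# so the raw sample covariance would be singular without shrinkage)
rng = np.random.default_rng(2)
traces = rng.normal(size=(50, 200))
P = shrunk_precision(traces)
print(mahalanobis_score(traces[0], traces.mean(axis=0), P))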

Abstract:

In this paper we present complete characterizations of the expenditure functions for both utility representations and preference structures. Building upon these results, we also establish, under minimal assumptions, duality theorems for expenditure functions and utility representations, and for expenditure functions and preference structures. These results apply equally to finite- and infinite-dimensional spaces; moreover, in the case of preference structures they are valid for non-complete preorders

Abstract:

We study efficiency and fairness properties of the equal cost sharing with maximal participation (ECSMP) mechanism in the provision of a binary and excludable public good. According to the maximal welfare loss criterion, the ECSMP is optimal within the class of strategyproof, individually rational and no-deficit mechanisms only when there are two agents. In general the ECSMP mechanism is not optimal: we provide a class of mechanisms obtained by symmetric perturbations of ECSMP with strictly lower maximal welfare loss. We show that if one of two possible fairness conditions is additionally imposed, the ECSMP mechanism becomes optimal

Abstract:

There is a point in predicting the past (retrodicting) because we lack information about it. To address this issue, we consider a truncated Lévy flight to model data. We build on the finding that there is a power law between truncation length and standard deviation that connects the bounded past and the unbounded future. Even if a truncated Lévy flight cannot predict future extreme events, we argue that it can still be used to model the past. Because we avoid specifying the exact form of the probability density function while allowing its distributional moments, with the exception of the mean, to vary over time, our method is applicable to a wide range of symmetric distributions. We illustrate our point using US dollar prices in 15 different currencies traded on foreign exchange markets
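
One simple way to realize such a process, assuming a symmetric alpha-stable law truncated by rejection at a length ell (the values of alpha, ell and the sample size below are illustrative assumptions):

import numpy as np
from scipy.stats import levy_stable

def truncated_levy_steps(alpha, ell, n, seed=0):
    # Symmetric alpha-stable increments, redrawn until all lie in [-ell, ell].
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    filled = 0
    while filled < n:
        draw = levy_stable.rvs(alpha, 0.0, size=2 * (n - filled), random_state=rng)
        keep = draw[np.abs(draw) <= ell]
        take = min(keep.size, n - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

steps = truncated_levy_steps(alpha=1.5, ell=10.0, n=10_000)
walk = np.cumsum(steps)          # the truncated Levy flight itself
print(steps.std())               # finite, unlike in the untruncated flight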

Abstract:

From a modernization perspective, sub-Saharan Africa and Latin America—two of the poorest regions in the world—conform to one another in that citizens of both regions express very low levels of horizontal, generalized interpersonal trust. Indeed, these two regions are among the least trusting societies in the world. Both are low in terms of “bridging” trust, and both also have high degrees of particularized “bonding” trust. However, these regions differ sharply with respect to vertical, institutional trust. People in sub-Saharan Africa express relatively high levels of trust in national institutions; and Latin Americans offer very low levels of trust, with many expressing sheer cynicism. Finally, the data at hand reveal few important linkages between levels of trust in government or the state and various facets of democratic citizenship

Abstract:

In this article, we consider the problem of discrete-time, diffusion-based distributed parameter estimation with the agents connected via directed graphs with switching topologies and a self-loop at each node. We show that, by incorporating the recently introduced dynamic regressor extension and mixing procedure into a classical gradient-descent algorithm, improved convergence properties can be achieved. In particular, it is shown that with this modification a sufficient condition for global convergence of all the estimators is that one of the sensors receives enough information to generate a consistent estimate and that this sensor is "well-connected." The main feature of this result is that the excitation condition imposed on this distinguished sensor is strictly weaker than the classical persistent excitation requirement. The connectivity assumption is also very mild, requiring only that the union of the edges of all connectivity graphs over any time interval of arbitrary but fixed length contains a spanning tree rooted at the information-rich node. In the case of nonswitching topologies, this assumption is satisfied by strongly connected graphs, but not only by them
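
To make the mechanism concrete, here is a single-estimator toy sketch of the dynamic regressor extension and mixing step, assuming a static linear regression and a one-step delay as the extension; the signals, gain and normalization are illustrative choices, not the article's distributed algorithm:

import numpy as np

rng = np.random.default_rng(3)
theta = np.array([2.0, -1.0])            # unknown parameters
T = 400
phi = rng.normal(size=(T, 2))            # regressor phi_t
y = phi @ theta                          # measurements y_t = phi_t^T theta

theta_hat = np.zeros(2)
gamma = 0.5                              # gradient gain
for t in range(1, T):
    # extension: stack the regression at t with a one-step-delayed copy
    Phi_e = np.vstack([phi[t], phi[t - 1]])
    Y_e = np.array([y[t], y[t - 1]])
    # mixing: multiplying by adj(Phi_e) decouples the parameters,
    # since adj(Phi_e) @ Phi_e = det(Phi_e) * I
    adj = np.array([[Phi_e[1, 1], -Phi_e[0, 1]],
                    [-Phi_e[1, 0], Phi_e[0, 0]]])
    delta = np.linalg.det(Phi_e)         # common scalar regressor
    Y_mix = adj @ Y_e                    # Y_mix[i] = delta * theta[i]
    # one normalized scalar gradient step per parameter
    theta_hat += gamma * delta * (Y_mix - delta * theta_hat) / (1.0 + delta**2)

print(theta_hat)                         # approaches [2, -1] when delta is exciting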

Abstract:

All sovereign governments face a commitment problem: how can they promise to honor their own agreements? The standard solutions involve reputation or political institutions capable of tying the government's hands. Mexico's government in the 1880s used neither solution. It compensated its creditors by enabling them to extract rents from the rest of the economy. These rents came through special privileges over banking services and the right to administer federal taxes. Returns were extremely high: as long as the government refrained from confiscating all their assets (let alone repaying their debts) less than twice a decade, they would break even

Abstract:

Mexico's initial industrialization was based on firms that were "grouped": that is, linked to other firms through close affiliations with a common bank. Most explanations for the prevalence of groups are based on increasing returns or missing formal capital markets. We propose a simpler explanation that better fits the facts of Mexican history. In the absence of secure property rights, tangible collateral could not credibly be offered to creditors; but there remained the possibility of using reputation as a form of intangible collateral. In such circumstances, firms had incentives to group together for purposes of mutual monitoring and insurance

Abstract:

Banks in Porfirian Mexico widely engaged in the practice of making long-term loans to their own directors, a practice known as 'auto-prestamo'. This was neither pernicious nor fraudulent. Rather, Porfirian banks behaved as the financial arms of extended kinship and personal business groups. These groups used banks to raise impersonal capital for their diversified enterprises and give their partnerships a more permanent institutional base. Investors in these banks knew full well that they were investing in the businesses of a particular group and developed sophisticated techniques to monitor bank directors. However, because entry into banking was severely restricted under Porfirian law, the system concentrated economic power in a few hands and contributed to Mexico's oligopolistic industrial structure

Resumen:

Este artículo revisa el desarrollo más reciente en el campo de la historia económica mexicana. Argumenta que en los últimos diez años ha ocurrido un cambio significativo en la metodología: se ha pasado de la teoría de la dependencia y el análisis institucional tradicional hacia el uso extensivo de técnicas cliométricas e hipótesis tomadas de la Nueva Economía Institucional. Este cambio ha aumentado nuestro conocimiento de diversos temas clave de la historia económica de México. Ciertamente el área ha experimentado un progreso importante, pero todavía falta un cierto ordenamiento

Abstract:

This article surveys the most recent developments in the field of Mexican economic history. It argues that the last ten years have seen a significant shift in the methodology and focus of the field, away from dependency theory and traditional institutional analysis, and towards the extensive use of cliometric techniques and hypotheses drawn from the New Institutional Economics. This shift has greatly increased our understanding of several key issues in Mexico's economic history, but much of the work done to date has lacked a coherent focus on a specific set of issues. In other words, the field has recently made astounding progress, but still lacks sufficient order

Abstract:

In a panel setting, we analyse the speed of β-convergence of (cause-specific) mortality and life expectancy at birth in EU countries between 1995 and 2009. Our contribution is threefold. First, in contrast to earlier literature, we allow the convergence rate to vary, and thereby uncover significant differences in the speed of convergence across time and regions. Second, we control for spatial correlations across regions. Third, we estimate convergence among regions, rather than countries, and thereby highlight noteworthy variations within a country. Although we find β-convergence on average, we also identify significant differences in the catching-up process across both time and regions. Moreover, we use the coefficient of variation to measure the dynamics of dispersion levels of mortality and life expectancy (σ-convergence) and, surprisingly, find no reduction, on average, in dispersion levels. Consequently, if the reduction of dispersion is the ultimate measure of convergence, then, to the best of our knowledge, our study is the first that shows a lack of convergence in health across EU regions
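
As a bare-bones illustration of β-convergence (deliberately ignoring the panel structure, the time-varying rates and the spatial controls used in the paper), one can regress growth on the initial log level and check the sign of the slope; the synthetic regional data below are an assumption:

import numpy as np

rng = np.random.default_rng(4)
n = 200                                        # number of regions
le_1995 = rng.uniform(68.0, 78.0, size=n)      # initial life expectancy (synthetic)
# build in catching-up: regions with lower initial levels grow faster
growth = 0.30 - 0.28 * np.log(le_1995 / 68.0) + 0.01 * rng.normal(size=n)

X = np.column_stack([np.ones(n), np.log(le_1995)])
alpha_hat, beta_hat = np.linalg.lstsq(X, growth, rcond=None)[0]
print(beta_hat)                                # negative slope: beta convergence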

Resumen:

En un contexto de incertidumbre sobre el uso de ChatGPT en la universidad, este trabajo analiza discursivamente un corpus de argumentaciones de estudiantes de primer semestre de una universidad mexicana. El objetivo es dilucidar qué representaciones sociales comparten, para explicar cómo conciben la inteligencia artificial, qué papel le atribuyen en el marco de la escritura académica y cómo se representan a sí mismos en relación con ella. Con ese fin, se rastrean índices de subjetividad/valoración, voces o ethos implicados y metáforas conceptuales, entre otros aspectos. Los resultados alertan sobre una visión problemática de la escritura, de su rol como estudiantes y de los vínculos académicos en general

Abstract:

In a context of uncertainty regarding the use of ChatGPT at the university, this work discursively analyzes a corpus of argumentative texts from first-semester students at a Mexican university. The objective is to elucidate the social representations they share, in order to explain how they conceive artificial intelligence, what role they attribute to it in the framework of academic writing, and how they represent themselves in relation to it. To achieve this, indices of subjectivity/appraisal, voices or ethos involved, and conceptual metaphors, among other aspects, are tracked. The results raise concerns about a problematic vision of writing, their role as students, and academic relations in general

Resumen:

Este trabajo exploratorio, enmarcado en una investigación más amplia sobre las identidades, la violencia y el discurso de odio (DO) en las redes sociales, indaga en la construcción del odio en un discurso transexcluyente desplegado en Facebook por un grupo de feministas radicales, a propósito de una deportista trans. Contra los trabajos previos que limitan el DO a la presencia explícita de ciertas palabras o al llamado a matar al vulnerable, el análisis ofrece evidencias lingüísticas y semióticas que permiten afirmar que el odio, en los discursos en línea, no siempre está dicho o es explícito, ni se limita a la palabra. En el marco de la aprobación de la "Ley trans" española, los comentarios analizados crean un frente común de pensamiento homogéneo, basado en una concepción de "ser mujer" conservadora, esencialista, biologicista y fija, que denigra (incluso con recursos verbales típicos del discurso misógino) a las mujeres trans, a quienes describe como amenaza o enemigo: son no-mujeres, animales, insectos y aun monstruos

Abstract:

This exploratory work, framed within a broader investigation of identities, violence and hate speech (HS) in social networks, examines the construction of hate in a trans-exclusionary discourse displayed on Facebook by a group of radical feminists, specifically targeting a trans athlete. Contrary to previous works that limit HS to the explicit presence of certain words or to calls to kill the vulnerable, the analysis provides linguistic and semiotic evidence revealing that hate permeates online discourses to varying degrees. It highlights that hate is not merely an individual passion but a social construction echoing shared ideas and emotions. Moreover, it warns that hate is not always overtly stated but is suggested by various linguistic and semiotic means. In the context of the approval of the Spanish "Trans Law", the examined comments create a common front of homogeneous thought, rooted in a conservative, essentialist, biologicist, and fixed conception of "being a woman", which denigrates (even with verbal resources typical of misogynistic discourse) trans women, whom they describe as a threat or an enemy: they are non-women, animals, insects and even monsters

Resumen:

Este artículo analiza discursivamente, desde un enfoque multidisciplinario y una visión amplia del hecho argumentativo, la dimensión polémica de un corpus extenso de tuits contrarios al expresidente mexicano Peña Nieto y al nuevo presidente, López Obrador, a raíz de la toma de protesta del último. Tomando distancia de los trabajos que asumen que en Twitter hay escasa o nula argumentación, y que lo que prevalece es la violencia, el análisis pone de manifiesto una cadena argumentativa reactiva, estructurada en argumentos no siempre entimemáticos, así como en otros recursos (muchos de ellos, visuales) que entroncan con la tradición polémica y panfletaria del siglo XIX mexicano, lo que nos lleva a describir al tuit como el sucedáneo actual del panfleto tradicional

Abstract:

This article analyzes, from a multidisciplinary approach and a broad vision of argumentation, the polemical dimension of a large corpus of tweets opposing the former Mexican president Peña Nieto and the new president, López Obrador, in the context of the latter's swearing-in ceremony. Taking distance from studies that assume there is little or no argumentation on Twitter and that violence is what prevails, our analysis reveals a reactive argumentative chain, structured in varied arguments, not only enthymematic ones, as well as in other resources (many of them visual) that connect with the polemical, pamphleteering tradition of the Mexican nineteenth century, which leads us to describe the tweet as the current substitute for the traditional pamphlet

Resumen:

Frente a los estudios que caracterizan a las redes sociales como medios despolitizados, subjetivos y alejados del debate político serio, este trabajo propone, por el contrario, que en ellas hay discurso político y argumentación, pero de carácter cotidiano y polémico. Se analiza discursivamente un corpus formado por la cadena de discusión iniciada por una publicación en Facebook del entonces presidente mexicano, los comentarios de los usuarios y las reacciones del político a algunos de ellos. La aparente frivolidad de la entrada presidencial da paso a un intercambio signado por la polifonía enunciativa y el humor, no en su variante agresiva, sino como instrumento de la polémica, pues los mensajes viran al terreno político y crean un contradiscurso sustentado en la sátira, la parodia y la ironía. Lo significativo es que el presidente responde en un tono similar, con lo que las tensiones propias de la discrepancia no siempre son explícitas

Abstract:

In contrast to studies that characterize social networks as depoliticized, subjective media far removed from serious political debate, this work proposes that there is in fact political discourse and argumentation in them, but of an everyday and polemical nature. We discursively analyze a corpus formed by the discussion thread generated by a Facebook post of the then Mexican president, the comments of the users and the reactions of the politician to some of them. The apparent frivolity of the presidential post gives way to an exchange marked by enunciative polyphony and humor, not in its aggressive variant but as an instrument of controversy, since the messages turn to political terrain and create a counter-discourse based on satire, parody and irony. What is significant is that the president answers in a similar tone, so that the tensions inherent in disagreement are not always explicit

Resumen:

Este trabajo exploratorio y cualitativo, centrado en la relación entre el humor, la ironía, la parodia y la sátira en el medio digital, analiza discursivamente alrededor de doscientos comentarios derivados de una publicación en el Facebook del presidente mexicano Enrique Peña Nieto con motivo de san Valentín (14 de febrero de 2018). Como es propio del Análisis del Discurso (AD), el marco teórico-metodológico central, el estudio del AD político se aborda desde una perspectiva interdisciplinar: los datos recabados son interpretados a la luz de conceptos procedentes de disciplinas como la teoría de la argumentación cotidiana, los estudios de comunicación y semiótica de los medios y el estudio de la ironía y la polifonía desde el AD francés. Partiendo de la parodia y de la caracterización de las «voces» enunciadoras, el trabajo discute la definición de ironía-eco como «insinceridad» y pone de manifiesto la complejidad discursiva de los comentarios de los internautas, políticos por cuanto son profundamente polémicos (según la definición de polémica de Amossy): mediante la sátira, crean un contradiscurso opuesto al discurso oficial del amor y la amistad, y emplean diversas formas de polarización y de agresión lingüística para ridiculizar al mandatario. El humor, así, adquiere un valor reivindicativo, ligado al deseo de un orden diferente, manifiesto en la irrupción de modos de decir y de formas humorísticas populares, como el albur

Abstract:

This exploratory and qualitative work, focused on the relationship between humour, irony, parody, and satire in the digital medium, discursively analyzes around two hundred comments on a Facebook post by the Mexican president Enrique Peña Nieto on the occasion of Valentine's Day (February 14, 2018). As is usual in Discourse Analysis (DA), the central theoretical-methodological framework here, political DA is approached from an interdisciplinary perspective: the data collected are interpreted through concepts from disciplines such as the theory of everyday argumentation, communication studies and media semiotics, and the French DA treatment of irony and polyphony. Starting from parody and the characterization of the enunciating "voices", the work discusses the definition of echo-irony as "insincerity" and highlights the discursive complexity of the users' comments, which are political insofar as they are deeply polemical (according to Amossy's definition of polemics): through satire, they create a counter-discourse opposed to the official discourse of love and friendship, and employ various forms of polarization and linguistic aggression to ridicule the president. Humour thus acquires a protest value, linked to the desire for a different order, manifested in the irruption of popular ways of speaking and humorous forms, such as the Mexican albur

Resumen:

El diccionario de la Real Academia Española (o Diccionario de la Lengua Española-DLE), como producto cultural, de carácter general e históricamente situado, no es inocuo, sino que constituye un medio de control de los usuarios de la lengua, puesto que fija los límites de aceptabilidad y de corrección de los términos y organiza la comprensión social (Lara, 1997: 187). En este sentido, su valor fundante y consagrado fija la "producción simbólica" de la sociedad y contribuye a la perpetuación de una "memoria" social, a partir de la noción de "discursos constituyentes" (Maingueneau y Cossutta, 1995). En esa línea, las definiciones del DLE funcionan como instrumentos al servicio de una ideología, reproduciendo y propiciando creencias compartidas y organizando prácticas sociales (van Dijk, 2001), que marcan límites de aceptabilidad, de corrección y de transmisión de representaciones. En esta comunicación realizaremos un abordaje cualitativo desde el Análisis del Discurso francés (Arnoux 1986, 2005; Charaudeau y Maingueneau, 2005; Maingueneau, 1989) y con algunos aportes del Análisis Crítico del Discurso (Fairclough y Wodak, 1997) de diez definiciones del DLE relacionadas con la salud y la enfermedad, tal como aparecen en Internet (www.rae.es), con el propósito de analizar la orientación argumentativa que en los términos observamos para reflexionar sobre las representaciones subyacentes.

Abstract:

The Royal Spanish Academy's dictionary (Diccionario de la Lengua Española, DLE), as a general, historically situated cultural product, is not innocuous. On the contrary, it constitutes a means of controlling language users, since it fixes the limits of acceptability and correctness of terms and organizes social understanding (Lara, 1997: 187). In this sense, its founding and consecrated value fixes the 'symbolic production' of society and contributes to the perpetuation of a social 'memory', following the notion of 'constituent discourses' (Maingueneau & Cossutta, 1995). Along these lines, the definitions of the DLE function as instruments in the service of an ideology, reproducing and fostering shared beliefs and organizing social practices (van Dijk, 2001) that set the limits of acceptability, correctness and the transmission of representations. In this paper we carry out a qualitative analysis, from French Discourse Analysis (Arnoux, 1986, 2005; Charaudeau & Maingueneau, 2005; Maingueneau, 1989) and with some contributions from Critical Discourse Analysis (Fairclough & Wodak, 1997), of ten DLE definitions related to health and illness, as they appear online (www.rae.es), with the purpose of analyzing the argumentative orientation we observe in the terms and reflecting on the underlying representations. To that end, we describe the definitions published in the two latest editions, the Diccionario de la Real Academia Española (DRAE) and the DLE, grammatically and discursively, and contrast them in search of similarities and differences, as well as additions and omissions

Resumen:

El discurso político actual se vale de nuevos canales para aplicar estrategias no tan nuevas de persuasión. En este trabajo se analizan, desde un abordaje cualitativo y con herramientas del Análisis del Discurso francés (Kerbrat-Orecchioni, 1986; Amossy, 2000; Adam, 2002; Arnoux, 2006; Charaudeau, 2009a y b), los recursos discursivos y retóricos que contribuyen a configurar la "imagen de sí" presidencial, tal como se advierte en un corpus de 80 tuits de dos mandatarios latinoamericanos: Cristina Fernández de Kirchner (Argentina) y Enrique Peña Nieto (México). Nuestro análisis se detiene, más concretamente, en las peculiaridades enunciativas que contribuyen a definir un ethos y un pathos particular, así como a dotar a cada "voz" de una dimensión política orientada a la persuasión del destinatario (el lector-"seguidor" de esos presidentes en la red social Twitter) y a la exclusión del adversario (Verón, 1984). Según una hipótesis de trabajo, el contraste discursivo entre los tuits analizados no puede entenderse como una mera decisión estilística; antes bien, obedece a una especial construcción simbólica de los sujetos de la enunciación y a una diferente concepción del quehacer político, que entronca con dos tendencias ideológicas diferentes: el llamado "populismo", por un lado, y la "tecnocracia", por otro, aspecto en el que deberemos profundizar en el futuro

Abstract:

Current political discourse takes advantage of new channels to apply not-so-new persuasion strategies. From a qualitative approach and with the tools of French Discourse Analysis (Kerbrat-Orecchioni, 1986; Amossy, 2000; Adam, 2002; Arnoux, 2006; Charaudeau, 2009a and b), this work studies the rhetorical and discursive resources that help configure the presidential "self-image", as shown in a corpus of 80 tweets by two Latin American presidents: Cristina Fernández de Kirchner (Argentina) and Enrique Peña Nieto (Mexico). Our analysis focuses specifically on the enunciative peculiarities that contribute to defining a particular ethos and pathos, as well as on giving each "voice" a political dimension aimed at persuading the addressee (the reader-follower of each president on Twitter) and excluding the adversary (Verón, 1984). According to our working hypothesis, the discursive contrast between the analyzed tweets cannot be understood as a mere stylistic decision; rather, it stems from a particular symbolic construction of the subjects of enunciation and from a different conception of political work, which connects with two different ideological tendencies: so-called "populism", on the one hand, and "technocracy", on the other, an aspect to be explored further in the future

Resumen:

Desde una concepción de lo político como práctica argumentativa orientada a la acción y asociada al poder (Fairclough y Fairclough 2012), este trabajo exploratorio y cualitativo analiza los recursos discursivos que configuran la "imagen de sí" presidencial en un corpus de 60 tuits de Mauricio Macri. Para ello, identifica y explica, desde la teoría de la enunciación, cómo las opciones gramaticales contribuyen a construir simbólicamente al enunciador y de qué forma permiten rastrear su concepción del quehacer político. Se plantean al menos dos problemas: la adaptación de las categorías tradicionales del discurso político a las nuevas tecnologías o a ámbitos no institucionales como Twitter, y la aparente ausencia de confrontación, rasgo consustancial a la praxis política. El propósito final es reflexionar sobre los modos de desarrollar la labor política en Internet y ahondar en la supuesta degradación actual de lo político, a la que Macri parece contribuir también desde las redes sociales, con un mensaje sobre el cambio y la vuelta a la normalidad

Abstract:

From a conception of the political as an argumentative practice oriented to action and associated with power (Fairclough & Fairclough 2012), this exploratory and qualitative work analyzes the discursive resources that shape the presidential "self-image" in a corpus of 60 tweets by Mauricio Macri. To do so, it identifies and explains, from the theory of enunciation, how grammatical choices contribute to the symbolic construction of the enunciator and allow us to trace his conception of political work. At least two problems arise: the adaptation of the traditional categories of political discourse to new technologies and non-institutional spheres such as Twitter, and the apparent absence of confrontation, a feature consubstantial with political praxis. The final aim is to reflect on the ways political work is carried out on the Internet and on the supposed current degradation of politics, to which Macri also seems to contribute from social networks, with a message about change and the return to normality

Resumen:

La bibliografía restringe el doblado de acusativo a los objetos definidos, animados y marcados prepositivamente, pero los estudios de corpus y de corte discursivo-pragmático han demostrado que en el español de Argentina el fenómeno es más flexible. Partiendo de datos reales y actuales, este trabajo focaliza en los condicionantes semánticos a los que habitualmente se atribuye el fenómeno, reflexionando también sobre la (supuesta) obligatoriedad de la marcación excepcional. El doblado de objetos 'atípicos' (e.g., con indefinidos y cuantificadores) sugiere la incidencia de otros rasgos, entre los cuales parecen relevantes la presuposicionalidad y ciertos condicionantes pragmáticos. Contra lo que suele afirmarse, el doblado puede presentarse con referentes no estrictamente accesibles, en contextos variados que estimulan procesos inferenciales más complejos que la mera identificación, la asignación de referencia o la resolución de ambigüedades

Abstract:

Previous studies limit accusative clitic doubling to definite, animate, and prepositionally marked objects, but corpus-based and discourse-pragmatic studies have shown that in Argentinian Spanish the phenomenon is more flexible. Based on real, present-day data, this study focuses on the semantic conditioning factors to which the phenomenon is usually attributed, and also reflects on the (supposed) obligatory nature of exceptional marking. The doubling of 'atypical' objects (e.g., with indefinites and quantifiers) suggests the influence of other features, among which presuppositionality and certain pragmatic factors seem particularly relevant. Contrary to what is commonly claimed, doubling may occur with referents that are not strictly accessible, in varied contexts that trigger inferential processes more complex than mere identification, reference assignment, or ambiguity resolution

Resumen:

La bibliografía sobre el doblado de acusativo se ha centrado en sus múltiples dimensiones: fonológica, morfológica, sintáctica y semántica. Menos se han estudiado, en cambio, los factores discursivos y pragmáticos involucrados. Este trabajo intenta precisar algunos de esos condicionamientos partiendo de la definición de especificidad, noción clave pero cuestionada. Mediante el análisis de enunciados de diversa índole y procedencia se explicitan distintos usos de las construcciones dobladas, atendiendo a su probable finalidad comunicativa, y se discuten algunas afirmaciones frecuentes sobre el estatus cognitivo de los referentes doblados y el rol de la inferencia en su interpretación

Abstract:

The literature on accusative clitic doubling has centered on its phonological, morphological, syntactic and semantic dimensions. The discursive and pragmatic factors involved, by contrast, have been less studied. This paper attempts to pin down some of those conditioning factors, starting from the definition of specificity, a key but contested notion. Through the analysis of utterances of diverse kinds and origins, several uses of the doubled construction are laid out, with attention to their probable communicative purpose, and some frequent claims about the cognitive status of doubled referents and the role of inference in their interpretation are discussed

Abstract:

We use a nationally representative field experiment in Tanzania to compare two teacher performance pay systems in public primary schools: a 'pay-for-percentile' system (a rank-order tournament) and a 'levels' system that features multiple proficiency thresholds. Pay for percentile can potentially induce socially optimal effort among teachers, while levels systems can encourage teachers to focus on students near passing thresholds. Despite the theoretical advantage of the tournament system, we find that both systems improved student test scores across the distribution of initial learning levels after two years. However, the levels system is easier to implement and is more cost effective

Abstract:

We present results from a large-scale randomized experiment across 350 schools in Tanzania that studied the impact of providing schools with (i) unconditional grants, (ii) teacher incentives based on student performance, and (iii) both of the above. After two years, we find (i) no impact on student test scores from providing school grants, (ii) some evidence of positive effects from teacher incentives, and (iii) significant positive effects from providing both programs. Most important, we find strong evidence of complementarities between the programs, with the effect of joint provision being significantly greater than the sum of the individual effects. Our results suggest that combining spending on school inputs (the default policy) with improved teacher incentives could substantially increase the cost-effectiveness of public spending on education

Abstract:

Since Hurricane Katrina wrought devastation on the city of New Orleans in 2005, the city has seen a renaissance with the return of many of those who fled the storm, as well as an inward migration of young professionals and entrepreneurs who believe there is opportunity for an attractive future in the region in which they can participate. The city exercises a fascination upon many, in large part due to a Latin-French esthetic accompanied by an artistically creative impulse of African influence. In the following pages, we would like to explore in more detail what has happened in the city since the destruction of 2005, and some of the challenges New Orleans faces, while recognizing its resurrection is in large part due to its beauty as a subtropical, almost Caribbean, port city which is a crucible of distinct cultures, yielding a very appealing mix as expressed through music, literature, architecture and food. And this blend is attracting entrepreneurs, artists and artisans of all types. Without these cultural attributes, it is unlikely that New Orleans would have come back in its present form. On the contrary, it would likely have been reduced to a mere touristic museum of a once existent Creole Culture, a sort of “touristic boutique”

Resumen:

Demócrito ha sido etiquetado como "el filósofo que ríe". El propósito de este artículo es reflexionar acerca de los motivos que llevaron a este genio a reír. El encuentro con Hipócrates, el famoso médico, lleva a cuestionar conceptos como locura, cordura y verdadero conocimiento

Abstract:

Democritus has been labeled "the laughing philosopher". The purpose of this article is to reflect on the reasons that led this genius to laugh. The encounter with Hippocrates, the famous physician, leads to questioning concepts such as madness, sanity and true knowledge

Resumen:

Junto con el comentario del libro de Vargas Llosa, se analiza la importancia de la lectura para la formación del pensamiento crítico, el cual nos hace hombres verdaderamente libres, y la actividad de escribir como vocación y decisión de transformar la realidad de forma bella y desafiante

Abstract:

Alongside a commentary on Vargas Llosa's book, this piece examines the importance of reading for the formation of critical thinking, which makes us truly free human beings, and the activity of writing as a vocation and a decision to transform reality in a beautiful and challenging way

Abstract:

Is there a relation between wealth and human nature? Can the Delphic maxim "know thyself" help us decide whether or not to be affluent and wealthy? And if so, how rich? Human beings, says Aristotle, can only use and benefit from a limited amount of goods and services. The very rich have more than they need; the poor are in need because they have the minimum required to live, or even less. Only in the 'middle' do we find those who enjoy 'true wealth'. Any society should seek to increase the number of persons who possess enough and are therefore 'truly wealthy'. Every human being should have what he or she needs. To achieve a 'truly rich' 'middle class' (Aristotle), rather than aspiring to increase its Gross Domestic Product (GDP), per capita GDP or income equality in terms of a normal distribution, a country must solve the welfare problem of its population. That means it must close the food, health, education, employment, and other such gaps. This paper argues that the hierarchical stratification of contemporary Mexican society, which favors an outrageously rich minority, could renew its social order through the understanding of two Aristotelian categories: the attainment of 'real wealth' for a large 'middle class'. This would allow Mexico to become a member of the developed world by turning into a mostly middle-class country

Resumen:

A partir de la Teoría de los sentimientos morales de Adam Smith y de la sociología de los grupos de referencia, se pretende mostrar la relación entre la pobreza, la riqueza y la educación universitaria de los Estudios Generales. Estos deben brindar una adecuada ponderación de la verdadera riqueza que lleve a la formación de una clase media virtuosa, cosa que se echa de menos en México y Latinoamérica

Abstract:

Starting from Adam Smith's The Theory of Moral Sentiments and the sociology of reference groups, the aim is to show the relationship between poverty, wealth and university education in General Studies. The latter should provide an adequate appraisal of true wealth, one that leads to the formation of a virtuous middle class, something that is missing in Mexico and Latin America

Resumen:

Este artículo revisará las condiciones sociales en que se dan los Estudios Generales dentro de una educación universitaria para el caso de México, pero que puede ser generalizable a los países de América Latina en donde rige la desigualdad económica. El propósito de este escrito es reflexionar sobre el ideal equivocado de las riquezas desmedidas que surge de manera artificial, e incluso antinatural, en el ser humano y se incentiva en los grupos de referencias con los que los individuos se vinculan y que la sociedad promueve. Los estudiantes admiran a los ricos y quisieran alcanzar sus niveles de riqueza. Los valores de justicia y equidad empiezan a conocerse y a ejercitarse en el seno de la familia, pero su estudio debe hacerse en el aula, particularmente en la universidad. Los Estudios Generales pueden contribuir a la formación integral de las personas promoviendo el desarrollo de una sociedad más libre, más justa y más crítica. Para lograr este propósito analizaremos la propuesta de Aristóteles de una clase media verdaderamente rica. Este estudio concluye que, aunque vivimos en una sociedad desigual, es posible contribuir con la formación de seres humanos dignos que mantengan una recta ambición, no contentándose con la mediocridad, sino anhelando lo mejor y humanizándose a través de sus propias fuerzas

Abstract:

This article reviews the social conditions in which General Studies are conducted within university education in the case of Mexico, which can be generalized to other Latin American countries where economic inequality prevails. The purpose of this paper is to reflect on the mistaken ideal of excessive wealth, which arises artificially, even unnaturally, in human beings and is encouraged by the reference groups to which individuals attach themselves and which society promotes. Students admire the rich and want to reach their levels of wealth. Although the values of justice and equity first become known and exercised within the family, their study must be completed in the classroom, particularly at university. General Studies can contribute to the integral formation of persons by promoting the development of a freer, more just and more critical society. To achieve this purpose we analyze Aristotle's proposal of a truly rich middle class. This study concludes that, although we live in an unequal society, it is possible to contribute to the formation of worthy human beings who maintain an upright ambition, not content with mediocrity but longing for the best and humanizing themselves through their own efforts

Resumen:

Los estados displacenteros, como la melancolía y la depresión, se relacionan directamente con la experiencia de introspección en la que se alcanza o no la genialidad. Aristóteles se cuestionó hace dos mil cuatrocientos años esta relación entre la melancolía y la genialidad. Esta pregunta custodia la profunda distinción entre el Homo depressus y el Homo magnus; el primero no acepta el reto de no ser ordinario y tiene de sí una falsa conciencia. El Homo depressus busca el dominio de su condición humana, pero se estanca en lo ordinario y no puede superar esa consigna vulgar de ser un animal triste sin alcanzar el estatus de hombre melancólico. La medicina contemporánea parece no advertir la importancia y la vigencia que admiraron tanto a Hipócrates como a Aristóteles, pilares del pensamiento occidental. El objetivo de este trabajo es favorecer un diálogo entre las aproximaciones médica (τέχνη, tecné) y la filosófica (ἐπιστήμη, episteme) sobre la melancolía y la genialidad como misterios de la condición humana

Abstract:

Unpleasant states of mind, such as melancholy and depression, are directly related to the experience of introspection in which genius may or may not be attained. Aristotle pondered this relationship between melancholy and genius 2,400 years ago. This question guards the deep distinction between Homo depressus and Homo magnus: the former does not accept the challenge of not being ordinary and has a false consciousness of himself. Homo depressus seeks mastery of his human condition, but he gets stuck in the ordinary and cannot overcome the vulgar motto of being a sad animal, never attaining the status of the melancholic man. Contemporary medicine seems not to notice the importance and currency of these questions, which so impressed both Hippocrates and Aristotle, pillars of Western thought. The aim of this work is to foster a dialogue between the medical (τέχνη, techne) and philosophical (ἐπιστήμη, episteme) approaches to melancholy and genius as mysteries of the human condition

Resumen:

La realidad y el concepto de tradición, que surgió en el campo jurídico y que fue ampliando paulatinamente su campo de significación, ha llegado a convertirse en un fenómeno humano primordial que consiste en “la transmisión de bienes culturales abstractos a través de los tiempos”, cuya finalidad es “unir la serie de generaciones y mantener la continuidad entre pasado y presente”. Dado que la tradición no solamente se ha conservado, sino también cuestionado y criticado al paso del tiempo, ha llegado a ser un problema hermenéutico, filosófico y científico. De ahí que Josef Pieper la defina como la transmisión y asimilación de un vasto patrimonio de cosas ya pensadas. Por ello, en las siguientes contribuciones, cinco por parte de profesores de la Universidad de Dallas y cinco por parte de profesores del Departamento Académico de Estudios Generales (ITAM), se profundiza en la temática de la tradición desde la historia, el arte y la cultura (Wilhelmsen y Díez), desde la literatura (Roper, Espericueta y Orozco), desde la filosofía, el derecho y la cultura (Cowan y Zepeda), desde la filosofía y la teología (Parens, Zocco y Gutiérrez), subrayando siempre y de muy diversas maneras que la tradición es un concepto dinámico, abierto, creativo, que al tiempo que nos asienta sobre el terreno firme de las generaciones anteriores, nos abre siempre nuevas posibilidades de realización

Abstract:

The reality and the idea of tradition, which emerged from the legal field and gradually broadened its field of meaning, has come to constitute a fundamental human phenomenon consisting of “the transmission of abstract cultural treasures throughout time” in order to fulfill its goal of “uniting all generations and maintaining the continuity between past and present.” Given that tradition has not only been preserved, but also questioned and criticized over the years, it has become a hermeneutic, philosophical, and scientific problem. Hence, Josef Pieper defines it as the transmission and assimilation of a vast heritage of past thoughts. Therefore, we analyze tradition in depth through the following contributions, five from professors of the University of Dallas and five from professors of the Academic Department of General Studies (ITAM): from the point of view of History, Art, and Culture (Wilhelmsen and Díez), from Literature (Roper, Espericueta, and Orozco), from Philosophy, Law, and Culture (Cowan and Zepeda), and finally, from Philosophy and Theology (Parens, Zocco, and Gutiérrez). In this analysis, we show in many different ways that tradition is a dynamic, open, and creative concept which, while grounding us on the firm terrain of previous generations, always opens new possibilities of fulfillment

Resumen:

Parece ser que es universalmente aceptado el principio ontológico “es imposible ser y no ser” y que Hegel es el primero que lo supera. No obstante, lo que Aristóteles llama “el principio más firme”, enuncia “que todo tiene que ser afirmado o negado”. La palabra ‘es’ tiene la fuerza de una afirmación, pues si no hay verbo no hay afirmación. El ‘no ser’ es una negación. Cuando la inteligencia compara la afirmación con su negación propia ve y puede distinguir cuál corresponde a la realidad. La oposición contradictoria existe entre la afirmación (‘A es’) y su negación propia (‘A no es’), por lo que la contradicción sólo existe en la “dicción” y nunca en la realidad. La filosofía moderna cambia la formulación. Leibniz parte de la identidad: “cada cosa es lo que es” (‘A es A’); dice que el principio de contradicción se enuncia: “una proposición o es verdadera o es falsa”. La contradicción es conceptual (‘A no-A’) y se da cuando se predica ‘verdad (V)’ y ‘falsedad (F)’ de una proposición. La diferencia radica en que Aristóteles opone ‘no es’ a ‘es’, mientras Leibniz opone ‘es V’ a ‘es no-V (F)’. Hegel, siguiendo a Leibniz, piensa que la oposición ‘S’ y ‘P’ en “‘S’ es ‘P’” es la contradicción. Ésta nace de que la identidad contiene la negación: “’S’ es ‘S’, porque “’S’ es lo que no es ‘P’”. Para Hegel, la contradicción se encuentra a nivel conceptual, sin ver que ‘non est’ non est ‘est non’. Su teoría de la negación desconoce la negación ontológica

Abstract:

It seems that the ontological principle “it is impossible to be and not to be” is universally accepted, yet Hegel is the first philosopher to overcome it. However, this principle, which Aristotle calls “the most certain of all principles,” requires that “everything must be either affirmed or denied.” The word “is” has all the strength of an affirmation, because without a verb there can be no affirmation. “Is not” is a negation. When our intellect compares the affirmation with its proper negation, it sees and can distinguish which one corresponds to reality. The contradictory opposition occurs between an affirmation (“A is”) and its proper negation (“A is not”), so a contradiction, strictly speaking, exists only in diction and never in reality. Modern philosophy changes this formulation. Leibniz starts from the identity “everything is what it is” (“A is A”); he holds that the principle of contradiction is articulated as follows: “a proposition is either true or false.” His contradiction is conceptual (“A not-A”) and appears when ‘truth (T)’ and ‘falsity (F)’ are predicated of a proposition. The difference lies in that Aristotle opposes “is not” to “is,” whereas Leibniz opposes “is T” to “is not-T (F).” Hegel, following Leibniz, thinks that the opposition between “S” and “P” in “S is P” is the contradiction. The latter arises because identity contains negation: “S is S” because “S is what is not P.” For Hegel, the contradiction is found at the conceptual level, without noticing that ‘non est’ non est ‘est non’. His theory of negation ignores ontological negation

Abstract:

This study used World Values Survey data to learn the attitude toward tax evasion of sample populations in the four BRIC countries - Brazil, Russia, India and China. The study found that more than 75 percent of the Chinese and Indian samples believed that tax evasion was never justifiable, compared to only 34 percent of the Russian sample. Overall, the Chinese were most opposed to tax evasion, followed by the Indians, the Brazilians and the Russians. Gender was not a significant demographic variable. Older groups tended to be more averse to tax evasion than younger groups. Education was not a significant demographic variable in 75 percent of the cases, and when it was, no clear trend could be identified. Religion was not a significant variable for the Brazilian and Chinese samples. In the Indian sample, Muslims were sometimes significantly less opposed to tax evasion than members of other religions. In Russia, those with no religion were significantly less opposed to tax evasion than the Orthodox Christians. A trend analysis found that Brazilians, Russians and Indians had become significantly less opposed to tax evasion over time. Although the Chinese views in 1991 and 2018 were about the same, in the interim period they had become less averse to tax evasion

Abstract:

This paper reports on the results of a survey of more than 500 young and middle-aged college-educated adults regarding their views on the seriousness of 75 crimes. If the goal is to apply the legal principle that the punishment should fit the crime, one must first know how serious the crime is. This study ranks 75 crimes in terms of seriousness, using a Likert Scale where 1 is not at all serious and 100 is extremely serious. Some comparisons of mean scores were made, and p-values were computed, to determine whether certain crimes are significantly more serious than other crimes. Is the life of a prostitute more or less valuable than the life of a drug dealer, politician or lawyer? Are some kinds of discrimination more serious than others? In the case of statutory rape, should the criminal be punished more severely if it is a man rather than a woman, or should their punishments be equal? These and other questions are answered in this study. The authors grant permission to replicate this study using their survey instrument

Abstract:

This study reports the results of a survey of students in a Mexican university who were asked to rate the seriousness of 75 crimes on a scale of 1 to 100. Results are reported overall as well as by gender. In 46 of 75 cases, the male and female mean scores were not significantly different. In the other 29 cases, women usually rated particular crimes as more serious than the men in the sample. Buying a pirated CD/DVD was considered to be the least serious crime, overall, while various kinds of murder and rape ranked as most serious. Some kinds of murder were found to be more serious than others. Women thought that murdering a prostitute would result in a greater loss to society than would murdering a local politician, whereas the male sample valued the losses equally

Resumen:

Las guerras de agua en Bolivia en el año 2000 por los contratos de Bechtel para privatizar su suministro en la ciudad de Cochabamba impulsaron a la Conferencia Episcopal Boliviana a responder con dos importantes aplicaciones teológicas del pensamiento social católico a las cuestiones del agua como un derecho humano y a la administración ambiental. El carácter sacramental del agua, así como su importancia para la vida misma, se aplican de manera única en estas declaraciones que sostienen con argumentos teológicos que nunca se puede permitir que el agua sea una mercancía. La nueva constitución de Bolivia y las leyes de la madre tierra buscan aplicar aún más el principio del agua como un derecho humano a los derechos colectivos de los pueblos indígenas del país a poseer sus tierras ancestrales e incluso a otorgar personalidad jurídica a la naturaleza misma. La conexión de sus pueblos indígenas con la madre tierra, Pachamama, en armonía con sus ecosistemas, tiene una dimensión sagrada que está detrás de la nueva constitución de Bolivia, donde estos ideales han sido consagrados y, en última instancia, una visión social en desacuerdo con el neoliberalismo que tiene beneficios y peligros para la Bolivia moderna

Abstract:

The Bolivian water wars of 2000, sparked by Bechtel contracts to privatize the water supply of the city of Cochabamba, spurred the Episcopal Conference of Bolivia to respond with two important theological applications of Catholic Social Thought to the questions of water as a human right and environmental stewardship. The sacramental character of water, as well as its importance for life itself, is applied in a unique way in these statements, which argue on theological grounds that water can never be allowed to become a commodity. Bolivia's new constitution and mother earth laws seek to further apply the principle of water as a human right to the collective rights of the country's indigenous peoples to own their ancestral land, and even grant juridical personality to nature itself. The connection of Bolivia's indigenous peoples to mother earth, Pachamama, in harmony with its ecosystems, has a sacred dimension that lies behind the new constitution, where these ideals have been enshrined, and ultimately a vision for society at odds with neoliberalism that carries both benefits and dangers for modern Bolivia

Abstract:

Electricity tariff reform is an essential part of the clean energy transition. Existing tariffs encourage the over-adoption of residential solar systems and the under-adoption of electric alternatives to fossil fuels. However, an efficient tariff based on fixed charges and marginal cost pricing may harm low-income households. We propose an alternative methodology for setting fixed charges based on each household's willingness to pay to consume electricity at marginal cost. Using household-level data from Colombia, we demonstrate the short-run and long-run distortions from the existing tariffs and how our new methodology could provide the economic, environmental, and health benefits from adopting clean energy technologies while still protecting low-income households from higher bills
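
A back-of-envelope illustration of the willingness-to-pay idea above: under marginal-cost pricing a household gains consumer surplus relative to the current volumetric price, and that gain bounds the fixed charge the household would accept. A minimal sketch in Python, assuming a hypothetical linear demand curve rather than the paper's estimated model:

```python
def consumer_surplus(p, a=10.0, b=0.5):
    """Consumer surplus under linear demand q = a - b*p, for p <= a/b."""
    q = max(a - b * p, 0.0)
    return q * q / (2 * b)

p_current = 8.0   # existing volumetric price (illustrative units)
mc = 3.0          # marginal cost of supply (illustrative)

# The household's willingness to pay for marginal-cost pricing is its
# consumer-surplus gain from facing mc instead of p_current; a fixed
# charge up to this amount leaves the household no worse off.
max_fixed_charge = consumer_surplus(mc) - consumer_surplus(p_current)
print(f"fixed charge can be up to {max_fixed_charge:.2f} per period")
```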

Abstract:

Existing literature, based on signaling theory, suggests that money-back guarantees (MBGs) will be utilized by high-quality firms, where high quality is defined as a low likelihood of product return. However, in today’s world, MBGs are ubiquitous among major retailers, even when the likelihood of product return varies greatly between them. To understand this phenomenon, we explore a competitive environment between high- and low-quality retailers where consumers are fully informed and risk neutral, and retailers realize a salvage value for returned products. When MBGs are profitable and demand is continuous, it is a Nash equilibrium for both retailers to offer MBGs, and the low-quality retailer gains while the high-quality retailer loses relative to when MBGs are not offered. In contrast, if demand is lumpy, retailers can act monopolistically over their respective market segments, allowing both retailers to gain from MBGs, although the low-quality retailer still gains more

Abstract:

Major retailers in the USA offer money back guarantees (MBGs) under which they return money to dissatisfied customers. Some of these retailers also offer low price guarantees (LPGs) under which they promise to refund price differences if buyers find a lower price after purchase. Some researchers have argued that LPGs should be legally challenged because they limit price competition and contribute to higher prices. This paper shows that adding an LPG to an MBG can help improve economic efficiency as both retailer loss and customer hassle costs from excessive returns are reduced. This reduction serves as a counter argument against those who believe that LPGs should be prohibited

Resumen:

En este trabajo se analiza el impacto del esparcimiento del retardo en redes IEEE 802.15.4a con receptor detector de energía (ED). Específicamente, se hace una revisión de los valores típicos del esparcimiento del retardo en las transmisiones de banda ultra ancha (UWB) reportados hasta la fecha en escenarios de interiores, exteriores e industriales, y además se estudia cómo el esparcimiento de retardo impacta la tasa de error de bit con y sin interferencia multiusuario (MUI)

Abstract:

This work analyzes the impact of delay spread on IEEE 802.15.4a networks using energy detection (ED) receivers. Specifically, we review the typical values for delay spread in Ultra Wide Band (UWB) systems reported to date for indoor, outdoor and industrial environments, and study how the delay spread impacts the bit-error rate with and without Multiuser Interference (MUI)
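
The RMS delay spread reviewed above is the square root of the second central moment of the power delay profile. A minimal sketch, using a made-up profile rather than measured UWB data:

```python
import numpy as np

tau = np.array([0, 10, 25, 60, 120]) * 1e-9   # path delays in seconds (illustrative)
p = np.array([1.0, 0.6, 0.3, 0.1, 0.05])      # path powers, linear scale (illustrative)

p_n = p / p.sum()                              # normalized power delay profile
mean_delay = np.sum(p_n * tau)                 # mean excess delay (first moment)
rms_spread = np.sqrt(np.sum(p_n * tau**2) - mean_delay**2)  # RMS delay spread
print(f"RMS delay spread: {rms_spread * 1e9:.1f} ns")
```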

Resumen:

Los beneficios del cómputo en la nube han hecho que sea la tecnología de facto para la provisión de servicios informáticos. Sin embargo, con la masificación de la Internet de las cosas y el surgimiento de entornos inteligentes, esta tecnología muestra limitantes, como saturación (de corto plazo) de recursos de cómputo, ancho de banda o tiempos de respuesta largos. Por ello nacen nuevos paradigmas como el cómputo en la niebla. En este artículo se simulan arquitecturas de cómputo en la nube y en la niebla para un escenario hipotético de monitoreo de servicios en una ciudad inteligente. Las variables de interés son capacidad de procesamiento, tiempo de respuesta y nivel de utilización de los enlaces de red. Los resultados muestran que Fog Computing tiene un enorme potencial para ofrecer servicios con mejor calidad en los entornos que caracterizarán a las sociedades del siglo XXI

Abstract:

Cloud Computing's proven benefits have made it the de facto technology for the provision of computing services. Nonetheless, with the massification of the so-called Internet of Things and the emergence of smart environments, Cloud Computing architectures begin to show some limitations, such as possible short-term saturation of computing resources or bandwidth, and long response times. This has led to the emergence of new paradigms, such as Fog Computing. In this article, we compare Cloud and Fog Computing architectures for a hypothetical monitoring service scenario in a Smart City. The variables of interest are the processing capacity, the response time and the usage level of the network links. The results show that Fog Computing has enormous potential to offer services with higher quality of service in the environments that will characterize 21st Century Societies
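
The article's comparison is based on simulation, but a single-queue back-of-envelope model conveys the intuition for why edge capacity can cut response times despite its smaller processing capacity. A sketch assuming M/M/1 service at each tier, with illustrative rates and latencies (not the article's simulated workload):

```python
def response_time(service_rate, arrival_rate, latency):
    """M/M/1 mean response time plus one-way network latency (seconds)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate) + latency

# Cloud: large capacity, heavy load, long network path.
cloud = response_time(service_rate=1000.0, arrival_rate=900.0, latency=0.100)
# Fog: modest capacity close to the sensors, light load, short path.
fog = response_time(service_rate=200.0, arrival_rate=150.0, latency=0.005)
print(f"cloud: {cloud * 1000:.0f} ms, fog: {fog * 1000:.0f} ms")
```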

Abstract:

Background and Aims: In the United States, 15 states and the District of Columbia have implemented recreational cannabis laws (RCLs) legalizing recreational cannabis use. We aimed to estimate the association between RCLs and street prices, potency, quality and law enforcement seizures of illegal cannabis, methamphetamine, cocaine, heroin, oxycodone, hydrocodone, morphine, amphetamine and alprazolam. Design: We pooled crowdsourced data from 2010–19 Price of Weed and 2010–19 StreetRx, and administrative data from the 2006–19 System to Retrieve Information from Drug Evidence (STRIDE) and the 2007–19 National Forensic Laboratory Information System (NFLIS). We employed a difference-in-differences design that exploited the staggered implementation of RCLs to compare changes in outcomes between RCL and non-RCL states. Setting and cases: Eleven RCL and 40 non-RCL US states. Measures: The primary outcome was the natural log of prices per gram, overall and by self-reported quality. The primary policy was an indicator of RCL implementation, defined using effective dates. Findings: The street price of cannabis decreased by 9.2% [β = −0.092; 95% confidence interval (CI) = −0.15, −0.03] in RCL states after RCL implementation, with largest declines among low-quality purchases (β = −0.195; 95% CI = −0.282, −0.108). Price declines were accompanied by a 93% (β = −0.93; 95% CI = −1.51, −0.36) reduction in law enforcement seizures of cannabis in RCL states. Among illegal opioids, including heroin, oxycodone and hydrocodone, street prices increased and law enforcement seizures decreased in RCL states. Conclusions: Recreational cannabis laws in US states appear to be associated with illegal drug market responses in those states, including reductions in the street price of cannabis
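
The staggered difference-in-differences design described above is commonly estimated as a two-way fixed-effects regression with standard errors clustered by state. A minimal sketch, assuming a hypothetical purchase-level file prices.csv with columns state, year, log_price and an rcl indicator (the study's data sources are listed above but not reproduced here):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prices.csv")  # hypothetical file: state, year, log_price, rcl

# rcl = 1 for observations in RCL states after the law's effective date.
model = smf.ols("log_price ~ rcl + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}  # cluster by state
)
print(model.params["rcl"])  # analogous in spirit to the reported beta = -0.092
```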

Abstract:

This chapter describes the legal framework for the classification of various entities (both commercial and non-commercial) as corporations with social or environmental impact in Mexico. Considering the absence of ad hoc legislation and the widespread incentives for companies to become certified as enterprises with a commitment to social interest, several recommendations are put forward. Specifically, this chapter details how a legal regime could be integrated into the country's current regulatory environment to increase the number of firms eligible for B Lab certification

Abstract:

This chapter discusses the importance of corporate debt restructuring mechanisms for economic recovery. As a preliminary basis for this analysis, it must be acknowledged that companies' access to financing is positively correlated with economic growth, since it allows companies to channel resources into productive investment. Access to credit also promotes innovation, because companies obtain resources to invest in new technologies and processes, improving their productivity

Resumen:

Los principios que han regido la distribución de activos en un proceso de insolvencia han sido: Par Condicio Creditoris, Absolute Priority Rule y Pari Passu. El trabajo presenta el concepto de la Regla de la Prioridad Absoluta y cómo se han venido introduciendo variantes como la Excepción de Nuevo Valor y la Regla de la Prioridad Relativa

Abstract:

The principles that have governed the distribution of assets in an insolvency process have been: Par Condicio Creditoris, Absolute Priority Rule and Pari Passu. The work presents the concept of the Rule of Absolute Priority and how variants such as the Exception of New Value and the Rule of Relative Priority have been introduced

Resumen:

El llamado Derecho de la Insolvencia surge cuando una persona llega a una situación en donde no puede cumplir con las obligaciones que ha contraído a favor de terceros. Esto, que se dice con cierta facilidad, resulta muy complejo cuando en la práctica se trata de definir el momento en el cual va a aparecer la normatividad jurídica, sustantiva y adjetiva, propia de ese Derecho, que es de naturaleza extraordinaria y que sobrepasa la aplicación del derecho ordinario. En un análisis de derecho comparado se identifican tales situaciones

Abstract:

The so-called Law of Insolvency arises when a person reaches a situation in which he cannot fulfill the obligations he has contracted in favor of third parties. This is easily said, but it becomes very complex when, in practice, one tries to define the moment at which the legal rules, both substantive and procedural, of this branch of law come into play, considering that such law is of an extraordinary nature and overrides the application of ordinary law. A comparative law analysis identifies such situations

Resumen:

El presente artículo presenta la reforma de 2016 hecha por Francia a su Código Civil en la parte destinada al régimen de las obligaciones y contratos. Se presenta también el proyecto de reforma al mismo código sobre la responsabilidad civil

Abstract:

This article presents the 2016 reform made by France to its Civil Code in the part devoted to the law of obligations and contracts. The draft reform of the same code on civil liability is also presented

Resumen:

Este artículo explora la importancia que tiene, en la vida económica de un país, hacer frente a la carga de la deuda corporativa cuando reúne características de morosidad, especialmente en el caso de créditos con el sector financiero. En muchas economías del mundo, el alto nivel de apalancamiento que conlleva un crecimiento de la cartera vencida en el sector empresarial, inhibe el crecimiento económico e incluso puede suponer un riesgo para la estabilidad financiera. Una solución para lograr la reestructura financiera de las empresas afectadas es el uso del régimen de insolvencia (concurso mercantil). Se analizan los papeles que juegan el marco jurídico (legal y regulatorio) y el gobierno en esa tarea

Abstract:

This article explores the importance, for a national economy, of dealing with corporate debt when it is in default, especially debt owed to the financial sector. In many economies around the world, a high level of leverage, which entails growth of the past-due portfolio in the corporate sector, inhibits economic growth and may even pose a risk to financial stability. One solution for achieving the financial restructuring of the affected companies is to use the insolvency regime (bankruptcy). The article analyzes the roles played by the legal and regulatory framework and by the government in this task

Resumen:

La Ley de concursos mercantiles mexicana presenta algunas peculiaridades tanto procesales como de fondo en cuanto a la regulación de los concursos de grupos empresariales, es decir, sociedades controladas y controladoras. Dichas peculiaridades pueden dar lugar a confusiones y críticas; sin embargo, también han sido útiles en muchos otros aspectos. En el presente artículo se señalan y se describen brevemente algunos puntos clave para entender dicha regulación

Abstract:

The Mexican bankruptcy act has some procedural and substantive peculiarities in its regulation of the bankruptcy proceedings of corporate groups, namely, holding and controlled corporations. These peculiarities can lead to confusion and criticism; however, they have also been useful in many other respects. This article identifies and briefly describes some key points for understanding this regulation

Resumen:

Recientemente la Ley concursal mexicana incorporó un tema que ha venido siendo importante en las legislaciones de insolvencia de todo el mundo: la responsabilidad que puede ser exigida a los que han administrado una empresa durante el tiempo previo a que ésta es declarada en estado de insolvencia (concurso mercantil). En qué casos se puede exigir, a quiénes se les puede exigir, quiénes pueden hacerlo, en qué consiste el remedio y, finalmente, qué responsabilidad penal tienen dichos administradores, son los temas que se abordan en el presente

Abstract:

Mexican bankruptcy law has recently incorporated a topic that has become important in insolvency legislation around the world: the liability that may be claimed from those who managed a company during the period before it was declared insolvent (bankrupt). The cases in which it can be demanded, from whom it can be claimed, who may bring the claim, what the remedy consists of and, finally, what criminal liability such managers face are the subjects addressed in the present work

Resumen:

Se describen los tópicos más relevantes contenidos en la reciente reforma a la Ley de concursos mercantiles de México, que incluyen temas de gran relevancia en los regímenes de insolvencia mundiales. Para mayor claridad del lector, se incluye un primer capítulo que describe en pinceladas generales cómo está estructurado el régimen concursal mexicano. Finalmente, el autor incluye aquellos temas que le hubiera gustado ver en la reforma y que podrán ser la materia de una posible futura reforma

Abstract:

The most relevant topics contained in the recent amendments to the Bankruptcy Act of Mexico are described, including issues of great relevance in insolvency regimes worldwide. For the reader's benefit, a first chapter describes in broad strokes how the Mexican bankruptcy regime is structured. Finally, the author discusses the topics he would have liked to see in the reform and that may be the subject of a possible future amendment

Abstract:

An approach to assess competencies required by an engineer who will develop professionally within Society 5.0 is discussed. The approach focuses on the process of teaching, measurement, evaluation, and skills improvement across the curriculum, harmonized with criteria defined by leading engineering accrediting organizations (ABET at an international level, CONAIC and CACEI in Mexico). The robustness of the approach has (1) allowed the program to obtain different types of accreditation (ABET accreditation, without deficiencies or weaknesses, twice in a row), and (2) guided the evolution of the assessment process over time, from simple evaluation schemes all the way to the straightforward use of a Learning Management System (Canvas). The history of the evaluation process, its current status, and proposals for short-term improvement are presented. Three institutional initiatives are also included, since they reinforce and evaluate the graduation skills that engineering students need to face the challenges presented by Society 5.0

Resumen:

El artículo presenta el programa de mejora continua implementado en una Institución de Educación Superior en México con el propósito de obtener la acreditación de ABET para un programa de Ingeniería en Computación. El artículo muestra las ideas clave que llevaron a la acreditación exitosa de dos programas de Ingeniería, de manera que puedan servir de referencia a otros programas en Latinoamérica y el Caribe. El programa se apoya en tres ciclos de mejora interrelacionados en los que se mide y analizan los resultados de los cursos, de los alumnos y de los egresados. Las evaluaciones involucran el uso de medidas directas e indirectas, que se complementan para obtener una visión más completa de los resultados obtenidos y ser capaces de proponer acciones de mejora con mayor certidumbre. El artículo concluye presentando el impacto del proceso en la calidad de los programas, la carga de los profesores y la acreditación misma, y los requerimientos clave del proceso tales como costo, tiempo y estructura del comité interno de evaluación

Abstract:

This article presents the use of an established data mining methodology as the basis for a methodology adapted to the credit scoring problem. Data mining concepts are translated and mapped to this particular problem, and a real case from a Mexican financial institution is solved using logistic regression, a decision tree, and a neural network. The Brier Score is proposed to evaluate the models' results on the current population. The neural network comes out as the best technique on most of the metrics computed on the validation population. However, the Brier Score shows that logistic regression is more stable to changes in the characteristics of the population

Resumen:

El uso de la estrategia primero-objetos en el curso introductorio de Computación para programas de Ingeniería permite que los alumnos desarrollen aplicaciones interactivas interesantes al término de un semestre. Esta estrategia se facilita al introducir los conceptos fundamentales de la programación orientada a objetos usando Alice y al hacer referencia continua a los conceptos aprendidos de manera visual al momento de desarrollar aplicaciones en Java. Para que el curso cubra el objetivo tradicional de desarrollar en el alumno la capacidad de analizar y resolver problemas de forma metódica y de expresar las soluciones de los mismos en términos de algoritmos, se hace énfasis en el uso adecuado y eficiente de las estructuras de control de flujo clásicas en la programación de los métodos de las clases

Abstract:

This article describes a collaboration between Hyperion and ITAM to develop an innovative course on Organizational Performance and Business Intelligence. This academia-industry partnership made it possible to create a formal, rigorous master's-level course that takes into account the needs of industry and market trends in a constantly changing area. To demonstrate the relevance of their programs to students and companies, universities can work closely and in alignment with industry while remaining vendor-neutral and avoiding passing fads. Used intelligently, academia-industry agreements can bring cutting-edge technology and novel learning opportunities into the classroom, combining them with pedagogical methods and learning objectives proven through universities' experience and professional knowledge

Abstract:

This article presents a diagnosis of the software services industry in Mexico. The primary research consisted of an attitudinal study based on a questionnaire applied to companies across the country. The results highlight several critical issues that appear to be restricting companies' ability to compete in the market, among them: the lack of specialization among competitors; the absence of a strategy regarding how services are sold and where they are performed; the wide dispersion of prices for each type of service; the management of human resources; and the widespread absence of mechanisms to measure and evaluate company performance

Abstract:

Designing a network with a certain level of survivability can be modeled as an instance of the Generalized Steiner Problem with vertex connectivity constraints (GSP). In this work we use the Tabu Search (TS) metaheuristic to find low cost topologies that satisfy the vertex connectivity constraints. The basic building blocks that TS needs are: a feasible initial solution, and a move and neighborhood definition. The move is specified as an exchange of paths, therefore the neighborhood is path-based. The initial solution is constructed using a greedy algorithm. The software implementation time was reduced using reusable open source libraries and frameworks such as OpenTS, JUNG and Jakarta Commons Collections. Furthermore, the JUnit framework was used for unit testing to support every step of the implementation
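
The paper's implementation is in Java on top of OpenTS; purely as a sketch of the same control flow, a generic tabu loop in Python looks like the following, with the GSP-specific ingredients (a path-exchange neighborhood and connectivity-feasibility checks) abstracted behind the neighbors and cost callables:

```python
def tabu_search(initial, neighbors, cost, n_iter=1000, tenure=20):
    """Generic tabu search skeleton: move to the best non-tabu neighbor,
    keep a fixed-length tabu list, and track the best solution found."""
    current = best = initial
    tabu = [initial]                          # short-term memory of visited solutions
    for _ in range(n_iter):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                       # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current                    # incumbent update
    return best
```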

Abstract:

Purpose – The aim of this paper is to explore interactions between change and stability during the implementation of a specific change initiative (ISO 9000). It attempts to develop a theoretical framework on change and stability management in small firms. Design/methodology/approach – This research uses a process approach based on retrospective comparative case study methodology. Data collection in the six companies lasted over a year. This gives the opportunity to contrast failed change initiatives against successful ones. Findings – Two models emerged from this approach; they support the notion that change and stability could be complementary during the different phases of the change initiative the authors analyzed. The findings show that total absence of stability variables in the change initiative could have a negative effect on results. Research limitations/implications – The research is based on a multiple case study approach, which limits the generalizability of the findings. Originality/value – This is one of the first studies that applies and empirically tests the change and stability relation in small firms

Resumen:

Se recubrió un acero inoxidable austenítico 304 L con una capa de óxido de cromo por medio de la técnica de evaporación catódica reactiva. Se prepararon los recubrimientos de óxido de cromo sobre los sustratos de acero por la técnica de PVD. Las muestras se ensayaron en atmósferas de metano más hidrógeno en una termobalanza durante 20 horas de exposición a 800 °C. Con el fin de justificar si este recubrimiento es funcional o no, se caracterizaron las muestras antes y después de las pruebas de corrosión por medio de Microscopía Electrónica de Barrido

Abstract:

A 304 L austenitic stainless steel was coated with a layer of chromium oxide by the reactive cathodic evaporation (sputtering) technique. The coatings were prepared on the steel substrates by the PVD technique. The samples were tested in methane plus hydrogen atmospheres in a thermobalance for 20 hours of exposure at 800 °C. To assess whether this coating is functional, the samples were characterized before and after the corrosion tests by Scanning Electron Microscopy (SEM)

Abstract:

For about 150 years, international trade theory has mostly highlighted trade in consumption goods. Such a modeling approach was likely supported by the then prevailing commodity structure of world trade. This paper shows that – at least since 1970 – only about 20 per cent of commodity trade has taken place in consumables, the remainder being intermediates and capital goods. Theoretically, this evidence suggests a move in the modelling strategy for traded goods: from utility functions to production functions. Sanyal and Jones' “new trade theory” (1982) does precisely that. In addition, some crucial research challenges emerge. These relate to dynamic changes in some general characteristics of production technologies throughout the world

Abstract:

The methodological apparatus in equilibrium macroeconomics and its implied behavioural functions and models are criticized by some development economists as being irrelevant for Latin American Countries (LACs). The argument is that such an equilibrium framework cannot incorporate crucial structural features peculiar to LACs. Can the representative-individual paradigm of equilibrium macroeconomics be used to model aggregate behaviour in an LAC? Can some of the key enduring structural features prevalent in Third World countries be incorporated into the methodological apparatus used in equilibrium macroeconomics? If so, how? This paper is an attempt to respond to these questions. The paper offers new insights for theoreticians interested in macromodelling LDCs or small open economies

Resumen:

Este trabajo presenta un modelo determinístico tipo Baumol-Tobin para la demanda de dinero transaccional de las empresas. La producción no instantánea motiva la necesidad de saldos transaccionales. Ciertos costos de producción fijos implican gastos previsibles discretos. Como contrapartida, las empresas mantienen depósitos a plazo que pueden convertirse instantáneamente en depósitos a la vista. Surge así una demanda de un agregado monetario más amplio. Esta demanda depende negativamente de un spread de tasas de interés y positivamente del capital de trabajo. Este modelo tiene implicaciones potenciales para cuestiones empíricas, teóricas y de política económica, donde es básica la especificación de la función de demanda de dinero

Abstract:

This paper presents a deterministic Baumol-Tobin model of a transactions demand for money for businesses. Non-instantaneous production triggers working balances. Some fixed production costs involve foreseeable discrete outlays. As a counterpart of these, firms hold time deposits which can be instantaneously converted into demand deposits. A demand for a broader monetary aggregate emerges. It depends negatively on an interest rate spread, and positively on working capital. This model has potential implications for empirical, theoretical, and policy issues in which the specification of the demand for money function is crucial
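
For context, the classic Baumol-Tobin benchmark that this model extends yields the familiar square-root demand for money; in the paper the opportunity cost is an interest rate spread rather than a single rate. A minimal sketch with illustrative numbers:

```python
import numpy as np

def money_demand(b, T, spread):
    """Classic Baumol-Tobin optimal balance M* = sqrt(2*b*T/spread), where
    b is the fixed conversion cost, T the transactions volume, and (here)
    spread the opportunity cost of holding the liquid asset."""
    return np.sqrt(2 * b * T / spread)

print(money_demand(b=2.0, T=10_000.0, spread=0.05))  # illustrative values
```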

Abstract:

What are the implications of the historically observed economic policy instability in Latin American countries (LACs) for macroeconometric testing? Two pressing restrictions on the econometrician using time-series of LACs arise: time-varying parameters and time-varying specifications. Such an instability also has profound impacts on time-series measurements of national accounts at constant prices. This, together with the "second best methodology" used in LACs for computing real GNP, implies that LACs figures on GNP growth reflect growth in gross production rather than in value added. LACs time-series for private consumption are unreliable. Crucial data set constraints in LACs further complicate the task for the econometrician

Resumen:

Las siguientes preguntas resumen brevemente la motivación y el contenido del presente trabajo. ¿Qué precauciones debemos tomar los economistas a la hora de realizar e interpretar estudios empíricos acerca del comportamiento de variables macroeconómicas en países latinoamericanos? ¿Con qué confiabilidad deben tomar las autoridades (y los organismos) de política económica los estudios econométricos de ecuaciones de comportamiento y/o de modelos macroeconómicos acerca de las economías latinoamericanas? ¿Cuáles son las implicaciones de la inestabilidad en las políticas económicas en la América Latina para la especificación y los métodos de estimación de modelos econométricos? ¿Cómo afecta dicha inestabilidad a las mediciones de cuentas nacionales, a la interpretación de las diversas series de tiempo y a los requerimientos de estadísticas económicas necesarios para la implementación de pruebas macroeconométricas en la América Latina? ¿Cuál es el grado de confiabilidad de las series de tiempo de cuentas nacionales a precios constantes en la América Latina? ¿Son conceptualmente comparables nuestras mediciones de crecimiento económico y consumo privado con las que se efectúan en países desarrollados? Si el "subdesarrollo estadístico" no es sino otra faceta del subdesarrollo, ¿qué esfuerzos pueden y deben hacer los gobiernos en la América Latina por mejorar nuestros sistemas de estadísticas económicas? ¿Qué pueden hacer para disminuir la inestabilidad estructural del sistema económico?

Abstract:

The following questions illustrate the underlying motivation and contents of this essay. What precautions do we, as professional economists, need to take whenever we engage in (or interpret) econometric research dealing with aggregate behavior in Latin American countries? How confident can policymakers be when using econometric studies concerned with behavioral equations and/or macroeconomic models of Latin American economies? What are the implications of the historically observed economic policy instability in Latin America for the specification and estimation techniques of econometric models? How does such instability affect National Accounts measurements, the interpretation of available time series, and the requirements of statistical information for macroeconometric testing in Latin America? How reliable are Latin American countries' time series of National Accounts at constant prices? Are the figures on economic growth and private consumption computed in Latin America conceptually comparable to the corresponding ones computed in developed countries? If the current status of "statistical underdevelopment" in Latin America is just another facet of underdevelopment, what policy efforts can and must be undertaken to improve our statistical systems? What can and must be done in Latin America to diminish the degree of structural instability in the economic system? The contents of this essay reflect my well-founded skepticism about the feasibility of macroeconometric research in Latin America

Abstract:

When dealing with risk models the typical assumption of independence among claim size distributions is not always satisfied. Here we consider the case when the claim sizes are exchangeable and study the implications when constructing aggregated claims through compound Poisson-type processes. In particular, exchangeability is achieved through conditional independence, using parametric and nonparametric measures for the conditioning distribution. Bayes’ theorem is employed to ensure an arbitrary but fixed marginal distribution for the claim sizes. A full Bayesian analysis of the proposed model is illustrated with a panel-type data set coming from a Medical Expenditure Panel Survey (MEPS)
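
Exchangeability via conditional independence can be made concrete with a small simulation: claim sizes are iid given a latent mixing variable drawn once per period, hence exchangeable (but dependent) unconditionally. A sketch with illustrative gamma choices, not the paper's parametric or nonparametric mixing measures:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_claims(lam=5.0, n_periods=10_000):
    """Compound Poisson totals S = X_1 + ... + X_N with exchangeable claims."""
    totals = np.empty(n_periods)
    for t in range(n_periods):
        n = rng.poisson(lam)                  # claim count N ~ Poisson(lam)
        theta = rng.gamma(2.0, 1.0)           # latent mixing variable, one per period
        x = rng.gamma(2.0, theta, size=n)     # iid given theta => exchangeable
        totals[t] = x.sum()
    return totals

s = aggregate_claims()
print(s.mean(), np.quantile(s, 0.95))
```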

Abstract:

Purpose: There are currently more than 480 primary immune deficiency (PID) diseases and about 7000 rare diseases that together afflict around 1 in every 17 humans. Computational aids based on data mining and machine learning might facilitate the diagnostic task by extracting rules from large datasets and making predictions when faced with new problem cases. In a proof-of-concept data mining study, we aimed to predict PID diagnoses using a supervised machine learning algorithm based on classification tree boosting. Methods: Through a data query at the USIDNET registry we obtained a database of 2396 patients with common diagnoses of PID, including their clinical and laboratory features. We kept 286 features and all 12 diagnoses to include in the model. We used the XGBoost package with parallel tree boosting for the supervised classification model, and SHAP for variable importance interpretation, on Python v3.7. The patient database was split into training and testing subsets, and after boosting through gradient descent, the predictive model provides measures of diagnostic prediction accuracy and individual feature importance. After a baseline performance test, we used the Class Weighting Hyperparameter, or scale_pos_weight to correct for imbalanced classification
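
A minimal sketch of the modeling step described above, with placeholder arrays standing in for the USIDNET registry extract; scale_pos_weight is illustrated for a binary (one-vs-rest) target, set to the usual negative-to-positive ratio:

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

X = np.random.rand(2396, 286)           # placeholder for the 286 features
y = np.random.randint(0, 2, 2396)       # placeholder one-vs-rest diagnosis label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Weight the positive class by the negative/positive ratio to counter imbalance.
spw = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
model = xgb.XGBClassifier(n_estimators=200, scale_pos_weight=spw)
model.fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)   # SHAP values for feature importance
shap_values = explainer.shap_values(X_te)
```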

Abstract:

Since franchise business can be an alternative to independent business, it seems interesting to look at those factors that encourage individuals to choose a franchise rather than an independent business. It is also relevant to consider the role of innovation, because it is generally accepted that independent business facilitates innovation to a greater degree than franchising does. The main goal of this paper is to determine the factors that encourage individuals to choose franchise activity and what role innovation plays in their decision

Resumen:

Los frutos depositados en el piso del bosque tropical son un recurso sumamente atractivo para una amplia variedad de mamíferos. El estudio de las características de esta interacción puede permitir avanzar en el entendimiento de los mecanismos que favorecen la coexistencia entre especies de mamíferos. Se usaron cámaras trampa para registrar el consumo de frutos de Licania platypus y Pouteria sapota por Cuniculus paca, Dasyprocta punctata, Nasua narica, Dycotiles crassus y Tapirella bairdii. Con base en la información del día, la hora y el árbol donde se registró la fauna, se caracterizó el nivel de traslape en actividad a lo largo del día (coeficiente de traslape, delta) y espacialmente (visitas a los mismos árboles en los mismos días, índices de asociación de Jaccard, Ochiai y cociente V). Se encontró una alta segregación en la actividad diaria (delta promedio = 0.291 y 0.191 en L. platypus y P. sapota, respectivamente) y entre árboles/días (máx. Jaccard = 0.14 y 0.19 en L. platypus y P. sapota, respectivamente). Nuestros resultados indican que el grado de traslape en la actividad de mamíferos alimentándose de frutos en el piso de la selva es en general bajo. Esto puede deberse al hecho de que nuestro estudio analiza los patrones de actividad de la fauna con un mayor nivel de detalle que estudios previos que, por ejemplo, se han concentrado exclusivamente en la dieta de la fauna. Nuestro estudio permite avanzar en el entendimiento de los mecanismos que permiten la coexistencia entre distintas especies de mamíferos frugívoros

Abstract:

Fruits reaching the floor of tropical forests constitute an attractive resource for a variety of mammals. Studying the characteristics of the frugivory interaction can help advance the understanding of the mechanisms favoring the coexistence of mammal species. However, there are few studies focused on analyzing the activity patterns of mammals feeding on fruits on the forest floor. Camera traps were used to record consumption of Licania platypus and Pouteria sapota fruits by Cuniculus paca, Dasyprocta punctata, Nasua narica, Dycotiles crassus and Tapirella bairdii. Patterns of mammal activity were characterized based on the day, time and tree in which they were recorded. Overlap in daily activity (delta coefficient) and in spatial occurrence (same tree and day; Jaccard and Ochiai indices and the V ratio) was assessed. High segregation in the activity of frugivores occurred during the day (mean delta = 0.291 and 0.191 for L. platypus and P. sapota, respectively) and among trees/days (max. Jaccard = 0.14 and 0.19 for L. platypus and P. sapota, respectively). Our results indicate that activity overlap among mammalian frugivores feeding on the forest floor is generally low. This likely relates to the fact that we conducted our analysis at a finer level of detail than previous studies (for example, those focusing exclusively on dietary overlap). Thus, our study advances the understanding of the possible factors that favor the coexistence of tropical frugivorous mammals
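
The two overlap measures used above can be sketched directly: a discrete approximation of the daily-activity overlap coefficient delta (the integral of the minimum of two activity densities) and the Jaccard association on (tree, day) detection sets. Illustrative implementations, not necessarily the study's exact estimators:

```python
import numpy as np

def overlap_delta(times_a, times_b, bins=24):
    """Discrete approximation to delta = integral of min(f, g), with
    detection times given as hours of day in [0, 24)."""
    f, _ = np.histogram(times_a, bins=bins, range=(0, 24), density=True)
    g, _ = np.histogram(times_b, bins=bins, range=(0, 24), density=True)
    return np.minimum(f, g).sum() * (24 / bins)

def jaccard(visits_a, visits_b):
    """Jaccard association on sets of (tree, day) detection events."""
    a, b = set(visits_a), set(visits_b)
    return len(a & b) / len(a | b) if a | b else 0.0

print(overlap_delta([1, 2, 3, 22], [12, 13, 14, 15]))         # -> 0.0 (no overlap)
print(jaccard([("tree1", 5), ("tree2", 6)], [("tree1", 5)]))  # -> 0.5
```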

Abstract:

The pharmacokinetics, PK/PD ratios, and Monte Carlo modeling of enrofloxacin HCl·2H2O (Enro-C) and its reference preparation (Enro-R) were determined in cows. Fifty-four Jersey cows were randomly assigned to six groups receiving a single IM dose of 10, 15, or 20 mg/kg of Enro-C (Enro-C10, Enro-C15, Enro-C20) or Enro-R. Serial serum samples were collected and enrofloxacin concentrations quantified. A composite set of minimum inhibitory concentrations (MIC) of Leptospira spp. was utilized to calculate PK/PD ratios: maximum serum concentration/MIC (Cmax/MIC90) and area under the serum concentration vs. time curve of enrofloxacin/MIC (AUC0-24/MIC90). Monte Carlo simulations targeted Cmax/MIC = 10 and AUC0-24/MIC = 125. Mean Cmax obtained were 6.17 and 2.46 μg/ml; 8.75 and 3.54 μg/ml; and 13.89 and 4.25 μg/ml, respectively, for Enro-C and Enro-R. Cmax/MIC90 ratios were 6.17 and 2.46, 8.75 and 3.54, and 13.89 and 4.25 for Enro-C and Enro-R, respectively. Monte Carlo simulations based on Cmax/MIC90 = 10 indicate that only Enro-C15 and Enro-C20 may be useful to treat leptospirosis in cows, predicting a success rate ≥95% when MIC50 = 0.5 μg/ml, and ≥80% when MIC90 = 1.0 μg/ml. Although Enro-C15 and Enro-C20 may be useful to treat leptospirosis in cattle, clinical trials are necessary to confirm this proposal
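
A Monte Carlo target-attainment calculation of the kind reported above can be sketched in a few lines: draw Cmax from a between-animal distribution, draw MIC from a discrete MIC distribution, and estimate the probability that Cmax/MIC >= 10. The distributional choices below are illustrative, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
cmax = rng.lognormal(mean=np.log(8.75), sigma=0.3, size=n)       # around the Enro-C15 mean
mic = rng.choice([0.25, 0.5, 1.0], size=n, p=[0.25, 0.5, 0.25])  # assumed MIC spread

pta = np.mean(cmax / mic >= 10)   # probability of hitting Cmax/MIC >= 10
print(f"probability of target attainment: {pta:.1%}")
```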

Resumen:

Cuestión central dentro de la epistemología de Aristóteles es saber cuál es la esencia del acto de inteligir y cómo se relaciona con el cuerpo. Primero, se explica la diferencia entre cuerpo viviente y cuerpo no-viviente. Después, la estratificación de las formas de vida que arraigan en un cuerpo viviente, con énfasis en la situación del intelecto. Posteriormente, se referirá la actividad propia de inteligir; y, para terminar, algunas especificaciones acerca de la diferencia entre intelecto activo y pasivo, y una recapitulación sobre cómo se relaciona el intelecto con el cuerpo en el que se halla

Abstract:

In this article, we explore a main topic in Aristotle's epistemology, namely, what the act of intellection is and how it relates to the body. First, we will explore the difference between a living body and a non-living body. Then, we will delve into the stratification of the forms of life in a living body, particularly regarding the intellect. Later, we will refer to the actual act of intellection. Finally, we will specify some differences between the active and passive intellect and sum up with the relationship between the intellect and the body it inhabits

Abstract:

Statistical methods to produce inferences based on samples from finite populations have been available for at least 70 years. Topics such as Survey Sampling and Sampling Theory have become part of the mainstream of the statistical methodology. A wide variety of sampling schemes as well as estimators are now part of the statistical folklore. On the other hand, while the Bayesian approach is now a well-established paradigm with implications in almost every field of the statistical arena, there does not seem to exist a conventional procedure (able to deal with both continuous and discrete variables) that can be used as a kind of default for Bayesian survey sampling, even in the simple random sampling case. In this paper, the Bayesian analysis of samples from finite populations is discussed, its relationship with the notion of superpopulation is reviewed, and a nonparametric approach is proposed. Our proposal can produce inferences for population quantiles and similar quantities of interest in the same way as for population means and totals. Moreover, it can provide results relatively quickly, which may prove crucial in certain contexts such as the analysis of quick counts in electoral settings
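
One standard nonparametric device in this spirit is the Bayesian bootstrap, which places Dirichlet(1, ..., 1) weights on the observed units and delivers posterior draws for quantiles in exactly the same way as for means and totals. A minimal sketch (the paper's construction is more general than this):

```python
import numpy as np

rng = np.random.default_rng(2)

def bb_quantile(y, q=0.5, n_draws=2000):
    """Bayesian-bootstrap posterior draws of the population q-quantile."""
    y = np.sort(np.asarray(y))
    w = rng.dirichlet(np.ones(len(y)), size=n_draws)  # one weight vector per draw
    cum = np.cumsum(w, axis=1)
    idx = np.argmax(cum >= q, axis=1)   # first index with cumulative weight >= q
    return y[idx]                       # one quantile draw per weight vector

draws = bb_quantile(rng.normal(size=500), q=0.9)
print(np.percentile(draws, [2.5, 97.5]))  # posterior interval for the 0.9 quantile
```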

Abstract:

In all democracies, anticipating the final results of a national election the same day the voters go to the polling stations is a matter of interest, for television stations and some civil rights organizations, for example. The most reliable option is a quick count, a statistical procedure that consists in selecting a random sample of polling stations and analyzing their final counts to forecast the election results. In Mexico, a particularly important quick count is organized by the electoral authority. The importance of its results requires this exercise to be designed and executed with especially high standards, far beyond those used in commercial studies of this type. In this paper, the model and the Bayesian analysis of the quick counts conducted by the Mexican authority during the presidential elections in 2006 and 2012 are discussed
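
As a stylized illustration of the inferential step, posterior draws of one candidate's vote share can be generated from a random sample of polling stations as a ratio of weighted totals. The sketch below uses flat Dirichlet weights and ignores stratification, so it is far simpler than the model actually used by the authority:

```python
import numpy as np

rng = np.random.default_rng(3)

def posterior_share(votes_cand, votes_total, n_draws=5000):
    """Posterior draws of a candidate's national share from sampled stations."""
    votes_cand = np.asarray(votes_cand, float)
    votes_total = np.asarray(votes_total, float)
    w = rng.dirichlet(np.ones(len(votes_cand)), size=n_draws)
    return (w @ votes_cand) / (w @ votes_total)   # ratio of weighted totals

shares = posterior_share(rng.integers(50, 300, 400),    # candidate votes (illustrative)
                         rng.integers(400, 750, 400))   # total votes per station
print(np.percentile(shares, [2.5, 50, 97.5]))
```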

Resumen:

El estudio de la mortalidad, en particular la producción de tablas de mortalidad, juega un papel central en la industria de los seguros. Durante un tiempo, la estimación de las tasas de mortalidad de los asegurados se abordó ignorando la incertidumbre implícita en el problema. En fecha reciente, sin embargo, el análisis estadístico ha cobrado relevancia en el tema. En este artículo se describe el tratamiento bayesiano que subyace en la producción de las tablas de mortalidad que publicó la autoridad regulatoria mexicana en el 2000 y que son legalmente obligatorias para el cálculo de las reservas de las compañías de seguros de vida que operan en el país

Abstract:

Mortality tables play a key role in life insurance. However, for a long time, the estimation of mortality rates has been conducted without reference to the statistical nature of the problem. Nowadays, an appropriate risk-management strategy requires all sources of uncertainty to be considered in the evaluation of these and other financial systems. Thus, statistical models have become increasingly relevant. In this paper, the Bayesian analysis of linear models is described as the tool that the authorities in Mexico used to produce the mandatory mortality tables currently in use by the insurance industry. These ideas are illustrated with data from the Mexican insurance sector

Abstract:

Most financial institutions are required to comply with a minimum capital rule in order to face their obligations during a given period of time. Due to the random nature of the financial flows involved, the problem of assessing the amount of capital required must be analysed within a stochastic framework and the solution can be reduced to the estimation of a selected quantile. Given the financial impact of a specific capital requirement, a proper and careful choice of the underlying model is of great relevance. Here, we address the problem for insurance companies by proposing an autocorrelated model to describe the relative severity after a suitable transformation and compare the results with those of a model, which assumes independence among observations. We undertake a full Bayesian analysis and derive the reference priors for the models. Results are illustrated with a real data set from the Mexican insurance industry
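
A stripped-down version of the comparison described above: fit an AR(1) to the transformed severities and estimate the selected quantile of accumulated future severity by simulation; forcing phi = 0 recovers the independence model. The sketch is illustrative and frequentist, whereas the paper carries out a full Bayesian analysis with reference priors:

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_quantile(y, q=0.995, horizon=12, n_sim=20_000):
    """q-quantile of accumulated future severity under a fitted AR(1)."""
    phi, c = np.polyfit(y[:-1], y[1:], 1)             # slope and intercept by OLS
    sigma = (y[1:] - (c + phi * y[:-1])).std(ddof=2)  # residual scale
    totals = np.empty(n_sim)
    for s in range(n_sim):
        x, acc = y[-1], 0.0
        for _ in range(horizon):
            x = c + phi * x + rng.normal(0.0, sigma)
            acc += x
        totals[s] = acc
    return np.quantile(totals, q)

y = rng.normal(size=120)   # illustrative transformed severity series
print(ar1_quantile(y))
```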

Abstract:

The problem of forecasting a time series with only a small amount of data is addressed within a Bayesian framework. The quantity to be predicted is the accumulated value of a positive and continuous variable for which partially accumulated data are available. These conditions appear in a natural way in many situations. A simple model is proposed to describe the relationship between the partial and total values of the variable to be forecasted assuming stable seasonality, which is specified in stochastic terms. Analytical results are obtained for both the point forecast and the entire posterior predictive distribution. The proposed technique does not involve approximations. It allows the use of non-informative priors so that implementation may be automatic. The procedure works well when standard methods cannot be applied due to the reduced number of observations. It also improves on previous results published by the authors. Some real examples are included

Abstract:

In this article we consider the Bayesian statistical analysis of a simple Galton-Watson process. Problems of interest include estimation of the offspring distribution, classification of the process, and prediction. We propose two simple analytic approximations to the posterior marginal distribution of the reproduction mean. This posterior distribution suffices to classify the process. In order to assess the accuracy of these approximations, a comparison is provided with a computationally more expensive approximation obtained via standard Monte Carlo techniques. Similarly, a fully analytic approximation to the predictive distribution of the future size of the population is discussed. Sampling-based and hybrid approximations to this distribution are also considered. Finally, we present some illustrative examples.
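
In the special case of Poisson offspring, the posterior of the reproduction mean is available in closed form and already suffices to classify the process. A minimal conjugate sketch with made-up generation sizes (the approximations in the article cover more general offspring distributions):

```python
import numpy as np
from scipy import stats

z = np.array([1, 2, 3, 5, 7, 11])   # illustrative generation sizes z_0, ..., z_T
a, b = 1.0, 1.0                     # Gamma(a, b) prior on the offspring mean m

offspring = z[1:].sum()             # total offspring produced
parents = z[:-1].sum()              # total reproducing individuals
post = stats.gamma(a + offspring, scale=1.0 / (b + parents))  # conjugate posterior

print("posterior mean of m:", post.mean())
print("P(supercritical) = P(m > 1):", 1 - post.cdf(1.0))
```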

Abstract:

The problem of making inferences about the ratio of two normal means has been addressed, both from the frequentist and Bayesian perspectives, by several authors. Most of this work is concerned with the homoscedastic case. In contrast, the situation where the variances are not equal has received little attention. Cox (1985) deals, within the frequentist framework, with a model where the variances are related to the means. His results are mainly based on Fieller's theorem whose drawbacks are well known. In this paper we present a Bayesian analysis of this model and discuss some related problems. An agronomical example is used throughout to illustrate the methods
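
For comparison with the Bayesian treatment, a textbook Fieller-type interval for the ratio of means with unequal variances is obtained by solving a quadratic; the code also exposes the well-known drawback that the interval can be unbounded. This is a generic construction, not Cox's (1985) exact procedure:

```python
import numpy as np
from scipy import stats

def fieller_ci(xbar, sx2, n, ybar, sy2, m, alpha=0.05):
    """Solve (xbar - rho*ybar)^2 = t^2 * (sx2/n + rho^2 * sy2/m) for rho."""
    t2 = stats.t.ppf(1 - alpha / 2, n + m - 2) ** 2
    A = ybar**2 - t2 * sy2 / m
    B = -2 * xbar * ybar
    C = xbar**2 - t2 * sx2 / n
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        return None   # the interval is unbounded: Fieller's classical pitfall
    r = np.sqrt(disc)
    return (-B - r) / (2 * A), (-B + r) / (2 * A)

print(fieller_ci(xbar=10.0, sx2=4.0, n=25, ybar=5.0, sy2=9.0, m=20))
```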

Abstract:

A generalization is provided of the conditions to be satisfied in order to ensure asymptotic normality under transformations. From a Bayesian viewpoint, this result may be used to select an appropriate parameterization or to avoid additional calculations when the parameter of interest does not coincide with the usual parameter of the model

Abstract:

The analysis of biological assays has received the attention of statisticians for many years. However, when an indirect assay is considered and a continuous response variable is measured, the standard models lead to the problem of estimating a ratio, which has proved to be rather controversial if the statistical analysis is conducted under the classical approach. In this paper, within the Bayesian framework, the reference posterior distribution of the slope ratio is obtained. This is the parameter of interest in a large class of biological assays. The results obtained avoid the drawbacks of the classical methods and generalize previous Bayesian analyses of the ratio of normal means

Abstract:

Several tests proposed to deal with the problem that a switch occurs in a regression model are discussed and an attempt is made to improve them through a very natural modification. The modified tests are illustrated using data given by Quandt where a switch is present

Resumen:

During the nineteenth century, various educational projects, both conservative and liberal, were attempted. Only with Porfirio Díaz's rise to power did one of these projects become a reality

Abstract:

In the nineteenth century, several conservative and liberal educational projects were tried out; however, it was not until Porfirio Díaz's rise to power that one of them became a reality

Abstract:

The purpose of this article is to examine the way in which the academic schedule was organized in the primary schools of Mexico City, and how the life of the children was regulated by timetables, discipline, punishments and school calendars. In this sense, the academic temporality expressed in schedules and calendars reflected a cultural and educational debate, which in turn expressed the aspirations and values of an era. The importance acquired by elementary education at the end of the nineteenth century was the result of economic and cultural changes that affected Mexico and that were intended to modernize both the state and the institutions instrumental in the country's progress, such as educational institutions. The educated elite of the Porfirio Díaz regime, influenced by European ideas and societal changes, gave impetus to the transformation of children's spaces and activities within schools. The school schedule was widely debated by both the politicians and the educators of the Porfiriato era. They judged it the appropriate means by which to prepare children under new schemes of thought and behavior, and to this end they supplied the inputs of the modern school schedule, namely uniforms, secularization and hygiene, breaking with the old temporality marked by ecclesiastical influence, which emphasized mechanical, repetitive and linear routines. Thenceforth, the educational authorities concentrated their efforts on creating a particular type of school discipline and order: a population of children who internalized an ethic of work, punctuality, respect and efficiency was exactly what a changing society based on industrialization and modernization needed

Abstract:

For symmetric auctions, there is a close relationship between distributions of order statistics of bidders' valuations and observable bids that is often used to estimate or bound the valuation distribution, optimal reserve price, and other quantities of interest nonparametrically. However, we show that the functional mapping from distributions of order statistics to their parent distribution is, in general, not Lipschitz continuous and, therefore, introduces an irregularity into the estimation problem. More specifically, we derive the optimal rate for nonparametric point estimation of, and bounds for, the private value distribution, which is typically substantially slower than the regular root-n rate. We propose trimming rules for the nonparametric estimator that achieve that rate and derive the asymptotic distribution for a regularized estimator. We then demonstrate that policy parameters that depend on the valuation distribution, including optimal reserve price and expected revenue, are irregularly identified when bidding data are incomplete. We also give rates for nonparametric estimation of descending bid auctions and strategic equivalents
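
For context (a standard fact, not part of the abstract), the mapping in question rests on the classical relation between the parent distribution $F$ and the distribution of the $k$-th order statistic of $n$ i.i.d. valuations:

\[
F_{(k:n)}(x) = \sum_{j=k}^{n} \binom{n}{j} F(x)^{j} \bigl(1 - F(x)\bigr)^{n-j},
\]

and the irregularity discussed above arises when this relation is inverted to recover $F$.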

Abstract:

In this paper we investigate the capabilities of constrained clustering as applied to active exploration of music collections. Constrained clustering has been developed to improve clustering methods through pairwise constraints. Although these constraints are received as queries answered by a noiseless oracle, most methods involve a random procedure to decide which elements are presented to the oracle. In this work we apply spectral clustering with constraints to a music dataset, where the queries for constraints are selected deterministically from an outlier-identification perspective. We simulate the constraints using the ground-truth music genre labels. The results show that constrained clustering with the deterministic outlier-identification method achieves reasonable and stable results as the number of constraint queries increases
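
As a rough illustration (not the authors' code), constraint-driven spectral clustering can be sketched with scikit-learn as below; folding must-link/cannot-link answers directly into a precomputed affinity matrix, and the synthetic stand-in for music features, are assumptions of this sketch:

```python
# Minimal sketch: spectral clustering with pairwise constraints folded
# into a precomputed affinity matrix (synthetic stand-in for music data).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

X, genres = make_blobs(n_samples=200, centers=3, random_state=0)
A = rbf_kernel(X, gamma=0.5)              # base affinity

rng = np.random.default_rng(0)
for _ in range(50):                        # simulate 50 oracle queries
    i, j = rng.choice(len(X), size=2, replace=False)
    if genres[i] == genres[j]:
        A[i, j] = A[j, i] = 1.0            # must-link: maximal affinity
    else:
        A[i, j] = A[j, i] = 0.0            # cannot-link: sever the edge

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(A)
```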

Abstract:

This paper employs newly collected historical data from Finland to present evidence of historically contingent, long-run consequences of a famine. We document high levels of local inequality in terms of income and land distribution until a violent uprising in 1918. These inequalities partly originated from the famine of 1866-1868, which increased the concentration of land and power in the hands of large landowners. We further show that regions with more exposure to the famine had more labor coercion by the early 1900s. These inner tensions led to violent conflict following the Russian Revolution and Finnish independence from the Russian Empire. Using microdata on all the casualties of the 1918 Finnish Civil War, we demonstrate that the famine plausibly contributed to local insurgency participation through these factors. Although unsuccessful in replacing the government, the insurgency led to significant policy changes, including radical land redistribution and a full extension of the franchise. These national reforms produced a more drastic shift toward equality in locations more affected by the famine and with greater pre-conflict inequality. Our findings highlight how historical shocks can have large and long-lasting, but not straightforward, impacts

Abstract:

Does political selection matter for policy in representative governments? I use administrative data on local politicians in Finland and exploit exogenous variation generated by close elections to show that electing more high-income, incumbent and competent politicians (who earn more than observably similar politicians) improves fiscal sustainability outcomes, but does not decrease the size of the public sector. I also provide suggestive evidence that electing more university-educated local councillors leads to more public spending without adverse effects on fiscal sustainability. I reconcile these findings with survey data on candidate ideology and demonstrate that different qualities are differentially associated with economic ideology

Abstract:

Political parties frequently form coalitions with each other to pursue office or policy payoffs. Contrary to a prominent argument, the distribution of rents within the coalition does not always reflect the relative sizes of the coalition members. We propose that this is at least partially due to an incumbency advantage in coalitional bargaining. To evaluate this argument empirically, we construct a data set of candidates, parties, and members of the executive in Finnish local governments. We first use a regression discontinuity design to document a personal incumbency advantage in nominations to executive municipal boards. We then show that an incumbency premium is also present at the party level. Using an instrumental variable strategy that hinges on within-party close elections between incumbents and non-incumbents, we find that, ceteris paribus, having more re-elected incumbents increases a party's seat share in the executive

Abstract:

This study proposes a useful combination of a spatial interaction model and simulation approaches for the reliable estimation of retail interactions and store sales based on data of consumer shopping behavior. The real case study empirically demonstrates this approach by building an operational retail interaction model to estimate expenditure flows from households to retail stores in Mexico. The analysis compares the results from the proposed forecasting method to actual performance data. The forecasting accuracy is satisfactory even when little retail and consumer information is available
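
For context, one common family of spatial interaction models is the Huff type, in which the expenditure flow from household zone i to store j is proportional to store attractiveness divided by a power of distance. The abstract does not state the paper's exact specification, so the functional form and all parameter values below are assumptions:

```python
# Hypothetical Huff-type allocation of household expenditure to stores.
import numpy as np

attract = np.array([50.0, 120.0, 80.0])        # store floor space (assumed proxy)
dist = np.array([[1.0, 3.0, 2.0],               # km from each household zone
                 [2.5, 1.2, 4.0]])              # to each store
budget = np.array([1000.0, 1500.0])             # expenditure per zone
beta = 2.0                                      # assumed distance-decay exponent

util = attract / dist**beta                     # utility of store j for zone i
prob = util / util.sum(axis=1, keepdims=True)   # Huff choice probabilities
flows = prob * budget[:, None]                  # expenditure flow matrix (i -> j)
sales = flows.sum(axis=0)                       # estimated sales per store
```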

Abstract:

This paper examines the antecedents of Latin American citizens' attitudes toward globalization, taking into account the effect of the extent of globalization at the national level. The model incorporates the effect of nonresponse and accounts for the hierarchical nature of individual opinions on globalization. Multilevel mixed-effects model estimations using both macro- and micro-level data show that there is a high potential for nonresponse bias in globalization studies, specifically in countries where digital access is still limited. The degree of acceptance of globalization among Latin American citizens is heterogeneous, with important determinants at the country level being the degree of cultural globalization and its interaction with variables at the individual level. Young male individuals with high levels of education and income and with access to the internet and international TV channels are likely to favor globalization in Latin America

Resumen:

This article presents a hierarchical ranking of the entrepreneurial propensity of ten Latin American countries, based on Global Entrepreneurship Monitor data collected in 2006. The methodological approach uses multilevel analysis, or hierarchical regression, to generate a ranking of the countries considered, adjusted for the main determinants of entrepreneurial potential at both the individual and country levels. The results indicate that several Latin American nations are not developing their entrepreneurial potential to the fullest

Abstract:

Using a multi-level approach, this study introduces a ranking of Latin American countries according to their likelihood of entrepreneurial activity at the national level, based on data provided by the Global Entrepreneurship Monitor collected in 2006. The method adjusts traditional rankings by considering the effect of major determinants of entrepreneurial activity both at the national and individual levels. Results indicate there are several Latin American countries that are not fully developing their entrepreneurial potential

Abstract:

We examine several measures of uncertainty to make five points. First, equity market traders and executives at nonfinancial firms have shared similar assessments about one-year-ahead uncertainty since the pandemic struck. Both the one-year VIX and our survey-based measure of firm-level uncertainty at a one-year forecast horizon doubled at the onset of the pandemic and then fell about half-way back to pre-pandemic levels by mid-2021. Second, and in contrast, the 1-month VIX, a Twitter-based Economic Uncertainty Index, and macro forecaster disagreement all rose sharply in reaction to the pandemic but retrenched almost completely by mid-2021. Third, Categorical Policy Uncertainty Indexes highlight the changing sources of uncertainty: from healthcare and fiscal policy uncertainty in spring 2020 to elevated uncertainty around monetary policy and national security as of May 2022. Fourth, firm-level risk perceptions skewed heavily to the downside in spring 2020 but shifted rapidly to the upside from fall 2020 onwards. Perceived upside uncertainty remains highly elevated as of early 2022. Fifth, our survey evidence suggests that elevated uncertainty is exerting only mild restraint on capital investment plans for 2022 and 2023, perhaps because perceived risks are so skewed to the upside

Abstract:

We document a transmission channel from credit conditions to capital accumulation via investment wedges. Using a simple multi-industry model of production and investment, we measure these wedges at the 4-digit industry level from Mexican manufacturing and show that they account for most of the changes in aggregate capital over time. We also find a robust relation between the wedges and financial variables: credit and interest rates, also measured at the industry level

Abstract:

We study the effect of credit conditions on the allocation of inputs, and their implications for aggregate TFP growth. For this, we build a new dataset for Mexican manufacturing merging real and financial data at the 4-digit industrial sector level. Using a simple misallocation framework, we find that changes in inter-industry allocative efficiency account for 41 percent of changes in aggregate TFP. We then construct a model of firm behavior with working capital constraints and borrowing limits which generate sub-optimal use of inputs, and calibrate it to our data. We find that the model accounts for 38 percent of the observed variability in efficiency. An important conclusion is that heterogeneity in credit conditions across industries is key in accounting for efficiency gains. Despite overall credit stagnation, more access to credit and lower interest rates to distorted industries contributed substantially to the recovery from the 2009 recession, suggesting a plausible mechanism for credit-less recoveries

Abstract:

The last twenty years have witnessed periods of sustained appreciations of the real exchange rate in emerging economies. The case of Mexico between 1988 and 2002 is representative of several episodes in Latin America and Central and Eastern Europe in which countries opening to capital flows experienced large appreciations accompanied by a significant reallocation of workers towards the non-tradable sector. We account for these facts using a two sector dynamic general equilibrium model of a small open economy with frictions to labor reallocation and two driving forces: (i) A decline in the cost of borrowing in foreign markets, and (ii) differential productivity growth across sectors. These two mechanisms account together for 60% of the decline in the domestic relative price of tradables in Mexico and for a large fraction of the observed reallocation of labor across sectors. The decline in the interest rate faced by Mexico in international markets is quantitatively the most important channel. Our results are robust to the inclusion of terms of trade into the model

Abstract:

Total factor productivity (TFP) falls markedly during financial crises, as we document with recent evidence from Latin America and Asia. We study the ability of various versions of the small open economy neoclassical growth model to account for the behavior of inputs, output, and aggregate productivity during Mexico’s 1994-95 crisis. We find that capital utilization and labor hoarding can account for a large fraction of the fall in measured productivity. While capital utilization alone does little to improve the performance of the model during the crisis, introducing labor hoarding significantly reduces the gap between the evidence and the predicted fall in output and hours

Resumen:

The American man searches for his origins: he travels, contemplates, investigates. He watches History pass before his eyes, an intensely Mexican history, and nostalgically recalls the passage of two mammoths across the Bering Strait…

Abstract:

In search of his roots, the American man travels, contemplates, and scrutinizes. He observes history passing before his eyes and nostalgically remembers the crossing of two mammoths through the Bering Strait…

Abstract:

Sustainable Development Goal (SDG) 3 aims to "ensure healthy lives and promote well-being for all at all ages". While a substantial effort has been made to quantify progress towards SDG3, less research has focused on tracking spending towards this goal. We used spending estimates to measure progress in financing the priority areas of SDG3, examine the association between outcomes and financing, and identify where resource gains are most needed to achieve the SDG3 indicators for which data are available

Abstract:

In many developing countries, national legislative seats are considered less valuable than (subnational) executive positions. Even then, ambitious politicians may seek a legislative seat either (a) as a window of opportunity for jumping to an executive office; or (b) as a consolation prize when no better option is available. Using a regression discontinuity design adapted to a PR setting, we examine these possibilities in the Argentine Chamber of Deputies between 1983 and 2011. In line with the consolation prize story, we find that marginal candidates from the Peronist party, which controls most provincial governorships, are more likely to be renominated and serve an additional term in the legislature, but not necessarily to jump to an executive office. The effect is stronger in small provinces

Resumen:

This article presents a radial representation of political conflict to analyze the impossible game, O'Donnell's (1973) classic interpretation of regime instability in Argentina between 1955 and 1966. We evaluate this characterization using roll call votes from the Argentine Chamber of Deputies between 1958 and 1966. The results, obtained through multidimensional scaling (MDS) techniques, reflect a radial representation composed of two interconnected facets: (1) the ideological dimension (its elements being "liberal" versus "nationalist") and (2) attitudes toward Peronism. These findings indicate that, given the circular, and hence multidimensional, nature of political conflict in the 1955-1966 period, there was no possibility of reaching stable political agreements. In substantive terms, this implies that political instability was due not to the impossibility of finding a single solution to the impossible game, but rather to the existence of multiple possible solutions

Abstract:

In this article, we examine a two-dimensional, circular model of political conflict. We consider O’Donnell’s (1973) canonical interpretation of regime instability in Argentina between 1955 and 1966, the impossible game, and evaluate such characterization empirically through the analysis of roll call votes. Multidimensional Scaling (MDS) analysis supports a two-dimensional radex representation composed of two intercorrelated facets: (1) ideological outlook (its elements being «liberal» versus «nationalistic»), and (2) attitudes toward Peronism. The radex structure, resembling a dart board, derives from the combination of a one-dimensional simplex (with radial lines capturing stands on Peronism) and a one-dimensional circumplex (with concentric circles corresponding to ideological outlook). These findings indicate that, because of the circular —and thus multidimensional— nature of political conflict during the 1955-1966 period, stable political outcomes failed to exist

Abstract:

Argentine politics from 1955 to 1966 was characterized by the conflict between the Peronists and the anti-Peronists. While each camp could veto the other’s project, neither could advance their own agenda. In his canonical interpretation, O’Donnell (Modernization and Bureaucratic-Authoritarianism. Berkeley, CA: Institute of International Studies, University of California, 1973) concluded that party democracy during this era was tantamount to an 'impossible game'. While we recognize the significance of O'Donnell’s analysis, we believe that it presents a number of problems. To address its main shortcomings we consider a spatial model that emphasizes the importance of voters’ judgments about the characteristics of party leaders. We recover the positions of Argentine parties using a mixed logit stochastic model and an original dataset of recorded votes in the Argentine Chamber of Deputies during this era. Our results suggest that the electoral logic forced the Peronist party to adopt relatively radical positions away from the center in order to maximize its support. In turn, non-Peronist parties had little incentive to seek the support of moderate, and thus 'unrepresented', Peronist voters by locating themselves at the electoral mean. In particular, valence differences associated with Peronism prevented larger parties from converging toward the center. We thus conjecture that the rules of the impossible game were a constraint imposed by the populace on Argentine political elites rather than a choice made by the latter behind the people's back

Abstract:

How does political ambition affect strategies of cooperation in Congress? What activities do legislators develop with their peers to maximize their career goals? One of the main collaborative activities in a legislature, cosponsorship, has been widely analyzed in the literature as a position-taking device. However, most findings have been restricted to environments where ambition is static (i.e., legislators pursuing permanent reelection), which restricts de facto the variety of causes and implications of legislative cooperation. I analyze patterns of cosponsorship as a function of ambition in a multilevel setting where legislators value subnational executive positions more highly than a seat in the House. Through the analysis of about 48,000 bills introduced in the Argentine Congress between 1983 and 2007 and the development of a map of political careers, I follow a social networks approach to uncover different patterns of legislative cooperation. Findings show that patterns of cooperation among prospective gubernatorial candidates are strongly positive, while similar effects are not observed at the municipal level

Abstract:

How do legislators behave in systems where pursuit of re-election is not the rule, and ambition is channelled through multiple levels of government? Is their legislative behaviour biased towards their immediate career goals? In this paper, the Argentine case is analysed in order to explore the link between political ambition and legislative performance in a multilevel setting where politicians have subnational executive positions as priorities, rather than stable legislative careerism. The piece demonstrates that legislators seeking mayoral positions tend to submit more district-level legislation than their peers. This finding contributes to the knowledge of strategic behaviour in multilevel settings, and provides non-US-based evidence regarding the use of non-roll call position-taking devices

Abstract:

Studies analyzing the American Congress demonstrate that senators’ attention towards voters substantially increased after the 17th Amendment to the U.S. Constitution, which replaced their indirect appointment with direct election. Even though this finding seems useful for theoretical generalizations, expectations become unclear as concerns about career perspectives differ. Should politicians with nonstatic ambition shift their attention towards voters if they do not expect reelection? Making use of a quasi-experimental setting, I analyze the impact of the shift from indirect to direct election to select the members of the Argentine Senate. I develop an argument for why, in spite of the lack of systematic pursuit of reelection, elected senators have incentives to be more oriented towards voters. Through the analysis of about 55,000 bills, I evaluate senatorial behavior under both sources of legitimacy. The findings support the idea that audience costs make a difference in behavior, regardless of short-term career expectations

Abstract:

Traditional support policies for green energy have greatly contributed to the rise in prosumer numbers. However, it is believed that they will soon start to exert a negative impact on stakeholders and on the grid. Policy makers advise phasing out two of the most widely applied policies, net metering and feed-in tariffs, in favor of support policies that scale better with rising renewable generation. This work quantifies the impact of these traditional policies in future "what-if" scenarios and confirms the need for their replacement. Based on simulations with real data, we compare net metering and feed-in tariffs to four state-of-the-art market-based mechanisms, which involve auction, negotiation and a bitcoin-like currency. The paper examines the extent to which each of these mechanisms motivates not only green energy production but also its consumption. The properties and characteristics of the above mechanisms are evaluated from the perspective of key stakeholders in the low-voltage grid: prosumers, consumers and energy providers. The outcome of this study sheds light on current and future issues that are relevant for policy makers in the evolving landscape of the smart grid

Abstract:

Proportional-Integral (PI) control is a commonly used control mechanism for regulation of power flows in distribution-level power systems. Such controls, however, come at the cost of guaranteeing only local stability, meaning that as the system load changes the controller gains need to be tuned as well in order to maintain stability of the primary control loop, thereby necessitating a large schedule of controller switching to cover the load space. This paper resolves this problem by designing an alternate PI controller that can stabilize a radial distribution network for any choice of load. The fundamental property behind this stabilization is passivity. It is shown that by choosing the right set of passive input-output pair for the models of the power converters it is possible to globally regulate the power flowing between the distribution feeder and the loads by a fixed set of control gains that are independent of the magnitude of the load as long as the power flow solution exists. Results are validated using a networked microgrid model with three power converters. Comparison is drawn between fully decentralized, sparsely distributed, and all-to-all connected communication topologies for implementing the controller
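
To make the control structure concrete, here is a generic discrete-time PI loop on a first-order plant; this is a textbook sketch for orientation, not the paper's passivity-based design, and all gains and plant constants are assumed:

```python
# Generic discrete-time PI loop on a first-order plant (illustrative only).
import numpy as np

dt, kp, ki = 0.01, 2.0, 5.0      # sample time and assumed PI gains
a, b = -1.0, 1.0                 # plant dynamics: x' = a*x + b*u
ref = 1.0                        # setpoint (e.g., desired power flow)

x, integ = 0.0, 0.0
for _ in range(1000):
    err = ref - x
    integ += err * dt            # integral state
    u = kp * err + ki * integ    # PI control law
    x += (a * x + b * u) * dt    # forward-Euler plant update

print(f"steady-state output ≈ {x:.3f}")  # should settle near ref
```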

Abstract:

Global research on medical and health-related issues has experienced a profound reconfiguration over the last 30 years. The rise of new areas of inquiry has transformed the medical research landscape as staff with medical training gradually relinquished their prominence and specialists from other disciplines raised their profile within research teams. Given this, research priorities seem to be shifting increasingly towards laboratory-based and innovation-oriented research lines. The unfolding of these shifts in nonhegemonic countries such as Mexico is still to be understood. This paper surveys structural changes in Mexican medical research from 1993 to 2021 by observing temporal aggregation of authorships, emerging thematic features, and institutional affiliation patterns. It also explores correlations between these findings and their possible explanations. The results allow us to empirically describe significant changes in medical research done in Mexico. We detected periods of stability in authorship allowing us to describe stages in the accumulation of research and development (R&D) capabilities. The identified semantic patterns allowed us to characterize this transformation, observing subsequent stages of an accumulation and specialization process that began in the mid-1990s. Moreover, we found divergent thematic and institutional patterns that point towards a growing gap between research conducted in health institutions and scientific ones

Resumen:

Mexican companies have begun their journey toward sustainability, and the airline sector is no exception. Airlines are implementing strategies to reduce their impact on climate change and thereby meet the objectives established in the Paris Agreement: reaching the goal of zero CO2 emissions by 2050. In this regard, one of the aviation companies with clear goals for offering more sustainable flights is Volaris, which has a corporate sustainability program made up of different strategies grouped under ESG (Environmental, Social, and Governance) criteria

Abstract:

The findings of prior research suggest that several factors, including consumer characteristics, parent brand characteristics, and extension characteristics, determine consumers' acceptance of brand extensions. Our research posits that an extension's associations, especially the one that transforms a product into a brand extension, are preeminent in determining consumer response to brand extensions. We present findings of five studies which show that the accessibility of the extension's association with its parent brand is paramount in determining consumer response to brand extensions contingent upon fit, a moderating effect that persists even in the presence of additional highly diagnostic non-parental information. Implications of our findings are presented for brand extension researchers and managers

Abstract:

It is well established that consumer acceptance of a brand extension depends on how strongly it fits with its parental origins. Less appreciated is how this acceptance also depends on the mental association created in consumers’ minds between the extension and its parent brand. Our investigation considers the gateway role played by this association’s mental accessibility in allowing extensions to fully benefit from their parental heritage. Six studies examine the effect of reinstating an extension’s association with its parent brand on extension evaluations. When reinstatement enhances the parental association’s accessibility, it strengthens the parent brand’s influence, leading to more or less favorable extension evaluations contingent upon the extension’s fit with its parental origins. These reinstatement effects carry important implications for brand-extension managers and researchers

Abstract:

Prior research suggests that partially comparative pricing, in which a retailer provides price comparisons for some, but not all, of its products, is a double-edged sword. On the one hand, such pricing improves beliefs about the retailer's competitive price advantage on comparatively priced products, those whose prices are compared with a competitor's. On the other hand, it has been shown to damage perceptions of the retailer's noncomparatively priced products relative to those charged by the competition. However, this latter outcome is based on evidence examining the influence of partially comparative pricing across different product categories. The authors propose and demonstrate in five studies that price comparisons may actually improve relative price beliefs about the noncomparatively priced brands within the same product category. They further show this improvement to be attenuated as the number of price comparisons increases or when the price comparison is attached to a brand perceived as less typical of the product category. The authors conclude by drawing managerial implications and offer suggestions for further research

Abstract:

The cash management of an automated teller machine (ATM) often combines a periodic review inventory policy with emergency orders, the latter following a continuous review inventory policy. We present a simulation-based decision support system (DSS) that considers both regular (periodic) and emergency (continuous review) orders. This DSS was developed to assist an ATM manager in selecting the appropriate (regular) order and period sizes as well as the (emergency) reorder point under a given service level. The DSS was developed using the software Arena and integrates a Visual Basic for Applications (VBA) front-end that allows the user to incorporate fixed and variable ordering costs as well as updates to demand and arrival rates
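
As a rough illustration of the combined policy (the DSS itself is built in Arena with a VBA front-end; the demand model and all parameter values below are assumptions), a periodic-review loop with an emergency reorder point can be sketched as:

```python
# Toy simulation of ATM cash: regular order every T days up to level S,
# plus an emergency order whenever inventory falls below reorder point r.
import numpy as np

rng = np.random.default_rng(1)
T, S, r = 7, 500, 100            # review period, order-up-to level, reorder point
inv, emergencies, stockouts = S, 0, 0

for day in range(365):
    demand = rng.poisson(60)     # assumed daily cash demand
    inv -= demand
    if inv < 0:
        stockouts += 1
        inv = 0
    if inv < r:                  # continuous review: emergency replenishment
        inv = S
        emergencies += 1
    if day % T == T - 1:         # periodic review: regular replenishment
        inv = S

print(emergencies, stockouts)
```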

Abstract:

Let $f : \mathbb{R}^3 \to \mathbb{R}^3$ be a diffeomorphism with distinct hyperbolic fixed points $p_0, p_1 \in \mathbb{R}^3$. Assume that $W^u(p_0)$ and $W^s(p_1)$ are two-dimensional manifolds which intersect transversally at a point $q$. Then the intersection is locally a one-dimensional smooth arc $\Upsilon$ through $q$, and points on $\Upsilon$ are orbits heteroclinic from $p_0$ to $p_1$. We describe and implement a numerical scheme for computing the jets of $\Upsilon$ to arbitrary order. We begin by computing high order polynomial approximations of functions $P^u, P^s : \mathbb{R}^2 \to \mathbb{R}^3$ and domain disks $D^u, D^s \subset \mathbb{R}^2$ such that $W^u_{\mathrm{loc}}(p_0) = P^u(D^u)$ and $W^s_{\mathrm{loc}}(p_1) = P^s(D^s)$ with $W^u_{\mathrm{loc}}(p_0) \cap W^s_{\mathrm{loc}}(p_1) \neq \emptyset$. Then the intersection arc $\Upsilon$ solves a functional equation involving $P^s$ and $P^u$. We develop an iterative numerical scheme for solving the functional equation, resulting in a high order Taylor expansion of the arc $\Upsilon$. We present numerical example computations for the volume preserving Hénon family and compute some global invariant branched manifolds
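
In this notation, one plausible form of the functional equation (stated here as a reading of the abstract, not a quotation from the paper) seeks curves $u(s) \in D^u$ and $v(s) \in D^s$ with

\[
P^u(u(s)) = P^s(v(s)),
\]

so that $\Upsilon(s) = P^u(u(s))$; expanding $u$, $v$, $P^u$ and $P^s$ as Taylor series and matching coefficients order by order then yields the jets of $\Upsilon$.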

Abstract:

In the private values single object auction model, we construct a satisfactory mechanism, that is, a dominant strategy incentive compatible and budget-balanced mechanism satisfying equal treatment of equals. Our mechanism allocates the object with positive probability only to those agents who have the highest value and satisfies ex-post individual rationality. This probability is at least (1 - 2/n), where n is the number of agents. Hence, our mechanism converges to efficiency at a linear rate as the number of agents grows. Our mechanism has a simple interpretation: a fixed allocation probability is allocated using a second-price Vickrey auction whose revenue is redistributed among all the agents in a simple way. We show that our mechanism maximizes utilitarian welfare among all satisfactory mechanisms that allocate the object only to the highest-valued agents

Abstract:

In this work, we present a behavioral modeling framework and a coverage behavior that accounts for a battery constraint. This framework allows a user to model robot teams performing common robotic tasks such as exploration. It uses roadmap-based methods that identify the available paths in potentially complex environments. We present a coverage strategy that accounts for the available battery. It allows the agent to calculate a path through an environment that maximizes coverage while still allowing the agent to get back to a charging location. This eliminates the need to decide when to return to a charging location based on a threshold, as related methods do. It considers the actual path length, as opposed to the Euclidean distance that is generally used for estimating the energy spent in traversing a path. Different path scoring functions are used to score the generated paths

Abstract:

Transmitting digital audio signals in real time over packet-switched networks (e.g. the Internet) has created the need for signal processing algorithms that objectively evaluate audio quality. So far, the best way to assess audio quality is through subjective listening tests, the most commonly used being the mean opinion score (MOS) recommended by the International Telecommunication Union (ITU). The goal of this paper is to show how artificial neural networks (ANNs) can be used to mimic the way human subjects estimate the quality of audio signals distorted by changes in several parameters that affect the transmitted audio quality. To validate the approach, we carried out an MOS experiment for speech signals distorted by different values of IP-network parameters (e.g. loss rate, loss distribution, packetization interval) and by changes in the encoding algorithm used to compress the original signal. Our results show that ANNs can capture the nonlinear mapping between certain characteristics of audio signals and a subjective five-point quality scale "built" by a group of human subjects participating in an MOS experiment, creating, in this way, an "inter-subjective" neural network (INN) model that might effectively evaluate, in real time, the audio quality in packet-switched networks
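
As an illustrative stand-in (the paper's network architecture and features are not specified here; the synthetic data, feature choices, and parameters below are assumptions), a regressor mapping network conditions to MOS might be sketched as:

```python
# Hypothetical MOS predictor from IP-network parameters (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# features: packet loss rate, mean burst length, packetization interval (ms)
X = rng.uniform([0.0, 1.0, 10.0], [0.2, 5.0, 60.0], size=(500, 3))
# fake "subjective" scores: quality drops with loss and burstiness
mos = np.clip(4.5 - 12 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.2, 500), 1, 5)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0).fit(X, mos)
print(model.predict([[0.05, 2.0, 30.0]]))  # predicted MOS for one condition
```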

Resumen:

Drawing on passages from ancient texts, this article reviews the factors that gave rise to democracy in fifth-century Greece; it shows that tyranny was not always characterized as despotic or cruel government, and that tyrannies were even a factor in the emergence of democracy; finally, it comments on some passages of the so-called "Anonymous Iamblichi", a little-studied text in which a fifth-century BC sophist presents politics, economics, and education as closely linked factors in the success or failure of political regimes

Abstract:

Referring to passages from ancient texts, the article reviews the factors that gave rise to democracy in fifth-century Greece. It shows how tyranny was not always characterized as a despotic or cruel government, but that even tyrannies were a factor for the emergence of democracy. Finally, it comments on some passages of the "Anonymous Iamblichi", an underexamined text, in which a sophist of the 5th century BC mentions politics, economics and education as closely-linked factors in the success or failure of political regimes

Resumen:

An appeal to philosophers concerning the work they do and how they do it, particularly the language they use. An invitation for them to keep truly questioning themselves, and us, like the proverbial gadflies

Abstract:

This article questions the philosopher's work and what it entails, particularly the use of language. This is an invitation to all those who not only continue to question themselves but also the people around them just as the proverbial gadfly (Socrates) in the Apology

Abstract:

This article provides a positive reassessment of conceptual thinking for comparative constitutional law. While the realist claim has known a remarkable success story, both in comparative constitutional law and elsewhere, there is a growing unease that the realist approach might fail to capture essential aspects of legal phenomena. Empirical research seems hardly possible without conceptual efforts, and if the line between the empirical and the normative is blurred, concepts can serve as tools to provide an alternative description of reality without being inherently affirmative. The place of the conceptual affects the form through which legal knowledge is presented: for comparative constitutional law, this means that a dialogue ought to take place not only on the content, but also on the form through which comparative efforts are undertaken. The article ends with a short reflection on current scholarship in comparative constitutional law, its forms, and the methodological challenges that lie ahead

Abstract:

Over the years, connection or access to the Internet has shown a positive impact on users in their everyday activities, such as entertainment, online education, online business, and productivity increments in their communities. Unfortunately, rural communities, which are usually far away from cities, cannot enjoy these benefits due to inefficient or nonexistent Internet access. We propose an algorithm to select which communities to connect so as to maximise the number of people connected to the Internet while minimising the length of the network, or while maximising the number of connected communities, or while maximising the number of linked people per kilometre of fibre. The algorithm estimates the shortest driving distances and the minimum spanning tree. Then, the algorithm creates a subset of linked communities and selects the next one to connect based on one of the three criteria described above. To test the algorithm, we used data from a set of rural communities in Mexico. The results showed that the minimum length of the network to connect the 597 rural communities (with 454,514 people) in our test case was 949.09 km. Moreover, there was a difference of 204.1 km in the network length needed to connect 90% of the total population depending on the criterion selected to connect the communities. If the decision-maker wants to connect 90% of the population, the maximum number of connected communities was 507, obtained using the PC criterion
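
For illustration (the paper's distance data and selection logic are richer; the toy matrix and population figures below are assumptions), the minimum-spanning-tree step can be sketched with SciPy as:

```python
# Toy MST over pairwise driving distances between communities (km).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

dist = np.array([[0, 12, 30,  0],
                 [12, 0,  9, 25],
                 [30, 9,  0, 14],
                 [0, 25, 14,  0]], dtype=float)  # 0 = no road surveyed

mst = minimum_spanning_tree(dist)       # sparse matrix of chosen links
people = np.array([900, 4000, 1500, 700])
total_km = mst.sum()
print(f"network length = {total_km:.1f} km, "
      f"people per km = {people.sum() / total_km:.1f}")
```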

Abstract:

Governments fund Public Higher Education Institutions (P-HEIs) using the taxes of the citizens of a country. This public funding is essential to a country because of the benefits for people who obtain an academic degree and for the development of the economy. Nevertheless, governments need to assess whether P-HEIs use public funds effectively and efficiently. Unfortunately, evaluating them is not an easy task, since many variables are involved; thus, this work proposes an evaluation method based on a non-parametric linear programming model, Data Envelopment Analysis (DEA), to evaluate the relative efficiency of every P-HEI in different periods of time and then compute its Malmquist index (MI) to identify efficiency changes over those periods. To test the proposed evaluation method, we collected data from 36 Mexican public universities for the years 2016 and 2019. The DEA model's only input variable is the amount of money from public funds allocated to each P-HEI, and its eleven output variables measure the institutions' ability to teach, research, and disseminate knowledge. The results show that 36% of the public universities improved from 2016 to 2019
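
For reference (the standard definition, not quoted from the paper), the Malmquist index between periods $t$ and $t+1$ is commonly written in terms of distance functions $D^t$, each evaluated by a DEA linear program:

\[
MI = \left[ \frac{D^{t}\bigl(x^{t+1}, y^{t+1}\bigr)}{D^{t}\bigl(x^{t}, y^{t}\bigr)} \cdot \frac{D^{t+1}\bigl(x^{t+1}, y^{t+1}\bigr)}{D^{t+1}\bigl(x^{t}, y^{t}\bigr)} \right]^{1/2},
\]

with $MI > 1$ indicating productivity growth between the two periods.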

Abstract:

The purpose of this work is to analyse the public database of Mexico City's bike sharing system, known as Ecobici, to compute the average travel time, the number of inbound and outbound trips at every station, the number of trips to and from each station, and the number of incoming and outgoing bikes per ten-minute interval per day. These pieces of information provide managers with insights about the times at which they need to rebalance the system, the traffic among all the stations, and the bikes' deficit or surplus per station per ten-minute interval. Based on about 4.6 million trips from July 2014 to January 2015 (downloaded from the data laboratory of Mexico City), an R script is programmed to manage this amount of information and compute the mean travel time, the traffic among stations, and the number of inbound and outbound bikes. The R script is publicly available so that the reader can carry out the same experiments with all 444 stations; in this paper the results for a subset of stations are presented. To our knowledge, the system has not previously been studied to obtain this information: existing studies focus on the kinds of users and their needs rather than on the system's behaviour
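
Although the authors work in R, a minimal pandas equivalent of part of the computation (the file name and column names below are hypothetical) might read:

```python
# Hypothetical sketch: inbound/outbound trips per station per 10-minute bin.
import pandas as pd

trips = pd.read_csv("ecobici_trips.csv",          # assumed file and columns
                    parse_dates=["start_time", "end_time"])
trips["duration_min"] = (trips["end_time"] - trips["start_time"]).dt.total_seconds() / 60
print(trips["duration_min"].mean())               # mean travel time

out_flow = (trips.groupby(["start_station",
                           trips["start_time"].dt.floor("10min")])
                 .size().rename("outbound"))
in_flow = (trips.groupby(["end_station",
                          trips["end_time"].dt.floor("10min")])
                .size().rename("inbound"))
```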

Abstract:

Governmental policies, such as educational ones, require tax money to implement. Therefore, a government must spend its resources in such a way as to maximize the benefits. However, evaluating Public Higher Education Institutions (P-HEIs) is a very complex task, since many assessable factors are involved. In this study, we focus on the performance of P-HEIs in three activities: teaching, research, and knowledge dissemination. To this end, we develop a Data Envelopment Analysis model to evaluate the efficiency of each activity separately. Using an official database called ExECUM, we compute the efficiency of 40 Mexican P-HEIs from 2008 to 2016. The funds allocated by the government are an input in our study. According to our results, most P-HEIs are efficient in only one activity, while few are efficient in two activities. Only one P-HEI reached 100% efficiency in all three models. On the other hand, 37.5% of the P-HEIs do not reach 100% efficiency in any model. Our study provides the set of reference peers for each P-HEI and the increments/decrements in the inputs and outputs needed to increase its efficiency

Resumen:

The objective of this work is to develop a computational application implementing the simplex algorithm using the two-phase method and LU (Lower-Upper) decomposition. Given the results companies have obtained by applying linear programming models, the simplex algorithm is a mandatory topic in undergraduate and graduate programs in engineering and business. However, the literature used to teach it presents the tableau method, which is the least efficient for computational implementation. To address this problem, we programmed a computational application in Visual Basic structured in three modules: Declarations, Inputs/Outputs, and Procedure. To solve a problem, the user enters it in standard form and performs the iterations by clicking the indicated button in the application. When our application is used in the classroom, students obtain better grades in the evaluation of this part of the course. We can conclude that students understand and can carry out the algorithm, rather than merely obtaining the solution as with commercial software

Abstract:

The objective of this work is to develop a computational application implementing the simplex algorithm using the two-phase method and Lower-Upper (LU) decomposition. Since companies have applied linear programming models with excellent results, the simplex algorithm is a mandatory topic in undergraduate and graduate programs in engineering and business. However, the available literature teaches it through the tableau method, which is the least efficient for computational implementation. To solve this problem, we programmed a computational application in Visual Basic with three modules: Declarations, Inputs/Outputs, and Procedure. To solve a problem, the user enters it in standard form and performs the iterations by clicking the indicated button in the application. By using our application in the classroom, students get better grades in the evaluation of this part of the course. We can conclude that students understand and can carry out the algorithm, rather than only obtaining the solution as with commercial software

Resumen:

The Intelligent Water Drop algorithm is inspired by the movement of water drops in a river. A water drop can find an optimal route from a lake to the sea by interacting with its environment. In the process of reaching that destination, the water drops interact with the river bed as they move through it. Similarly, the supply chain problem can be modeled as a flow of supply, manufacturing, and delivery stages required to produce a finished item and then deliver it to the end user. The problem is to select the option that will perform each stage; in a supply stage, for example, many suppliers could provide the component. Since each stage has an associated cost and time, a multi-objective algorithm is used to minimize delivery time and production cost simultaneously. Based on this analogy, this work proposes an approach to the supply chain problem using a multi-objective extension of the water drops algorithm. Artificial water drops flowing through the supply chain simultaneously minimize the production cost and delivery time of each product using the concept of Pareto optimality. A computer supply chain widely used in the literature is solved. Likewise, some performance metrics are computed, and the Pareto set calculated by the proposed algorithm is compared with the one obtained by exhaustive enumeration

Abstract:

The Intelligent Water Drop (IWD) algorithm is inspired by the movement of real water drops in a river. A water drop can find an optimum path to a lake or sea by interacting with the conditions of its surroundings. In the process of reaching such a destination, the water drops interact with the river bed while they move through it. Similarly, the supply chain problem can be modelled as a flow of supply, manufacturing, and delivery stages that must be completed to produce a finished product and then deliver it to the end user. The problem is to select one option that carries out each stage; for a supply stage, for example, many suppliers could supply the component represented by it. As each stage is characterised by its time and cost, a multi-objective optimisation algorithm is used to minimise the time to market and production cost simultaneously. Based on this analogy, this paper proposes an approach to the supply chain problem using a multi-objective extension of the intelligent water drops algorithm. Artificial water drops, flowing through the supply chain, simultaneously minimise the production cost and the time to market of every product in a generic BOM by using the concept of Pareto optimality. A notebook supply chain widely used in the literature is solved. We provide some performance metrics of the solution and compare the Pareto set computed by the proposed algorithm with the one returned by exhaustive enumeration
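
Since this and several of the following abstracts rely on Pareto optimality over (cost, time) pairs, a generic non-dominated filter (illustrative only, not the authors' implementation) looks like:

```python
# Generic Pareto filter: keep (cost, time) configurations not dominated
# by any other (lower or equal in both objectives, not identical).
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

configs = [(100, 40), (90, 55), (120, 35), (95, 50), (90, 60)]
print(pareto_front(configs))  # -> [(100, 40), (90, 55), (120, 35), (95, 50)]
```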

Resumen:

Linear programming models have been applied to many practical problems, such as finance and production, with positive results reported in the literature. When these problems are solved in practice, commercial software is used that implements the simplex method, specifically the two-phase method. Because of its applications and results, linear programming is a topic in many undergraduate programs. However, the literature used to teach the simplex method does so through tableaus, which is not the most computationally efficient approach. This literature also identifies the revised simplex method, implemented together with LU decomposition, as the most computationally efficient way to solve linear programming models. Since the efficient computer implementation of the simplex method is an important topic for an engineering student, this research describes a support system that teaches students how to implement the revised simplex method on a computer. The support system is built on an Excel interface, and the code implementing the revised simplex method is written in Visual Basic for Applications, while the LU decomposition routine is used to solve the systems of linear equations. This support system is distributed freely through the web page of the author of this article

Abstract:

Linear programming models have been applied to many practical problems, such as finance and production, with positive results reported in the literature. When a linear programming model is used to solve practical problems, commercial software that implements the two-phase method is employed. Linear programming is a topic taught in undergraduate programs because of its applications and results. However, the literature used to teach this topic presents the simplex method in tableau form, which is not the most efficient way to implement it. The literature identifies the revised simplex method, implemented in conjunction with LU decomposition, as the most computationally efficient way to solve linear programming models. Because the computer-based implementation of the simplex method is an important topic for an engineering student, this research describes a support system that allows students to learn how to implement the revised simplex method on a computer. The support system is developed on an Excel interface, and the code for the revised simplex method is written in Visual Basic for Applications, while the LU decomposition routine is used to solve the systems of linear equations. This support system is distributed freely through the website of the author of this article
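
To give a flavor of what the support system teaches (a compact Python sketch rather than the VBA implementation described above; the example LP is made up), a revised simplex iteration with LU solves can be written as:

```python
# Revised simplex (minimization, standard form) with LU solves per iteration.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def revised_simplex(A, b, c, basis):
    m, n = A.shape
    while True:
        lu = lu_factor(A[:, basis])              # factor the basis matrix B
        x_B = lu_solve(lu, b)                    # B x_B = b
        y = lu_solve(lu, c[basis], trans=1)      # B^T y = c_B (dual prices)
        reduced = c - A.T @ y                    # reduced costs
        nonbasic = [j for j in range(n) if j not in basis]
        entering = min(nonbasic, key=lambda j: reduced[j])
        if reduced[entering] >= -1e-10:          # no improving column: optimal
            x = np.zeros(n); x[basis] = x_B
            return x
        d = lu_solve(lu, A[:, entering])         # B d = a_entering
        ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > 1e-10]
        if not ratios:
            raise ValueError("unbounded")
        _, leave = min(ratios)                   # ratio test
        basis[leave] = entering

# min -3x1 - 5x2  s.t.  x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18 (slacks added)
A = np.array([[1., 0, 1, 0, 0], [0, 2, 0, 1, 0], [3, 2, 0, 0, 1]])
b = np.array([4., 12, 18])
c = np.array([-3., -5, 0, 0, 0])
print(revised_simplex(A, b, c, basis=[2, 3, 4]))  # expect x1=2, x2=6
```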

Abstract:

The Intelligent Water Drop (IWD) algorithm is inspired by the movement of natural water drops (WD) in a river. A stream can find an optimum path, given the conditions of its surroundings, to reach its ultimate goal, which is often a sea. In the process of reaching such a destination, the WD and the environment interact with each other as the WD moves through the river bed. Similarly, the supply chain problem can be modelled as a flow of stages that must be completed and optimised to obtain a finished product that is delivered to the end user. Every stage may have one or more options for being satisfied, such as supplier, manufacturing or delivery options, and each option is characterised by its time and cost. Within this context, multi-objective optimisation approaches are particularly well suited to provide optimal solutions. This problem has been classified as NP-hard; thus, this paper proposes an approach aiming to solve the logistics network problem using a modified multi-objective extension of the IWD which returns a Pareto set. Artificial WD, flowing through the supply chain, simultaneously minimise the cost of goods sold and the lead time of every product involved by using the concept of Pareto optimality. The proposed approach has been tested on instances widely used in the literature, yielding promising results which are supported by performance measurements taken in comparison with the ant colony meta-heuristic as well as the true fronts obtained by exhaustive enumeration. The Pareto set returned by the IWD is computed in 4 s, and the generational distance, spacing, and hyper-area metrics are very close to those computed by exhaustive enumeration. Therefore, our main contribution is the design of a new algorithm that improves on the algorithm proposed by Moncayo-Martínez and Zhang (2011).

Abstract:

In this paper, we provide a tutorial on solving the problem of minimising safety stock levels under guaranteed service times over a supply chain (SC) using the dynamic programming (DP) algorithm proposed by Graves and Willems (2000a). We solve a small instance to exemplify the steps of the DP algorithm, and then solve two bigger instances with a Java-based application. As the DP algorithm has some insights that must be explained in detail to carry it out, the novelty and helpfulness of this tutorial lie in the fact that we spell out those insights, which can lead researchers to develop new algorithms to solve bigger instances. This problem is included in some state-of-the-art SC books, but none gives details about the solution algorithm as we do in this tutorial
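
For orientation (the standard guaranteed-service formulation in generic notation, not quoted from the tutorial), stage $j$ with processing time $T_j$, inbound service time $SI_j$ and quoted outbound service time $S_j$ carries safety stock proportional to the square root of its net replenishment time, and the DP minimizes total holding cost:

\[
\min \sum_j h_j\, z\, \sigma_j \sqrt{SI_j + T_j - S_j} \quad \text{s.t.} \quad SI_j + T_j - S_j \ge 0,
\]

with the service times linked along the network, since a downstream stage's inbound service time must cover its suppliers' outbound service times.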

Abstract:

This work proposes a new approach, based on Ant Colony Optimisation (ACO), to configure Supply Chains (SC) so as to deliver orders on their due dates at the minimum cost. For a set of orders, this approach determines which supplier to acquire components from and which manufacturer will produce the products, as well as which transportation mode must be used to deliver products to customers. These decisions are addressed by three modules. The data module stores all data relating to the SC and models it. The optimisation engine is a multi-agent framework called SC Configuration by ACO; this module implements the ant colony algorithm and generates alternative SC configurations. Each Ant-k agent configures a single SC by travelling through the network created by the first agent; while visiting a stage, it selects an option to perform that stage based on the amount of pheromone and the option's cost and lead time. We solve a notebook SC presented in the literature. Our approach computes Pareto sets with SC designs that deliver products in 38 to 91 days

Abstract:

The proposed work addresses the problem of placing safety stock under the guaranteed-service model when a set of supplying, manufacturing and delivery stages models the production system. Every stage has a set of options that can perform the stage, and every option has an associated cost and time. Hence, the problem is to select an option per stage that minimises the safety stock and lead time at the same time. We propose solving the problem using two swarm-intelligence meta-heuristics, Ant Colony and Intelligent Water Drop, because of their results in solving NP-hard problems such as the safety stock problem. In our proposed algorithm, swarms are created and each one selects an option per stage with its safety stock and lead time. After that, the Pareto optimality criterion is applied to all the configurations to compute a Pareto front. A real-life logistics network from the automotive industry is solved using our proposed algorithm. Finally, we provide some multi-objective performance metrics to assess the performance of our approach and carry out a statistical analysis to support our conclusions

Abstract:

In this chapter, we address the problem of placing safety and in-transit inventory over multi-stage manufacturing supply chains (SC) in which one or more products are manufactured subject to stochastic demand. The first part of the problem is to configure the SC, given that manufacturers have one or more options to perform every supplying, assembly, and delivery stage. Then, a certain amount of inventory should be placed at each stage to ensure products are delivered to customers within the stages' service times. We tested a new nature-inspired swarm-based meta-heuristic called Intelligent Water Drop (IWD), which imitates some of the processes that happen in nature between the water drops of a river and the soil of the river bed. The proposed approach is based on the creation of artificial water drops, which adapt to their environment to find the optimum path from a river or lake to the sea. This idea is embedded into our proposed algorithm to find the cheapest cost of supplying components, assembling, and delivering products subject to the stages' service times. We tested our approach using four instances widely used as a test bed in the literature. We compared the results with those computed by the Ant Colony meta-heuristic and provide some metrics as well as graphical results of the outputs

Abstract:

This paper provides an analysis of mobility data from the bike sharing system in Mexico City from 2010 to 2015. Based on about 18 million trips, it is possible to compute the mean trip time and the number of trips from and to each station. The 444 stations were classified according to the average number of incoming and outgoing bikes using a Pareto chart. The patterns of mobility between the stations are shown graphically, and some clusters are identified based on the number of arrivals and departures. Finally, the number of incoming and outgoing bikes at selected stations is plotted in ten-minute intervals for weekdays and weekends, so the flow of bikes per station, per time interval, and per day becomes known. These patterns are applied to predict the behavior of a particular station and to define policies to improve the bike sharing program. An R script, available to the public, was programmed to manage the amount of data and to carry out the analysis
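
The paper's analysis relies on a public R script; the sketch below reproduces only the per-station, ten-minute-interval counting step in Python, under assumed column names (station_out, start_time) for a hypothetical trip log.

```python
import pandas as pd

# hypothetical export of the trip log; the real analysis uses the system's data
trips = pd.read_csv("ecobici_trips.csv", parse_dates=["start_time"])

# departures per station, to rank stations for the Pareto chart
departures = trips.groupby("station_out").size().sort_values(ascending=False)

# departures per ten-minute slot, split into weekdays and weekends
trips["slot"] = trips["start_time"].dt.floor("10min").dt.time
trips["weekend"] = trips["start_time"].dt.dayofweek >= 5
flow = trips.groupby(["station_out", "weekend", "slot"]).size()
print(departures.head(), flow.head())
```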

Abstract:

In this paper, we introduce the use of an Ant System called Rank-based Ant System (ASRB) to solve the problem of configuring the Supply Chain (SC) when the Production Cost (PC) and the Lead Time (LT) are minimised simultaneously. The SC is modelled as a graph in which nodes represent supply, manufacturing, and delivery stages. Each stage can be performed by two or more options; thus, the problem is to select the options that minimise the PC and LT at the same time, given that a reduction in time increases the cost and vice versa. We developed an algorithm to solve the SC configuration problem based on the ASRB, in which ants of different colonies travel through the graph to configure the SC. Our algorithm is tested on a standard problem reported in the literature and we provide some metrics of its performance
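
For reference, a sketch of the rank-based pheromone deposit that distinguishes this Ant System variant: only the best-ranked ants and the best-so-far ant deposit, weighted by rank. The weight w, evaporation rate rho and constant Q are illustrative assumptions.

```python
from collections import namedtuple

Ant = namedtuple("Ant", ["edges", "cost"])

def asrb_update(tau, ranked_ants, best_ant, w=6, rho=0.1, Q=1.0):
    """tau: dict edge -> pheromone; ranked_ants: ants sorted best-first."""
    for edge in tau:
        tau[edge] *= (1 - rho)                       # evaporation
    for r, ant in enumerate(ranked_ants[:w - 1], start=1):
        for edge in ant.edges:
            tau[edge] += (w - r) * Q / ant.cost      # rank-weighted deposit
    for edge in best_ant.edges:
        tau[edge] += w * Q / best_ant.cost           # elitist deposit

tau = {("s1", "a"): 1.0, ("s1", "b"): 1.0}
best = Ant(edges=[("s1", "a")], cost=50.0)
asrb_update(tau, [best, Ant(edges=[("s1", "b")], cost=80.0)], best)
print(tau)
```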

Abstract:

In this paper, the supply chain is modeled as a network of stages, each of which may have one or more options for being fulfilled, where each option is characterized by its time and cost. The objective is to determine the option for every stage that minimizes the Cost of Goods Sold and the Lead Time simultaneously for a set of products represented by a Generic Bill of Materials. To achieve this, this work proposes using a new meta-heuristic called Intelligent Water Drop, which mimics the process that takes place between water drops in a river and the changes that occur in the river bed. Artificial water drops minimize the cost and the lead time for every order using the concept of a Pareto optimal set: instead of obtaining a single solution, the algorithm determines a set of optimal solutions by means of the proposed equations for computing the velocity and travel time of the artificial drops. A notebook supply chain widely used in the literature is solved and we provide some performance metrics for the algorithm. We compare the results of the proposed algorithm with those computed by other meta-heuristics

Abstract:

An assembly supply chain (SC) is composed of stages that provide the components, assemble both sub-assemblies and final products, and deliver products to the customer. The activities carried out in each stage can be performed by one or more options, so the decision-maker must select the set of options that minimises the cost of goods sold (CoGS) and the lead time (LT) simultaneously. In this paper, an ant colony-based algorithm is proposed to generate a set of SC configurations using the concept of Pareto optimality. The pheromones are updated using an equation that is a function of the CoGS and LT. The algorithm is tested on a notebook SC problem widely used in the literature. The results show that the ratio between the size of the Pareto front computed by the proposed algorithm and the size of the one computed by exhaustive enumeration is 90%. Other metrics regarding error ratio and generational distance are provided, as well as the CPU time, to measure the performance of the proposed algorithm
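
A sketch of a pheromone update driven by both objectives, as the abstract describes; the exact functional form of the deposit is an assumption for illustration.

```python
def update_pheromone(tau, front, rho=0.1, q=100.0):
    """tau: dict (stage, option) -> pheromone level.
    front: non-dominated configurations as (options, cogs, lead_time)."""
    for key in tau:
        tau[key] *= (1 - rho)                 # evaporation
    for options, cogs, lt in front:
        for key in options:
            tau[key] += q / (cogs * lt)       # cheaper and faster => more pheromone

tau = {("s1", "a"): 1.0, ("s1", "b"): 1.0}
update_pheromone(tau, [([("s1", "a")], 120.0, 38.0)])
print(tau)
```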

Abstract:

The aim of this paper is to solve the problem of placing safety stock over a Logistic Network (LN) that is represented by a Generic Bill of Materials (GBOM); thus, the LN encompasses supplying, assembling, and delivering stages. We describe, in detail, the recursive algorithm based on Dynamic Programming (DP) for solving the safety stock placement problem under guaranteed-service time models. We also develop a Java-based application (JbA) that both models the LN and runs the recursive DP algorithm. We solved a real case of a company that manufactures fixed brake and clutch pedal modules for cars' brake systems. After running JbA, the inventory levels dropped to zero in 55 out of 65 stages

Abstract:

The problem of placing safety inventory over a network that assembles a product is a challenging issue in supply chain design (SCD), because manufacturers always want to reduce inventory all over the supply chain (SC). Moreover, the process of designing a SC and then placing inventory, to offer a high service level at the lowest possible cost across a complex SC, is not an easy task for decision makers. In this paper we use the SC representation proposed by Graves and Willems (2000), Manufacturing & Service Operations Management 2 (1), 68–83, where a SC is divided into many supplying, manufacturing, and delivering stages. Our problem is to select one resource option to perform each stage and, based on the selected options, to place an amount of inventory (in-progress and on-hand) at each stage, in order to offer a satisfactory customer service level at the lowest possible total supply chain cost. A resource option here represents a supplier, a manufacturing plant (production line), or a transport mode in a supplying, manufacturing, or delivering stage, respectively. We developed an approach based on ant colony optimisation (ACO) to minimise simultaneously the total supply chain cost and the products' lead time, to ensure product deliveries without delays. What is new in our approach is the bi-objective function and the computational efficiency of our ACO-based method; in addition, ACO had not previously been applied to the inventory placement problem. As a validation of the model, we (a) describe a successful application at CIFUNSA, one of the largest iron foundries in the world, and (b) compare different CPU time instances and metrics for multi-objective optimisation

Abstract:

This paper proposes a new approach to determining the Supply Chain (SC) design for a family of products comprising complex hierarchies of subassemblies and components. For a supply chain, there may be multiple suppliers that could supply the same components, as well as optional manufacturing plants that could assemble the subassemblies and the products. Each of these options is differentiated by a lead time and cost. Given all the possible options, the supply chain design problem is to select the options that minimise the total supply chain cost while keeping the total lead times within the required delivery due dates. This work proposes an algorithm based on Pareto Ant Colony Optimisation as an effective meta-heuristic method for solving multi-objective supply chain design problems. An experimental example and a number of variations of the example are used to test the algorithm, and the results are reported using a number of comparative metrics. Parameters affecting the performance of the algorithm are investigated

Abstract:

This note provides the first nonlinear analysis of the industry-standard "partial decoupling plus nested PI loops" control of voltage-sourced inverters. In spite of its enormous popularity, to date only linearization-based tools have been available to carry out the analysis; these are unable to deal with large-signal stability and fail to provide estimates of the domain of attraction of the desired equilibrium. Instrumental to establishing our result is the representation of the closed-loop dynamics in a suitable Lure-like form, that is, a forward linear system in closed loop with a static nonlinearity. The stability analysis is then carried out by generating an adequate Popov multiplier. A comparison with linearization is discussed, together with numerical results demonstrating the non-conservativeness of the proposed conditions
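
For orientation, the textbook Lure decomposition and Popov condition underlying this kind of analysis (stated generically; the note's multiplier construction for the inverter dynamics is more specific):

```latex
\dot{x} = A x + B u, \qquad y = C x, \qquad u = -\varphi(y),
\qquad 0 \le y\,\varphi(y) \le k\, y^{2},
```

and absolute stability follows if there exists $q \ge 0$ such that

```latex
\operatorname{Re}\bigl[(1 + j\omega q)\, G(j\omega)\bigr] + \tfrac{1}{k} > 0
\quad \text{for all } \omega \ge 0, \qquad G(s) = C\,(sI - A)^{-1} B .
```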

Abstract:

We offer a quantitative estimation of the stability of risk-sensitive cost optimization in the problem of optimal stopping of a Markov chain on a Borel space X. It is supposed that the transition probability p(·|x), x ∈ X, is approximated by a transition probability p̃(·|x), x ∈ X, and that the stopping rule f̃, which is optimal for the process with the transition probability p̃, is applied to the process with the transition probability p. We give an upper bound, expressed in terms of the total variation distance sup_{x ∈ X} ||p(·|x) − p̃(·|x)||, for the additional cost paid for using the rule f̃ instead of the (unknown) stopping rule f optimal for p

Abstract:

We test the accuracy of various methods for approximating underspecified joint probability distributions. In particular, we examine the maximum entropy and the analytic center approximations, and we introduce three methods for approximating a discrete joint probability distribution given partial probabilistic information. Our results suggest that recently proposed approximations and our new approximations more accurately represent the possible uncertainty models than do previous models such as maximum entropy

Abstract:

We present a model to characterize partial information using linear constraints and review the joint distribution simulator algorithm (JDSIM), which is the main tool in the analysis of potentially optimal alternatives. We revisit the basic concepts of decision under uncertainty and the geometry of the regions of optimality. Finally, we present an example to illustrate the concepts and conclude with new directions for future research

Abstract:

The assessment and characterization of multilinear utility functions (MLUFs) may require the elicitation of many attribute weights. In this case, the decision maker may find it difficult to provide precise assessments and may instead be more comfortable providing a range in which the scaling parameters fall or specifying that some parameters are larger than others. The question then becomes how the analyst should formulate a recommendation given this partial preference information. In this paper, we present a generalized Monte Carlo simulation procedure to test the sensitivity of MLUFs to changes in the scaling parameters. Specifically, we admit any preference information that can be expressed as a linear constraint. We then sample from the set of all possible MLUFs matching these constraints. We consider the additive MLUF, the multiplicative MLUF, the utility-independent MLUF, and the generalized utility-independent MLUF. In so doing, we also demonstrate how analysts can test the sensitivity of their analysis to the structure of the MLUF itself. We illustrate the flexibility of our method within the context of a coal-fired power plant siting decision used by previous authors

Abstract:

This paper presents two implementations of the Entropy Concentration Theorem that suggest contradictory approximations of an unknown joint probability distribution. This paradox of opposing recommendations was analyzed using a simple example. The first implementation satisfies the Theorem when applied to the outcomes of a random variable, whereas the second adheres to the metadistribution of the valid joint distribution approximations. The paper revisits the maximum entropy model and the Entropy Concentration Theorem to provide a clear understanding of the paradox and its implications for choosing an approximation to a probability distribution

Abstract:

In this paper, we propose new methods to approximate probability distributions that are incompletely specified. We compare these methods to the use of maximum entropy and quantify the accuracy of all methods within the context of an illustrative example. We show that within the context of our example, the methods we propose are more accurate than existing methods

Abstract:

In this paper, we develop a practical and flexible methodology for generating a random collection of discrete joint probability distributions, subject to a specified information set, which can be expressed as a set of linear constraints (e.g., marginal assessments, moments, or pairwise correlations). Our approach begins with the construction of a polytope using this set of linear constraints. This polytope defines the set of all joint distributions that match the given information; we refer to this set as the "truth set". We then implement a Monte Carlo procedure, the Hit-and-Run algorithm, to sample points uniformly from the truth set. Each sampled point is a joint distribution that matches the specified information. We provide guidelines to determine the quality of this sampled collection. The sampled points can be used to solve optimization models and to simulate systems under different uncertainty scenarios
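
A minimal sketch of one Hit-and-Run step on the simplest truth set, the probability simplex itself (only the total-probability constraint; the polytopes in the paper add further linear constraints, which would further restrict the chord):

```python
import numpy as np

rng = np.random.default_rng(7)

def hit_and_run_step(p):
    """One Hit-and-Run move inside the set {p >= 0, sum(p) = 1}."""
    d = rng.standard_normal(len(p))
    d -= d.mean()                      # direction stays in the hyperplane sum = 1
    d /= np.linalg.norm(d)
    # chord endpoints: p + t*d must keep every coordinate nonnegative
    t_low = max(-p[i] / d[i] for i in range(len(p)) if d[i] > 0)
    t_high = min(-p[i] / d[i] for i in range(len(p)) if d[i] < 0)
    return p + rng.uniform(t_low, t_high) * d

p = np.full(4, 0.25)                   # start at the uniform joint distribution
for _ in range(1000):                  # after burn-in, p is ~uniform on the simplex
    p = hit_and_run_step(p)
print(p, p.sum())
```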

Abstract:

The construction of a probabilistic model is a key step in most decision and risk analyses. Typically this is done by defining a single joint distribution in terms of marginal and conditional distributions. The difficulty of this approach is that often the joint distribution is underspecified. For example, we may lack knowledge of the marginal distributions or the underlying dependence structure. In this paper, we suggest an approach to analyzing decisions with partial information. Specifically, we propose a simulation procedure to create a collection of joint distributions that match the known information. This collection of distributions can then be used to analyze the decision problem. We demonstrate our method by applying it to the Eagle Airlines case study used in previous studies

Abstract:

This work presents the development and analysis of a discrete simulation model to study the payback of the investment in a pig farm dedicated to the production of piglets. The model considers the stages of mating, gestation, and lactation typical of the process. The random variables considered were litter size, success at fertilization, and residence times at each of the stages mentioned above. The results show the risk of reaching a given payback period when the sows remain on the farm for a given number of cycles

Abstract:

This article presents the development and analysis of a discrete-event simulation model to study the payback in a sow farm dedicated to the production of piglets. The model considers the stages of mating, gestation and lactation, which are typical of this process. The random components considered in the model were the litter size, the fertilization success, and the residence times in each of the aforementioned stages. Experimental results allowed us to quantify the risk of attaining a given payback time under the policy of imposing a given number of residence cycles on the farm for the sows
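
A stripped-down Monte Carlo sketch of the payback-risk idea (not the authors' model; every parameter below, herd size, cycle lengths, fertilization probability, litter distribution, margin per piglet and investment, is an illustrative assumption):

```python
import random

def payback_years(n_sows=100, investment=250_000.0, margin_per_piglet=30.0):
    """Simulate one farm history and return the payback time in years."""
    profit, days = 0.0, 0
    while profit < investment:
        for _ in range(n_sows):
            if random.random() < 0.85:                       # fertilization success
                litter = max(0, round(random.gauss(11, 2)))  # litter size
                profit += litter * margin_per_piglet
        days += 7 + 115 + 28              # mating + gestation + lactation cycle
    return days / 365.0

random.seed(1)
samples = sorted(payback_years() for _ in range(1000))
print("median:", samples[500], "95th percentile:", samples[950])
```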

Abstract:

A new framework to identify and classify the support capabilities provided by the full range of Decision-Making Support Systems is posed. This framework extends a previously reported framework by the same authors, adding a dimension of user-interface support capabilities to the data, information and knowledge representation and processing capabilities dimensions of the previous framework. With this conceptual tool, an analysis of the achievements realized and a research agenda for the next generation of Intelligent DMSS are developed

Abstract:

This paper is concerned with Linear Canonical Transforms (LCTs) associated with two-dimensional quaternion-valued signals defined in an open rectangle of the Euclidean plane endowed with a hyperbolic measure, which we call Quaternion Hyperbolic Linear Canonical Transforms (QHLCTs). These transforms are defined by replacing the Euclidean plane wave with a corresponding hyperbolic relativistic plane wave in one dimension multiplied by quadratic modulations in both the hyperbolic spatial and frequency domains, giving the hyperbolic counterpart of the Euclidean LCTs. We prove the fundamental properties of the partial QHLCTs and the right-sided QHLCT by employing hyperbolic geometry tools and establish main results such as the Riemann-Lebesgue Lemma, the Plancherel and Parseval Theorems, and inversion formulas. The analysis is carried out in terms of novel hyperbolic derivative and hyperbolic primitive concepts, which lead to the differentiation and integration properties of the QHLCTs. The results are applied to establish two quaternionic versions of the Heisenberg uncertainty principle for the right-sided QHLCT. These uncertainty principles prescribe a lower bound on the product of the effective widths of quaternion-valued signals in the hyperbolic spatial and frequency domains. It is shown that only hyperbolic Gaussian quaternion functions minimize the uncertainty relations

Abstract:

We propose a single one-parameter family of orthogonal harmonic functions expressed in terms of spheroidal coordinates as independent variables to construct a common orthogonal basis for the L2-Hilbert spaces of quaternionic monogenic functions in the space exterior of a spheroidal domain (either prolate or oblate). We give recurrence relations for the elements that constitute such a basis, which are particularly easy to handle from a computational point of view. Conversion formulas among the classes of harmonic and monogenic functions associated with a spheroid of arbitrary eccentricity to those related to the Euclidean ball are derived

Abstract:

We construct a set of quaternionic metamonogenic functions (that is, in Ker(D+λ) for diverse real λ) in the unit disk, such that every metamonogenic function is approximable in the quaternionic Hilbert module L2 of the disk. The set is orthogonal except for the small subspace of elements of orders zero and one. These functions are used to express time-dependent solutions of the imaginary-time wave equation in the polar coordinate system.

Abstract:

We construct a one-parameter family of generalized Mathieu functions, which are reduced quaternion-valued functions of a pair of real variables lying in an ellipse, and which we call λ-reduced quaternionic Mathieu (λ-RQM) functions. We prove that the λ-RQM functions, which lie in the kernel of the Moisil-Teodorescu operator D + λ (D is the Dirac operator and λ ∈ R \ {0}), form a complete orthogonal system in the Hilbert space of square-integrable λ-metamonogenic functions with respect to the L2-norm over confocal ellipses. Further, we introduce the zero-boundary λ-RQM functions, which are λ-RQM functions whose scalar part vanishes on the boundary of the ellipse. The limiting values of the λ-RQM functions as the eccentricity of the ellipse tends to zero are expressed in terms of Bessel functions of the first kind and form a complete orthogonal system for λ-metamonogenic functions with respect to the L2-norm on the unit disk. A connection between the λ-RQM functions and the time-dependent solutions of the imaginary-time wave equation in the elliptical coordinate system is shown

Abstract:

The paper aims to study differential subordination and superordination preserving properties for certain analytic multivalent functions in the open unit disk, related to a novel generalized fractional derivative operator for higher-order derivatives. As an application, we provide an explicit construction for the complex potential (the complex velocity) and the stream function of two-dimensional fluid flow problems over a circular cylinder using both a vortex and a source/sink. We further determine the fluid flow produced by a single source and construct a univalent function so that the image of a source is also a source for a given complex potential. Finally, we present some plot simulations that illustrate the results of this work
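
For reference, the classical building blocks involved (textbook forms, not the paper's generalized operators): the complex potential of uniform flow with speed U past a circular cylinder of radius a with circulation Γ, and that of a point source of strength m at z0,

```latex
w(z) = U\left(z + \frac{a^{2}}{z}\right) + \frac{\Gamma}{2\pi i}\,\log z,
\qquad
w_{\mathrm{source}}(z) = \frac{m}{2\pi}\,\log\!\left(z - z_{0}\right),
```

with stream function $\psi = \operatorname{Im} w$ and complex velocity $dw/dz = u - iv$.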

Abstract:

We propose third-order differential subordination results associated with new admissible classes of multivalent functions defined in the open unit disk of the complex plane. In addition, we investigate the geometric properties of multivalent functions associated with a novel convolution operator. This is done by taking suitable linear combinations of the classical Gaussian hypergeometric function and its derivatives up to third order and applying the Hadamard product (or convolution) formula for power series. The complex velocity potential and the stream function of two-dimensional potential flow problems over a circular cylinder, using both a source with a sink and two sources, are treated by the methods developed in the present paper. We further determine the fluid flow produced by a single source and construct a univalent function so that the image of a source/sink is also a source/sink for a given complex potential. Finally, some plot simulations are provided to illustrate the different results of this work

Abstract:

The problem of building an orthogonal basis for the space of square-integrable harmonic functions defined in a spheroidal (either oblate or prolate) domain leads to special functions, which provide an elegant analysis of a variety of physical problems. Many generalizations of these ideas in the context of Quaternionic Analysis possess a similar elegant mathematical structure. A brief descriptive review of these developments is given

Abstract:

In this work, we show that there exists a theory of functions with quaternionic values and of two real variables, which is determined by a Cauchy-Riemann-type operator with quaternionic variable coefficients and that is intimately related to the well-known Mathieu functions. As a result, we introduce the Quaternionic Mathieu Functions and explain their connections to the solutions of the Heat-Conduction equation in the elliptical confocal coordinate system

Abstract:

Over the last years, considerable attention has been paid to the role played in many practical signal and image processing problems by the prolate spheroidal wave functions (PSWFs), introduced in the early sixties by D. Slepian and H.O. Pollak. The PSWFs and their applications to wave phenomena modeling, fluid dynamics, and filter design have played a key role in this development. In this paper, we introduce the prolate spheroidal quaternion wave functions (PSQWFs), which refine and extend the PSWFs. The PSQWFs are ideally suited to study certain questions regarding the relationship between quaternionic functions and their Fourier transforms. We show that the PSQWFs are orthogonal and complete over two different spaces: the space of square integrable functions over a finite interval and the three-dimensional Paley–Wiener space of bandlimited functions. No other system of classical generalized orthogonal functions is known to possess this unique property. We illustrate how to apply the PSQWFs to analyze Slepian's energy concentration problem for the quaternionic Fourier transform. We address all of the aforementioned and explore some basic facts of the arising quaternionic function theory. We conclude the paper by computing the PSQWFs restricted in frequency to the unit sphere. The representation of these functions in terms of generalized spherical harmonics is explicitly given, from which several fundamental properties can be derived. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach

Abstract:

It is truly rare for a paper that was set aside almost eighty years ago to find its way back to the center of scientific attention. Yet this is exactly what has happened over the last decade with the article written in 1934 by Nobel laureate Frits Zernike. Indeed, in recent years special attention has been paid to the role of the Zernike polynomials (ZPs) in many different fields of geometrical optics, optical engineering, and astronomy. The ZPs are represented by the product of a normalization constant with radial polynomials and a pair of trigonometric functions, and they form a complete orthogonal set over the unit disk. In this text we go through the construction of the spherical Zernike polynomials in the context of quaternionic analysis and discuss the results obtained to date
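
For reference, the classical Zernike polynomials being generalized have the standard form

```latex
Z_{n}^{m}(\rho,\theta) = N_{n}^{m}\, R_{n}^{|m|}(\rho)
\begin{cases} \cos(m\theta), & m \ge 0,\\[2pt] \sin(|m|\theta), & m < 0, \end{cases}
\qquad
R_{n}^{|m|}(\rho) = \sum_{k=0}^{(n-|m|)/2}
\frac{(-1)^{k}\,(n-k)!}{k!\left(\frac{n+|m|}{2}-k\right)!\left(\frac{n-|m|}{2}-k\right)!}\;\rho^{\,n-2k},
```

with N_n^m a normalization constant and n − |m| even.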

Abstract:

Complete orthogonal systems of monogenic polynomials over 3D prolate spheroids have recently experienced an upsurge of interest because of their many remarkable properties. These generalized polynomials and their applications to the theory of quasi-conformal mappings and approximation theory have played a major role in this development. In particular, the underlying functions of three real variables take on values in the reduced quaternions (identified with R3) and are generally assumed to be null-solutions of the well-known Riesz system in R3. The present paper introduces and explores a new complete orthogonal system of monogenic functions as solutions to this system for the space exterior of a 3D prolate spheroid. This is done in the linear space of square-integrable functions over R. The representations of these functions are given explicitly. Some important properties of the system are briefly discussed, from which several recurrence formulae for fast computer implementations can be derived

Abstract:

The scalar spherical wave functions (SWFs) are solutions to the scalar Helmholtz equation obtained by the method of separation of variables in spherical polar coordinates. These functions are complete and orthogonal over a sphere, and they can, therefore, be used as a set of basis functions in solving boundary value problems by spherical wave expansions. In this work, we show that there exists a theory of functions with quaternionic values and of three real variables, which is determined by the Moisil–Teodorescu-type operator with quaternionic variable coefficients, and which is intimately related to the radial, angular and azimuthal wave equations. As a result, we explain the connections between the null solutions of these equations, on one hand, and the quaternionic hyperholomorphic and anti-hyperholomorphic functions, on the other. We further introduce the quaternionic spherical wave functions (QSWFs), which refine and extend the SWFs. Each function is a linear combination of SWFs and products of D0-hyperholomorphic functions with regular spherical Bessel functions. We prove that the QSWFs are orthogonal in the unit ball with respect to a particular bilinear form. Also, we perform a detailed analysis of the related properties of QSWFs. We conclude the paper by establishing analogues of the basic integral formulae of complex analysis, such as Borel–Pompeiu's and Cauchy's, for this version of quaternionic function theory. As an application, we present some plot simulations that illustrate the results of this work

Abstract:

This paper introduces the Oblate Spheroidal Quaternionic Wave Functions (OSQWFs), which extend the oblate spheroidal wave functions introduced in the late 1950s by C. Flammer. We show that the theory of the OSQWFs is determined by the Moisil–Teodorescu type operator with quaternionic variable coefficients. We show the connections between the solutions of the radial and angular equations and of the Chebyshev equation, on one hand, and the quaternionic hyperholomorphic and anti-hyperholomorphic functions on the other. We proceed by establishing an analogue of Cauchy's integral formula, as well as analogues of boundary value properties such as the Sokhotski–Plemelj formulas, the Dk-hyperholomorphic extension of a given Hölder function, and the formula for the square of the singular integral operator for this version of quaternionic function theory. To progress in this direction, we show what the D0-hyperholomorphic OSQWFs (with bandwidth parameter c = 0) of any order look like, without belaboring them. With the help of these functions, we construct a complete orthogonal system for the L2-space consisting of D0-hyperholomorphic OSQWFs. A remarkable feature is that the orthogonality of the basis elements does not depend on the shape of the oblate spheroids, but only on the location of the foci of the ellipse generating the spheroid. As an application, we prove an explicit formula for the quaternionic D0-hyperholomorphic Bergman kernel function over oblate spheroids in R3. In addition, we provide the reader with some plot examples that demonstrate the effectiveness of our approach

Abstract:

Tax evasion stems from a lack of social consciousness, and a large number of variables intervene in the decision-making process of an individual or a company when determining whether or not to pay taxes. The purpose of this paper is to test whether informality arises from a lack of civic culture among the population of developing nations, particularly in the case of Mexico. The methodology used for this investigation relies on real options to derive a theoretical model of this behaviour. This model was selected because it is flexible enough to consider variables related to the country's economic, political and legal conditions. Additionally, the relative uncertainty of a given individual's moral standards, education and social consciousness is evaluated through the volatility in opinions. Using this methodology, an approximate model of the behavior of a tax evader can be reached, together with the potential benefits that fiscal authorities could obtain by studying the conduct of tax evaders. All data were collected through a survey carried out on a sample of 200 people. Surprisingly, most of the answers provided by the traders express a willingness to step into the formal economy, each taking into consideration his/her specific needs and fiscal capacity. Moreover, the survey's respondents suggested that a fixed monthly income tax could facilitate their shift. Once the data were analyzed, the results indicated that tax evasion is due to a lack of information and the absence of trust in government authorities. According to the real options methodology, a positive effect on tax revenue of up to $65,000 million Mexican pesos could come from incorporating new taxpayers into the country's formal economy

Abstract:

We address the motion planning problem for a single robot that manipulates passive objects in an otherwise static environment. We handle uncertainties in passive object responses while abstracting their dynamics by creating a roadmap that encodes the possible passive object responses. The robot extracts reference paths from the passive roadmap to plan approaching motions with an active roadmap. When the robot acts on the object, it receives response feedback and reacts accordingly. We apply this approach to pushing, a basic manipulation primitive. We propose four different pushing strategies, one of them using reinforcement learning. Our simulations show the effectiveness of the approach and of each pushing strategy

Abstract:

Many sampling methods for motion planning explore the robot's configuration space (C-space) starting from a set of configuration(s) and incrementally explore surrounding areas to produce a growing model of the space. Although there is a common understanding of the strengths and weaknesses of these techniques, metrics for analyzing the incremental exploration process and for evaluating the performance of incremental samplers have been lacking. We propose the use of local metrics that provide insight into the complexity of the different regions in the model and global metrics that describe the process as a whole. These metrics only require local information and can be efficiently computed. We illustrate the use of our proposed metrics to analyze representative incremental strategies including the Rapidly-exploring Random Trees, Expansive Space Trees, and the original Randomized Path Planner. We show how these metrics model the efficiency of C-space exploration and help to identify different modeling stages. In addition, these metrics are ideal for adapting space exploration to improve performance

Abstract:

There are many sampling-based motion planning methods that model the connectivity of a robot's configuration space (C-space) with a graph whose nodes are valid configurations and whose edges represent valid transitions between nodes. One of the biggest challenges faced by users of these methods is selecting the right planner for their problem. While researchers have tried to compare different planners, most accepted metrics for comparing planners are based on efficiency, e.g., the number of collision detection calls or samples needed to solve a particular set of queries, and there is still a lack of useful and efficient quantitative metrics that can be used to measure the suitability of a planner for solving a problem. That is, although there is great interest in determining which planners should be used in which situations, there are still many questions we cannot answer about the relative performance of different planning methods. In this paper we make some progress towards this goal. We propose a metric that can be applied to each new sample considered by a sampling-based planner to characterize how that sample improves, or not, the planner's current C-space model. This characterization requires only local information and can be computed quite efficiently, so that it can be applied to every sample. We show how this characterization can be used to analyze and compare how different planning strategies explore the configuration space. In particular, we show that it can be used to identify three phases that planners go through when building C-space models: quick learning (rapidly building a coarse model), model enhancement (refining the model), and learning decay (oversampling: most samples do not provide additional information). Hence, our work can also provide the basis for determining when a particular planning strategy has 'converged' on the best C-space model that it is capable of building

Abstract:

Although there are many motion planning techniques, there is no method that outperforms all others for all problem instances. Rather, each technique has different strengths and weaknesses which makes it best-suited for certain types of problems. Moreover, since an environment can contain vastly different regions, there may not be a single planner that will perform well in all its regions. Ideally, one would use a suite of planners in concert and would solve the problem by applying the best-suited planner in each region. In this paper, we propose an automated framework for feature-sensitive motion planning. We use a machine learning approach to characterize and partition C-space into regions that are well suited to one of the methods in our library of roadmap-based motion planners. After the best-suited method is applied in each region, the resulting region roadmaps are combined to form a roadmap of the entire planning space. Over a range of problems, we demonstrate that our simple prototype system reliably outperforms any of the planners on their own

Abstract:

There are many randomized motion planning techniques, but it is often difficult to determine what planning method to apply to best solve a problem. Planners have their own strengths and weaknesses, and each one is best suited to a specific type of problem. In previous work, we proposed a meta-planner that, through analysis of the problem features, subdivides the instance into regions and determines which planner to apply in each region. The results obtained with our prototype system were very promising even though it utilized simplistic strategies for all components. Even so, we did determine that strategies for problem subdivision and for combination of partial regional solutions have a crucial impact on performance. In this paper, we propose new methods for these steps to improve the performance of the meta-planner. For problem subdivision, we propose two new methods: a method based on 'gaps' and a method based on information theory. For combining partial solutions, we propose two new methods that concentrate on neighboring areas of the regional solutions. We present results that show the performance gain achieved by utilizing these new strategies

Abstract:

In this paper we investigate how the coverage and connectedness of PRM roadmaps can be improved by adding a connected component (CC) connection step to the general PRM framework. We provide experimental results establishing that significant roadmap improvements can be obtained relatively efficiently by utilizing a suite of CC connection methods, which include variants of existing methods such as RRT and a new ray tracing based method. The coordinated application of these techniques is enabled by methods for selecting and scheduling pairs of nodes in different CCs for connection attempts. In addition to identifying important and/or promising regions of C-space for exploration, these methods also provide a mechanism for controlling the cost of the connection attempts. In our experiments, the time required by the improvement phase was on the same order as the time used to generate the initial roadmap

Abstract:

In the traditional AI approach, the controller of a mobile robot is organized into several modules, each one having a different task to perform. One module takes data from the sensors and sends them to another module that creates the world model. A planner uses the world model and the robot's mission to generate an action plan that is followed by the execution module. The behavior-based approach, on the other hand, relies on multiple behavior generators. Each behavior generator can have all the aspects of the traditional approach, but its execution module generates only an opinion, and a referee chooses which one to follow. Our experiment combines both approaches. At the high level, a planner and a navigator generate a path and the movements to follow based on a map. At the low level, a pilot runs two behaviors: reach a configuration and avoid obstacles. These behaviors allow the robot to reach the next configuration without collisions, even in a dynamic environment. The obstacle-avoidance behavior module uses an artificial neural network to estimate the position of obstacles not included in the map. In the experiments we have carried out, the planner finds the shortest path in almost all cases, and always finds a path with the minimal number of movements. With the neural network, the pilot is capable of avoiding obstacles 60% of the time

Abstract:

A sequential quadratic programming (SQP) method is presented that aims to overcome some of the drawbacks of contemporary SQP methods. It avoids the difficulties associated with indefinite quadratic programming subproblems by defining this subproblem to be always convex. The novel feature of the approach is the addition of an equality constrained quadratic programming (EQP) phase that promotes fast convergence and improves performance in the presence of ill conditioning. This EQP phase uses exact second-order information and can be implemented using either a direct solve or an iterative method. The paper studies the global and local convergence properties of the new algorithm and presents a set of numerical experiments to illustrate its practical performance
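
Schematically, the convex subproblem driving each iteration has the familiar SQP form (with B_k a convex, e.g. positive semidefinite, Hessian approximation):

```latex
\min_{d}\;\; \nabla f(x_k)^{T} d + \tfrac{1}{2}\, d^{T} B_k\, d
\qquad \text{s.t.}\;\; c_i(x_k) + \nabla c_i(x_k)^{T} d \ge 0,\; i \in \mathcal{I},
```

after which the EQP phase re-minimizes a model with exact second-order information subject to the constraints active at the QP solution, treated as equalities. This two-phase description is a schematic reading of the abstract, not the paper's precise formulation.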

Abstract:

We describe two modifications to Algorithm 778 (see Zhu et al. [1997]) that give rise to significant improvements in performance. The first modification consists of a refinement of the subspace minimization phase of the algorithm. The new implementation promotes larger steps during the subspace minimization, and thereby faster identification of the optimal active set. The second modification was necessitated by the fact that the MINPACK-2 routine dpmeps employed in Algorithm 778 produced an incorrect estimate of machine accuracy on some compilers

Abstract:

This paper studies algorithms for the solution of mixed symmetric linear complementarity problems. The goal is to compute fast and approximate solutions of medium to large sized problems, such as those arising in computer game simulations and American options pricing. The paper proposes an improvement of a method described by Kocvara and Zowe (Numer. Math. 68:95–106, 1994) that combines projected Gauss–Seidel iterations with subspace minimization steps. The proposed algorithm employs a recursive subspace minimization designed to handle severely ill-conditioned problems. Numerical tests indicate that the approach is more efficient than interior-point and gradient projection methods on some physical simulation problems that arise in computer game scenarios
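
A sketch of the projected Gauss–Seidel sweep that the method builds on (standard iteration; the paper's contribution layers recursive subspace minimization on top of it):

```python
import numpy as np

def projected_gauss_seidel(A, b, x, sweeps=50):
    """Approximate solution of the LCP: x >= 0, Ax + b >= 0, x.(Ax + b) = 0."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            r = b[i] + A[i] @ x - A[i, i] * x[i]   # residual excluding the x_i term
            x[i] = max(0.0, -r / A[i, i])          # coordinate minimizer, projected
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
print(projected_gauss_seidel(A, b, np.zeros(2)))   # ~ [0.091, 0.636]
```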

Abstract:

A series of numerical experiments with interior-point (LOQO, KNITRO) and active-set sequential quadratic programming (SNOPT, filterSQP) codes are reported and analyzed. The tests were performed with small, medium-size and moderately large problems, and are examined by problem classes. Detailed observations on the performance of the codes, and several suggestions on how to improve them, are presented. Overall, interior methods appear to be strong competitors of active-set SQP methods, but all codes show much room for improvement

Abstract:

The application of quasi-Newton methods is widespread in numerical optimization. Independently of the application, the techniques used to update the BFGS matrices seem to play an important role in the performance of the overall method. In this paper, we address precisely this issue. We compare two implementations of the limited memory BFGS method for large-scale unconstrained problems. They differ in the updating technique and the choice of initial matrix. L-BFGS performs continuous updating, whereas SNOPT uses a restarted limited memory strategy. Our study shows that continuous updating techniques are more effective, particularly for large problems
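
For reference, the two-loop recursion at the heart of continuous limited-memory updating (standard algorithm, with the usual s^T y / y^T y initial scaling):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return H*grad from the m stored pairs s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k;
    the search direction is the negative of the returned vector."""
    q = grad.copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((alpha, rho, s, y))
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)                            # initial Hessian scaling
    for alpha, rho, s, y in reversed(stack):              # oldest pair first
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

g = np.array([1.0, -2.0])
print(lbfgs_direction(g, [np.array([0.1, 0.0])], [np.array([0.2, 0.1])]))
```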

Abstract:

This paper describes a class of optimization methods that interlace iterations of the limited memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited memory matrix, and plays the dual role of preconditioning the inner conjugate gradient iteration in the HFN method and of providing an initial matrix for L-BFGS iterations. The lengths of the L-BFGS and HFN cycles are adjusted dynamically during the course of the optimization. Numerical experiments indicate that the new algorithms are both effective and not sensitive to the choice of parameters

Abstract:

PREQN is a package of Fortran 77 subroutines for automatically generating preconditioners for the conjugate gradient method. It is designed for solving a sequence of linear systems A_i x = b_i, i = 1, ..., t, where the coefficient matrices A_i are symmetric and positive definite and vary slowly. Problems of this type arise, for example, in nonlinear optimization. The preconditioners are based on limited-memory quasi-Newton updating and are recommended for problems in which (i) the coefficient matrices are not explicitly known and only matrix-vector products of the form A_i v can be computed; or (ii) the coefficient matrices are not sparse. PREQN is written so that a single call from a conjugate gradient routine performs the preconditioning operation and stores information needed for the generation of a new preconditioner

Abstract:

This paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations A x = b_i with different right-hand-side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited-memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization and by solving certain linear systems arising in finite element models

Abstract:

It is well known that pipe network optimization problems are extremely difficult to solve when the number of variables is large. Some of the existing techniques are heuristic and do not obtain an optimal solution; instead, a feasible solution, a point that satisfies the set of constraints, is obtained without any reference to the global optimum. To alleviate this drawback, other approaches start from feasible points computed using the aforementioned techniques, and an iterative procedure then improves the objective function until a local minimum is reached. In this work an approach that resembles the SQP method is presented. Given an initial feasible approximation, at every iteration the algorithm builds a local quadratic model whose solution provides a descent direction for the original optimization problem. This direction is used to compute a possibly infeasible trial point. Feasibility is then restored by a procedure based on a truncated Newton algorithm. The algorithm terminates at a local minimum

Abstract:

Handheld computers are increasingly being used by hospital workers. With the integration of wireless networks into hospital information systems, handheld computers can provide the basis for a pervasive computing hospital environment; to develop this, designers need empirical information to understand how hospital workers interact with information while moving around. To characterise this phenomenon, we report the results of a workplace study conducted in a hospital. We found that individuals spend about half of their time at their base location, where most of their interactions occur. On average, our informants spent 23% of their time performing information management tasks, followed by coordination (17.08%), clinical case assessment (15.35%) and direct patient care (12.6%). We discuss how our results offer insights for the design of pervasive computing technology, and directions for further research and development in this field, such as transferring information between heterogeneous devices and integrating the physical and digital domains

Abstract:

Recent reports show that Personal Digital Assistants (PDAs) are being widely used in healthcare and that their level of use is expected to rise rapidly. However, PDA adoption does not seem to be experienced in the same way by all types of medical workers, in particular nurses. Even though there is some evidence pointing to barriers to PDA adoption by nurses, there is little empirical investigation that explains how adoption occurs within the context of daily nursing practices and from a longitudinal perspective. This work presents the results of a technology adoption case study carried out with a group of nurses at a public hospital. We identify critical PDA adoption factors, such as training, previous use of technology, language, and fears, as they are experienced by our informants. In contrast with previous studies, our results highlight the importance of studying adoption as a multi-stage phenomenon, as critical factors become more or less relevant at different stages. We discuss the implications of our findings for the successful introduction of PDAs in nursing practices

Abstract:

Underwater robots have revolutionized the exploration of the seabed. These robots have also made it possible to carry out operations in deep waters without the need to send a human-crewed vehicle. The future of this technology is promising. The purpose of this document is to serve as a first contact with the subject, and it is aimed at graduate students, engineers and researchers with an interest in underwater robotics. In addition, the current state of the different aspects that revolve around this area of robotics is reported

Abstract:

Since 2018, when he was elected to the presidency, Andrés Manuel López Obrador and his government have spread a belief system among Mexican society that reflects the principles of his political movement and articulates the contents of the so-called Fourth Transformation, also known as 4T. With these principles, an attempt has been made to highlight a worldview different from what the president has referred to as the neoliberal or conservative model that, in his narrative, governed Mexico in the three and a half decades prior to Morena's arrival in government. The president's morning press conferences have represented a central and predominant space for the definition and dissemination of the obradorista belief system. In this article, we analyze the scope and possible impact this has had on Mexican public opinion. Among the main findings, we document a notable variance in the reception and assimilation of 4T messages, showing that these have permeated much more clearly among supporters of the ruling party and among publics sympathetic to the president and his movement, reflecting phenomena of selective attention and assimilation. Given that the obradorismo narrative is usually placed within the current populist rhetoric in the world, we also discuss this aspect in light of our results

Abstract:

People's electoral behavior is understood as political predispositions and attitudes in specific institutional contexts. Recent scholarly work has included personality as a key explanatory factor in individual-level models of political participation. In this paper we build upon these recent efforts. We utilize the Big Five approach to assess the effects of different personality traits on people's likelihood of political engagement during the 2012 presidential election in Mexico. We focus on the effects of personality on voting in the election and on individual views about the integrity of the electoral process. We use post-election survey data collected for the Comparative National Elections Project in the 2012 Mexican presidential election. Our findings show that extraversion is a critical individual-level factor accounting for the propensity to turn out in this election, as well as to engage in political discussion with family members, friends, neighbors, and co-workers

Abstract:

In this paper we test several hypotheses that reflect some of the most common sources of error in pre-election polls. We test the effects of questionnaire design, sampling effects, interviewer effects, spiral-of-silence effects, and several other contextual effects (such as the perception of safety or danger at the sites where face-to-face interviews are conducted). We analyze the data obtained in a state-level pre-election poll conducted in the State of Mexico in June 2011, two weeks before the election

Abstract:

This article documents public opinion research activities in Mexico in the 1940s and the role played by Hungarian professor László Radványi, who immigrated to that country at the height of World War II. Our research relies on several of Radványi's publications archived in different countries, as well as on interviews with family, acquaintances, and experts on the work of his wife, the German poet Anna Seghers. During his years in Mexico, Radványi founded the Scientific Institute of Mexican Public Opinion, in 1941, and the International Journal of Opinion and Attitude Research, in 1947, a forefather of today's IJPOR. He was also a founding member of WAPOR. His early "sample surveys" raised important methodological issues and recorded opinion results that reflect the vibrant times of war and policy making in a modernizing country. However, Radványi's contribution to the profession has been virtually forgotten. Until now, accounts of how public opinion research began in Mexico have either ignored Radványi's works or reduced his ten years of survey research to a single footnote. This article is an attempt to fill this enormous omission and highlight some of Radványi's contributions to these early stages of survey research

Abstract:

In this article, I analyze public opinion about the 2006 Mexican presidential election in the context of the post-election conflict. My goal is to determine which individual-level variables influenced opinions about the post-election conflict. The analysis focuses on individual positions about the fairness of the election, confidence in the Electoral Tribunal, demands for a full recount, and the public's stands on street protests and mobilization, among others. I use the Mexican component of the Comparative National Elections Project (CNEP), conducted for the first time in Mexico in 2006 as a two-wave (pre-election and post-election) panel design. The results highlight the importance of political predispositions in the analysis of public opinion in Mexico

Resumen:

In this article we analyze the patterns of change in Mexicans' party identification in the 2000 and 2006 presidential elections. Based on data gathered in national exit polls, our analysis focuses on three observed and interrelated phenomena concerning partisanship: first, a slight decline in partisanship, not only among those who turned out to vote but among the electorate at large. Second, although party identification remains one of the most important explanatory variables of the vote in Mexico, partisan voting weakened from one election to the next, as evidenced by the levels of cross-over and split-ticket voting recorded in each election. And third, a multivariate analysis with data from both years shows significant changes in the social and ideological composition of partisanship, pointing to an important partisan realignment among certain segments of the Mexican electorate. From 2000 to 2006, the PRI lost identifiers in historical niches, such as women and rural voters, many of whom moved to the PAN and the PRD. Our analysis also documents a movement of leftist voters from the PAN to the PRD, and a dealignment of voters with higher education

Abstract:

In this article, we analyze the patterns of change in party identification observed in the 2000 and 2006 presidential elections in Mexico. Based primarily on national exit poll data, we focus on three observed and interrelated phenomena: first, there was a slight decline in the level of partisanship observed not only among voters who turned out, but also among the electorate at large. Secondly, party identification remains one of the most important explanatory variables of the vote in Mexico, but partisan voting was slightly weaker in 2006, as evidenced by the levels of cross-over voting and split-ticket voting in each election. Finally, a multivariate analysis based on these data shows significant changes in the social and ideological composition of party identification, providing evidence of how partisan realignment among segments of the Mexican electorate is taking place. From 2000 to 2006, formerly strong PRI identifiers, such as women and rural voters, adopted identification with either the PAN or the PRD. Our analysis also documents transformations such as leftist PAN voters changing to the PRD and highly educated voters becoming increasingly independent

Abstract:

Mexico's gradual democratization had a critical point in 2000, when the presidential election brought about political alternation in that country. If democracy requires a compatible value system that helps such a system endure, how democratic are Mexicans today and what implications does this have for democratic consolidation in Mexico? This article examines new survey data to address this old question. Our findings reveal that the prevailing political culture in Mexico expresses comparatively low support for democracy and relatively high support for non-democratic government, on the one hand, and low interpersonal trust, low levels of tolerance, and a strong emphasis on deference, on the other. Education is an important determinant of democratic values, and individual variation is significant on a wide range of attitudes. Changes over time also indicate that Mexicans have reinforced both democratic and non-democratic values in the last few years, which makes it hard to assess whether, overall, Mexico's democratic values are expanding or shrinking

Abstract:

Sparse matrices are characterized by the fact that many of their elements are zero. Examples of these types of matrices are lower and upper triangular matrices, which arise frequently in the solution of linear equation systems. It is convenient to store the nonzero elements, whether on disk or in memory, in a way that saves space. In this paper, sparse matrices called central triangular and rhombus are analyzed, and mechanisms for their efficient storage by means of 1D arrays are proposed
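
As a rough illustration of the 1D packing idea the abstract refers to, here is the standard lower triangular case (the paper's central triangular and rhombus layouts follow the same principle; the function names below are ours, not the paper's):

    # Pack an n x n lower triangular matrix row by row: row i holds i + 1
    # nonzero entries, so element (i, j), j <= i, lands at offset
    # i * (i + 1) / 2 + j of the 1D array.
    def pack_lower_triangular(A):
        n = len(A)
        return [A[i][j] for i in range(n) for j in range(i + 1)]

    def get(packed, i, j):
        if j > i:
            return 0          # above the diagonal: structurally zero
        return packed[i * (i + 1) // 2 + j]

    A = [[1, 0, 0],
         [2, 3, 0],
         [4, 5, 6]]
    packed = pack_lower_triangular(A)     # [1, 2, 3, 4, 5, 6]
    assert get(packed, 2, 1) == 5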

Abstract:

It is known that no additive division rules exist for bankruptcy problems. In this paper, we study a restricted additivity property which we call “feasible set additivity” (FSA). This property requires division rules to be additive when the set of feasible allocation vectors for a sum of problems does not include allocations that were unfeasible when considering each problem separately. In addition, we characterize the random arrival rule as the only division rule satisfying FSA and equal treatment of equals for two and three-agent cases. We also show that this characterization holds when the endowment is small enough in relation to the claims, while the question of whether it holds in general remains open
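
For readers unfamiliar with the random arrival rule, a small self-contained computation of its textbook definition (the claims and endowment below are illustrative):

    from itertools import permutations

    # Random arrival rule: average, over all arrival orders, of the
    # allocation in which each agent receives min(claim, remaining endowment).
    def random_arrival(claims, endowment):
        n = len(claims)
        orders = list(permutations(range(n)))
        totals = [0.0] * n
        for order in orders:
            remaining = endowment
            for i in order:
                award = min(claims[i], remaining)
                totals[i] += award
                remaining -= award
        return [t / len(orders) for t in totals]

    print(random_arrival([100, 200, 300], 200))   # average over all 6 orders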

Abstract:

An information system (I/S) is generally considered flexible when it can be modified easily to align with changing requirements of the business. Flexibility is crucial to survive in a hypercompetitive environment. Considering a comprehensive conceptual framework of I/S flexibility enablers, a research question was addressed: Are all the enablers in the framework always essential for achieving I/S flexibility? Research results on enterprise flexibility and manufacturing flexibility indicate that different types of flexibility should be considered according to the category of strategic uncertainty to be faced. Based on previous work, four forms of I/S flexibility have been identified: operation, adaptation, extension and transformation. An exploratory analysis was conducted on two case studies of flexible I/S that are closely linked to strategic decisions in a fast-changing environment. The adaptation and extension flexibility forms have emerged as different configurations of necessary enablers

Abstract:

Previous researchers, notably in the manufacturing domain, have pointed out that flexibility is a polymorphous concept. This means that it displays different forms depending on flexibility requirements. However, information systems research has paid little attention to polymorphism in IS flexibility enablers. This framework has been used on three case studies to show that flexibility can take different forms and to explain why they differ. Two of the cases are the result of our previous research; the third has been taken from the literature. From this research, IS flexibility appears to depend on the change constraints on the IS and on the type of flexibility pursued

Résumé:

The study of the concept of flexibility, as applied to production systems, has brought out its polymorphous character, that is, the fact that it can take different forms depending on the flexibility objective pursued. For information systems, this particularity of flexibility has only been partially taken up. The objective of this paper is to propose an analysis grid for the flexibility of an IS, relating flexibility needs to possible responses. This grid has been used on three cases to show different forms of flexibility and to explain why they differ. Two of the cases were the subject of earlier research [Jacome, 2009]; the third is taken from the literature. The research has shown that the form of flexibility of an IS depends on the change constraints to which the IS is subject and on the type of flexibility sought

Abstract:

We propose three new initial transient deletion rules to improve the quality of point estimators for steady-state performance measures from the output of a single (long) run of a stochastic simulation. Although the rules are designed for the estimation of a steady-state mean, preliminary experimental results show that these rules may perform well for the estimation of variances and quantiles of a steady-state distribution. One rule can be applied using the (univariate) output of interest only, and the other two rules are based on the use of multivariate batch means to test the null hypothesis that a current observation is a function of a Markov chain that has a stationary distribution. We present experimental results to compare the performance of the new rules against three variants of the Marginal Standard Error Rule. In our experiments, two of our proposed rules identified appropriate deletion points even when the output is not a Markov chain, and one of these rules performed the best from the point of view of half-widths and coverages for the true parameter

Abstract:

We propose three new initial transient deletion rules (denoted H1, H2 and H3) to reduce the bias of the natural point estimator when estimating the steady-state mean of a performance variable from the output of a single (long) run of a simulation. Although the rules are designed for the estimation of a steady-state mean, our experimental results show that these rules may perform well for the estimation of variances and quantiles of a steady-state distribution. One of the proposed rules can be applied under the sole assumption that the output of interest {Y(s) : s > 0} has a stationary distribution, whereas the other two rules require that Y(s) = f(X(s)) for an Rd-valued Markov chain {X(s) : s > 0}. Our proposed rules are based on the use of sample quantiles and multivariate batch means to test the null hypothesis that a current observation Y(s) comes from a stationary distribution for {X(s) : s > 0}. We present experimental results to compare the performance of the new rules against three variants of the Marginal Standard Error Rule and the Glynn-Iglehart deletion rule. When the run length was sufficiently large to provide a reliable confidence interval for the estimated parameter, one of the proposed rules (H3) provided the best reductions in Mean Square Error, suggesting that the identification of an underlying Markov chain X for which Y(s) = f(X(s)) can be useful to determine an appropriate deletion point, and one of our proposed rules (H2) can be useful to detect that a run length is too small to provide a reliable confidence interval
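
The rules H1-H3 themselves are not reproduced here; as a point of reference, the following is a minimal sketch of the Marginal Standard Error Rule, one of the comparators mentioned above, using the standard form of its statistic:

    import numpy as np

    # MSER: choose the deletion point d minimizing the squared standard
    # error of the truncated mean, i.e. S^2(d) / (n - d) over the retained
    # observations y[d:], searching up to half the run by convention.
    def mser_deletion_point(y, max_frac=0.5):
        y = np.asarray(y, dtype=float)
        n = len(y)
        stats = [y[d:].var() / (n - d) for d in range(int(max_frac * n))]
        return int(np.argmin(stats))

    # Example: an AR(1)-style output started far from steady state.
    rng = np.random.default_rng(1)
    y = np.empty(5000)
    y[0] = 10.0
    for t in range(1, len(y)):
        y[t] = 0.9 * y[t - 1] + rng.normal()
    print(mser_deletion_point(y))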

Abstract:

On July 1st, 2018, federal elections for president, senators and deputies took place in Mexico. In most states, elections for state governors and representatives also took place in the same polling booths. The Technical Unit for Information Services (UNICOM) of the National Electoral Institute (INE) of Mexico has the responsibility for planning and implementation of the Preliminary Electoral Results Program (PREP) for federal elections. For the 2018 elections, UNICOM developed forecasting models for the performance of the PREP based on simulation models that were developed using special purpose simulation software and C++ subroutines for fast simulation of queues. These simulation models were a valuable tool for planning, scheduling and allocation of the main resources that participated in the operational process of the PREP. In this article we report the main features, applications and results obtained by using these simulation models

Abstract:

We propose three new initial transient deletion rules to improve the quality of point estimators for steady-state performance measures from the output of a single (long) run of a stochastic simulation. Although the rules are designed for the estimation of a steady-state mean, preliminary experimental results show that these rules may perform well for the estimation of variances and quantiles of a steady-state distribution. Our proposed rules are based on the use of sample quantiles and multivariate batch means to test the null hypothesis that a current observation Yj = f(Xj) is a function of an Rd-valued Markov chain X = {X0, X1, ... } that has a stationary distribution π. We present experimental results to compare the performance of the new rules against three variants of the Marginal Standard Error Rule and the Glynn-Iglehart deletion rule. Two of our proposed rules performed the best (or very close to the best) in some of our experiments, and one rule performed the best (or very close to the best) in all of them

Abstract:

In this chapter we discuss the main techniques for the output analysis of simulation experiments, with an emphasis on the estimation of performance measures for the assessment and mitigation of risk. After a review of the main concepts related to stochastic simulation and risk assessment, we discuss techniques available for estimating expectations, nonlinear functions of expectations, quantiles and M-estimators, considering transient as well as steady-state simulation. We discuss methodologies both for point estimation and for assessing the accuracy of point estimators of performance measures, and the application of output analysis techniques is illustrated through examples related to risk assessment and mitigation

Resumen:

The production rate of a business process is of great importance for its competitiveness, since it is an important indicator of production cost. In particular, achieving a higher percentage of tally sheets published per cut during the Preliminary Electoral Results Program (PREP) of an election is one of the most important objectives of a PREP, since faster publication of election results indicates greater transparency and, consequently, generates greater voter confidence in the official results. For the 2018 federal elections in Mexico, the Technical Unit for Information Services of the National Electoral Institute developed a forecasting model for the production rates of the polling stations published by the PREP, based on models developed in special purpose simulation software. This simulation model was a valuable tool for planning, scheduling and allocating the main resources that participated in the PREP operational process; this article reports the main features, applications and results obtained with the use of this simulation model, which was developed by the authors of this article

Abstract:

The production rate of a business process is of great importance for its competitiveness, since it is an important indicator of the production cost. In particular, the achievement of a higher percentage of scrutiny and computation forms (SCF) published per cut, during the Preliminary Electoral Results Program (PREP) of an election, is one of the most important objectives of a PREP, since greater speed in the publication of election results is indicative of greater transparency and, as a result, generates greater voter confidence in the official results of the elections. For the federal elections of 2018 in Mexico, the Technical Unit for Information Services of the National Electoral Institute developed a forecast model for the production rates of the SCF published by the PREP, based on models that were developed in special purpose simulation software. This simulation model was a valuable tool for planning, scheduling and allocation of the main resources that participated in the PREP operational process, and in this article we report the main characteristics, applications and results obtained with the use of this simulation model that was developed by the authors of this article

Resumen:

Currently proposed methodologies for the analysis of simulation experiments under parameter uncertainty are called two-level (outer and inner) nested simulation methods. On the outer level, (n) observations of the parameters are generated, and on the inner level, (m) observations are generated using the simulation model with the parameter fixed at the value set on the outer level. In this article we focus on the output analysis of two-level stochastic simulation experiments for the particular case where the inner-level observations are independent, showing how the variance of the simulated observations decomposes into parametric variance and stochastic variance. We then derive central limit theorems that allow us to compute asymptotic confidence intervals to assess the accuracy of the simulation-based estimators of both the point forecast and the variance components. The validity of our theoretical results is confirmed through experiments with a forecasting model for sporadic demand, for which analytical expressions for the point forecast and the variance components have been obtained

Abstract:

Proposed methodologies for the analysis of simulation experiments under parameter uncertainty are called two-level (outer and inner) nested simulation methods. On the outer level we generate (n) observations for the parameters and, on the inner level, we generate (m) observations using the simulation model with the parameter fixed at its corresponding value generated on the outer level. In this paper, we focus on the output analysis of two-level stochastic simulation experiments for the case where the observations in the inner level are independent, showing how the variance of the simulated observations can be decomposed into parametric and stochastic components. Furthermore, we derive central limit theorems that allow us to compute asymptotic confidence intervals to assess the accuracy of the simulation-based estimators for the point forecast and the variance components. The validity of our results is confirmed through experiments using a forecasting model for sporadic demand, for which we have obtained analytical expressions for the point forecast and the variance components
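
A minimal sketch of the two-level scheme and of the variance decomposition described above, using an illustrative Gamma-Poisson model rather than the paper's sporadic-demand model (all names and parameter values are assumptions):

    import numpy as np

    # Two-level nested simulation: n outer draws of the parameter from its
    # (posterior) distribution, m inner replications per draw. Since
    # Var(Y) = Var(E[Y|P]) + E[Var(Y|P)], the variance of an inner mean is
    # Var(E[Y|P]) + E[Var(Y|P)]/m, which yields the bias-corrected
    # parametric-variance estimator below (it can be negative for small n, m).
    rng = np.random.default_rng(0)
    n, m = 1000, 50

    inner_means, inner_vars = [], []
    for _ in range(n):
        lam = rng.gamma(shape=9.0, scale=1.0 / 3.0)   # outer level: parameter draw
        y = rng.poisson(lam, size=m)                  # inner level: m observations
        inner_means.append(y.mean())
        inner_vars.append(y.var(ddof=1))

    point_forecast = np.mean(inner_means)
    stochastic_var = np.mean(inner_vars)              # estimates E[Var(Y|P)]
    parametric_var = np.var(inner_means, ddof=1) - stochastic_var / m
    print(point_forecast, parametric_var, stochastic_var)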

Abstract:

Two-level nested simulation methods have recently been applied for the analysis of simulation experiments under parameter uncertainty. On the outer level of the nested run, we generate (n) observations of the parameters, while on the inner level, we fix the parameter at its corresponding value and generate (m) observations using a simulation model. In this paper, we focus on the output analysis of two-level stochastic simulation experiments for the case where the observations of the inner level are independent, showing how the variance of the simulated observations can be decomposed into the sum of parametric and stochastic components. Furthermore, we derive central limit theorems that allow us to compute asymptotic confidence intervals to assess the accuracy of the simulation-based estimators for the point forecast and the variance components. Theoretical results are validated through experiments using a forecasting model for sporadic demand, for which we have obtained analytical expressions for the point forecast and the variance components

Resumen:

This article presents experimental results on the comparison of five methodologies for estimating the parameters of the shifted lognormal distribution. The methods considered are the method of moments, the method of local maximum likelihood, a method proposed by Nagatsuka and Balakrishnan, a modification of the latter, and a new method proposed in this article. The method of local maximum likelihood provided estimators with good properties when the sample size is not very small and the method does not fail. On the other hand, the modified Nagatsuka-Balakrishnan method had the best performance in terms of the mean squared errors of the estimators. The new method proposed in this work showed the best performance in terms of a goodness-of-fit measure

Abstract:

This article presents the results of a comparison of five methodologies for fitting the parameters of a shifted lognormal distribution. The methods considered in this research are: the method of moments, the method of local maximum likelihood, a method proposed by Nagatsuka and Balakrishnan, a modification of this latter method, and a new method proposed in this article. The method of local maximum likelihood provided estimators with good properties when the sample size is not too small and the method did not fail. On the other hand, the modified method of Nagatsuka and Balakrishnan exhibited the best performance from the point of view of the mean squared error. The new method proposed in this work exhibited the best performance from the point of view of a measure of goodness of fit
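
Of the five methods, only the method of moments has a compact textbook form worth sketching here; the following baseline assumes positively skewed data and does not reproduce the other four methods:

    import numpy as np
    from scipy.optimize import brentq

    # Method of moments for a shifted lognormal X = c + exp(mu + sigma*Z):
    # with w = exp(sigma^2), skewness(X) = (w + 2) * sqrt(w - 1). Match the
    # sample skewness to solve for w, then recover sigma, mu and the shift c
    # from the sample variance and mean.
    def fit_shifted_lognormal_mom(x):
        x = np.asarray(x, dtype=float)
        mean, var = x.mean(), x.var(ddof=1)
        skew = ((x - mean) ** 3).mean() / x.std(ddof=1) ** 3
        w = brentq(lambda w: (w + 2) * np.sqrt(w - 1) - skew, 1 + 1e-9, 1e4)
        sigma = np.sqrt(np.log(w))
        mu = 0.5 * np.log(var / (w * (w - 1)))
        return mean - np.exp(mu + sigma ** 2 / 2), mu, sigma   # (c, mu, sigma)

    rng = np.random.default_rng(0)
    sample = 5.0 + rng.lognormal(mean=1.0, sigma=0.5, size=10_000)
    print(fit_shifted_lognormal_mom(sample))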

Abstract:

The main purpose of this article is to report the development and application of a simulation model that was used to estimate the effects on the level of glycosylated hemoglobin (HbA1c) and the cumulative costs of four different regimes of self-monitoring of blood glucose (SMBG) plus pharmacologic treatment experienced by patients with type 2 diabetes (T2D) in a typical Mexican public health institution (MPHI). The simulation model was designed to imitate the individual experience of a patient with T2D at a MPHI; the main drivers for cost computation were HbA1c evolution and its effect on the incidence and treatment (or not) of comorbidities, complications and acute events associated with T2D. Simulation runs using this model show that the expected average cumulative cost for a patient with T2D and no SMBG is $60,443 US dollars over a 10-year span, and the use of SMBG will reduce this expected cost by 6.5%, 7.5% and 7.8% for 1, 2 and 3 times daily SMBG regimes, respectively

Resumen:

This article reports the performance of 15 heuristic methods for finding initial solutions and 4 metaheuristics for solving a frequency assignment problem in which the value of the assigned frequencies depends on weights corresponding to the sites where each frequency is assigned. The different algorithms were tested on a set of problems created with a generator that represents situations similar to the assignment of FM frequencies in Mexico. The experimental results showed that the heuristics that take the site weights into account perform better and, among the 4 metaheuristics tested, the algorithm based on simulated annealing performed the best

Abstract:

We report the performance of 15 heuristics and 4 metaheuristics to solve a frequency assignment problem where the value of an assigned frequency depends on the weights of the corresponding site where the frequency is assigned. The different algorithms were tested on two sets of problems; the first corresponds to the well-known Philadelphia problems and the second to situations similar to the assignment of FM frequencies in Mexico. Experimental results showed that the heuristics that take the site weights into account performed best and, among the four metaheuristics tested, the algorithm based on simulated annealing performed the best
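
A generic simulated annealing sketch for a weighted frequency assignment of this flavor (the cost function, problem data and cooling schedule are illustrative, not the paper's formulation):

    import math
    import random

    # Assign one channel per site; conflicts between neighboring sites on
    # equal or adjacent channels are penalized by the sum of site weights.
    random.seed(0)
    n_sites, n_channels = 10, 6
    weights = [random.uniform(1, 5) for _ in range(n_sites)]
    neighbors = [(i, j) for i in range(n_sites) for j in range(i + 1, n_sites)
                 if random.random() < 0.3]

    def cost(assign):
        return sum(weights[i] + weights[j]
                   for i, j in neighbors if abs(assign[i] - assign[j]) <= 1)

    assign = [random.randrange(n_channels) for _ in range(n_sites)]
    current, temp = cost(assign), 10.0
    for _ in range(20_000):
        i = random.randrange(n_sites)
        old = assign[i]
        assign[i] = random.randrange(n_channels)
        new = cost(assign)
        if new <= current or random.random() < math.exp((current - new) / temp):
            current = new                     # accept the move
        else:
            assign[i] = old                   # reject and restore
        temp *= 0.9995                        # geometric cooling
    print(current, assign)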

Resumen:

This article presents the formulation and solution of the newsvendor problem under a Bayesian approach. This approach makes it possible to incorporate the uncertainty in the parameters of the demand distribution (which is induced by the process of estimating those parameters). An example with an analytical solution is presented, and experiments are conducted with this model to compare the results of the proposed Bayesian method with those obtained by applying the classical method. In addition, the optimal order size is computed using stochastic simulation, a method recommended when model complexity precludes an analytical solution

Abstract:

The formulation and solution of the newsvendor problem using a Bayesian approach are presented. This approach allows us to incorporate the uncertainty in the parameters of the demand distribution (introduced in the process of parameter estimation). An example with an analytical solution is presented, and experiments are performed with the model in order to compare the results of the proposed Bayesian approach with those of the classical method. In addition, the optimal order size is determined using stochastic simulation, a methodology suggested when model complexity does not allow an analytical solution

Abstract:

We discuss the formulation and solution to the newsvendor problem under a Bayesian framework that allows us to incorporate the uncertainty in the parameters of demand modeling (introduced in the process of parameter estimation). We present an example with an analytical solution and use this example to show that a classical approach (without parameter uncertainty) tends to overestimate the expected benefit. Furthermore, we conduct experiments that confirm our results and illustrate the estimation of the optimal order size using stochastic simulation, a method suggested when model complexity does not allow us to obtain an analytical solution
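
A minimal sketch of the simulation-based estimation described above, assuming (purely for illustration) Poisson demand with a conjugate Gamma posterior for its rate:

    import numpy as np

    # Bayesian newsvendor via posterior sampling: draw the demand rate from
    # its Gamma posterior, draw demand from the posterior predictive
    # distribution, and pick the order size maximizing simulated expected
    # profit.
    rng = np.random.default_rng(0)
    price, cost = 10.0, 6.0
    a_post, b_post = 80.0, 4.0                 # Gamma posterior after the data

    lam = rng.gamma(a_post, 1.0 / b_post, size=100_000)
    demand = rng.poisson(lam)                  # posterior predictive demand

    q_grid = np.arange(5, 40)
    profits = [(price * np.minimum(demand, q) - cost * q).mean() for q in q_grid]
    q_star = q_grid[int(np.argmax(profits))]
    print(q_star, max(profits))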

Abstract:

We show that, under reasonable assumptions, the performance of the jackknife, classical and batch means estimators for the estimation of quantiles of the steady-state distribution exhibits similar properties as in the case of the estimation of a nonlinear function of a steady-state mean. We present some experimental results from the simulation of the waiting time in queue for an M/M/1 system to confirm our theoretical results

Resumen:

This article presents a review of the main techniques that have been proposed for the analysis of the random components of a stochastic simulation. Two main topics are discussed: classical modeling using univariate families of distributions and advanced input modeling. The first topic introduces the use of a library, called Simple Analyzer, that can be freely downloaded from the web, while the second discusses input modeling techniques that are not currently included in the software available for the input analysis of simulation experiments. The fitting of multimodal, correlated, and time-dependent input data is also analyzed, and parameter uncertainty is incorporated into the input model

Abstract:

In this article, a review of the main techniques proposed for the analysis of the random components of a stochastic simulation is presented. Two main topics are discussed: classical modeling using univariate distributions and advanced input modeling techniques. The first topic introduces the use of a library, named Simple Analyzer, which can be downloaded for free from the web. The second discusses input analysis techniques that are currently not included in the commercial software available for simulation experiments. The fitting of multimodal, correlated and time-dependent input data, and the incorporation of parameter uncertainty into the input model, are also analyzed

Resumen:

This article presents the formulation and solution of the newsvendor problem under a Bayesian approach, which makes it possible to incorporate the uncertainty in the parameters of the demand distribution (induced by the process of estimating those parameters). An application example with an analytical solution is presented, and experiments are conducted with this model to compare the results under the proposed method with those obtained by applying the classical method. In addition, the estimation of the optimal order size using stochastic simulation is illustrated, a method recommended when model complexity precludes an analytical solution

Abstract:

We discuss the formulation and solution to the newsvendor problem using a Bayesian approach that allows us to incorporate the uncertainty in the parameters of demand modeling (introduced in the process of parameter estimation). We present an example that allows us to obtain an analytical solution in order to compare the results obtained under our Bayesian approach versus the classical method. In addition, we illustrate the application of our proposed methodology using stochastic simulation, a method suggested when model complexity does not allow us to obtain an analytical solution

Resumen:

This article presents an empirical comparison of the methods of independent replications, non-overlapping batches, and spaced batches for estimating the expectation, the variance, and the 90% quantile of the (steady-state) waiting time in queue of an M/M/1 system. The comparison is based on the empirical coverage, bias, and relative error of 90% confidence intervals over 1000 independent replications of the estimations. The experimental results show that the three methods provide similar (and asymptotically valid) coverage for estimating the expectation, the variance, and the quantile, while the method of non-overlapping batches provides smaller bias and relative errors than the other two methods. In addition, the example of a continuous state-space Markov chain is presented, for which valid confidence intervals are obtained even though this chain is not geometrically ergodic

Abstract:

In this article, three methods (multiple replications, non-overlapping batches and spaced batches) for estimating the expectation, the variance and the 90% quantile of the steady-state waiting time in an M/M/1 queue are compared. Our comparison is based on the empirical coverage, bias and relative error of 90% asymptotic confidence intervals from 1000 independent replications of a simulation-based estimation. Experimental results show that all three methods provide similar (and valid) empirical coverage for the estimation of the expectation, variance and quantile, and that the method of non-overlapping batches provides smaller bias and relative errors than the other two methods. In addition, an example of a continuous state-space Markov chain that is ergodic but not geometrically ergodic is presented; for this case, asymptotically valid confidence intervals are also obtained using the methods considered in this paper
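
A minimal sketch of the batch-quantile idea for the steady-state M/M/1 waiting time, using the Lindley recursion (run length, deletion and batch settings are illustrative):

    import numpy as np

    # Lindley recursion for the waiting time in queue of an M/M/1 system,
    # followed by a non-overlapping-batches estimate of the 90% quantile:
    # each batch yields one quantile estimate, and the batch estimates are
    # combined into a point estimate with an approximate half-width.
    rng = np.random.default_rng(0)
    lam, mu = 0.8, 1.0                    # arrival and service rates (rho = 0.8)
    n, burn_in, k = 200_000, 10_000, 40   # run length, deletions, batches

    w = np.zeros(n)
    for i in range(1, n):
        w[i] = max(0.0, w[i - 1] + rng.exponential(1.0 / mu)
                                 - rng.exponential(1.0 / lam))

    batch_q = np.array([np.quantile(b, 0.9)
                        for b in np.array_split(w[burn_in:], k)])
    half = 1.96 * batch_q.std(ddof=1) / np.sqrt(k)
    print(batch_q.mean(), "+/-", half)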

Resumen:

This article presents experimental results on the comparison of five methodologies for estimating the parameters of the shifted lognormal distribution. The methods considered are the method of moments, the method of local maximum likelihood (MML), a method proposed by Nagatsuka and Balakrishnan, a modification of the latter (MMNB), and a new method proposed in this article (NM). The MML provided estimators with good properties when the sample size is not very small and the method does not fail; otherwise, the MMNB had the best performance in terms of the mean squared errors of the estimators, and the NM showed the best performance in terms of a goodness-of-fit measure

Abstract:

We present experimental results from the comparison among five methodologies to fit the parameters of a shifted lognormal distribution. The methods considered in this research are: the method of moments, the method of local maximum likelihood (MML), a method proposed by Nagatsuka and Balakrishnan, a modification of this last method (MMNB), and a new method proposed in this article (NM). The MML provided estimators with good properties when the sample size is not too small and the method did not fail; otherwise, the MMNB exhibited the best performance from the point of view of the mean squared error, and the NM exhibited the best performance from the point of view of a measure of goodness of fit

Abstract:

We present a decision support system (DSS) to compute performance measures of a project. Our approach allows us to incorporate the uncertainty on the activities’ duration as well as four different types of precedence relationships. The DSS generates replicates of the project’s performance, in which we simulate the duration of each activity. From these replicates, the expected completion time, the variance of completion time, the service time for a given service level and the probability that each activity will be in the critical path are estimated along with their corresponding measures of error. A validation of the DSS was performed by computing the empirical coverage, mean and standard deviation of half-widths, mean square error and empirical bias for the main performance metrics of a given project. Finally, we show experimental results where the procedures implemented in the DSS provide good coverage and consistent half-widths even for a small number of replications

Abstract:

We present a decision support system (DSS) to compute performance measures of a project under uncertainty on the activities’ duration as well as four different types of precedence relationships. The DSS generates replicates of the project’s performance, in which we simulate the duration of each activity. From these replicates, the expected completion time, the variance of completion time, the service time for a given service level and the probability that each activity will be in the critical path are estimated along with their corresponding measures of accuracy. A validation of the DSS was performed by computing the empirical coverage, mean and standard deviation of half-widths, mean square error and empirical bias for the main performance metrics of a given project. Our experimental results show that the procedures implemented in the DSS provide good coverage and consistent half-widths
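
A toy Monte Carlo sketch of the replication idea (a three-activity network with assumed triangular durations, not the DSS itself):

    import numpy as np

    # Activities A and B run in parallel and both precede C; each replicate
    # draws all durations, so completion time and critical-path membership
    # can be estimated from the replicates.
    rng = np.random.default_rng(0)
    n_rep = 10_000
    a = rng.triangular(2, 4, 9, n_rep)    # duration of activity A
    b = rng.triangular(3, 5, 8, n_rep)    # duration of activity B
    c = rng.triangular(1, 2, 4, n_rep)    # duration of activity C
    completion = np.maximum(a, b) + c
    print("expected completion:", completion.mean())
    print("P(A on critical path):", (a >= b).mean())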

Abstract:

In this paper, we discuss the formulation and solution of the newsvendor problem under a Bayesian framework that allows the incorporation of uncertainty in the parameters of the demand model (introduced by the estimation process of these parameters). We present an application of this model with an analytical solution and conduct experiments to compare the results under the proposed method and a classical approach. Furthermore, we illustrate the estimation of the optimal order size using stochastic simulation when the complexity of the model does not allow finding a closed-form expression for the solution

Abstract:

When using a batch means methodology for estimation of a nonlinear function of a steady-state mean from the output of simulation experiments, it has been shown that a jackknife estimator may reduce the bias and mean squared error (mse) compared to the classical estimator, whereas the average of the classical estimators from the batches (the batch means estimator) has a worse performance from the point of view of bias and mse. In this paper we show that, under reasonable assumptions, the performance of the jackknife, classical and batch means estimators for the estimation of quantiles of the steady-state distribution exhibits similar properties as in the case of the estimation of a nonlinear function of a steady-state mean. We present some experimental results from the simulation of the waiting time in queue for an M/M/1 system under heavy traffic
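
A minimal sketch of the jackknife-over-batches construction for a nonlinear function of a mean (the choice f(x) = 1/x and the data are purely illustrative):

    import numpy as np

    # Batch-based jackknife for estimating f(r) from simulation output:
    # the classical estimator plugs the grand mean into f, and the
    # jackknife estimator recombines leave-one-batch-out plug-ins to
    # cancel the leading bias term.
    def jackknife_batch_estimator(y, k, f):
        batch_means = np.array([b.mean() for b in np.array_split(y, k)])
        classical = f(batch_means.mean())
        loo = [f(np.delete(batch_means, i).mean()) for i in range(k)]
        jack = k * classical - (k - 1) * np.mean(loo)
        return classical, jack

    rng = np.random.default_rng(0)
    y = rng.exponential(2.0, size=10_000)
    print(jackknife_batch_estimator(y, k=20, f=lambda x: 1.0 / x))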

Abstract:

The main purpose of this paper is to discuss how a Bayesian framework is appropriate to incorporate the uncertainty on the parameters of the model that is used for demand forecasting. We first present a general Bayesian framework that allows us to consider a complex model for forecasting. Using this framework, we specialize (for simplicity) in the continuous-review (Q, R) system to show how the main performance measures that are required for inventory management (service levels and reorder points) can be estimated from the output of simulation experiments. We discuss the use of two estimation methodologies: posterior sampling (PS) and Markov chain Monte Carlo (MCMC). We show that, under suitable regularity conditions, the estimators obtained from PS and MCMC satisfy a corresponding Central Limit Theorem, so that they are consistent, and the accuracy of each estimator can be assessed by computing an asymptotically valid half-width from the output of the simulation experiments. This approach is particularly useful when the forecasting model is complex in the sense that analytical expressions to obtain service levels and/or reorder points are not available
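
A minimal posterior-sampling sketch for one such performance measure, the reorder point for a target cycle service level (the Gamma-Poisson demand model and all parameter values are assumptions, chosen only for illustration):

    import numpy as np

    # Posterior sampling for a (Q, R) system: draw the demand rate from its
    # Gamma posterior, draw lead-time demand from the predictive
    # distribution, and take the 95% quantile of the draws as the reorder
    # point achieving a 95% cycle service level.
    rng = np.random.default_rng(0)
    a_post, b_post = 120.0, 6.0               # Gamma posterior of the demand rate
    lead_time = 3.0                           # lead time, in the same time units

    lam = rng.gamma(a_post, 1.0 / b_post, size=100_000)
    lt_demand = rng.poisson(lam * lead_time)  # posterior predictive draws
    print("reorder point:", np.quantile(lt_demand, 0.95))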

Abstract:

We report the performance of 15 construction heuristics for finding initial solutions, and 4 search algorithms for solving a frequency assignment problem where the value of an assigned frequency is determined by the site where it is assigned. The algorithms were tested on 3 sets of problems: the first corresponds to the well-known Philadelphia problems, and the last two correspond to situations frequently encountered when FM frequencies are assigned in Mexico. Our experimental results show that the construction heuristics that consider the weights of the sites perform well. Among the 4 search algorithms tested, the one based on cross entropy performed better than the others on small problems, whereas on large problems the algorithm based on simulated annealing performed the best

Abstract:

The Instituto para el Depósito de Valores (INDEVAL) is the Central Securities Depository of Mexico. It is the only Mexican institution authorized to perform, in an integrated manner, the activities of safe-keeping, custody, management, clearing, settlement and transfer of securities. In this article, we report the modeling, simulation and analysis of a new Securities Settlement System (SSS) implemented by INDEVAL, as part of a project for the implementation of a safer and more efficient operating system. The main objective of this research was to use reduced amounts of cash and securities, within reasonable periods of time, for the settlement of securities of the Mexican market. A linear programming model for the netting and clearing of operations was used. The performance of the new SSS was evaluated by performing experiments using a deterministic simulation model under different operation parameters, such as the number and monetary value of transactions, the time between clearing cycles and also under a new set of rules for pre-settlement operations. The results presented may be used by other Central Securities Depositories to make decisions related to the efficient and safer use of their resources. The implementation of the model took more than three years. Now many transactions that would remain pending if processed individually are settled together, thus reducing liquidity requirements dramatically —by 52% in cash and 26% in securities

Abstract:

The main purpose of this paper is to discuss how a Bayesian framework is appropriate to incorporate the uncertainty on the parameters of the model that is used for demand forecasting. We first present a general Bayesian framework that allows us to consider a complex model for forecasting. Using this framework we specialize (for simplicity) in the continuous-review (Q,R) system to illustrate how the main performance measures that are required for inventory management can be estimated from the output of simulation experiments. We discuss the use of sampling from the posterior distribution (SPD) and show that, under suitable regularity conditions, the estimators obtained from SPD satisfy a corresponding Central Limit Theorem, so that they are consistent, and the accuracy of each estimator can be assessed by computing an asymptotically valid halfwidth from the output of the simulation experiments

Abstract:

INDEVAL, Mexico's central securities depository for all financial securities, implemented a new operating system to increase its operational efficiency and comply with international securities-settlement best practices. INDEVAL used operations research techniques in its core clearing process and in testing the new system's viability. Best practices address high capacity and reliability, real-time settlement, delivery versus payment (DvP), secure data storage and communications (based on the ISO 15022 standard), central bank cash funds, and operational transparency. In this paper, we define the problem and discuss a linear programming model, system simulation, and major business process modeling that we devised for the new system. A novel application of operations research techniques provided several benefits, including (1) analysis and testing of the DvP settlement mechanism, which quickly and efficiently processes transactions and optimally uses available cash and securities balances; (2) a secure, reliable, and automatic clearing and settlement engine that operates continuously and efficiently handles all transactions that INDEVAL receives from its participants; and (3) an intelligent and flexible presettlement function that uses business rules and parameters to control the execution of the clearing and settlement process. The new system settles securities transactions that average over $250 billion daily; its successful implementation has substantially improved Mexico's financial system
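
A toy LP relaxation in the spirit of the netting model described above (two participants, two transactions; the data and formulation are illustrative, not INDEVAL's production model):

    import numpy as np
    from scipy.optimize import linprog

    # Choose fractions x_t of each transaction to settle, maximizing settled
    # value subject to each participant's cash balance covering its net cash
    # outflow. Neither transaction can settle alone here, but netting lets
    # both settle in full.
    values = np.array([100.0, 80.0])           # cash legs of two transactions
    outflow = np.array([[100.0, -80.0],        # net outflow of participant P0
                        [-100.0, 80.0]])       # net outflow of participant P1
    cash = np.array([30.0, 10.0])              # available cash balances

    res = linprog(c=-values,                   # linprog minimizes, so negate
                  A_ub=outflow, b_ub=cash, bounds=[(0, 1), (0, 1)])
    print(res.x, -res.fun)                     # x = [1, 1], settled value 180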

Abstract:

This paper is about the development of a forecasting model that has been used in the Mexican Institute for Adult Education. This institute is in charge of an adult education program that is based on 42 modules, each of which requires books and materials each year. Accurate forecasts are needed to reduce the costs of wastage and shortages. Using data from previous years, the structure of the program, and interviews with officials in the Institute, we have developed a forecasting model based on Bayesian estimation. We further developed an approach that combines simulation with forecasting techniques, which may be appropriate when using a complex forecasting model. The initial results are encouraging, and we believe that the approach can be used in other situations, especially those concerned with service encounters. Our model also helps officials to understand several aspects of the education program

Abstract:

Most techniques for inventory management assume that the parameters of the corresponding model for demand forecasting are known with certainty. However, in practice, parameters are estimated from information that may come from available data and/or expert judgment. In this article, we present a Bayesian framework that is appropriate to incorporate the uncertainty on the parameters of the model that is used for demand forecasting, and we show how the main performance measures that are required for inventory management (service levels and reorder points) can be estimated

Abstract:

We discuss the asymptotic validity of confidence intervals for quantiles of performance variables when simulating a Markov chain. We show that a batch quantile methodology (similar to the batch means method) can be applied to obtain confidence intervals that are asymptotically valid under mild assumptions

Resumen:

The development of a decision support system for computing performance measures of a project is presented, incorporating the uncertainty associated with activity durations and considering four different types of precedence relationships. The system generates replications of the project's performance, in which activity durations are simulated. From these, the expected project duration and the probability that each activity belongs to the critical path are estimated, together with measures of estimation error. The tool was validated with simulated data by computing the empirical coverage, means and standard deviations of the half-widths, mean squared error, and empirical bias of the expected duration. Acceptable coverage and half-widths were obtained even for a small number of replications

Abstract:

The development of a decision support system to compute performance measures of a project, including the uncertainty associated with activity durations as well as four different types of precedence relationships, is presented. The system generates replications of the project performance, and in each replication the activities’ durations are simulated. From these, the expected completion time and the probability that each activity will be part of the critical path are estimated, as well as their corresponding measures of error. A validation of the decision support system was performed by computing the empirical coverage, mean and standard deviation of half-widths, mean square error and empirical bias for the expected completion time of a given project. Good coverage and half-widths were observed even for a small number of replications

Abstract:

In order to postpone production planning based on information obtained close to the time of sale, decision support systems for supply chain management often include demand forecasts based on little historical data and/or subjective information. Particularly, when simulation models for analyzing decisions related to safety inventories, lot sizing or lead times are used, it is convenient to model (daily) demand by considering historical data, as well as information (often subjective) of the near future. This article presents an approach for modeling a random input (e.g., demand) in simulation experiments. Under this approach, the family of distributions proposed for modeling demand should include two types of parameters: the ones that capture information of historical data and the ones that depend on the particular scenario that is to be simulated. The approach is extended to the case where uncertainty on the appropriate family of distributions is present

Abstract:

The theory of standardized time series, initially proposed to estimate a single steady-state mean from the output of a simulation, is extended to the case where more than one steady-state mean is to be estimated simultaneously. Under mild assumptions on the stochastic process representing the output of the simulation, namely a functional central limit theorem, asymptotically valid confidence regions are obtained for a (multivariate) steady-state mean based on multivariate standardized time series. Examples are provided of multivariate standardized time series, including the multivariate versions of the batch means method and Schruben's standardization sum process. Large-sample properties of confidence regions obtained from multivariate standardized time series are discussed. The asymptotic expected volume of confidence regions produced by standardized time series procedures is larger than that obtained from a consistent estimation procedure. We present and discuss experimental results that illustrate our theory

Abstract:

We study the estimation of steady-state performance measures from an Rd-valued stochastic process Y = {Y(t) : t ≥ 0} representing the output of a simulation. In many applications, we may be interested in the estimation of a steady-state performance measure that cannot be expressed as a steady-state mean r, e.g., the variance of the steady-state distribution, the ratio of steady-state means, and steady-state conditional expectations. These examples are particular cases of a more general problem: the estimation of a (nonlinear) function f(r) of r. We propose a batch-means-based methodology that allows us to use jackknifing to reduce the bias of the point estimator. Asymptotically valid confidence intervals for f(r) are obtained by combining three different point estimators (classical, batch means, and jackknife) with two different variability estimators (classical and jackknife). The performances of the point estimators are discussed by considering asymptotic expansions for their biases and mean squared errors. Our results show that, if the run length is large enough, the jackknife point estimator provides the smallest bias, with no significant increase in the mean squared error

Abstract:

Information management in a hospital setting requires significant collaboration, mobility, and data integration. Patient care, a task often complicated by time-critical urgency, can involve many devices and a variety of staff. So far, no system has addressed these unique requirements. The authors designed a context-aware mobile system that empowers mobile devices to recognize the context in which hospital workers perform their tasks, accounts for contextual elements, and lets users send messages and access hospital services as needed

Abstract:

Over the course of a decade, Mexico transitioned from a peak of 1.8% of GDP given as fuel subsidies in 2008 to generating positive fuel tax revenues equivalent to 1.6% of its GDP in 2018. This paper analyzes Mexico's carbon pricing experience: the mechanisms that made fossil fuel subsidies such a large burden on public finances, the strategies followed in its five-year phase-out, and the institutional changes that enabled crossing into positive carbon taxation, both explicit and implicit. We analyze the effect of three carbon pricing instruments: 1) the subsidy phase-out, 2) the explicit carbon taxation, and 3) the implicit carbon pricing in excise fuel taxation. We present scenarios to assess the contribution of each policy to Mexico's voluntary commitments under the Paris Agreement. The subsidy phase-out and carbon taxes phase-in significantly contributed to Mexico's carbon emissions reductions. Importantly, we show that excise taxes applied to fossil fuels accrued the largest emissions reductions across all carbon pricing mechanisms due to their magnitude. We present evidence of decoupling between fuel (gasoline and diesel) consumption and economic growth. Our findings support the emerging view that carbon pricing through fiscal policy, in Mexico and elsewhere, shouldn't be restricted to explicit carbon pricing in the form of ETS or carbon taxes. Instead, it should be understood and calculated as the sum of excise taxes net of subsidies, carbon taxes and other forms of carbon pricing, subtracting any fiscal crediting or stimuli present

Resumen:

Antecedentes: las características clínicas de un paciente con sospecha de inmunodeficiencia primaria orientan el diagnóstico diferencial por medio del reconocimiento de patrones. Las inmunodeficiencias primarias son un grupo heterogéneo de más de 250 enfermedades congénitas con mayor susceptibilidad a padecer infecciones, autoinflamación, autoinmunidad, alergia y cáncer. El análisis discriminante lineal es un método multivariante de clasificación supervisada para agrupar a los sujetos a partir de encontrar combinaciones lineales de un número de variables

Abstract:

Background: The features in a clinical history from a patient with suspected primary immunodeficiency (PID) direct the differential diagnosis through pattern recognition. PIDs are a heterogeneous group of more than 250 congenital diseases with increased susceptibility to infection, inflammation, autoimmunity, allergy and malignancy. Linear discriminant analysis (LDA) is a multivariate supervised classification method to sort objects of study into groups by finding linear combinations of a number of variables
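
A minimal LDA illustration on generic data (the iris dataset stands in for the clinical variables, which are not reproduced here):

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Fit a linear discriminant classifier and report training accuracy;
    # in the clinical setting the rows would be patients and the columns
    # features extracted from the clinical history.
    X, y = load_iris(return_X_y=True)
    clf = LinearDiscriminantAnalysis().fit(X, y)
    print(clf.score(X, y))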

Abstract:

Purpose: The purpose of this study is to conduct a survey of Mexican millennials to measure the extent of negative bias and perceived advertising value they experienced toward the ads they encountered while performing a search for local products and services from their smartphones. Design/methodology/approach: Using a paper survey with a scenario question, responses were collected from 1,215 millennial smartphone owners about the strategies they used for scanning mobile search organic and sponsored results and quickly reaching the information they needed when performing a mobile search. The 315 participants who reported clicking on ads were further surveyed on their perceptions of ad informativeness, entertainment, irritation and credibility. These constructs were used as the predictors of advertising value in a structural equations model which was estimated with partial least squares. Findings: A substantial bias against sponsored results was found, with two-thirds of respondents skipping the ads when performing a mobile search from their smartphones. However, 28.2 per cent reported clicking on the most relevant result without regard to it being organic or sponsored, and an additional 5.6 per cent reported clicking on an ad as their first strategy. In the structural model, all four hypothesized antecedents of advertising value were significant, and some gender differences were detected. Practical implications: With the increasing penetration of smartphones, and rapid growth of mobile search, these results are particularly relevant for local merchants, who can use mobile search ads to leverage their location and communicate with searching consumers at the precise moment when they are most receptive to timely and relevant advertising. Originality/value: This study is the first to measure the extent of consumer bias against sponsored results in a mobile search, and the first empirical estimation of the advertising value of mobile-sponsored results

Abstract:

Introduction. As it approaches the two decade milestone, the concept of community of practice faces what can be described as a midlife crisis. It has achieved wide diffusion, but users have adapted it to suit their needs, leading to a proliferation of diverging interpretations. Recent critiques lament that the concept is losing its coherence and analytical power. Method. This review uses Benders and van Veen's model of a management fashion to account for the popularity of the concept of communities of practice in the business and organization studies literature and for the current crisis. Results. The literature displays considerable confusion between communities of practice and other social structures concerned with knowledge and learning, although recent typologies are helping to clarify concepts. Researchers have accepted the concept as an enduring element in the knowledge-based view of the firm, but practitioners have mostly used it in fashionable management discourse, specifically as a knowledge management tool, resulting in numerous publications based on pragmatic interpretations of the concept. By now, the fashion is fading in the practitioner literature, but the researcher community displays renewed interest in the form of several in-depth critiques and a resurgence of theory-grounded studies. Conclusions. The review predicts that the concept will successfully mature out of its current crisis through a new period, already started, of theory development grounded in rigorous studies conducted in organizations

Abstract:

Introduction. This research set out to determine whether communities of practice can be entirely Internet-based by formally applying Wenger's theoretical framework to Internet collectives. Method. A model of a virtual community of practice was developed which included the constructs Wenger identified in co-located communities of practice: mutual engagement, shared repertoire, joint enterprise, community and learning or identity-acquisition. The model included additional empirical attributes associated with higher community-of-practice potential: professional topic, high interaction-volume, non-conflictual focused discussions and core-periphery structure. A systematic search of the Usenet discussion network detected eleven news groups displaying these attributes and they were formally tested for the presence of the Wenger constructs. Analysis. A quantitative survey of news group participants and a qualitative content analysis of core-member discussions were applied to select news groups to detect the Wenger constructs, conservatively assessed as present only when both methods concurred. Results. Four online collectives, evenly divided between computer and non-computer topics, were assessed as Usenet-based communities of practice because they exhibited the complete set of Wenger constructs. Conclusions. This provides evidence that extra-organizational communities of practice can emerge spontaneously in the social areas of the Internet, just as they emerge in organizational settings and that true communities of practice are not inherently limited to face-to-face interaction.

Abstract:

The article discusses internet-based communities of practice (CoP), focusing on Usenet newsgroups and analyzing them in terms of sociological constructs such as essential and exemplary traits. The results of a survey of participants in such groups are presented and evidence is presented that some of these groups constitute exemplary examples of extra-organizational CoP. This is said to indicate that CoP are social structures which occur naturally, providing participants with a source of knowledge and professional identity

Abstract:

"Aesthetic experience" corresponds to the inner state of a person exposed to the form and content of artistic objects. Quantifying and interpreting the aesthetic experience of people in various contexts contribute towards a) creating context, and b) better understanding people's affective reactions to aesthetic stimuli. Focusing on different types of artistic content, such as movie, music, literature, urban art, ancient artwork, and modern interactive technology, the 4th international workshop on Multimodal Affect and Aesthetic Experience (MAAE) aims to enhance interdisciplinary collaboration among researchers from affective computing, aesthetics, human-robot/computer interaction, digital archaeology and art, culture, ethics, and addictive games

Abstract:

The term "aesthetic experience" corresponds to inner states of individuals exposed to art. Investigating form, content, and aesthetic values of artistic objects, indoor and outdoor spaces, urban areas, and modern interactive technology is essential to improve social behaviour, quality of life, and health of humans in the long term. Quantifying and interpreting the aesthetic experience of art receivers in different contexts can contribute towards (a) creating art and (b) better understanding humans' affective reactions to aesthetic stimuli. Focusing on different types of artistic content, such as movies, music, urban art, ancient artwork, and modern interactive technology, the goal of the Second International Workshop on Multimodal Affect and Aesthetic Experience is to enhance the interdisciplinary collaboration among researchers from the following domains: affective computing, aesthetics, human-robot interaction, and digital archaeology and art

Abstract:

We present a novel Bayesian approach to random effects meta-analysis of binary data with excessive zeros in two-arm trials. We discuss the development of a likelihood accounting for excessive zeros, the prior, and the posterior distributions of the parameters of interest. A Dirichlet process prior is used to account for the heterogeneity among studies. A zero-inflated binomial model with excess-zero parameters is used to account for excessive zeros in the treatment and control arms. We then define a modified unconditional odds ratio accounting for excessive zeros in the two arms. The Bayesian inference is carried out using Markov chain Monte Carlo (MCMC) sampling techniques. We illustrate the approach using data available in the published literature on myocardial infarction and death from cardiovascular causes. The Bayesian approaches presented here use all the data, including the studies with zero events, capture heterogeneity among study effects, and produce interpretable estimates of overall and study-level odds ratios, in contrast with the commonly used frequentist approaches. Results from the data analysis and the model selection also indicate that the proposed Bayesian method, while accounting for zero events, adjusts for excessive zeros and provides a better fit to the data, resulting in estimates of the overall odds ratio and study-level odds ratios that are based on the totality of the information
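
For reference, a generic zero-inflated binomial likelihood of the kind described can be sketched as follows; the notation (ω for the excess-zero probability, p for the event probability, n for the arm size) is illustrative and not necessarily the authors':

    P(Y = 0 \mid \omega, p) = \omega + (1 - \omega)(1 - p)^n,
    P(Y = y \mid \omega, p) = (1 - \omega) \binom{n}{y} p^{y} (1 - p)^{n - y}, \qquad y = 1, \dots, n.

Setting ω = 0 recovers the ordinary binomial model, which is why arms with zero events still contribute information instead of being discarded.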

Abstract:

In the dynamic environment of today's manufacturing industry, companies need to be changeable, i.e. capable of adapting to changes quickly and cost-effectively. In this context, the diagnosability characteristic, allowing fast and economic ramp-ups of new manufacturing settings, becomes particularly relevant. Depending on their diagnosability requirements, companies can exploit different technologies and applications. In this study, five diagnosability requirements have been identified. Through a literature review, the five requirements have been further investigated; thus, the extent to which these five requirements can be fulfilled, and their enabling technologies and applications, have been specified. Finally, a case study has been conducted to show how diagnosability requirements are fulfilled differently in three manufacturing contexts

Abstract:

The educational resources of an online course in logical-analytical thinking, and the results of the Index of Learning Styles Questionnaire from 17,041 students, were analyzed to determine whether the majority of learning styles are favored according to the Felder-Silverman Model. We found that while the majority of students are supported by the course, there exists a non-supported minority with a predominance of verbal and global styles. A set of strategies is proposed to support these students

Abstract:

In this work, we started from an existing Intelligent Tutoring System (ITS) called Sistema de Apoyo Generalizado para la Enseñanza Individualizada (SAGE), able to supervise students' learning according to the first four levels of Bloom's taxonomy (knowledge, comprehension, application and analysis), and we propose to extend its reach by adding functions based on Marzano's taxonomy, which preserves the basic aspects of Bloom's taxonomy and adds metacognition and emotional response. SAGE starts with a diagnostic test about the subject of study, to which a cognitive diagnostic test will be added; the system then assigns the lesson that must be completed, according to the previous knowledge of the student. While students navigate through the lesson, the system will monitor their emotional response and motivation using a camera, facial recognition and machine learning techniques. To decide the next lesson, a personalized advance route will be traced according to a student model, and advancement between lessons will be governed by shared control between the student and the system. Using this methodology, teachers will be able to focus on activity planning and the evaluation of assignments related to knowledge utilization, such as essays or application projects, and the system will be in charge of the tasks of the remaining levels of Marzano's taxonomy (retrieval, comprehension, analysis, metacognition and self-system)

Abstract:

The Mexican educational system faces diverse challenges related to the quality and coverage of education. The development of Intelligent Tutoring Systems (ITS) may help to address some of them by helping teachers to customize their classes according to the performance of the students in online courses. In this work, we propose the adaptation of a functional ITS based on Bloom's taxonomy, called Sistema de Apoyo Generalizado para la Enseñanza Individualizada (SAGE), to measure students' metacognition and their emotional response based on Marzano's taxonomy. The students and the system will share control over the advance in the course, so that students can improve their metacognitive skills. The system will not allow students to access subjects not yet mastered. The interaction between the system and the student will be implemented through Natural Language Processing techniques, thus avoiding the use of sensors to evaluate the students' responses. The teacher will evaluate the students' knowledge utilization, which corresponds to the last cognitive level in Marzano's taxonomy

Abstract:

This article maps current constitutional adjudication systems in 17 Latin American democracies. Using recent theoretical literature, the authors classify systems by type (concrete or abstract), timing (a priori or a posteriori), and jurisdiction (centralized or decentralized). This approach captures the richness and diversity of constitutional adjudication in Latin America, where most countries concurrently have two or more mechanisms. Four models of constitutional adjudication are currently in use. In the past, weak democratic institutions and the prevalence of inter partes, as opposed to erga omnes, effects of judicial decisions, prevented the development of constitutional adjudication. Today, democratic consolidation has strengthened the judiciary and fostered constitutional adjudication. After discussing the models, the authors highlight the role of the judiciary in the constitutional adjudication bodies, the broad range of options existing to initiate this adjudication process, and the prevalence of amparo (habeas corpus) provisions

Resumen:

Hay una larga tradición de filósofos hermenéuticos que han investigado la filosofía de Leibniz, y también varios estudiosos leibnizianos que se han ocupado del pensamiento heideggeriano. En este texto se plantea la tesis de que existe una cierta convergencia entre la concepción hermenéutica de la filosofía (M. Heidegger) y algunas ideas de Leibniz. El resultado es que hay al menos tres ideas que, en diferentes formulaciones, comparten ambos filósofos: 1) no hay conocimiento puro, el conocimiento es siempre circunstancial. Esto es expresado por Heidegger en la noción de "situación hermenéutica" y por Leibniz con el concepto de "notio completa". 2) Heidegger hace un "giro hacia la facticidad" en torno a la noción de "situación hermenéutica". Leibniz también realiza un cierto "giro hacia la facticidad" concentrado en la noción de "corporeidad". Este elemento no se encuentra en el pensamiento heideggeriano. 3) La comprensión es también autocomprensión. Para Leibniz, el desarrollo es un proceso de despliegue y autoconocimiento del sujeto monádico. Para Heidegger la comprensión del mundo es también un proceso de autoconocimiento del Dasein. De este modo, Leibniz esboza el "espíritu" de la hermenéutica en el sentido de que el perspectivismo es una forma de interpretación

Abstract:

There is a long tradition of hermeneutic philosophers who have investigated Leibniz's philosophy, and also several Leibnizian scholars who have dealt with Heideggerian thought. In this text we propose the thesis that there is a certain convergence between the hermeneutic conception of philosophy (M. Heidegger) and some of Leibniz's ideas. The result is that there are at least three ideas that, in different formulations, are shared by both philosophers: 1) there is no pure knowledge, knowledge is always circumstantial. This is expressed by Heidegger in the notion of "hermeneutic situation" and by Leibniz with the concept of "notio completa". 2) Heidegger makes a "turn towards facticity" around the notion of "hermeneutic situation". Leibniz also makes a certain "turn towards facticity" concentrated on the notion of "corporeality". This element is not found in Heideggerian thought. 3) Understanding is also self-understanding. For Leibniz development is an unfolding and self-knowledge process of the monadic subject. For Heidegger understanding the world is also a process of self-knowledge of Dasein. Thus Leibniz outlines the "spirit" of hermeneutics in the sense that perspectivism is a form of interpretation

Abstract:

Control of Membrane Attached Biofilm (MAB) formation and accumulation is a key aspect in the operation of Extractive Membrane Bioreactors (EMBs). In this work, MAB control was attempted in a novel EMB configuration which presents two innovative aspects: the presence of a biphasic biomedium and a contained liquid membrane module. This reactor, where the benefits of high shear forces and the use of a biphasic biomedium are effectively combined, was operated without any biofilm formation or reduction in organic substrate flux over time

Abstract:

In this article, I reflect upon the influence that John Hart Ely's book Democracy and Distrust can have on our discussions about the Mexican Supreme Court. For that purpose, I distinguish the Mexican Court's role under authoritarian rule and in democracy. In the former era, Ely's theory would not have even been considered. In the latter, the Court has adopted a substantive conception of democracy and judicial review based on authors such as Ronald Dworkin, Robert Alexy, and Luigi Ferrajoli. Moreover, in the last twelve years, the Court has developed a human rights agenda. During this time, it has interpreted human rights and economic liberties, issuing rulings that are similar to those that preoccupied Ely forty years ago (Lochner v. New York or Roe v. Wade). In this regard, Ely's theory is useful to think about the current situation of constitutional justice in Mexico, even if we do not embrace his proposal that limits the role of the Court to procedural issues. In a country such as Mexico, where the reality of our democracy is much feebler and deep economic and social inequalities predominate, the Court is compelled to play a transformative role

Resumen:

En 2018 los mexicanos y mexicanas elegimos el cambio político más profundo desde la transición a la democracia, dejando atrás lo que en otro trabajo he denominado constitucionalismo autoritario. La alternancia ha significado un cambio de régimen en el que se anuncia una transformación social. La transformación puede tomar distintos rumbos y debe ser acompañada por ideas que la inspiren. En esta tesitura, el constitucionalismo popular puede ser una teoría útil para que la transformación sea en una dirección democrática, participativa e igualitaria, pues incentiva la participación política y la igualdad democrática. Es momento de dejar atrás las teorías elitistas del derecho constitucional y las concepciones minimalistas de la democracia

Abstract:

In 2018 Mexicans chose the most profound political change since the transition to democracy, leaving behind what in another work I have called authoritarian constitutionalism. The alternation has meant a change of regime in which a social transformation is announced. The transformation can take different paths and must be accompanied by ideas that inspire it. In this frame of mind, popular constitutionalism can be a useful theory in order for the transformation to take a democratic, participative and egalitarian direction, since it fosters political participation and democratic equality. It is time to forego the elitist theories of constitutional law and the minimalist understandings of democracy

Abstract:

In the study of life tables the random variable of interest is usually assumed discrete, since mortality rates are studied for integer ages. In dynamic life tables a time domain is included to account for the evolution of the hazard rates in time. In this article we follow a survival analysis approach and use a nonparametric description of the hazard rates. We construct a discrete time stochastic process that reflects dependence across age as well as in time. This process is used as a Bayesian nonparametric prior distribution for the hazard rates in the study of evolutionary life tables. Prior properties of the process are studied and posterior distributions are derived. We present a simulation study, with the inclusion of right-censored observations, as well as a real data analysis to show the performance of our model

Abstract:

We propose two novel ways of introducing dependence among Poisson counts through the use of latent variables in a three-level hierarchical model. Marginal distributions of the random variables of interest are Poisson, with strict stationarity as a special case. Order-p dependence is described in detail for a temporal sequence of random variables. A full Bayesian inference of the models is described, and the performance of the models is illustrated with a numerical analysis of maternal mortality in Mexico. Extensions to seasonal, periodic, spatial or spatio-temporal dependencies, as well as coping with overdispersion, are also discussed
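
A standard latent-variable route to order-1 dependence with invariant Poisson marginals, given here only as a sketch that need not coincide with the paper's three-level hierarchy, is binomial thinning plus an independent innovation:

    X_1 \sim \mathrm{Po}(\lambda), \qquad
    Y_t \mid X_t \sim \mathrm{Bin}(X_t, \varphi), \qquad
    X_{t+1} = Y_t + \epsilon_t, \quad \epsilon_t \sim \mathrm{Po}(\lambda(1 - \varphi)).

Each X_t is then marginally Po(λ) and corr(X_t, X_{t+1}) = φ, with φ = 0 giving independence; letting X_{t+1} collect several thinned lags extends the idea to order p.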

Abstract:

We describe a procedure to introduce general dependence structures on a set of random variables. These include order-q moving average-type structures, as well as seasonal, periodic, spatial and spatio-temporal dependences. The invariant marginal distribution can be in any family that is conjugate to an exponential family with quadratic variance function. Dependence is induced via a set of suitable latent variables whose conditional distribution mirrors the sampling distribution in a Bayesian conjugate analysis of such exponential families. We obtain strict stationarity as a special case

Abstract:

We describe a procedure to introduce general dependence structures on a set of Dirichlet processes. Dependence can be in one direction, to define a time series, or in two directions, to define spatial dependencies; more directions can also be considered. Dependence is induced via a set of latent processes, and we exploit the conjugacy property between the Dirichlet and the multinomial processes to ensure that the marginal law for each element of the set is a Dirichlet process. Dependence is characterized through the correlation between any two elements. Posterior distributions are obtained when we use the set of Dirichlet processes as prior distributions in a Bayesian nonparametric context. Posterior predictive distributions induce partially exchangeable sequences defined by generalized Polya urns. A numerical example is also included as illustration

Abstract:

We propose a stochastic model for claims reserving that captures dependence along development years within a single triangle. This dependence is based on a gamma process with a moving average form of a given order, which is achieved through the use of Poisson latent variables. We carry out Bayesian inference on model parameters and borrow strength across several triangles, coming from different lines of business or companies, through the use of hierarchical priors. We carry out a simulation study as well as a real data analysis. Results show that reserve estimates, for the real data set studied, are more accurate with our gamma dependence model as compared to the benchmark over-dispersed Poisson model that assumes independence

Abstract:

One way of defining probability distributions for circular variables (directions in two dimensions) is to radially project probability distributions, originally defined on R2, to the unit circle. Projected distributions have proved to be useful in the study of circular and directional data. Although any bivariate distribution can be used to produce a projected circular model, these distributions are typically parametric. In this article, we consider a bivariate Pólya tree on R2 and project it to the unit circle to define a new Bayesian nonparametric model for circular data. We study the properties of the proposed model, obtain its posterior characterization and show its performance with simulated and real datasets. Supplemental materials for this article are available online
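
The radial projection itself is elementary; below is a minimal sketch in which a bivariate normal stands in for the random bivariate distribution (the Pólya tree machinery does not fit in a few lines), with illustrative parameters throughout:

    import numpy as np

    rng = np.random.default_rng(0)
    # Draw from some distribution on R^2 (here a correlated normal).
    mean = np.array([1.0, 0.5])
    cov = np.array([[1.0, 0.3], [0.3, 1.0]])
    xy = rng.multivariate_normal(mean, cov, size=5000)
    # Radial projection onto the unit circle: keep only the angle.
    theta = np.arctan2(xy[:, 1], xy[:, 0]) % (2 * np.pi)

The sample theta then follows the circular law induced by projecting the bivariate model.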

Abstract:

To study the impact of climate variables on morbidity of some diseases in Mexico, we propose a spatiotemporal varying coefficients regression model. For that we introduce a new spatiotemporal-dependent process prior, in a Bayesian context, with identically distributed normal marginal distributions and joint multivariate normal distribution. We study its properties and characterise the dependence induced. Our results show that the effect of climate variables, on the incidence of specific diseases, is not constant across space and time and our proposed model is able to capture and quantify those changes

Resumen:

Los datos de paleoclima incluyen mediciones de la cantidad de dióxido de carbono en la atmosfera, así como el nivel y temperatura de los océanos, entre otras. Los registros recientes de datos de cambio climático se han realizado en tiempos equidistantes, es decir, las distintas variables se han medido al mismo tiempo para que puedan llevarse a cabo estudios de asociación. Sin embargo, no hay registros de datos de hace miles de millones de años. Los científicos han tenido que diseñar formas alternativas de obtener esta información, por lo general a través de mediciones indirectas como las basadas en núcleos de hielo, donde tanto la variable de interés como el tiempo de medición tienen que estimarse. Como resultado de estos procedimientos, los datos de paleoclima son una colección de observaciones que no están distribuidas de manera uniforme. Aquí revisamos un método estadístico bayesiano para producir series equiespaciadas y lo aplicamos a tres bases de datos de paleoclima que van de 300 millones de años atrás a la fecha

Abstract:

Paleoclimatology data include measures of the amount of carbon dioxide in the atmosphere and the level and temperature of the oceans, among others. Recent records of climate change data have been made at equidistant times; the different variables were typically measured at the same time to allow for association studies among them. However, there are no registered records of climate change data for thousands or millions of years ago. Scientists have had to devise alternative ways of measuring these quantities. These methods are usually a result of indirect measurements, such as ice coring, where both the variable of interest and the time have to be estimated. As a result, paleoclimate data are a collection of time series where observations are unequally spaced. Here we review a Bayesian statistical method to produce equally spaced series and apply it to three paleoclimatology datasets that span from 300 million years ago to the present

Abstract:

In this work we introduce a spatio-temporal process with Pareto marginal distributions. Dependence in space and time is introduced through the use of latent variables in a hierarchical fashion. For some specifications the process becomes strictly stationary in space and time. We present the construction of the process and study some of its properties and dependence measures, such as correlation and tail dependence. We follow a Bayesian approach to estimate model parameters and show how to obtain posterior inference via MCMC methods. The performance of the process is illustrated with a pollution dataset of monthly maxima of ozone concentrations over the metropolitan area of Mexico City. Our results show that our model is, in many instances, superior to a couple of alternative models based on the generalized extreme value distribution

Abstract:

In this article, we propose a Bayesian non-parametric model for the analysis of multiple time series. We consider an autoregressive structure of order p for each of the series and borrow strength across the series by considering a common error population that is also evolving in time. The error populations (distributions) are assumed non-parametric, with law based on a series of dependent Polya trees with zero median. This dependence is of order q and is achieved via a dependent beta process that links the branching probabilities of the trees. We study the prior properties and show how to obtain posterior inference. The model is tested under a simulation study and is illustrated with the analysis of the economic activity index of the 32 states of Mexico

Abstract:

We propose a two-step method for the analysis of copy number data. We first define the partitions of genome aberrations and, conditional on the partitions, we introduce a semiparametric Bayesian model for the analysis of multiple samples from patients with different subtypes of a disease. While the biological interest is to identify regions of differential copy numbers across disease subtypes, our model also includes sample-specific random effects that account for copy number alterations between different samples in the same disease subtype. We model the subtype and sample-specific effects using a random effects mixture model. The subtype's main effects are characterized by a mixture distribution whose components are assigned Dirichlet process priors. The performance of the proposed model is examined using simulated data as well as a breast cancer genomic data set

Abstract:

A comparative analysis of time series is not feasible if the observation times are different; not even a simple dispersion diagram is possible. In this article we propose a Gaussian process model to interpolate an unequally spaced time series and produce predictions at equally spaced observation times. The dependence between two observations is assumed to be a function of the time difference between them. The novelty of the proposal lies in parametrizing the correlation function in terms of Weibull and log-logistic survival functions. We further allow the correlation to be positive or negative. Inference on the model is made under a Bayesian approach, and interpolation is done via the posterior predictive conditional distributions given the closest m observed times. Performance of the model is illustrated via a simulation study as well as with real data sets of temperature and CO2 observed over 800,000 years before the present
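
A minimal sketch of the interpolation step, with made-up data and parameters, using all observed points rather than only the closest m, and with the Weibull survival function playing the role of the correlation function:

    import numpy as np

    def weibull_corr(dt, lam=5.0, k=1.5):
        # Correlation as a Weibull survival function of the time gap.
        return np.exp(-(np.abs(dt) / lam) ** k)

    t_obs = np.array([0.0, 1.3, 2.9, 6.4, 7.1])    # unequally spaced times
    y_obs = np.array([0.2, 0.5, 0.1, -0.4, -0.3])  # observed series
    t_new = np.linspace(0.0, 8.0, 17)              # equally spaced grid

    sig2 = 1.0                                     # process variance (assumed)
    K = sig2 * weibull_corr(t_obs[:, None] - t_obs[None, :])
    K += 1e-8 * np.eye(len(t_obs))                 # jitter for stability
    k_star = sig2 * weibull_corr(t_new[:, None] - t_obs[None, :])
    y_pred = k_star @ np.linalg.solve(K, y_obs)    # zero-mean GP conditional mean

In the paper the correlation parameters carry priors and are integrated out in the posterior; the snippet shows only the plug-in conditional mean.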

Abstract:

This chapter presents some discrete and continuous Markov processes that have been shown to be useful in survival analysis and other biostatistics applications. Both discrete and continuous time processes are used to define Bayesian nonparametric prior distributions. The discrete time processes are constructed via latent variables in a hierarchical fashion, whereas the continuous time processes are based on Lévy increasing additive processes. To avoid discreteness of the implied random distributions, these latter processes are further used as mixing measures of the parameters in a particular kernel, which leads to the so-called Lévy-driven processes. We present the use of these models in the context of survival analysis. We include univariate and multivariate settings, regression models and cure rate models

Abstract:

In this work we propose a model-based clustering method for time series. The model uses an almost surely discrete Bayesian nonparametric prior to induce clustering of the series. Specifically we propose a general Poisson-Dirichlet process mixture model, which includes the Dirichlet process mixture model as a particular case. The model accounts for typical features present in a time series like trends, seasonal and temporal components. All or only part of these features can be used for clustering according to the user. Posterior inference is obtained via an easy to implement Markov chain Monte Carlo (MCMC) scheme. The best cluster is chosen according to a heterogeneity measure as well as the model selection criterion LPML (logarithm of the pseudo marginal likelihood). We illustrate our approach with a dataset of time series of share prices in the Mexican stock exchange

Abstract:

A full Bayesian analysis is developed for an extension of the short-term and long-term hazard ratios model previously introduced. This model is specified by two parameters, the short- and long-term hazard ratios, and an unspecified baseline function. Furthermore, the model allows for crossing hazards in two groups and includes the proportional hazards and the proportional odds models as particular cases. The model is extended to include covariates in both the short- and the long-term parameters, and uses a Bayesian nonparametric prior, based on increasing additive process mixtures, to model the baseline function. Posterior distributions are characterized via their full conditionals. Latent variables are introduced wherever needed to simplify computations. The algorithm is tested with a simulation study, and posterior inference is illustrated with a survival study of ovarian cancer patients who had undergone treatment with erythropoietin-stimulating agents

Abstract:

In this paper, we introduce a novel discrete Gamma Markov random field (MRF) prior for modeling spatial relations among regions in geo-referenced health data. Our proposal is incorporated into a zero-inflated (ZI) generalized linear mixed model framework that accounts for excess zeroes not explained by the usual parametric (Poisson or Negative Binomial) assumptions. The ZI framework categorizes subjects into low-risk and high-risk groups. Zeroes arising from the low-risk group contribute structural zeroes, while the high-risk members contribute random zeroes. We aim to identify explanatory covariates that might have a significant effect on (i) the probability of subjects being in the low-risk group, and (ii) the intensity of the high-risk group, after controlling for spatial association and subject-specific heterogeneity. Model fitting and parameter estimation are carried out under a Bayesian paradigm through relevant Markov chain Monte Carlo (MCMC) schemes. Simulation studies and an application to real data on hypertensive disorder of pregnancy confirm that our model provides a superior fit over the widely used conditionally auto-regressive proposition

Abstract:

Using a new type of array technology, the reverse phase protein array (RPPA), we measure time-course protein expression for a set of selected markers that are known to coregulate biological functions in a pathway structure. To accommodate the complex dependent nature of the data, including temporal correlation and pathway dependence for the protein markers, we propose a mixed effects model with temporal and protein-specific components. We develop a sequence of random probability measures (RPM) to account for the dependence in time of the protein expression measurements. Marginally, for each RPM we assume a Dirichlet process model. The dependence is introduced by defining multivariate beta distributions for the unnormalized weights of the stick-breaking representation. We also acknowledge the pathway dependence among proteins via a conditionally autoregressive model. Applying our model to the RPPA data, we reveal a pathway-dependent functional profile for the set of proteins as well as marginal expression profiles over time for individual markers

Abstract:

Polya trees (PT) are random probability measures which can assign probability 1 to the set of continuous distributions for certain specifications of the hyperparameters. This feature distinguishes the PT from the popular Dirichlet process (DP) model which assigns probability 1 to the set of discrete distributions. However, the PT is not nearly as widely used as the DP prior. Probably the main reason is an awkward dependence of posterior inference on the choice of the partitioning subsets in the definition of the PT. We propose a generalization of the PT prior that mitigates this undesirable dependence on the partition structure, by allowing the branching probabilities to be dependent within the same level. The proposed new process is not a PT anymore. However, it is still a tail-free process and many of the prior properties remain the same as those for the PT

Abstract:

Bayesian nonparametric methods have recently gained popularity in the context of density estimation. In particular, the density estimator arising from the mixture of Dirichlet process is now commonly exploited in practice. In this paper we perform a sensitivity analysis for a wide class of Bayesian nonparametric density estimators, including the mixture of Dirichlet process and the recently proposed mixture of normalized inverse Gaussian process. Whereas previous studies focused only on the tuning of prior parameters, our approach consists of perturbing the prior itself by means of a suitable function. In order to carry out the sensitivity analysis we derive representations for posterior quantities and develop an algorithm for drawing samples from mixtures with a perturbed nonparametric component. Our results bring out some clear evidence for Bayesian nonparametric density estimators, and we provide a heuristic explanation for the neutralization of the perturbation in the posterior distribution

Abstract:

In this paper we introduce a Markov gamma random field prior for modelling relative risks in disease mapping data. This prior process allows for a different dependence effect with different neighbours. We describe the properties of the prior process and derive posterior distributions. The model is extended to cope with covariates and a data set of respiratory infections of children in Mexico is used as an illustration

Abstract:

We propose a Bayesian semiparametric model for survival data with a cure fraction. We explicitly consider a finite cure time in the model, which allows us to separate the cured and the uncured populations. We take a mixture prior of a Markov gamma process and a point mass at zero to model the baseline hazard rate function of the entire population. We focus on estimating the cure threshold after which subjects are considered cured. We can incorporate covariates through a structure similar to the proportional hazards model and allow the cure threshold also to depend on the covariates. For illustration, we undertake simulation studies and a full Bayesian analysis of a bone marrow transplant data set

Abstract:

In this paper we introduce a Bayesian semiparametric model for bivariate and multivariate survival data. The marginal densities are well-known nonparametric survival models and the joint density is constructed via a mixture. Our construction also defines a copula, and the properties of this new copula are studied. We also consider the model in the presence of covariates and, in particular, we find a simple generalisation of the widely used frailty model, which is based on a new bivariate gamma distribution

Abstract:

In this paper we show that particular Gibbs sampler Markov processes can be modified to an autoregressive Markov process. The procedure allows the easy derivation of the innovation variables which provide strictly stationary autoregressive processes with fixed marginals. In particular, we provide the innovation variables for beta, gamma and Dirichlet processes
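
For the gamma case the construction can be sketched as follows (parameter values are illustrative): alternating the conditionals y | x ~ Po(c x) and x' | y ~ Ga(a + y, b + c), the two blocks of a Gibbs sampler for a joint gamma-Poisson law, produces a strictly stationary chain with Ga(a, b) marginals in which the Poisson draw acts as the innovation variable:

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, c = 3.0, 2.0, 4.0            # illustrative hyperparameters
    n = 100_000
    x = np.empty(n)
    x[0] = a / b                       # start at the stationary mean
    for t in range(n - 1):
        y = rng.poisson(c * x[t])                   # latent innovation
        x[t + 1] = rng.gamma(a + y, 1.0 / (b + c))  # shape, scale
    print(x.mean(), x.var())           # approx. a/b = 1.5 and a/b^2 = 0.75

Larger c couples consecutive states more tightly, so c acts as the autocorrelation parameter of the resulting autoregressive process.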

Abstract:

In the presence of covariate information, the proportional hazards model is one of the most popular models. In this paper, in a Bayesian nonparametric framework, we use a Markov (Lévy-driven) process to model the baseline hazard rate. Previous Bayesian nonparametric models have been based on neutral to the right processes, which have a number of drawbacks, such as discreteness of the cumulative hazard function. We allow the covariates to be time dependent functions and develop a full posterior analysis via substitution sampling. A detailed illustration is presented

Abstract:

In this paper we present and investigate a new class of nonparametric priors for modelling a cumulative distribution function. We take F(t) = 1 − exp{−Z(t)}, where Z(t) = ∫₀ᵗ x(s) ds is continuous and x(·) is a Markov process. This is in contrast to the widely used class of neutral to the right priors (Doksum, 1974) for which Z(·) is discrete and has independent increments. The Markov process allows the modelling of trends in Z(·), not possible with independent increments. We derive posterior distributions and present a full Bayesian analysis

Abstract:

This paper introduces and studies a new class of nonparametric prior distributions. Random probability distribution functions are constructed via normalization of random measures driven by increasing additive processes. In particular, we present results for the distribution of means under both prior and posterior conditions and, via the use of strategic latent variables, undertake a full Bayesian analysis. Our class of priors includes the well-known and widely used mixture of a Dirichlet process

Abstract:

This paper generalizes the discrete time independent increment beta process of Hjort (1990), for modelling discrete failure times, and also generalizes the independent gamma process for modelling piecewise constant hazard rates (Walker and Mallick, 1997). The generalizations are from independent increment to Markov increment prior processes allowing the modelling of smoothness. We derive posterior distributions and undertake a full Bayesian analysis

Abstract:

In this paper we derive the Kalman filter equations when the system state and the observation error of a state-space model are correlated. We also consider: (i) error processes with nonnegative definite variance-covariance matrices and (ii) disturbance probability distributions that are conditional on some information related to the observation process
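
As a reference point, for a model x_{t+1} = A x_t + w_t, y_t = H x_t + v_t with Q = Var(w_t), R = Var(v_t) and cross-covariance S = Cov(w_t, v_t), one common one-step predictor formulation of the correlated-noise filter (generic textbook notation, not necessarily the paper's) is

    K_t = (A P_t H^\top + S)(H P_t H^\top + R)^{-1},
    \hat{x}_{t+1} = A \hat{x}_t + K_t (y_t - H \hat{x}_t),
    P_{t+1} = A P_t A^\top + Q - K_t (H P_t H^\top + R) K_t^\top,

which collapses to the usual Kalman recursions when S = 0.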

Abstract:

Background: It is well established that infection patterns in nature can be driven by host, vector, and symbiont communities. One of the first stages in understanding how these complex systems influence the incidence of vector-borne diseases is to recognize the major vertebrate (i.e., host) and invertebrate (i.e., vector) species that propagate those microbes. Such identification opens the possibility of directing preventive efforts at these essential species. Methods: The goal of this study, which relies on the compilation of a global database based on published literature, is to identify relevant host species in the global transmission of mosquito-borne flaviviruses, such as West Nile virus, St. Louis virus, Dengue virus, and Zika virus, which pose a concern to animal and public health. Results: The analysis of the resulting database, involving 1174 vertebrate host species and 46 reported vector species, allowed us to establish association networks between these species. Three host species (Mus musculus, Sapajus flavius and Sapajus libidinosus) have much larger centrality values, suggesting that they play a key role in flavivirus community interactions. Conclusion: The methods used and the species detected as relevant in the network provide new knowledge and consistency that could aid health officials in rethinking prevention and control strategies with a focus on viral communities and their interactions. Other infectious diseases that harm animal and human health could benefit from such network techniques
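
A hedged sketch of the kind of association-network analysis described, on a toy host-vector graph; the species pairs below are placeholders rather than entries from the compiled database, and degree centrality is only one of several possible centrality indices:

    import networkx as nx

    # Edges link a vertebrate host to a mosquito vector species.
    G = nx.Graph()
    G.add_edges_from([
        ("Mus musculus", "Aedes aegypti"),
        ("Mus musculus", "Culex quinquefasciatus"),
        ("Sapajus flavius", "Aedes aegypti"),
        ("Homo sapiens", "Aedes aegypti"),
    ])
    centrality = nx.degree_centrality(G)
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    print(ranked[:3])   # most central species in this toy network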

Abstract:

We study a normative model of an internal capital market that a company uses to choose between its two divisions' projects. Each project's value is initially unknown to all, but can be dynamically learned by the corresponding division. Learning can be suspended or resumed at any time and is costly. We characterize an internal capital market that maximizes the company's expected cash flow

Abstract:

We consider a single-item, independent private value auction environment with two bidders: a leader, who knows his valuation, and a follower, who privately chooses how much to learn about his valuation. We show that, under some conditions, an ex-post efficient revenue-maximizing auction, which solicits bids sequentially, partially discloses the leader's bid to the follower, to influence his learning. The disclosure rule that emerges is novel; it may reveal to the follower only a pair of bids to which the leader's actual bid belongs. The identified disclosure rule, relative to the first-best, induces the follower to learn less when the leader's valuation is low and more when the leader's valuation is high

Abstract:

The Internet of Things (IoT) raises the issue of connecting an immense number of diverse devices. This vast diversity presents a challenge for communications, since not all devices can be expected to follow the same rules and standards to communicate back and forth, given the difficulty and inefficiency of developing a unique set of rules and standards for each device. A classification of devices is needed, so that rules and protocols of communication can be established among the different device categories, to deal with the diversity of the things to be interconnected. In this paper, a classification methodology using a clustering algorithm such as k-means is proposed, as well as a way to establish classification rules using a decision tree implemented with the ID3 algorithm
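
A minimal sketch of the two-stage pipeline under assumed device features; note that scikit-learn trees are CART-based, so criterion="entropy" only approximates ID3's information-gain splitting:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.random((60, 3))            # hypothetical device-feature matrix
    names = ["bandwidth", "duty_cycle", "payload_size"]

    # Stage 1: discover device categories with k-means (k fixed a priori).
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: distil the clusters into explicit classification rules.
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                                  random_state=0).fit(X, labels)
    print(export_text(tree, feature_names=names))

The printed threshold rules are the kind of explicit tests that could then assign newly connected devices to a communication category.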

Resumen:

En el presente estudio se identifican factores de la innovación que han tenido un impacto en el desarrollo del sector fotovoltaico. Considerando una muestra de 15 países entre 1992 y 2011, se encontró que la aplicación de los instrumentos de política pública, como: tarifa de retroalimentación, los subsidios directos al capital y los fondos de inversión, promueven la innovación en este sector. Además, la variación de precios de los módulos fotovoltaicos muestra una correlación inversa con el número de patentes internacionales, el efecto mayor se obtiene con un periodo de rezago de dos años. La cantidad de reservas de petróleo y las exportaciones de energía eléctrica con un rezago de dos años también favorecen la innovación en el sector fotovoltaico

Abstract:

This study presents the innovation factors that have had an impact on the development of the photovoltaic sector. Using a sample of 15 countries and data from 1992 to 2011, it was found that the application of public policy instruments such as feed-in tariffs, direct capital subsidies and investment funds has promoted innovation in the photovoltaic sector. Moreover, the variation in photovoltaic module prices shows an inverse correlation with the number of international patents, and the greatest effect is obtained with a time lag of two years. The quantity of oil reserves and the quantity of electricity exports with a lag of two years have also fostered innovation in the photovoltaic sector

Abstract:

This article analyzes the determinants of annual installed capacity of photovoltaic power (PV) at a country level. Our results suggest that in the 15 countries studied, the factors promoting the deployment of PV systems are the net consumption of renewable electricity, the existence of a feed-in tariff and sustainable building requirements, as well as the quantity of scientific publications. Meanwhile, the variables that negatively impact the PV deployment are oil reserves and the carbon dioxide emissions from energy consumption. Based on data from 1992 to 2011, the analysis shows that the deployment of PV requires long-term support for scientific research. One successful policy for PV deployment has been the feed-in tariff. Sustainable building requirements also significantly support PV deployment. The deployment of PV is one step towards a low-carbon energy system but the emergence of any renewable energy technology must cope with the energy sector’s domination by fossil fuels interests

Abstract:

The automated screening of patients at risk of developing diabetic retinopathy represents an opportunity to improve their midterm outcome and lower the public expenditure associated with direct and indirect costs of common sight-threatening complications of diabetes. This study aimed to develop and evaluate the performance of an automated deep learning-based system to classify retinal fundus images as referable and nonreferable diabetic retinopathy cases, from international and Mexican patients. In particular, we aimed to evaluate the performance of the automated retina image analysis (ARIA) system under an independent scheme (ie, only ARIA screening) and 2 assistive schemes (ie, hybrid ARIA plus ophthalmologist screening), using a web-based platform for remote image analysis to determine and compare the sensitivity and specificity of the 3 schemes

Abstract:

The purpose of this paper is to analyze and compare the results of applying classical and Bayesian methods to testing for a unit root in time series with a single endogenous structural break. We utilize a data set of macroeconomic time series for the Mexican economy similar to the Nelson-Plosser one. Under both approaches, we make use of innovational outlier models allowing for an unknown break in the trend function. Classical inference relies on bootstrapped critical values, in order to make inference comparable to the finite-sample Bayesian one. Results from both approaches are discussed and compared

Abstract:

Targeted social policies are the main strategy for poverty alleviation across the developing world. These include targeted cash transfers (CTs), as well as targeted subsidies in health, education, housing, energy, childcare, and others. Due to the scale, diversity, and widespread relevance of targeted social policies like CTs, the algorithmic rules that decide who is eligible to benefit from them---and who is not---are among the most important algorithms operating in the world today. Here we report on a year-long engagement towards improving social targeting systems in a couple of developing countries. We demonstrate that a shift towards the use of AI methods in poverty-based targeting can substantially increase accuracy, extending the coverage of the poor by nearly a million people in two countries, without increasing expenditure. However, we also show that, absent explicit parity constraints, both status quo and AI-based systems induce disparities across population subgroups. Moreover, based on qualitative interviews with local social institutions, we find a lack of consensus on normative standards for prioritization and fairness criteria. Hence, we close by proposing a decision-support platform for distributed governance, which enables a diversity of institutions to customize the use of AI-based insights into their targeting decisions

Abstract:

This study involves an experiment where 73 Chief Audit Executives and deputy Chief Audit Executives determine the amount of adjustment required to correct a misstatement. We manipulate the financial reporting location of the misstatement (recognized vs. disclosed) and the level of audit committee expertise (high vs. low). The results indicate that financial reporting location has significant effects on internal auditors’ decisions to correct misstatements. Specifically, internal auditors are more willing to waive disclosed misstatements relative to recognized misstatements. Contrary to expectations, the results do not indicate that increased audit committee expertise and associated increases in audit committee members’ perceived powers cause internal auditors to be less willing to waive misstatements

Abstract:

The study of the interaction among species is an active area of research in Ecology. In particular, it is of interest to evaluate the overlap of their ecological niches. Temporal activity is one of the niche’s axes most commonly used to explore ecological segregation among animal species, and many contributions focus on the overlap of this variable. Once the information of the temporal activity is obtained in the wild, the data is treated as a random sample. There exist different methods to estimate the overlap. Specifically, in the case of two species, one possibility is to estimate the density of the temporal activity of each species and then evaluate the overlap between these density functions. This leads naturally to the analysis of circular data. Most of the procedures currently in use impose some rather restrictive assumptions on the probabilistic models used to describe the phenomena, and only provide approximate measures of the uncertainty involved in the process. In this article, we propose a Bayesian nonparametric approach which incorporates a well-defined noninformative prior. We take advantage of the data structure to define such a prior in terms of the predictive distribution. To the best of our knowledge, this is a novel approach. Our procedure is compared with a well-known method using simulated data, and applied to the analysis of real camera-trap data concerning two mammalian species from the El Triunfo biosphere reserve (Chiapas, Mexico)
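
For reference, the overlap in question is commonly quantified as Δ = ∫ min(f1, f2) over the circle; a minimal numeric sketch with von Mises densities standing in for the estimated activity densities (parameters are illustrative):

    import numpy as np
    from scipy.stats import vonmises

    grid = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    f1 = vonmises.pdf(grid, kappa=2.0, loc=np.pi / 2)   # species 1 activity
    f2 = vonmises.pdf(grid, kappa=1.5, loc=np.pi)       # species 2 activity
    delta = np.trapz(np.minimum(f1, f2), grid)          # overlap in [0, 1]
    print(round(delta, 3))

Under the Bayesian nonparametric approach the two densities are themselves random, so Δ acquires a full posterior distribution rather than a single point estimate.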

Abstract:

This paper presents a Bayesian analysis of the projected normal distribution, which is a flexible and useful distribution for the analysis of directional data. We obtain samples from the posterior distribution using the Gibbs sampler after the introduction of suitably chosen latent variables. The procedure is illustrated using simulated data as well as a real data set previously analysed in the literature

Abstract:

This article presents a Bayesian analysis of the von Mises-Fisher distribution, which is the most important distribution in the analysis of directional data. We obtain samples from the posterior distribution using a sampling-importance-resampling method. The procedure is illustrated using simulated data as well as real data sets previously analyzed in the literature

Abstract:

In this work, we present a diffusive predator-prey model with a finite interaction scale between species and an external flow. The system is confined to a two-dimensional domain with one coordinate larger than the other, which allows us to use the one-dimensional projection of the diffusion operator, known as the Fick-Jacobs projection, here with an external force. Within this approach, we obtain analytical results for an exponential-shaped channel, showing that patterns can emerge through the diffusion-driven instability mechanism. We show that the range of unstable modes where patterns can appear is modified by the spatial scale of the species interaction and by an effective advection term that includes the external velocity and the shape parameter that characterizes the channel-like region
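
For orientation, the force-free Fick-Jacobs projection reads, in generic notation with w(x) the local channel width,

    \partial_t \rho(x, t) = \partial_x \left[ D\, w(x)\, \partial_x \!\left( \frac{\rho(x, t)}{w(x)} \right) \right],

and for an exponential channel w(x) = w_0 e^{\kappa x} it collapses to \partial_t \rho = D \partial_x(\partial_x \rho - \kappa \rho), ordinary diffusion plus a constant effective drift proportional to the shape parameter κ; the paper's version additionally carries the external flow inside this advection term.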

Abstract:

Most of the recent epidemic outbreaks in the world have had a strong migratory component as a trigger, as has been evident in the recent Covid-19 pandemic. In this work we address the problem of migration of human populations and its effect on pathogen reinfections in the case of Dengue, using a Markov-chain susceptible-infected-susceptible (SIS) metapopulation model over a network. Our model postulates a general contact rate that represents a local measure of several factors: the population size of infected hosts that arrive at a given location as a function of total population size, the current incidence at neighboring locations, and the connectivity of the network where the disease spreads. This parameter can be interpreted as an indicator of outbreak risk at a given location and is tied to the fraction of individuals that move across boundaries (migration). To illustrate our model's capabilities, we estimate from epidemic Dengue data in Mexico the dynamics of migration at a regional scale, incorporating climate variability represented by an index based on precipitation data
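
A deliberately stripped-down discrete-time sketch of an SIS metapopulation over a network; all parameters and the three-patch adjacency are made up, and the paper's contact rate additionally couples arriving infected population, neighboring incidence and a climate index:

    import numpy as np

    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]], dtype=float)   # patch adjacency (assumed)
    beta, gamma, m = 0.3, 0.1, 0.05          # infection, recovery, migration
    I = np.array([0.01, 0.0, 0.0])           # initial infected fractions
    for _ in range(200):
        # Effective exposure mixes resident and commuting infecteds.
        coupling = (1 - m) * I + m * (A @ I) / A.sum(axis=1)
        I = I + beta * (1 - I) * coupling - gamma * I
    print(np.round(I, 3))

Raising the migration fraction m spreads an outbreak seeded in patch 0 to its neighbors faster, which is precisely the migration effect such models are built to quantify.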

Abstract:

In this paper, we explore the interplay between tumor cells and the human immune system, based on a deterministic mathematical model of minimal interactions, by transforming it into a stochastic model using a continuous-time Markov chain, where time is continuous but the state space is discrete. Furthermore, we simulate the stochastic basin of attraction to verify the behavior of the three critical points of interest in the deterministic system. Moreover, the stochastic simulations exemplify the cancer immunoediting theory in its three phases of development: elimination, equilibrium and escape. We extend the minimal model proposed in [DeLisi & Rescigno, 1977] to include a term of immunotherapy by lymphocyte injection, and we simulate two treatment regimes, equilibrium and escape, under several schemes
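
The deterministic-to-stochastic step is typically realized with a Gillespie-type simulation of the continuous-time Markov chain; a minimal sketch with placeholder birth and kill reactions (the rates and reactions below are not the DeLisi-Rescigno ones):

    import numpy as np

    rng = np.random.default_rng(2)
    T, E = 50, 10                      # tumor cells, effector immune cells
    t, t_end = 0.0, 20.0
    birth, kill = 0.4, 0.005           # per-capita rates (assumed)
    while t < t_end and T > 0:
        rates = np.array([birth * T, kill * T * E])  # T -> T+1, T -> T-1
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # time to next event
        if rng.random() < rates[0] / total:
            T += 1                                   # tumor birth
        else:
            T -= 1                                   # immune kill
    print(t, T)

Absorption at T = 0 corresponds to the elimination phase, long coexistence to equilibrium, and unbounded growth of T to escape.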

Abstract:

Uncertainty in electoral politics is relevant if and only if we assume a risk-averse voter. Since the implications of such an assumption shape the incentives of both parties and candidates, a methodological study of the origin and the implications of people's attitudes toward risk is needed. This work aims to set out some of the open questions surrounding this topic, as well as to propose a methodological framework for addressing them

Abstract:

By applying knowledge from different sources, SYNET is capable of aiding in the process of selecting personnel. SYNET combines traditional methods such as psychometrics with behaviorism to provide a useful comprehension of work. The different modules of SYNET enable its user to define the required job and to evaluate the potential candidates that could carry out this job. A key issue within the SYNET framework is to improve the interrelation between the worker and the work atmosphere. Extra care must be taken when forming work groups, considering that worker conduct is mainly influenced by the people closest to them

Abstract:

We study the evolution of the distribution of assets in a discrete time, deterministic growth model with log-utility, a minimum consumption requirement, Cobb-Douglas technology, and agents differing in initial assets. We prove that the coefficient of variation in assets across agents decreases monotonically in a transition to the steady state from below, if (i) the consumption requirement is zero, or (ii) the consumption requirement is not too big and the initial capital stock is large enough. We also show how a positive consumption requirement or a small elasticity of substitution between capital and labor can generate non-monotonic paths for inequality

Abstract:

This paper develops a real business cycle model characterized by idiosyncratic employment shocks and quantitatively explores the behavior of aggregate variables under the assumptions of complete and incomplete insurance markets. The results show that the model with incomplete markets produces standard deviations and correlations of aggregate labor input and labor productivity close to the ones of the US economy for the post-war period

Abstract:

We prove a criterion for invertibility of operators on adequate adaptations, to the boundary of a smooth domain, of the atomic subspaces of L1 originally defined on Rn by Sweezy [13]. As an application, we establish solvability of the Neumann problem for harmonic functions on smooth domains, assuming that the normal derivative belongs to said atomic subspaces of L1

Abstract:

Mexico adopted a floating exchange rate regime in December 1994. The Bank of Mexico's monetary policy gives attention to maintaining "orderly conditions in foreign exchange markets." The Bank of Mexico relies primarily on the control of the overnight interest rate in conducting its monetary policy. The question arises whether this extremely short-term interest rate is the relevant instrument to achieve exchange rate objectives. Theory suggests that the relationship between the exchange rate and the term structure of interest rates can be complicated and counterintuitive when investors are risk averse. In this paper, we pursue an empirical investigation of the effect of the term structure of interest rates on the exchange rate for Mexico. This information could be useful to understand and manage the operation of the Mexican floating exchange rate regime

Resumen:

En este trabajo se presentan los resultados de un proyecto de largo alcance en México cuyo propósito consiste en profundizar en la forma en que los estudiantes universitarios aprenden el álgebra lineal. Para ello se definen como metas del proyecto proporcionar un análisis teórico de las construcciones involucradas en los distintos conceptos de álgebra lineal utilizando la teoría APOE; validar dicho análisis para cada concepto mediante investigación empírica enfocando la atención en los distintos conceptos que la componen y en las relaciones entre ellos y, con base en los resultados obtenidos, hacer sugerencias didácticas que contribuyan a una enseñanza fundamentada en la investigación. En particular se presentan en este estudio los resultados obtenidos para los conceptos de espacio vectorial, transformación lineal, base y sistemas de ecuaciones lineales

Abstract:

This paper presents the results obtained so far in a long-term project developed in Mexico with the purpose of studying in depth students' constructions when they study Linear Algebra at the university level. The goals of the project consist in developing theoretical analyses of the constructions involved in the learning of the different Linear Algebra concepts using APOS theory; validating those analyses by means of empirical research focusing on specific concepts and the relationships between them; and making didactic suggestions that can contribute to the teaching of this subject. In particular, we present in this study the results obtained for the following concepts: vector space, linear transformation, basis and systems of linear equations

Résumé:

On présente dans cet article les résultats d'un projet de long terme développé au Mexique. Le propos du projet consiste en approfondir sur les constructions des connaissances liées à l'Algèbre Linéaire par les étudiants universitaires. Pour accomplir cet objectif, les buts particuliers du projet consistent en développer un analyse théorique des différents concepts de l'Algèbre Linéaire en termes de la théorie APOS; valider l'analyse par moyen de la recherche empirique centrée sur les différents concepts de l'Algèbre Linéaire et ses relations et, utiliser les résultats obtenus pour proposer des suggestions didactiques pour les enseigner. En particulier on présente ici les résultats obtenus pour les concepts d'espace vectoriel, transformation linéaire, base et systèmes linéaires d'équations

Resumo:

Neste trabalho se apresentam os resultados de um projeto de longa duração no México cujo propósito consiste em aprofundar na forma em que os estudantes universitários aprendem a álgebra linear. Para tanto se definem como metas do projeto proporcionar uma análise teórica das construções envolvidas nos distintos conceitos de álgebra linear utilizando a Teoria APOE; validar referida análise para cada conceito mediante pesquisa empírica focando a atenção nos distintos conceitos que a compõe e nas relações entre eles e, com base nos resultados obtidos, fazer sugestões didáticas que contribuam a um ensino fundamentado na pesquisa. Em particular se apresentam neste estudo os resultados obtidos para os conceitos de espaço vetorial, transformação linear, base e sistemas de equações lineares

Abstract:

We obtain the log-likelihood of a nonhomogeneous Branching Diffusion Process under several conditions assuring existence and uniqueness of the diffusion part and nonexplosion of the branching process. Expressions for different Fisher information measures are provided. Using the semimartingale structure of the process and its local characteristics, a Girsanov-type result is applied. Finally, an Ornstein-Uhlenbeck process with finite reproduction mean is studied. Simulation results are discussed, showing consistency and asymptotic normality

Resumen:

Objetivo. Analizar la asociación de la concentración de contaminantes atmosféricos y los indicadores epidemiológicos de Covid-19 en la Zona Metropolitana del Valle de México (ZMVM). Material y métodos. Se diseñó un estudio epidemiológico ecológico. Se utilizaron modelos lineales tipo Poisson para variables de conteo y modelos lineales de efectos aleatorios en variables continuas para cuantificar la asociación entre los contaminantes atmosféricos y los indicadores de Covid-19. Los datos obtenidos fueron del 28 de febrero de 2020 al 30 de junio de 2021. La exposición a contaminantes se estratificó por estaciones climáticas. Resultados. Los contaminantes que tuvieron asociación significativa con indicadores de morbilidad y mortalidad fueron CO, NOx, O3 y PM10. En la estación seca fría el CO y el NOx, tuvieron efecto sobre los casos diarios confirmados y las defunciones diarias. Las PM10 se asociaron con efecto en los indicadores de casos diarios confirmados, incidencia diaria, porcentaje de hospitalarios y la tasa de letalidad. Conclusiones. Los resultados sugieren una asociación entre el comportamiento epidemiológico de Covid-19 y la exposición a CO, NOx, O3 y PM10, en la que se encontró un mayor efecto en la estación seca-fría en la ZMVM

Abstract:

Objective. To analyze the association between the concentration of atmospheric pollutants and the epidemiological indicators of Covid-19 in the Metropolitan Zone of the Valley of Mexico (ZMVM). Materials and methods. An ecological epidemiological study was designed. Poisson-type linear models were used for count variables and random-effects linear models for continuous variables to quantify the association between atmospheric pollutants and Covid-19 indicators. The data covered the period from February 28, 2020 to June 30, 2021. Pollutant exposure was stratified by weather season. Results. The pollutants that had a significant association with indicators of morbidity and mortality were CO, NOx, O3 and PM10. In the cold-dry season, CO and NOx influenced daily confirmed cases and daily deaths. PM10 had an effect on the indicators of daily confirmed cases, daily incidence, percentage of inpatients, mortality rate and fatality rate. Conclusions. The results suggest an association between the epidemiological behavior of Covid-19 and exposure to CO, NOx, O3 and PM10, with a greater effect found in the cold-dry season in the ZMVM
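
A minimal sketch of the kind of Poisson count regression described above, fit on synthetic data; the variable names and the synthetic series are illustrative assumptions, not the study's dataset or full set of confounders.

```python
# Poisson GLM for daily case counts vs. pollutant levels (illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "pm10": rng.normal(45, 12, n),               # ug/m3 (synthetic)
    "co": rng.normal(0.7, 0.2, n),               # ppm   (synthetic)
    "season": rng.choice(["cold_dry", "warm_dry", "rainy"], n),
})
df["daily_cases"] = rng.poisson(np.exp(3.0 + 0.01 * df.pm10 + 0.3 * df.co))

model = smf.glm("daily_cases ~ pm10 + co + C(season)", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())   # exponentiated coefficients read as rate ratios
```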

Abstract:

Throughout this century, and with increasing frequency, the National Electoral Institute (INE) has added the production of quick counts to the tasks that accompany the electoral processes in Mexico. Quick counts were initially conceived to produce estimates of the results of the Presidential elections; at present, these statistical exercises are used to estimate the integration of the federal chamber of deputies as well as the results of the elections of Governors in 31 states of the country and the election of Mexico City Head of Government. Given the role of the INE as an electoral authority, the levels of demand and quality of the quick counts it organizes are extremely high, and to carry them out, the Institute relies on a Committee made up of specialists who are in charge of both the sample design and the inference procedures. In this committee, conventional techniques for analyzing finite samples are combined with more modern procedures that use parametric models and simulation algorithms. For the 2020–2021 electoral process, the committee simultaneously estimated the results of the election of federal deputies and those of fifteen governorships. The concurrency of elections and the possibility that the planned sample may not be received in full have revealed the need to process the available information efficiently and to design measures to evaluate the potential biases that may arise when samples are incomplete. In this work, we present some mechanisms developed to make the computations faster and more efficient when the estimation in quick counts is carried out with the model proposed by Mendoza and Nieto (2016)
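
For context only, and not the Mendoza and Nieto (2016) model itself: quick counts start from combined estimates over a stratified sample of polling stations, along the lines of the generic stratified ratio estimator sketched below with hypothetical data fields.

```python
# Stratified ratio estimate of a party's vote share (generic illustration).
import numpy as np

def stratified_share(strata):
    """strata: list of dicts with keys
         N     -- number of polling stations in the stratum (population size)
         votes -- votes for the party in the sampled stations of the stratum
         total -- total valid votes in the same sampled stations
    Returns the combined ratio estimate of the party's overall share."""
    num = sum(s["N"] * np.mean(s["votes"]) for s in strata)
    den = sum(s["N"] * np.mean(s["total"]) for s in strata)
    return num / den

rng = np.random.default_rng(1)
demo = [{"N": 300,
         "votes": rng.integers(50, 200, 25),
         "total": rng.integers(300, 600, 25)} for _ in range(5)]
print(stratified_share(demo))
```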

Resumen:

El método dialógico constituye el alma de los Estudios Generales en el ITAM. ¿Cómo se desarrolla en la práctica? En este artículo se parte de la distinción entre una instrucción y una verdadera educación universitaria que se enfoca en la formación del pensamiento crítico por medio de la pregunta y el diálogo. Como segundo punto, se analiza el papel de la pregunta en la formación de los alumnos y se muestra cómo constituye el centro del diálogo, así como de la motivación para el aprendizaje y la enseñanza. Finalmente, se explica cómo la pregunta constituye también un elemento central en el cambio de actitudes de los alumnos y de su verdadera transformación moral

Abstract:

The dialogic method constitutes the soul of General Studies at ITAM. How does it work in practice? This paper starts from the distinction between mere instruction and a true university education, which focuses on the formation of critical thinking through questioning and dialogue. It then analyzes the role of questioning in the education of students and shows how it constitutes the center of dialogue, as well as of the motivation for learning and teaching. Finally, it explains how the question also constitutes a central element in changing students' attitudes and in their true moral transformation

Resumen:

El presente artículo plantea la necesidad de un acercamiento histórico a los textos filosóficos tomando como ejemplo el caso de la propuesta ética de David Hume. Se muestra el interés de Hume por insertarse en el diálogo intelectual de su época y su propósito de integrar el método científico en las ciencias morales y cómo la crítica que hace a la razón debe ser comprendida bajo esta luz. Para ello se menciona el ambiente intelectual de la época y las posturas en conflicto en el debate filosófico del siglo XVIII: el escepticismo-relativista, el racionalismo exagerado y el sentimentalismo ingenuo, señalando que, en el fondo, Hume no puede ser excluido totalmente de ninguna de estas posturas pero tampoco encasillado en alguna de ellas

Abstract:

This article raises the need for a historical approach to philosophical texts, taking as an example David Hume's ethical proposal. It shows Hume's interest in participating actively in the intellectual dialogue of his time and his intention to integrate the scientific method into the moral sciences, and argues that his critique of reason must be understood in this light. To do this, it outlines the intellectual atmosphere of the period and the conflicting positions in the philosophical debate of the 18th century: relativistic skepticism, exaggerated rationalism, and naive sentimentalism, noting that, deep down, Hume can neither be excluded completely from any of these positions nor typecast in any of them

Resumen:

En el Tratado de la naturaleza humana de David Hume, la razón y la pasión se encuentran en interacción constante formando la creencia. Se distinguen tres niveles en los eventos morales: sentimiento moral, acción moral y juicio moral, en los que razón y pasión interactúan, aunque con diferentes funciones en cada nivel

Abstract:

In David Hume's A Treatise of Human Nature, reason and passion are in constant interaction forming belief. Moral events are distinguished on three levels: moral sentiment, moral action and moral judgment, in which reason and passion interact, although with different functions at each level

Resumen:

El presente artículo recorre las diferentes etapas de la constitución de la persona que va más allá del egoísmo autoafirmante y se conforma como la inseparable unidad del sujeto metafísico que se expresa en “el Otro-en-el-mismo” y un “Ser-para-el-Otro”. Esto lleva a afirmar que la persona es intrínsecamente relación, exterioridad que se concreta en el lenguaje que trasciende la interioridad del yo. Esta relación tanto en el interior como huella, como en el exterior como rostro invoca a un tercero que la funda y la sostiene. La persona se muestra como una relación asimétrica a la que llamo triádica

Abstract:

This article covers the different stages of the constitution of the person, which goes beyond self-affirming selfishness and takes shape as the inseparable unity of the metaphysical subject expressed in the "Other-in-the-same" and a "Being-for-the-Other". This leads one to affirm that the person is intrinsically relation, an exteriority realized in language, which transcends the interiority of the self. This relationship, present internally as a trace and externally as the face, invokes a third party that founds and sustains it. The person is thus shown to be an asymmetrical relationship, which I call triadic

Resumen:

El Tratado de la naturaleza humana sólo encuentra su plena significación en el medio cultural interpretativo en que fue escrito. El presente artículo pretende señalar la fuerte influencia que el sistema newtoniano ejerció en el joven Hume y cómo el rechazo de las causas ocultas y la búsqueda de una causa común observable son la guía que inspira a Hume para buscar los principios de la ciencia del hombre y niega, por ello, el acceso a priori a verdades universales, estableciendo así la observación científica como el único camino posible para la ciencia del hombre. Hume no niega la validez de la inducción ni la posibilidad de alcanzar conocimientos generales sino que toma la propuesta newtoniana para elaborar la ciencia del hombre

Abstract:

The Treatise of Human Nature only finds its full meaning in the interpretative cultural environment in which it was written. This article aims to point out the strong influence that the Newtonian system exerted on the young Hume, and how the rejection of hidden causes and the search for an observable common cause guided Hume in his search for the principles of the science of man, leading him to deny a priori access to universal truths and thus to establish scientific observation as the only possible path for the science of man. Hume denies neither the validity of induction nor the possibility of reaching general knowledge; rather, he takes up the Newtonian proposal to develop the science of man

Abstract:

This exploratory study proposes a system that uses machine learning and data science to predict which students from e-Learning programs will finish their studies, allowing institutions to allocate scholarship resources more effectively. The system is built and tested with data from the National Autonomous University of Mexico (UNAM) and its Open University and Distance Education System, and shows promising results, meeting the business restriction of a false positive rate under 2%. This approach can be used to improve resource allocation for education in Mexico and worldwide. By leveraging technology to evaluate data and make educated choices, organizations can more effectively identify the students who would gain the most from scholarships and maximize their educational investment
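
One common way to enforce a business restriction like "false positive rate under 2%" is to train any scorer and then pick the decision threshold on a validation set among operating points that meet the constraint. The pipeline, model and data below are illustrative assumptions, not the study's actual system.

```python
# Threshold selection subject to an FPR <= 2% constraint (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.7], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = clf.predict_proba(X_val)[:, 1]

fpr, tpr, thresholds = roc_curve(y_val, scores)
mask = fpr <= 0.02                    # operating points meeting the restriction
best = np.argmax(tpr[mask])           # most sensitive threshold among them
print("threshold:", thresholds[mask][best],
      "TPR:", tpr[mask][best], "FPR:", fpr[mask][best])
```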

Abstract:

The concept of dispositional resistance to change has been introduced in a series of exploratory and confirmatory analyses through which the validity of the Resistance to Change (RTC) Scale has been established (S. Oreg, 2003). However, the vast majority of participants with whom the scale was validated were from the United States. The purpose of the present work was to examine the meaningfulness of the construct and the validity of the scale across nations. Measurement equivalence analyses of data from 17 countries, representing 13 languages and 4 continents, confirmed the cross-national validity of the scale. Equivalent patterns of relationships between personal values and RTC across samples extend the nomological net of the construct and provide further evidence that dispositional resistance to change holds equivalent meanings across nations

Abstract:

This paper presents the conceptual design of ground stations used for monitoring low Earth orbit (LEO) satellites, as well as a proposal for a preliminary proof-of-concept setup that can be implemented before attempting to build a completely functional station. The setup made it possible to locate and monitor signals from known satellites using available, inexpensive, off-the-shelf commercial components. The encouraging results obtained are also described throughout the paper. The setup design presented can be easily replicated as an initial step and then scaled up into an automated terrestrial station with a more complex and precise communications system

Resumen:

Habitar la ciudad tiene que ver con el respeto entre sus miembros. Ese respeto deriva del reconocimiento que uno tiene del otro a partir del sentido de responsabilidad. El sujeto en sí mismo está obligado por el otro. El yo dispuesto al otro responde a su demanda, es libre en cuanto se debe al otro. Por otro lado, Sloterdijk habla de la topósfera como el lugar donde se habita en esferas en las que uno y otro se reconocen. Una ciudad cuyas esferas son armónicas es estética, y es a la vez una ciudad ética. En tiempo de pandemia prevalece el aislamiento, el desorden, la irresponsabilidad, y se vive en un thanatopo. Se hacen sugerencias para una mejor vida en la ciudad en tiempo de pandemia

Abstract:

Inhabiting the city has to do with respect among its members. That respect derives from the recognition of one by the other, arising from the sense of responsibility. The subject is in itself bound by the other. The self disposed to the other responds to the other's demand; it is free insofar as it owes itself to the other. Sloterdijk, for his part, speaks of the toposphere as the place where one lives in spheres in which each recognizes the other. A city whose spheres are harmonious is aesthetic, and it is at the same time an ethical city. In times of pandemic, isolation, disorder and irresponsibility prevail, and one lives in a thanatopo. Suggestions are made for a better life in the city in times of pandemic

Resumen:

El objetivo de esta nota es explicar por qué fracasa el liberalismo en casi todo el mundo. El ascenso del populismo no tiene signo ideológico específico, de modo que el sello populista se refiere a la idea de gobiernos autoritarios, centralizadores, contrarios a la globalización comercial, enemigos de la separación abierta de poderes y, desde luego, intolerantes a la crítica. Hacer ver algunas causas es imperativo si se pretende que se revierta ese ascenso populista

Abstract:

The objective of this note is to explain why liberalism fails almost everywhere. The rise of populism has no specific ideological sign, so that the populist stamp refers to the idea of authoritarian, centralizing governments, opposed to commercial globalization, enemies of the open separation of powers, and certainly intolerant of criticism. To bring out some causes is imperative if this populist rise is to be reversed

Resumen:

El objetivo del trabajo es comprender el pensamiento compacto de Byung-Chul Han. En cada párrafo de Chul Han hay complejidades que demandan por sí mismas exegesis que podrían dar lugar a libros completos. Se exponen y critican las ideas que Byung-Chul Han ha desarrollado en tres obras breves: La sociedad del cansancio, La sociedad de la transparencia y La agonía del Eros

Abstract:

Our goal is to understand Byung-Chul Han's compact thought. Each paragraph of his work contains complexities that by themselves demand exegeses which could give rise to entire books. We present and criticize the ideas Byung-Chul Han has developed in three brief works: The Burnout Society, The Transparency Society, and The Agony of Eros

Resumen:

El signo es la Idea, o lo en-sí. Su desdoblamiento es el para-sí de un significante; pero es significado al recuperarse como figura del en-sí-y-para-sí. Esa forma circular de ver el movimiento sígnico del sistema hegeliano puede ser abstracta; el signo es una marca apenas. El sentido puede ser una representación de lo emitido y el objeto la referencia, el estado de cosas al que se apunta. El sistema hegeliano rinde tributo a la idea de la pirámide; Hegel dice que el signo deriva en pozo y pirámide. El sentido se abre paso cuando el signo se encuentra con el significado mediando los significantes, igual que la pirámide. La idea del signo, el significante y el significado es una tríada en la que algo muerto cobra vida en el saber, que es libertad. Ahí se abren los ojos, pero también hay muerte: a menos que, lo que se abra, sea la significación como escritura interminable; de nuevo, Derrida lee a Hegel des-escribiendo lo que los manuales escriben

Abstract:

The sign is the Idea, or the in-itself. Its unfolding is the for-itself of a signifier; but it becomes the signified when it is recovered as a figure of the in-and-for-itself. This circular way of viewing the sign movement of the Hegelian system may be abstract; the sign is barely a mark. The meaning may be a representation of what is uttered, and the object the reference, the state of things pointed at. The Hegelian system pays tribute to the idea of the pyramid; Hegel says that the sign turns into pit and pyramid. Meaning breaks through when the sign meets the signified through the mediation of signifiers, just like the pyramid. The idea of the sign, the signifier and the signified is a triad in which something dead comes to life in knowledge, which is freedom. There the eyes are opened, but there is also death: unless what opens is signification as endless writing; once again, Derrida reads Hegel by un-writing what the manuals write

Resumen:

Hacia 1927, Manuel Gómez Morin reúne una serie de ensayos que posteriormente fueron publicados junto con los textos sobre la Universidad. Ve un país dividido y con graves problemas de injusticia, luchas entre los mexicanos, falta de nacionalismo humanista en México, durante el período de 1915 a 1934. Propone la necesidad de que la juventud se eduque en las ciencias bajo un sentido estético, que se oriente moralmente hacia la justicia. Defiende la idea de una autonomía universitaria dentro del Estado y no como espacio de excepción: la autonomía dentro y desde el Estado. Denuncia el materialismo, el marxismo dogmático, el estudiar para ganar dinero como única meta en la vida; sostiene que educar es enseñar a pensar, dialogar, analizar, investigar. Pide herramientas y tecnología, y configura una pedagogía que oriente, desde el saber, al servicio, bajo el sentido artístico y ético. Solamente así se superarán los grandes problemas nacionales que requieren soluciones nacionales: no imitar modelos extranjeros que no convienen a nuestra cultura

Abstract:

Around 1927, Manuel Gómez Morin compiled a series of essays which were later published along with his writings on the University. In this compilation, he portrays Mexico (1915-1934) as a divided country with serious problems of injustice, internal fighting, and a lack of humanistic nationalism. He puts forth the need for the young to be educated in the sciences with an aesthetic sense, morally oriented toward justice. He defends the idea of university autonomy within the State and not as an area of exception: autonomy within and from the State. He decries materialism, dogmatic Marxism, and studying solely for material gain in life. He holds that to educate is to teach how to think, converse, analyze, and investigate. He calls for tools and technology, and outlines a pedagogy that guides, from knowledge, toward service, with an artistic and ethical sense. According to him, only in this manner will Mexico overcome its great national problems, which require national solutions, not the imitation of foreign models unsuited to our culture

Resumen:

En Hegel, la dialéctica del amo y el esclavo parte del problema de las conciencias, de la lucha de las autoconciencias. Si se reconocen unas a otras se da un contrato que permite la construcción de una comunidad. Hegel plantea que en la lucha por el reconocimiento las conciencias no alcanzan la aceptación que buscan. La conciencia dependiente busca el reconocimiento: es la figura del esclavo. La del amo no reconoce a la otra, se basta a sí misma e impone la doble negación. La conciencia presa de la ciencia intenta su liberación ilustrada en el contrato social: siempre es posible un paso a la fuerza de la ley, a la organización del Estado, donde la libertad se encauza. Allí se revela el Absoluto filosófico y se llega a la realización de la conciencia autoconsciente de la experiencia de la conciencia: lo ético deviene moral y se alcanza el nosotros. Este ensayo pretende mostrar que es posible ordenar un contrato que permita que las libertades se vinculen unas con otras en la trama del reconocimiento. ¿Se puede ser libre dentro de ese orden? Hegel piensa que sí, pues queda superada la lucha entre amos y esclavos. Habría que preguntarse si hoy ocurre eso en México

Abstract:

In Hegel's work, the master-slave dialectic starts from the problem of consciousness, the struggle between self-consciousnesses. If they recognize one another, a contract is established that allows the construction of a community. Hegel proposes that in the struggle for recognition, consciousnesses do not attain the acceptance they seek. The dependent consciousness seeks recognition: it is the figure of the slave. The master's consciousness does not recognize the other, suffices unto itself, and imposes the double negation. Consciousness in the grip of science attempts its enlightened liberation in the social contract: a step toward the force of law, toward the organization of the State, where freedom is channeled, is always possible. There the philosophical Absolute is revealed, and the self-conscious consciousness of the experience of consciousness is realized: the ethical becomes moral and the "we" is reached. This essay aims to show that it is possible to arrange a contract that allows liberties to be bound to one another in the web of recognition. Can one be free within that order? Hegel thinks so, since the struggle between masters and slaves is overcome. We must ask ourselves whether that is the case in Mexico today

Abstract:

Active queue management (AQM) algorithms are useful not only for congestion avoidance purposes, but also for the differentiated forwarding of packets, as is done in the DiffServ architecture. It is well known that correctly setting the parameters of an AQM algorithm may prove difficult and error-prone. Besides, many studies have shown that the performance of AQM mechanisms is very sensitive to network conditions. In this paper we present a detailed simulation study of an Adaptive RIO (A-RIO) AQM algorithm which addresses both of these problems. A-RIO, first introduced by Orozco and Ros (2003), draws directly from the original RIO proposal of Clark and Fang (1998) and the Adaptive RED (A-RED) algorithm described by Floyd et al. (2001). Our results, based on ns-2 simulations, illustrate how A-RIO improves over RIO in terms of stabilizing the queue occupation (and, hence, queuing delay), while maintaining a high throughput and a good protection of high-priority packets; A-RIO could then be used for building controlled-delay, AF-based services. These results also provide some engineering rules that may be applied to improve the behaviour of the classical, non-adaptive RIO
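
A-RIO inherits its adaptation rule from A-RED (Floyd et al., 2001): the maximum drop probability is adjusted periodically, in AIMD fashion, so that the average queue settles inside a target band. The function below is a minimal sketch of that rule only, not a full RIO/DiffServ implementation.

```python
# A-RED-style AIMD adaptation of the maximum drop probability (sketch).
def adapt_max_p(avg_q, max_p, min_th, max_th, alpha=0.01, beta=0.9):
    """Called periodically (e.g., every 0.5 s in A-RED)."""
    target_lo = min_th + 0.4 * (max_th - min_th)   # target band for the
    target_hi = min_th + 0.6 * (max_th - min_th)   # average queue length
    if avg_q > target_hi and max_p <= 0.5:
        max_p += min(alpha, max_p / 4)             # additive increase
    elif avg_q < target_lo and max_p >= 0.01:
        max_p *= beta                              # multiplicative decrease
    return max_p
```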

Abstract:

This paper adapts an evaluation methodology to estimate whether the IP network infrastructure between academic institutions in Mexico City and in Cuernavaca, Morelos, supports the transport of audio flows with acceptable quality. The study relates network metrics (delay, loss, etc.) and application metrics (codec, packetization interval) to a subjective evaluation of the human perception of communication quality. In the scenarios evaluated, packet latency is the main factor in rating the quality of voice transmissions as acceptable or unacceptable
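
A common way to map such network and application metrics onto perceived voice quality (shown here for illustration, not necessarily the authors' methodology) is the simplified ITU-T G.107 E-model of Cole and Rosenbluth, which turns one-way delay and a codec/loss impairment into an R factor and then a MOS score.

```python
# Simplified E-model: delay + equipment impairment -> R factor -> MOS (sketch).
def r_factor(delay_ms, ie_eff):
    d = delay_ms
    # (d > 177.3) acts as the Heaviside step in the Cole-Rosenbluth formula.
    r = 94.2 - 0.024 * d - 0.11 * (d - 177.3) * (d > 177.3) - ie_eff
    return max(0.0, min(100.0, r))

def mos(r):
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# ie_eff = 11 is an arbitrary illustrative codec/loss impairment value.
print(mos(r_factor(delay_ms=150, ie_eff=11)))
```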

Abstract:

This paper develops an architecture for flying ad-hoc networks (FANETs) to enable monitoring of water quality in a shrimp farm. Firstly, the key monitoring parameters for the characterization of water quality are highlighted and their desired operational ranges are summarized. These parameters directly influence shrimp survival and healthy growth. Based on the considered sensing modality, a reference architecture for implementing a cost-effective FANET based mobile sensing platform is developed. The controlled mobility of the platform is harnessed to increase the spatial monitoring resolution without the need for extensive infrastructure deployment. The proposed solution will be offered to shrimp farmers in the Mexican state of Colima once the laboratory trials are concluded
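
The paper tabulates the desired operational ranges for the monitored parameters; the numbers below are placeholders (assumptions), used only to show how a monitoring node might flag readings that violate their ranges.

```python
# Range check for water-quality readings; all ranges below are hypothetical.
OPERATIONAL_RANGES = {
    "dissolved_oxygen": (5.0, 9.0),    # mg/L  (hypothetical)
    "temperature":      (28.0, 32.0),  # deg C (hypothetical)
    "ph":               (7.5, 8.5),    #       (hypothetical)
    "salinity":         (15.0, 25.0),  # ppt   (hypothetical)
}

def out_of_range(reading: dict) -> list:
    """Return the parameters in a sensor reading that violate their range."""
    alerts = []
    for name, value in reading.items():
        lo, hi = OPERATIONAL_RANGES[name]
        if not lo <= value <= hi:
            alerts.append((name, value, (lo, hi)))
    return alerts

print(out_of_range({"dissolved_oxygen": 4.2, "temperature": 29.5,
                    "ph": 8.0, "salinity": 18.0}))
```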

Resumen:

El presente trabajo explora dos cuestiones: 1) existe un componente superestructural no ideológico y de cuño artístico que contribuye a erosionar el Muro que en 1989 deja de dividir al mundo en el corazón de Europa; 2) sin proponérselo, la diáspora resultante de la desarticulación de la Cortina de Hierro devino en un fenómeno de impacto cultural relevante en diversas regiones del mundo occidental en los ámbitos de la educación, el pensamiento y la creación artística. Tras el bosquejo panorámico de la situación del arte durante el estalinismo y la Guerra Fría, el texto menciona, a título de ejemplo, algunos casos de la prohibición, persecución, deportación y asesinato que sufren los artistas rusos, en particular en el campo de la música y la literatura. El autor se detiene en el análisis conceptual y los criterios de valoración de la importancia y la función del arte en el proceso revolucionario, desde un punto de vista crítico. Finalmente, comenta de un modo general el impacto de la diáspora de artistas y científicos rusos en el mundo, especialmente en América Latina, ante la disolución del régimen soviético

Abstract:

This paper explores two issues: 1) there is a superstructural, artistic, non-ideological element that contributes to the erosion of the Wall that in 1989 ceased to split the world through the heart of Europe; 2) without meaning to, the diaspora that resulted from the disarticulation of the Iron Curtain became a phenomenon of relevant cultural impact in various regions across the Western world in the fields of education, thought, and artistic creation. After a general outline of the situation of art during Stalinism and the Cold War, the text lists, by way of example, some cases of the prohibition, persecution, deportation, and murder that Russian artists were subjected to, particularly in the fields of music and literature. It then dwells, from a critical standpoint, on the conceptual analysis and the assessment criteria for the importance and function of art in the revolutionary process. Lastly, it provides a general commentary on the impact of the diaspora of Russian artists and scientists around the world, particularly in Latin America, after the dissolution of the Soviet regime

Abstract:

In this paper we address the challenging problem of designing globally convergent estimators for the parameters of nonlinear systems containing an exponential function whose power depends on unknown parameters. This class of non-separable nonlinearities appears in many practical applications, and none of the existing parameter estimators is able to deal with them in an efficient way. Our main technical contribution is the development of a lifting procedure for non-separable nonlinearly parameterized regressor equations to obtain separable ones, to which we can apply a recently reported estimation procedure. This is illustrated with a human musculoskeletal dynamics problem. The procedure does not assume that the parameters live in known compact sets or that the nonlinearities satisfy some Lipschitzian properties, nor does it rely on the injection of high gain or on complex, computationally demanding methodologies. Instead, we propose to design a classical on-line estimator whose dynamics is described by an ordinary differential equation given in a compact, precise form

Abstract:

In this paper, we propose a globally stable adaptive controller for the human shank motion tracking problem that appears in neuromuscular electrical stimulation systems. The control problem is complicated by the fact that the mathematical model of the human shank dynamics is nonlinear and the parameters enter in a nonlinear and nonseparable form. To solve the problem, we first derive a nonlinearly parameterized regressor equation (NLPRE) that is used with a new parameter estimator specifically tailored for this NLPRE. This estimator is then combined with a classical feedback linearizing controller to ensure the tracking objective is globally achieved. A further contribution of the paper is the proof that parameter convergence, and consequent global tracking, is guaranteed with an extremely weak interval excitation requirement. A simulation study comparing the proposed adaptive controller with existing ones in the literature shows comparable human shank tracking performance but with fewer parameter estimates and without requiring knowledge of bounds for the unknown parameters

Abstract:

In this note a new high performance least squares parameter estimator is proposed. The main features of the estimator are: (i) global exponential convergence is guaranteed for all identifiable linear regression equations; (ii) it incorporates a forgetting factor allowing it to preserve alertness to time-varying parameters; (iii) thanks to the addition of a mixing step it relies on a set of scalar regression equations ensuring a superior transient performance; (iv) it is applicable to nonlinearly parameterized regressions verifying a monotonicity condition and to a class of systems with switched time-varying parameters; (v) it is shown that it is bounded-input-bounded-state stable with respect to additive disturbances; (vi) continuous and discrete-time versions of the estimator are given. The superior performance of the proposed estimator is illustrated with a series of examples reported in the literature
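
The note's estimator adds mixing and other features on top of least squares with forgetting; for reference, the classical recursive least-squares update with a forgetting factor that it builds on looks like the sketch below (a generic textbook form, not the proposed estimator).

```python
# Recursive least squares with forgetting factor (classical form, for reference).
import numpy as np

class RLSForgetting:
    def __init__(self, n, lam=0.98, p0=1e3):
        self.theta = np.zeros(n)      # parameter estimate
        self.P = p0 * np.eye(n)       # covariance-like gain matrix
        self.lam = lam                # forgetting factor in (0, 1]

    def update(self, phi, y):
        """One step for the scalar regression y = phi^T theta."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.theta += k * (y - phi @ self.theta)    # prediction-error correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

With lam = 1 this reduces to ordinary RLS; lam < 1 discounts old data, which is what preserves alertness to time-varying parameters.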

Abstract:

In recent years, we have witnessed the appearance in the control literature of claims regarding the behavior of the trajectories of closed-loop systems which are valid only for a specific set of initial conditions (ICs). Since these are "trajectory-dependent" claims, a natural question that arises is whether they are robust, in some well-defined sense. Regarding Lyapunov stability claims, it is well known that this property is equivalent to a form of continuity of solutions with respect to the ICs. For linear time-invariant (LTI) systems characterized by transfer matrices, the property of internal stability is widely adopted as a necessary condition to ensure robustness. However, to the best of our knowledge, this question has not been addressed for claims, different from stability, concerning the behavior of trajectories of nonlinear time-varying (NLTV) systems, which is the scenario in the aforementioned claims. The main objective of this note is to propose a framework for the characterization of robustness or fragility (with respect to ICs) of claims of this nature for NLTV systems

Abstract:

In this note we address the problem of indirect adaptive (regulation or tracking) control of nonlinear, input-affine dissipative systems. It is assumed that the supply rate, the storage and the internal dissipation functions may be expressed as nonlinearly parameterized regression equations where the mappings (depending on the unknown parameters) satisfy a monotonicity condition; this encompasses a large class of physical systems, including passive systems. We propose to estimate the system parameters using the "power-balance" equation, which is the differential version of the classical dissipation inequality, with a new estimator that ensures global, exponential parameter convergence under the very weak assumption of interval excitation of the power-balance equation regressor. To design the indirect adaptive controller we make the standard assumption of the existence of an asymptotically stabilizing controller that depends, possibly nonlinearly, on the unknown plant parameters, and apply a certainty-equivalent control law. The benefits of the proposed approach, with respect to other existing solutions, are illustrated with examples

Abstract:

The design of a position observer for the interior permanent magnet synchronous motor is a challenging problem that, in spite of many research efforts, remained open for a long time. In this paper we present the first globally exponentially convergent solution to it, assuming that the saliency is not too large. As expected in all observer tasks, a persistency of excitation condition is imposed. Conditions on the operation of the motor under which it is verified are given. In particular, it is shown that at rotor standstill (when the system is not observable) it is possible to inject a probing signal to enforce the persistent excitation condition. The high performance of the proposed observer, in standstill and high-speed regions, is verified by an extensive series of test runs on an experimental setup

Abstract:

In this paper we address the problem of distributed state estimation of continuous- and discrete-time stable LTI systems. The classical observer canonical form representation of the system dynamics is used to identify the observable states for each node (agent). The main novelty of our contribution is to obviate the need for asymptotic (or finite-time) local Luenberger observers. Instead, following the generalized parameter estimation-based observer design approach, we use the state transition matrix of the system to translate the problem of reconstructing the observable states into one of parameter estimation, namely, of the initial conditions of the associated trajectory. Exploiting the local observability property of the agents, we prove that, with a simple sample-and-hold or finite summation operation, it is possible to estimate these parameters algebraically. Consensus strategies are then used to fuse the parameter estimates of all agents to reconstruct the complete state vector. Asymptotic or finite-time convergence of the observer is established for several scenarios for the graph, including time-varying, switching, and with transmission delays
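
A single-agent sketch of the core reduction: with x(t) = e^{At} x0 and sampled outputs, state reconstruction becomes algebraic least-squares estimation of the initial condition x0 (the consensus fusion step across agents is omitted here, and the system matrices are arbitrary examples).

```python
# Reconstructing the state by estimating x0 from sampled outputs (sketch).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # example LTI dynamics
C = np.array([[1.0, 0.0]])                 # this agent's output matrix
x0_true = np.array([1.0, -1.0])

times = np.linspace(0.0, 3.0, 30)
Phi = np.vstack([C @ expm(A * t) for t in times])  # stacked C * state transition
y = Phi @ x0_true + 0.01 * np.random.default_rng(2).normal(size=len(times))

x0_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # algebraic estimate of x0
x_now = expm(A * times[-1]) @ x0_hat               # reconstructed current state
print(x0_hat, x_now)
```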

Abstract:

A novel approach to solve the problem of distributed state estimation of linear time-invariant systems is proposed in this paper. It relies on the application of parameter estimation-based observers, where the state observation task is reformulated as a parameter estimation problem. In contrast with existing results, our solution achieves convergence in finite time, without injection of high gain, and imposes very weak assumptions on the communication graph, namely the existence of an open Hamiltonian walk. It is shown that this assumption is strictly weaker than the usual strong connectivity requirement. The scheme is shown to be robust vis-à-vis external disturbances and communication delays

Abstract:

In this paper we propose a new state observer design technique for nonlinear systems. It consists of an extension of the recently introduced parameter estimation-based observer, which is applicable to systems verifying a particular algebraic constraint. In contrast to the previous observer, the new one avoids the need of implementing an open-loop integration that may stymie its practical application. We give two versions of this observer, one that ensures asymptotic convergence and another that achieves convergence in finite time. In both cases, the required excitation conditions are strictly weaker than the classical persistency of excitation assumption. It is shown that the proposed technique is applicable to the practically important examples of multimachine power systems and chemical–biological reactors

Abstract:

We present some new results on the dynamic regressor extension and mixing parameter estimators for linear regression models recently proposed in the literature. This technique has proven instrumental in the solution of several open problems in system identification and adaptive control. The new results include the following, first, a unified treatment of the continuous and the discrete-time cases; second, the proposal of two new extended regressor matrices, one which guarantees a quantifiable transient performance improvement, and the other exponential convergence under conditions that are strictly weaker than regressor persistence of excitation; and, third, an alternative estimator ensuring convergence in finite-time whose adaptation gain, in contrast with the existing one, does not converge to zero. Simulations that illustrate our results are also presented
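
A minimal numeric sketch of the mixing step at the heart of this technique: stacking regressor samples and multiplying by the adjugate of the extended regressor turns the vector regression y = phi^T theta into n scalar equations, one per parameter, to which scalar estimators can then be applied. The data below are arbitrary.

```python
# DREM mixing step: from a vector regression to per-parameter scalar equations.
import numpy as np

def drem_scalar_equations(Phi, Y):
    """Phi: (n, n) stacked regressor samples; Y: matching outputs.
    Returns (delta, z) with delta * theta_i = z_i component-wise."""
    delta = np.linalg.det(Phi)            # common scalar regressor
    adj = delta * np.linalg.inv(Phi)      # adjugate (valid when delta != 0)
    return delta, adj @ Y

rng = np.random.default_rng(3)
theta = np.array([2.0, -1.0, 0.5])
Phi = rng.normal(size=(3, 3))             # e.g., delayed regressor samples
Y = Phi @ theta
delta, z = drem_scalar_equations(Phi, Y)
print(z / delta)                          # recovers theta element-wise
```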

Abstract:

In this paper we propose a solution to the problem of parameter estimation of nonlinearly parameterized regressions, continuous or discrete time, and apply it to adaptive control. We restrict our attention to parameterizations that can be factorized as the product of two functions, a measurable one and a nonlinear function of the parameters to be estimated. Although in this case it is possible to define an extended vector of unknown parameters to get a linear regression, it is well known that overparameterization suffers from some severe shortcomings. Another feature of the proposed estimator is that parameter convergence is ensured with an excitation assumption that is strictly weaker than persistency of excitation. It is assumed that, after a coordinate change, some of the elements of the transformed function satisfy a monotonicity condition. The proposed estimators are applied to design adaptive controllers for nonlinearly parameterized systems. In continuous time we consider a general class of nonlinear systems and those described by Euler-Lagrange models, while in discrete time we apply the method to the challenging problem of indirect adaptive pole placement. The effectiveness of our approach is illustrated with several classical examples, which are traditionally tackled using overparameterization, with below-par performance, and assuming persistency of excitation

Abstract:

In this brief note we present two new parameter identifiers whose estimates converge in finite time under weak interval excitation assumptions. The main novelty is that, in contrast with other finite convergence time (FCT) estimators, our schemes preserve the FCT property when the parameters change. The previous versions of our FCT estimators can track the parameter variations only asymptotically. Continuous-time and discrete-time versions of the new estimators are presented

Abstract:

A key assumption in the development of system identification and adaptive control schemes is the availability of a regression model which is linear in the unknown parameters (of the plant and/or the controller). Applying standard (e.g., gradient-descent-based) parameter estimators leads to a linear time-varying equation for the parameter errors, whose stability relies on the usually stringent persistency of excitation assumption. As suggested in Kreisselmeier (1977) and Lion (1967), with the inclusion of linear filters it is possible to generate alternative regression models whose parameter error equations have different stability properties. In Duarte and Narendra (1989), Panteley, Ortega and Moya (2002), and Slotine and Li (1989), estimators that combine tracking and identification errors to generate new parameter error equations were proposed. The main objectives of this paper are: first, based on the two key developments mentioned above, to provide a unified framework for the analysis and design of parameter estimators and, in particular, to show that they lie at the core of some modified schemes recently proposed in the literature; second, to extend the realm of application of these estimators to the class of nonlinear systems considered in Panteley et al. (2002); third, to use this framework to propose some new schemes with relaxed conditions for convergence and improved transient performance. Particular attention is given to the task of obviating the persistency of excitation assumption, which is rarely verified in applications and is certainly not the only way to ensure robustness of the schemes
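
A sketch of the Kreisselmeier (1977) filtering idea referred to above: integrating stable first-order filters driven by phi*phi^T and phi*y produces a new regression Y(t) = Omega(t) theta with different stability properties. The signals below are arbitrary test inputs, and the Euler integration is only illustrative.

```python
# Kreisselmeier-style regressor extension via first-order filtering (sketch).
import numpy as np

def kreisselmeier_filters(phi_seq, y_seq, dt, lam=1.0):
    n = phi_seq.shape[1]
    Omega = np.zeros((n, n))
    Y = np.zeros(n)
    for phi, y in zip(phi_seq, y_seq):   # Euler integration of the filters:
        Omega += dt * (-lam * Omega + np.outer(phi, phi))  # dOmega/dt
        Y += dt * (-lam * Y + phi * y)                     # dY/dt
    return Omega, Y                      # satisfies Y ~= Omega @ theta

rng = np.random.default_rng(4)
theta = np.array([1.5, -0.7])
t = np.arange(0, 20, 1e-3)
phi_seq = np.column_stack([np.sin(t), np.cos(0.5 * t)])
y_seq = phi_seq @ theta
Omega, Y = kreisselmeier_filters(phi_seq, y_seq, 1e-3)
print(np.linalg.solve(Omega, Y))         # close to theta
```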

Abstract:

Immersion and invariance is a technique for the design of stabilizing and adaptive controllers and state observers for nonlinear systems. In all these applications the problem considered is the stabilization of equilibrium points. Motivated by some modern applications, we show that the technique can also be used to solve the problem of orbital stabilization, where the final objective is to generate periodic solutions that are attractive. The feasibility of our result is illustrated by means of some classical mechanical engineering and power electronics examples

Abstract:

We propose a solution to the problem of parameter estimation of nonlinearly parameterized regressions, continuous or discrete time, and apply it to system identification and adaptive control. We restrict our attention to parameterizations that can be factorized as the product of two functions, a measurable one and a nonlinear function of the parameters to be estimated. Another feature of the proposed estimator is that parameter convergence is ensured without a persistency of excitation assumption. It is assumed that, after a coordinate change, some of the elements of the transformed function satisfy a monotonicity condition. The proposed estimators are applied to design identifiers and adaptive controllers for nonlinearly parameterized systems, which are traditionally tackled using overparameterization and assuming persistency of excitation

Abstract:

In this brief note we pose, and solve, the problem of robustification of controller designs where the actuator dynamics are neglected. This situation is very common in applications where, to validate the assumption that the actuator dynamics can be neglected, a high-gain inner loop that enforces a time-scale separation between the actuator and the plant dynamics is implemented. Of course, the injection of high gain has well-known deleterious effects. Moreover, a stability and robustness analysis of such a control configuration is usually unavailable. Our first main contribution is to provide an alternative to such a scheme, with provable robust stability properties. The second contribution is, applying this result, to propose a robustification procedure for the industry-standard field-oriented control of current-fed induction motors, which is usually implemented neglecting the actuator dynamics, with no rigorous proof of stability available to date. Finally, we propose the first solution of smooth, time-invariant regulation of the dynamic model of a class of nonholonomic systems, which includes the widely popular unicycle example. Simulation examples prove the superior performance of the proposed controller compared with the existing switching and/or time-varying alternatives reported in the literature

Abstract:

Stabilization of mechanical systems by shaping their energy function is a well-established technique whose roots date back to the work of Lagrange and Dirichlet. Ortega and Spong in 1989 proved that passivity is the key property underlying the stabilization mechanism of energy-shaping designs, and the now widely popular term passivity-based control (PBC) was coined. In this chapter, we briefly recall the history of PBC of mechanical systems and summarize its main recent developments. These include: (i) an explicit formula for one of the free tuning gains that simplifies the computations, (ii) the addition of PID controllers to robustify the PBC design, make it constructive, and track ramp references, (iii) the use of PBC to solve the position-feedback global tracking problem, and (iv) the design of robust and adaptive speed observers

Abstract:

Staining of histological slides with Hematoxylin and Eosin is widely used in clinical and laboratory settings, as these dyes reveal nuclear structures as well as cytoplasm and collagen. For cancer diagnosis, these slides are used to recognize tissues and morphological changes. Tissue semantic segmentation is therefore important, and at the same time a challenging and time-consuming task. This paper describes a UNet-like deep learning architecture called DRD-UNet, which adds a novel processing block called DRD (Dilation, Residual, and Dense block) to a UNet architecture. DRD is formed by the combination of dilated convolutions (D), residual connections (R), and dense layers (D). DRD-UNet was applied to the multi-class (tumor, stroma, inflammatory, necrosis, and other) semantic segmentation of histological images from breast cancer samples stained with Hematoxylin and Eosin. The histological images were released through the Breast Cancer Semantic Segmentation (BCSS) Challenge. DRD-UNet outperformed the original UNet architecture and 15 other UNet-based architectures on the segmentation of 12,930 image patches extracted from regions of interest ranging in size from 1036 × 1222 to 6813 × 7360 pixels. DRD-UNet obtained the best performance as measured by the Jaccard similarity index and Dice coefficient in a per-class comparison, and by accuracy for the overall segmentation
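
The exact layer configuration of the DRD block is given in the paper; the sketch below is only a plausible minimal reading of the named combination (dilated convolutions, a residual connection, and dense concatenation), written in PyTorch.

```python
# A plausible minimal DRD-style block: dilated + residual + dense (sketch).
import torch
import torch.nn as nn

class DRDBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for d in dilations:  # dense pattern: each conv sees all previous maps
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ))
            in_ch += channels
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))   # residual connection

print(DRDBlock(32)(torch.randn(1, 32, 64, 64)).shape)   # [1, 32, 64, 64]
```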

Abstract:

In this paper we present an endogenous growth model with physical and human capital accumulation and study the effects of labor and capital income taxation on the transitional dynamics to the balanced path. We show that parameters on preferences, technologies and depreciation rates, as well as fiscal policy parameters, are relevant to determine qualitatively the dynamic behavior of the economy. We also offer a measure of the inefficiency derived from the taxation of capital earnings. Finally, we consider the taxation welfare cost in two non-trivial generalizations of our basic model which include the case of physical capital in the educational sector and leisure as an additional argument in the utility function.

Abstract:

In this paper we analyze the speed of convergence to a balanced path in a class of endogenous growth models with physical and human capital. We show that such a rate depends locally on the technological parameters of the model, but not on the preference parameters. This result stands in sharp contrast with that of the one-sector neoclassical growth model, where both preferences and technologies determine the speed of convergence to a steady-state growth path

Abstract:

Given the size of the Hispanic bilingual market in the United States, it is important to understand the relative effectiveness of using English versus Spanish when advertising to these consumers. This research proposes that Hispanic bilinguals' cultural stereotypes about the users of Spanish living in America are a potent determinant of which language is most effective in advertising. Depending on the favorableness of these cultural stereotypes, our results show that Spanish may be persuasively superior, inferior, or functionally equivalent to English in creating favorable attitudes toward the advertised product. The uniqueness of cultural stereotypes about Spanish users in shaping the influence of an ad's language is underscored by our findings that cultural stereotypes about English users do not exert similar effects in determining the relative persuasiveness of advertising in English or Spanish. The paper offers suggestions for advertising practice and future research

Abstract:

Despite the extensive research conducted on language standardization in advertising across several countries, little attention has been given to the use of English versus Spanish and code-switching when advertising to Latin American bilingual consumers. We propose that stereotypes about English speakers and code-switching have potential to help determine which language is most effective in print advertising. The results of experiments conducted in Chile, Ecuador, and Mexico show that the effects of language-related stereotypes on the persuasiveness of English ads vary across different countries. In the case of Chile, English may be persuasively superior, depending on the favourableness of the stereotype of individuals that code-switch. Differently, print ads in English are functionally equivalent, in terms of ad attitudes, to Spanish and code-switching ads in Mexico, and superior in Ecuador, regardless of the favourability of the language-related stereotypes. Suggestions for advertising practice and future research are offered

Abstract:

Service encounters often become negotiations between the customer and the service provider. For speakers of multiple languages, the language used in a negotiation can be a critical factor in the success of that encounter. By investigating how U.S. bilinguals negotiate in either English or Spanish, this research examines the effect that the activation of the stereotype related to the minority language-speakers has on negotiation outcomes. The results of two experiments support the general notion that, among U.S. Hispanic bilinguals, the majority language (English) yields more favorable outcomes compared to the minority language (Spanish); a third study with a comparison group of bilinguals in Mexico, where no language-related stereotype exists, shows no effect of the negotiation language on the outcome. The paper discusses theoretical and practical implications of the findings and areas for future research

Abstract:

Most undergraduate marketing majors will spend at least some time in a sales role, and employers are requiring greater professionalism and more varied skill sets from their sales hires. In addition, there is an increasing demand for online and higher order learning in sales education. In response, this article proposes that sales courses using structured learning activities can increase critical thinking skills, irrespective of the learning environment. To test these propositions, a pre- and post-test interventional research study was conducted. Results show that students' objective critical thinking scores showed some improvement over a semester for both face-to-face and fully online courses. Moreover, a control group of students did not experience a similar increase in their critical thinking skills

Abstract:

This article develops and tests a segmentation scheme for the U.S. Hispanic market based on the extent and nature of acculturation. Acculturation is conceptualized as driven by language preferences and two dimensions of cultural identification, Hispanic and American. Structural equation modeling develops and assesses the proposed scales, and a latent class clustering procedure (latent discriminant analysis) tests propositions on a sample of 403 U.S. Hispanics. Consistent with theory, four clusters of U.S. Hispanics emerge: retainers, biculturals, assimilators, and non-identifiers that vary according to language preference and cultural identification

Resumen:

Este artículo explora la forma desigual en que se juzga con perspectiva de género -elemento indisoluble de la aplicación normativa precisa de leyes y tratados internacionales a favor de la vida libre de violencia para las mujeres- en los campos universitarios. Se advierte que ocurre un proceso paradójico: por un lado, una creciente emisión de protocolos; por otra parte, persiste una baja asociación entre los bloques de convencionalidad, de constitucionalidad vigentes y las medidas propuestas desde el Poder Judicial de la Federación con las disposiciones establecidas en dichos protocolos, esto con base en el estudio de 56 protocolos de Instituciones de Educación Superior (IES) -ya sean privadas o públicas-. Este artículo documenta cómo las instituciones de educación superior pueden mejorar el tratamiento de la violencia de género en la medida de comprender el alcance del control de convencionalidad para poder cumplir con la reforma a la Ley de Educación Superior. De acuerdo con esta reforma -emitida en 2021-, las instituciones de educación superior, y con el apoyo de las autoridades respectivas, promoverán las medidas necesarias para la prevención y atención de todos los tipos y modalidades de violencia, en específico la de género, así como para la protección del bienestar físico, mental y social de sus estudiantes y del personal que labore en ellas

Abstract:

This article explores the uneven way in which rulings with a gender perspective (a necessary element of the precise application of laws and international treaties aimed at women being able to lead a life free of violence) are issued on university campuses. A paradoxical process unfolds: on the one hand, protocols are increasingly issued; on the other, there is a low association between, on one side, the conventionality and constitutionality blocks in force together with the measures proposed by the federal judicial branch and, on the other, the course of action stipulated in said protocols, as shown by a study of the protocols enacted by 56 higher education institutions (private or public). The article documents how higher education institutions can improve the way they address gender violence through an understanding of the scope of conventionality control, in order to comply with the Higher Education Law reform. This 2021 reform states that higher education institutions, with the support of the appropriate authorities, shall promote the necessary measures to prevent and address all types and forms of violence, specifically gender violence, and to protect the physical, mental and social wellbeing of their students and staff

Abstract:

Computational models of emotion (CMEs) are software systems designed to imitate particular aspects of human emotions. The main purpose of this type of computational model is to capture the complexity of the human emotion process in a software system that is incorporated into a cognitive agent architecture. However, creating a CME that closely imitates the actual functioning of emotions demands addressing challenges such as (i) sharing information among independently developed cognitive and affective components, and (ii) interconnecting complex cognitive and affective components that must interact with one another in order to generate realistic emotions, which may even affect agents' decision making. This paper proposes an architectural pattern aimed at cataloging and describing fundamental components of CMEs and their interrelationships with cognitive components. In this architectural pattern, external cognitive components and internal affective components of CMEs do not interact directly but are extended by including message exchange methods in order to use a publish-subscribe channel, which enables their intercommunication, thus attenuating issues such as software heterogeneity. This structural approach centralizes communication management and separates the inherent complexity of the cognitive and affective processes from the complexity of their interaction mechanisms. In so doing, it enables the design of CME architectures composed of independently developed affective and cognitive components. The proposed architectural pattern attempts to make progress in capturing the complex process of human emotions in a software system that adheres to software engineering best practices and that incorporates quality attributes such as flexibility and interoperability
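
A minimal sketch of the publish-subscribe channel the pattern prescribes, showing how affective and cognitive components can interact without direct references to one another; the component and topic names are illustrative, not taken from the paper.

```python
# Publish-subscribe channel decoupling affective and cognitive components (sketch).
from collections import defaultdict

class Channel:
    """Components share only message topics, never direct references."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subs[topic]:
            handler(message)

bus = Channel()

# Affective side: appraises an event and publishes the resulting emotion.
def appraise(event):
    emotion = {"valence": -0.8 if event["threat"] else 0.4, "source": event["name"]}
    bus.publish("emotion.generated", emotion)

# Cognitive side: decision making subscribes to emotions without knowing the emitter.
bus.subscribe("emotion.generated",
              lambda e: print("bias decision by", e["valence"], "due to", e["source"]))

bus.subscribe("percept.event", appraise)
bus.publish("percept.event", {"name": "loud_noise", "threat": True})
```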

Abstract:

Computational Models of Emotions (CMEs) are software systems designed to explain the phenomenon of emotions. The mechanisms implemented in this type of computational models are based on human emotion theories reported in the literature and designed to provide intelligent agents with affective capabilities and improve human-computer interaction. However, despite the growing interest in this type of models, the development process of CMEs does not seem to follow formal software methodologies. In this paper, we present an analysis of CMEs from a software engineering perspective. We aim to identify what elements of software engineering are used in the development process of CMEs and to demonstrate how some software engineering techniques may support and improve their development process. We discuss a series of challenges to be addressed in order to take advantage of software engineering techniques: (1) definition of guidelines to help decide which emotion theories should be implemented computationally, (2) homogenization of terms about human emotions, their components, phases, and cycles implemented in CMEs, (3) design of CMEs whose components can be reusable, (4) definition of standard criteria for comparative analysis between CMEs, (5) identification of software engineering principles, concepts, and design practices useful in the construction of CMEs, and (6) definition of standard frameworks to validate CMEs

Abstract:

In this paper we introduce and study the classes VMp,η(λ, α, β) and VNp,η(λ, α, β) of multivalent functions with varying arguments of coefficients. We obtain coefficient inequalities, distortion theorems and extreme points for functions in these classes. We also investigate several distortion inequalities involving fractional calculus. Finally, results on partial sums are considered

Abstract:

The achievable region approach seeks solutions to stochastic optimization problems by characterizing the space of all possible performances (the achievable region) of the system of interest and optimizing the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multi-class queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems
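
For background, the two-class case admits a compact description via Kleinrock's conservation law: under any non-idling (work-conserving) discipline, the vector of mean waiting times is confined to a line, and the achievable region is the segment whose endpoints are attained by the two strict-priority rules. A standard statement of the law, included here as a transcription of the classical result rather than the authors' exact formulation:

```latex
% Kleinrock's conservation law for a two-class M/G/1 queue under any
% non-idling discipline (rho_i = lambda_i E[S_i], rho = rho_1 + rho_2 < 1):
\rho_1 W_1 + \rho_2 W_2 \;=\; \frac{\rho\, W_0}{1-\rho},
\qquad
W_0 = \sum_{i=1}^{2} \frac{\lambda_i\, \mathbb{E}[S_i^2]}{2}
```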

Abstract:

The purpose of this study was to examine the relationships between perceived co-worker support, commitment to colleagues, job satisfaction, intention to help others, and pro-environmental behavior with the emphasis on eco-helping, with a view to determining the extent to which peer relationships encourage employees to engage in pro-environmental behaviors at work. This paper is framed by adopting social exchange theory through the lens of ethics of care. Data from a sample of 449 employees showed that receiving support from peers triggers an exchange process that encourages eco-helping among colleagues. The implications of the findings are discussed in the light of the social exchange literature

Abstract:

This study uses the tenets of social exchange theory to examine employee willingness to perform proenvironmental behaviours (PEBs) in a workplace setting. The first aim of the study was to examine the indirect effect of perceived organisational support on pro-environmental behaviours via job attitudes. The second objective was to clarify whether a psychological contract breach affects the relationships between perceived organisational support and job attitudes. Using a convenience sample (N = 449), we report that perceived organisational support has an indirect effect on PEBs through employee commitment to the organisation. Additionally, organisational support moderates the effect of a perceived breach on employee job satisfaction

Resumen:

Este artículo estudia los entornos de trabajo donde grupos de personas interactúan de manera síncrona y remota (distribuida), con el propósito de crear y desarrollar software dentro del marco institucional de una organización, en lo que se conoce como desarrollo distribuido de software (DSD, por sus siglas en inglés). En este tipo de esquemas colaborativos, los desarrolladores requieren trabajar en grupos que están geográficamente distribuidos y su interacción generalmente es realizada con el apoyo de tecnología de información y comunicación. Es común que las tecnologías de colaboración no estén diseñadas para apoyar lo que llamamos inicio de colaboración informado, es decir los escenarios donde el iniciador de la colaboración pueda contar con la información de la actividad que realiza la persona buscada, con la cual, el iniciador pueda inferir si el momento para iniciarla es óptimo y apropiado para ambos. Para lograr esto, es necesario conocer el contexto de la actividad de la persona buscada en un momento determinado. Para apoyar el inicio de colaboración informado se propone la conceptualización y caracterización tecnológica de esferas de trabajo colaborativas, la cual aporta ideas de diseño para el desarrollo de una herramienta prototipo. A esta herramienta le llamamos CWS-IM (Collaborative Working Spheres – Instant Messaging). La herramienta es un mensajero instantáneo extendido con soporte para inicios de colaboración informados, la cual se introdujo a las actividades reales de DSD en una fábrica de software con la finalidad de evaluarla mediante un estudio de caso. Los resultados de esta evaluación proporcionan evidencia que muestra la aceptación favorable de CWS-IM por parte de los participantes en términos de utilidad, facilidad de uso, apoyo al inicio de interacción y gestión del nivel de interrupción

Abstract:

This paper studies work environments where groups of people interact synchronously and remotely (distributed) with the purpose of creating and developing software within the institutional framework of an organization, in what is known as distributed software development (DSD). In this type of collaborative schemes, developers are required to work in groups that are geographically distributed and their interaction is usually conducted with the support of information and communication technologies. It is common that collaborative technologies are not designed to support what we call informed collaboration initiation, i.e., scenarios where the initiator of the collaboration can have the information of the activity that is being performed by the person he/she is looking for, with which the initiator can infer whether the time to start is optimal and appropriate for both of them. To achieve this it is necessary to know the context of the activity of the person that is being looked for at a given moment. To support the initiation of informed collaboration, the conceptualization and technological characterization of collaborative working spheres is proposed, which provides design ideas for the development of a prototype tool. We call this tool CWS-IM (Collaborative Working Spheres - Instant Messaging). The tool is an extended instant messenger with support for the initiation of informed collaboration, which was introduced to the actual activities of DSD in a software factory in order to evaluate it through a case study. The results of this evaluation show the favorable acceptance of CWS-IM by participants in terms of usefulness, ease of use, support for the initiation of informed interaction and disruption level management

Abstract:

The software industry is facing a recent trend called distributed software development (DSD), in which distributed teams require continuous support in their communication and coordination. However, there is a lack of communication tools that actually support the coordination of DSD activities. Current communication mechanisms appear to favour the issuer of an interaction, because the context of the receiver is not always considered. In this study, the authors introduce selective availability (SA), a mechanism with which to provide information about the current activities of the members in a distributed team, in order to motivate a more suitable means to initiate interactions, thus facilitating the communication and coordination of DSD activities. Moreover, the authors describe the CWS-IM tool, an extended instant messaging application that supports SA, by notifying collaborators about each of their colleague’s activities. Therefore, issuers can decide whether the time is right to start the interaction. The results of an evaluation of the actual use of the tool in a DSD software development company are also presented. These results indicate that developers perceive CWS-IM to be more useful and easier to use than other traditional instant messaging applications when initiating collaboration in DSD environments

Abstract:

The software industry is facing a paradigm shift towards distributed software development (DSD). This change creates situations from which organizations may benefit, and challenges to which they must adapt (e.g. the absence of opportunities for informal interaction). In this paper we present the results of an analysis of the knowledge flow problems which hinder members of a DSD team from getting into synchronous collaboration. We used the KoFI methodology to identify knowledge sources, topics, and existing flows, and to identify knowledge flow problems in the DSD environment. These have been used as a basis for the design and development of CWS-IM, an extended instant messenger tool which implements "Selective Availability" to facilitate "l'entrée en collaboration" of the members of a DSD team. Therefore, it is expected that interruptions during DSD will be less disturbing, as this tool helps users to decide when it is the most suitable moment to establish synchronous contact for both the sender and the receiver

Abstract:

Distributed software development is a new working philosophy that the software industry is currently facing. Organisations may benefit from the situations that this shift has created, although they must also confront new challenges related to them. In this study, the authors focused on the lack of timely and adequate opportunities for informal interaction, which has been identified as an important factor in overcoming coordination, communication and trust limitations. The authors attempted to confront this problem by obtaining information from the personal activities of remote colleagues. In this respect, the authors propose introducing and defining collaborative working spheres (CWS) because the authors argue that CWS permit the identification of opportunities for interaction at appropriate moments. This concept is illustrated with the design of CWS-instant messaging (IM), an extended IM tool that supports the CWS concept. This tool was tested by 16 distributed software development (DSD) workers during an initial scenario-based evaluation. The results show favourable evidence towards both the perceived usefulness and ease of use of CWS-IM

Resumen:

Este artículo presenta una opción de preservación del patrimonio tangible e intangible de la ciudad por medio de la preservación e intervención de la casa General León 51, ubicada en la Colonia San Miguel Chapultepec en la Ciudad de México, y busca mostrar una opción de hacer ciudad, frente al problema de la creciente densificación urbana actual que construye de manera indiscriminada edificios y condena a la demolición a un sinnúmero de casas que tienen un valor estético, así como a la transformación del entorno urbano y de las tradiciones de las antiguas colonias de la Ciudad de México

Abstract:

This article presents an option for the preservation of tangible and intangible heritage in cities through the preservation and intervention of the General León 51 House, located in Colonia San Miguel Chapultepec in Mexico City. It seeks to show an option to build and live in cities in the face of the growing problem of current urban densification, which indiscriminately erects buildings and condemns to demolition a considerable number of houses (even though they may have aesthetic value), as well as transforming the urban environment and the traditions of the old colonias of Mexico City

Abstract:

This chapter explains and describes a detailed framework based on integrating a number of different methodological strands from the literature. A literature review was conducted in three different domains: business process re-design, supply chain re-design and e-business process design. The literature review revealed potential for integrating elements of a number of different methods and techniques found in different methodological strands into a framework for conducting Business Process Re-design (BPR) to support Supply Chain Integration (SCI). The proposed BPR methodology can be applied in any company or sector; methods and techniques incorporated are not specific to any sector

Abstract:

One of the challenges faced by organizations in the construction of Supply Chain Integration (SCI) is the re-design of business processes. Accordingly, a detailed methodology was constructed based on the integration of a number of different methodological strands from the literature. The proposed BPR methodology was validated by applying it to an Airline Maintenance Repair and Overhaul (MRO) supply chain. This application led to a re-design of the aircraft component repair services offered by an independent Airline MRO provider. Results from this application show that the proposed methodology can clearly guide the re-design of business processes to support SCI

Abstract:

A supply chain consists of different processes, and when conducting supply chain re-design it is necessary to identify the relevant processes and select a target for re-design. Based on a literature review, a solution is presented here: first identify the relevant processes using the Supply Chain Operations Reference (SCOR) model, then use Analytical Hierarchy Process (AHP) analysis for target process selection. AHP can aid in deciding which supply chain processes are better candidates for re-design in light of predefined criteria
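
The eigenvector step of AHP is easy to make concrete. The sketch below derives priority weights for three candidate processes from a pairwise-comparison matrix and checks Saaty's consistency ratio; the matrix entries are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three candidate SCOR
# processes: entry [i, j] says how much more important process i is
# than process j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priority weights: the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Saaty's consistency check (random index RI = 0.58 for n = 3).
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print("weights:", weights.round(3), " consistency ratio:", round(ci / 0.58, 3))
```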

Abstract:

Although a number of methodologies exist for business process re-design (BPR), supply chain re-design (SCR), and e-business process design, there is a lack of an integrated BPR methodological framework to support supply chain integration (SCI). This paper proposes a detailed framework based on integrating a number of different methodological strands from the literature. A literature review was conducted in three different domains – business process re-design, supply chain re-design and e-business process design. The literature review revealed the potential for integrating elements of a number of different methods and techniques found in different methodological strands into a framework for conducting BPR to support SCI. Accordingly, a number of relevant methodologies were identified, decomposed and compared at the stage and technique/method level to identify a combination for the development of the integrated framework. The proposed BPR methodology can be applied in any company or sector; the methods and techniques incorporated are not specific to any sector. The proposed BPR methodology constitutes an aid for supply chain practitioners in the construction of SCI

Abstract:

Some industries have consumers who seek novelty and firms that innovate vigorously and whose organizational structure is loosely coupled, or easily adaptable. Other industries have consumers who take comfort in the traditional and firms that innovate little and whose organizational structure is tightly coupled, or not easily adaptable. This paper proposes a model that explains why the described features tend to covary across industries. The model highlights the pervasiveness of equilibrium inefficiency (innovation can be insufficient or excessive) and the nonmonotonicity of welfare in the equilibrium amount of innovation

Abstract:

This paper studies a standard dynamic trading environment with asymmetric information. A trading mechanism, called a dark market, is proposed that achieves allocative efficiency (i.e., maximizes the total surplus). The mechanism’s critical feature is that it conceals from traders the history of past trades. Under plausible conditions, the dark market is stable (i.e., impervious to non-conforming trades offered by an entrant market-maker)

Abstract:

In the standard market-microstructure model of Glosten and Milgrom (1985), public information can have negative social value. Equivalently, an increase in informational asymmetry can raise the total surplus from trade

Abstract:

In pure limit-order markets, the use of large orders is discouraged by potential front-runners. This problem can be mitigated by using expandable orders or iceberg orders, or by splitting a large order into smaller ones. An expandable order gives a trader an option to sequentially expand the size of his trade while holding the price fixed. An iceberg order enables a trader to commit to a maximal trade size at some price, without fully revealing that size to other traders. This paper provides a theoretical model of expandable orders and the accompanying quantity negotiation, called (also by practitioners) the “workup”. The workup is ubiquitous in the U.S. and Canadian bond markets and in over-the-counter markets. The model suggests that even when iceberg orders are available, traders will still use the workup if splitting an order is prohibitively costly, front-running is a threat, and the exchange lacks trust

Abstract:

Negotiations about a merger or acquisition are often sequential and only partially disclose to bidders information about each other’s bids. This paper explains the seller optimality of partial disclosure in a single-item private-value auction with two bidders. Each bidder can inspect the item at a nonprohibitive cost. If a revenue-maximizing seller cannot charge bidders for the information about the other’s bid, then the seller optimally runs a sequential second-price auction with a reserve price and a buy-now price. The seller prefers to keep the bids confidential and, sometimes, to hide the order in which he approaches the bidders

Abstract:

The communication of ideas fosters technological progress and prevents regress. This paper develops a growth model wherein an economy's technology is endogenous to agents' communication decisions. In equilibrium, there is too little communication and insufficient risk-taking relative to the first best. The model can generate an abrupt take-off of output growth without an exogenous "catastrophe." A numerical example illustrates such a take-off. In that example, the endogenous fall in the cost of communication leads to the acceleration of the growth rate of output by facilitating the transmission of knowledge and by encouraging risk-taking

Abstract:

Schelling [Schelling, T.C., 1969. Models of Segregation. American Economic Review, Papers and Proceedings, 59, 488-493, Schelling, T.C., 1971a. Dynamic Models of Segregation. Journal of Mathematical Sociology, 1 (2), 143–186, Schelling, T.C., 1971b. On the Ecology of Micromotives. The Public Interest, 25, 61–98, Schelling, T.C., 1978. Micromotives and Macrobehavior. New York: Norton.] presented a microeconomic model showing how an integrated city could unravel to a rather segregated city, notwithstanding relatively mild assumptions concerning the individual agents' preferences, i.e., no agent preferring the resulting segregation. We examine the robustness of Schelling's model, focusing in particular on its driving force: the individual preferences. We show that even if all individual agents have a strict preference for perfect integration, best-response dynamics may lead to segregation. This raises some doubts on the ability of public policies to generate integration through the promotion of openness and tolerance with respect to diversity. We also argue that the one-dimensional and two-dimensional versions of Schelling's spatial proximity model are in fact two qualitatively very different models of segregation
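
A minimal simulation makes such dynamics easy to reproduce. The sketch below implements the classic one-dimensional threshold variant on a ring; note that it uses the textbook "avoid being a local minority" preference, not the strict preference for perfect integration that the authors analyze, and all parameters are illustrative.

```python
import random

def schelling_ring(n=50, threshold=0.5, steps=2000, seed=1):
    """1D Schelling-style best-response dynamics on a ring: an agent
    unhappy with its neighbourhood swaps with a random agent of the
    other type. Rules and parameters are illustrative."""
    random.seed(seed)
    city = [random.choice([0, 1]) for _ in range(n)]

    def unhappy(i):
        # Fraction of the two nearest neighbours sharing i's type.
        same = sum(city[(i + d) % n] == city[i] for d in (-1, 1)) / 2
        return same < threshold

    for _ in range(steps):
        i = random.randrange(n)
        if unhappy(i):
            j = random.choice([k for k in range(n) if city[k] != city[i]])
            city[i], city[j] = city[j], city[i]
    return city

# Long segregated runs of 0s and 1s typically emerge from a random start.
print("".join(map(str, schelling_ring())))
```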

Abstract:

Ongoing research aims at finding clues that reveal the existence of Under Extremely Low Frequency (UELF) classical electromagnetic waves. Considering their extreme wavelengths, ranging from the actual size of Jupiter up to millions of light years, UELF waves cannot fit within Earth, so the search for them must look to much larger regions of space. If UELF waves could interact with cosmic structures (like a galaxy or a planetary system) whose dimensions scale with the corresponding wavelengths, then observable geometric patterns should be created and their existence revealed. This paper analyses the feasible trapping of UELF electromagnetic waves within cosmic structures with a disk geometry that has an approximately circular sharp edge. By making a direct analogy with an optical fiber slice, we find characteristic time periods on the order of tens of thousands of years for spiral galaxies like the Milky Way, and on the order of several hours for a solar system like ours. Actual geometric patterns, like the planetary magnetic field alignment and the recently published 3D Milky Way mapping, support and encourage our research. While our search for clues supporting the existence of UELF waves continues, the present working paper provides novel elements of analysis with which to advance it

Abstract:

The clock and wavefront paradigm is arguably the most widely accepted model for explaining the embryonic process of somitogenesis. According to this model, somitogenesis is based upon the interaction between a genetic oscillator, known as segmentation clock, and a differentiation wavefront, which provides the positional information indicating where each pair of somites is formed. Shortly after the clock and wavefront paradigm was introduced, Meinhardt presented a conceptually different mathematical model for morphogenesis in general, and somitogenesis in particular. Recently, Cotterell et al. [A local, self-organizing reaction-diffusion model can explain somite patterning in embryos, Cell Syst. 1, 257-269 (2015)] rediscovered an equivalent model by systematically enumerating and studying small networks performing segmentation. Cotterell et al. called it a progressive oscillatory reaction-diffusion (PORD) model. In the Meinhardt-PORD model, somitogenesis is driven by short-range interactions and the posterior movement of the front is a local, emergent phenomenon, which is not controlled by global positional information. With this model, it is possible to explain some experimental observations that are incompatible with the clock and wavefront model. However, the Meinhardt-PORD model has some important disadvantages of its own. Namely, it is quite sensitive to fluctuations and depends on very specific initial conditions (which are not biologically realistic). In this work, we propose an equivalent Meinhardt-PORD model and then amend it to couple it with a wavefront consisting of a receding morphogen gradient. By doing so, we get a hybrid model between the Meinhardt-PORD and the clock-and-wavefront ones, which overcomes most of the deficiencies of the two originating models
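
The numerical core of such models is a standard reaction-diffusion integration. The sketch below shows only the generic explicit-Euler update for a two-species 1D system with periodic boundaries; the specific Meinhardt-PORD kinetics f and g, and the receding-wavefront coupling, are described in the paper and are not reproduced here.

```python
import numpy as np

def rd_step(u, v, f, g, Du, Dv, dx, dt):
    """One explicit Euler step of a generic 1D two-species
    reaction-diffusion system u_t = Du*u_xx + f(u, v),
    v_t = Dv*v_xx + g(u, v), with periodic boundaries."""
    lap = lambda w: (np.roll(w, 1) + np.roll(w, -1) - 2 * w) / dx**2
    return (u + dt * (Du * lap(u) + f(u, v)),
            v + dt * (Dv * lap(v) + g(u, v)))
```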

Resumen:

Se examinan críticamente algunas de las principales interpretaciones del pensamiento de Leo Strauss que lo relacionan tanto con el gobierno conservador de George Bush y la justificación de la guerra en Irak, como con la teología política de Carl Schmitt, para concluir que Strauss tiene muy poco que ver con ambos, si bien comparte algunos puntos de vista

Abstract:

The main interpretations of the thought of Leo Strauss that relate him both to the conservative government of George Bush and the justification of the war in Iraq, and to the political theology of Carl Schmitt, are critically examined. It is concluded that Strauss has very little to do with either, although he shares some points of view

Resumen:

El género gauchesco, la invención más original del Río de la Plata (Uruguay y Argentina), encuentra una de sus cumbres estéticas en el Fausto criollo, de Estanislao del Campo. Por su estructura y motivos, que parten de la obra musical de Gounod, el poema gauchesco ha sobrevivido a todos los avatares culturales y político-ideológicos: un texto asombrosamente moderno, en el que parodia, carnavalización y dialogismo se manifiestan en todo su esplendor, con lenguaje arcaico reelaborado con un género popular deliberado. Las redes intertextuales contribuyen a la perduración del poema

Abstract:

The Gaucho genre, an original creation from the Río de la Plata (Uruguay and Argentina), reaches one of its aesthetic pinnacles in Estanislao del Campo’s Fausto criollo. Due to its structure and motifs, which originate in Gounod’s opera, the Gaucho poem has stood the test of time while confronting cultural and politico-ideological vicissitudes. It is an astoundingly modern text where parody, carnivalization, and dialogism are shown in all their splendor, in a reworked archaic language within a deliberately popular genre. Its significant intertextuality has contributed to the poem’s lasting fame

Resumen:

Literatura y alcohol: binomio bien conocido como tema literario y como forma de vida. Este es un recorrido por la historia del tango: desde sus orígenes, a principios del siglo XX, hasta la controvertida renovación de Piazzolla, haciendo énfasis en los contenidos sobre borracheras, ‘crudas’ y la relación machista hombre-mujer

Abstract:

Literature and alcohol: a well known pair both in the literary world and in real life. This is a journey to discover the history of tango: from its origins in the early twentieth century to Piazzolla’s controversial renovation, while emphasizing topics such as drunkenness, hangovers, and machismo

Resumen:

El artículo se ocupa de Silvina Bullrich, escritora argentina cuya popularidad y cuyo prestigio se basan en libros mediocres que se transformaron en best-sellers y ocultaron, parcialmente, su mala fe y su apoyo a la dictadura militar más despiadada que sufrió el país sudamericano desde 1976 hasta los primeros años de la década de 1980

Abstract:

This article is about Silvina Bullrich, an Argentinean female writer whose popularity and prestige are due to mediocre books that became best sellers and partially hid her bad faith and support of the Argentinean military dictatorship, which was responsible for the cruelest events in Argentina’s history from 1976 to the early 1980s

Resumen:

Paquete estadístico con interfaz gráfica para realizar el análisis no paramétrico

Abstract:

Non-Parametric Module (NPMOD) is a graphical interface dedicated to the testing of nonparametric data for educational purposes

Abstract:

Background: Sustainable Development Goal 3.2 has targeted elimination of preventable child mortality, reduction of neonatal death to less than 12 per 1000 livebirths, and reduction of death of children younger than 5 years to less than 25 per 1000 livebirths, for each country by 2030. To understand current rates, recent trends, and potential trajectories of child mortality for the next decade, we present the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 findings for all-cause mortality and cause-specific mortality in children younger than 5 years of age, with multiple scenarios for child mortality in 2030 that include the consideration of potential effects of COVID-19, and a novel framework for quantifying optimal child survival

Abstract:

We characterize the following choice procedure. The decision maker is endowed with two binary relations over alternatives, a preference and a similarity. In every choice problem she includes in her choice set all alternatives which are similar to the best feasible alternative. Hence she can, by mistake, choose an inferior option because it is similar to the best. We characterize this boundedly rational behavior by suitably weakening the rationalizability axiom of Arrow (1959). We also characterize a variation where the decision maker chooses alternatives on the basis of their similarities to attractive yet infeasible options. We show that similarity-based mistakes of either kind lead to cyclical behavior. Finally, we reinterpret our procedure as a method for choosing a bundle given a set of individual items, in which the decision maker combines the best feasible item with those that complement it
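
The procedure is easy to state operationally: find the preference-best feasible alternative, then include everything similar to it. A toy encoding follows (our illustration, not the paper's formal notation):

```python
def choice_set(feasible, better, similar):
    """All feasible alternatives similar to the best feasible one.
    `better(a, b)` encodes strict preference; `similar(a, b)` encodes
    the (possibly intransitive) similarity relation."""
    best = max(feasible, key=lambda a: sum(better(a, b) for b in feasible))
    return {a for a in feasible if a == best or similar(a, best)}

# Toy example: preference by value; similarity = values within 1 unit.
values = {"x": 10.0, "y": 9.5, "z": 4.0}
better = lambda a, b: values[a] > values[b]
similar = lambda a, b: abs(values[a] - values[b]) <= 1.0
print(choice_set(set(values), better, similar))  # {'x', 'y'}: y chosen by mistake
```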

Resumen:

La economía peruana mostró entre 1998 y 2012 un rápido crecimiento acompañado de una mejora en la composición de la fuerza laboral en términos de capital humano (educación y experiencia). Sin embargo, el salario real promedio permaneció prácticamente constante. Mostramos que el estancamiento salarial está asociado a la disminución de los retornos a la educación y, en menor medida, a la experiencia. Exploramos el papel de la oferta relativa de trabajadores con diferentes niveles de capital humano como una explicación de la disminución de la prima salarial de la educación. También discutimos las implicaciones de los cambios en los salarios relativos de distintos grupos de trabajadores para la desigualdad de ingresos, la participación laboral y la productividad total de los factores

Abstract:

From 1998 to 2012, the Peruvian economy exhibited rapid growth. Moreover, the composition of the labor force improved in terms of education and experience, two variables that are typically associated with higher human capital. The average worker in 2012 had a higher level of education and was one and a half years older than in 1998, reflecting the impact of the demographic transition. However, the average real wage was roughly constant. We show that a decline in the wage premium for education, and to a minor extent for experience, is responsible for the lack of growth in the average real wage. Had these two premia remained constant throughout the period of analysis, average labor earnings would have increased by about 2.6% per year, of which 0.7 percentage points are accounted for by the changes in the composition of the labor force in terms of age and education. We explore the role of the relative supply of workers with different levels of human capital as an explanation for the decline in the wage premium for education. Finally, we analyze the implications of these findings for some macroeconomic variables, such as earnings and wage inequality, the labor share and total factor productivity

Abstract:

Sampling based motion planning methods have been highly successful in solving many high degree of freedom motion planning problems arising in diverse application domains such as traditional robotics, computer-aided design, and computational biology and chemistry. Recent work in metrics for sampling based planners provide tools to analyze the model building process at three levels of detail: sample level, region level, and global level. These tools are useful for comparing the evolution of sampling methods, and have shown promise to improve the process altogether. Here, we introduce a filtering strategy for the Probabilistic Roadmap Methods (PRM) with the aim to improve roadmap construction performance by selecting only the samples that are likely to produce roadmap structure improvement. By measuring a new sample’s maximum potential structural improvement with respect to the current roadmap, we can choose to only accept samples that have an adequate potential for improvement. We show how this approach can improve the standard PRM framework in a variety of motion planning situations using popular sampling techniques
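
The filtering idea can be sketched independently of any particular improvement metric. Below, a sample in the unit square is accepted only if it covers empty space or can merge two roadmap components; this acceptance rule is a stand-in for the structural-improvement measure the paper proposes, and all parameters are illustrative.

```python
import random

def filtered_prm(n_samples=500, radius=0.15, seed=0):
    """PRM-style sampling in the unit square with a simple acceptance
    filter: keep a sample only if it may improve roadmap structure,
    i.e. it covers empty space or can merge two components.
    (Stand-in rule, not the paper's improvement metric.)"""
    random.seed(seed)
    nodes, parent = [], []

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def near(p):  # indices of existing nodes within the connection radius
        return [i for i, q in enumerate(nodes)
                if (p[0] - q[0])**2 + (p[1] - q[1])**2 <= radius**2]

    for _ in range(n_samples):
        p = (random.random(), random.random())
        nbrs = near(p)
        comps = {find(i) for i in nbrs}
        if nbrs and len(comps) < 2:
            continue  # reject: would only thicken one existing component
        nodes.append(p)
        parent.append(len(parent))
        for i in nbrs:  # connect the new node and merge components
            parent[find(i)] = find(len(nodes) - 1)
    return nodes

print(len(filtered_prm()), "accepted samples")
```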

Abstract:

For more than a century, work and organizational psychology has made scientific contributions to the improvement of work, seeking to promote humane and productive organizations. The economic crisis is putting at risk some of the achievements in quality of working life and working conditions, and raises the question of the sustainability of well-being at work. This presentation aims to offer a conceptualization of well-being at work that takes into account recent research and developments in psychology in this field and their implications for intervention in organizations. In addition, building on the "happy-productive worker" model and the research on the relationship between these two "outcomes" of work activity, it raises the need to consider the combination of both in order to explore the antecedents that lead to the synergy of these two elements. The presentation also addresses the main antecedents of sustainable well-being at work and the challenges they pose for evidence-based research and professional practice by psychologists in this field

Resumen:

En el presente ensayo se analizan los rasgos sobresalientes del periodo 2012-2018, en que las instituciones políticas entraron en crisis y el país enfrentó retos inéditos en la relación con Estados Unidos. Examina las condiciones en que se llevó a cabo la política exterior de México, tanto internas como externas, y su incidencia en las decisiones políticas. Como resultado de ello se divide el sexenio en tres partes derivadas de los hechos sustantivos que lo marcaron, para referirse en la parte final a la actividad del presidente electo en los meses previos a la toma de posesión

Abstract:

This article analyzes the salient features of the 2012-2018 administration, during which political institutions entered crisis and the country faced unprecedented challenges in its relationship with the United States. It examines the conditions under which Mexico’s foreign policy, both internal and external, was implemented, and its impact on political decisions. As a result, it divides the administration into three parts on the basis of the major events that defined it, before describing in the final section the activity of the president-elect in the months prior to taking office

Résumé:

Ce texte analyse les traits saillants de la période 2012-2018, durant laquelle les institutions politiques du Mexique sont entrées en crise, juste au moment où le pays devait faire face à des défis jamais vus dans ses rapports avec les États-Unis. On décrit les conditions, internes et internationales, dans lesquelles s’est déroulée la politique étrangère mexicaine, ainsi que leur influence sur les décisions qui ont été prises. Dans ce but, on distingue trois étapes le long du gouvernement en question, suivant les faits essentiels qui ont marqué chacune d’elles. La dernière partie de l’article fait allusion aussi aux activités du successeur de Peña Nieto –le président élu, López Obrador– durant les mois qui ont précédé son accès au pouvoir

Abstract:

Most societies today are culturally diverse. Increasingly, minority groups are demanding recognition and self-governing rights to protect their ways of life against that of the majority. These demands represent a serious challenge for the state: how is it to balance between the equally legitimate claims of the many cultures inhabiting its territories, all the while promoting a set of common practices and democratic institutions? In several influential publications, Will Kymlicka has offered persuasive answers to those questions. This article examines his theory, with particular emphasis on the distinction he draws between what he calls national minorities and polyethnic (or immigrant) groups. Given his hierarchical structuring of both groups, this article attempts to show that Kymlicka falls into somewhat contradictory positions, especially evident when considering the implications of his theory on how education is structured within multicultural states

Resumen:

Octavio Paz analizó la pintura desde la perspectiva de un poeta. Como su mentor artístico Charles Baudelaire, Paz tuvo dos características muy difíciles de encontrar: fue un poeta y un agudo crítico de arte. En este artículo el autor comenta la relación de Octavio Paz entre poesía y artes plásticas y su papel como editor de arte en las revistas Plural (1971-1976) y Vuelta (1976-1998)

Abstract:

Octavio Paz analyzed painting from the perspective of a poet. Like his artistic mentor Charles Baudelaire, Paz combined two qualities that are very difficult to find together: he was both a poet and a keen art critic. In this article the author discusses the relationship between poetry and the visual arts in Octavio Paz’s work, and his role as art editor for the magazines Plural (1971-1976) and Vuelta (1976-1998)

Resumen:

El presente ensayo trata sobre la influencia de Octavio Paz como colaborador destacado de publicaciones culturales, en particular, de la revista Mundo Nuevo, fundada por el famoso crítico uruguayo y biógrafo de Jorge Luis Borges, Emir Rodríguez Monegal

Abstract:

This article showcases Octavio Paz’s influence in his role as a distinguished contributor to cultural publications, namely, Mundo Nuevo magazine, created by the famous Uruguayan critic and biographer of Jorge Luis Borges, Emir Rodríguez Monegal

Resumen:

Hay muchos escritores europeos –Aldous Huxley, Antonin Artaud, André Breton, Benjamin Péret, D. H. Lawrence, Malcolm Lowry, Graham Greene, Max Frisch, Italo Calvino, entre otros– que han viajado y escrito sobre México. Pero muchos de ellos, escribiendo sobre México, no han hecho otra cosa que escribir sobre ellos mismos. Jean Marie Gustave Le Clézio vivió en México varios años y su principal preocupación, en muchos de sus libros, no fue él mismo, sino México, los sobrevivientes que han guardado la memoria de sus ancestros. Trois villes saintes es un testimonio sobre eso. Trois villes saintes es un viaje a través de la selva de Chiapas y Yucatán buscando a los pueblos indígenas que guardan la memoria de sus antepasados

Abstract:

There are many European writers –Aldous Huxley, Antonin Artaud, André Breton, Benjamin Péret, D. H. Lawrence, Malcolm Lowry, Graham Greene, Max Frisch, Italo Calvino, among others– who have traveled and written about Mexico. However, many of them, when writing about Mexico, have only written about themselves. Jean Marie Gustave Le Clézio lived in Mexico for many years and his main concern in many of his books was not writing about himself but about Mexico, namely the survivors who have kept the memory of their ancestors alive. Trois villes saintes serves as a testimony to that. Trois villes saintes is a journey through the jungles of Chiapas and Yucatán in search of the indigenous people who strive to maintain their ancestral ties

Resumen:

Este trabajo discute las posibilidades y los límites de la teoría del populismo de Ernesto Laclau. En primer lugar, reconstruyo la lógica específica del populismo, la cual se expresa en la articulación de dos dimensiones contradictorias, a saber: la complejidad y la simplificación de lo social. En segundo lugar, analizo el carácter anti institucional del populismo. Por último, discuto el carácter "radical" del populismo y su relación con las categorías de populismo de izquierda y de emancipación

Abstract:

This work discusses the possibilities and limits of Ernesto Laclau's theory of populism. First, the specific logic of populism is reconstructed, expressed in the coordination of two contradictory dimensions, namely complexity and the simplification of what is social. Second, the anti-institutional character of populism is analyzed. Finally, the "radical" character of populism and its relation to the categories of left-wing populism and emancipation are discussed

Abstract:

Delay differential equations have been used to model problems in a variety of disciplines, such as biology, economics, physics and engineering, among others. In this article we describe systems modeled with delay differential equations and present some of the methods employed to solve them; we use the method of steps, the Laplace transform, and the characteristic equation to solve the equation x'(t) = αx(t − τ), with α and τ real parameters. We also show, by means of numerical integration, the qualitative change of the solutions as a function of the parameters α and τ
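
The method of steps reduces the delay equation to ordinary integration over successive intervals of length τ. A minimal numerical sketch for x'(t) = αx(t − τ) with a constant history (step size and parameter values are illustrative):

```python
def dde_steps(alpha, tau, history=lambda t: 1.0, t_end=5.0, dt=0.001):
    """Integrate x'(t) = alpha * x(t - tau) by the method of steps
    combined with forward Euler; `history` supplies x on [-tau, 0]."""
    n_hist = int(round(tau / dt))
    ts = [-tau + k * dt for k in range(n_hist)]
    xs = [history(t) for t in ts]              # stored values on [-tau, 0)
    t, x = 0.0, history(0.0)
    ts.append(t); xs.append(x)
    while t < t_end:
        delayed = xs[len(xs) - 1 - n_hist]     # x(t - tau), already computed
        x += dt * alpha * delayed
        t += dt
        ts.append(t); xs.append(x)
    return ts, xs

# For alpha < 0: solutions decay when alpha*tau > -pi/2 (oscillating once
# alpha*tau < -1/e) and oscillate with growing amplitude beyond -pi/2;
# this is the qualitative change in (alpha, tau) discussed above.
ts, xs = dde_steps(alpha=-1.5, tau=1.0)
print(f"x(5) = {xs[-1]:.4f}")
```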

Abstract:

Purpose – The purpose of this paper is to show how employees’ work cultural values in three cities of two different South American countries (Buenos Aires, Sao Paulo, and Rio de Janeiro) differ, and how these differences are related to the manner in which people perceive risk and construe the meaning of danger. Design/methodology/approach – A total of 220 line employees of a multinational enterprise in Rio de Janeiro, Sao Paulo and Buenos Aires participated in this study. The paper compared the means of reported job satisfaction and cultural values among the cities. Furthermore, regressions are used for cultural values on perceptions of risks from job hazards. Findings – There are different cultural values across the cities. These cultural values are associated with the manner people understand risk and respond to risk management programs. This could eventually influence the success of the implementation of safety management programs. Research limitations/implications – This is a study carried out in a single organization within the transportation industry. Managers and scholars must be careful in generalizing these findings across geographical locations and industries. Practical implications – The findings challenge the assumption that safety-training methods can be applied indiscriminately in every country without taking into account national culture and intra-national subculture differences. Originality/value – This study explores the importance of culture in the transfer and administration of US-made safety programs to South America within the context of the high-risk transportation industry segment. Its findings are important for multinational enterprises concerned with the safety of workers in high-risk industries

Abstract:

This paper reports on the development of the Trust in Risk Communication scale (TRC). The TRC measures beliefs in government, management and union trustworthiness. The TRC is found to be significantly associated with (1) perception of risk from industrial hazards, (2) the Mayer, Davis and Schoorman trust-related factors, (3) dimensions defined by the theory of reasoned action, (4) job satisfaction, and (5) hope. There was no significant relationship between Schwartz values and the TRC. The TRC is shown to have appropriate psychometric characteristics across cultural groups. Data from workers (N = 506) in Argentina, Brazil, Canada, Mexico and the United States suggest that the TRC predicts workers’ willingness to accept risk at the workplace. The TRC can assist researchers and practitioners by providing them with an overall assessment of workers’ trust in risk management programmes

Resumen:

Objetivo. Proponer un índice de seguridad de cruces peatonales (ISCP) sobre vialidades primarias en Ciudad de México para calificar los cruceros peatonales semaforizados, y contrastar el ISCP con hechos de tránsito para probar, en forma empírica, si hay alguna asociación entre la calidad de los cruceros y la siniestralidad. Métodos. Identificación de los criterios del índice mediante una revisión del estado del arte, ponderación de los criterios para generar el ISCP mediante el método de análisis multicriterio, diseño de una muestra aleatoria estratificada de cruces peatonales (n = 490) y su evaluación. Resultados. Relativo a la evaluación de los cruceros mediante el ISCP, destaca que 91.3% de los cruces evaluados en Ciudad de México no cuentan con las condiciones óptimas para resguardar la seguridad de los peatones, con el macrocriterio “Accesibilidad” como el peor calificado. En lo referente al modelaje, resalta que tanto la mezcla de usos del suelo como la distancia de cruce son las variables explicativas más importantes para predecir hechos de tránsito. Conclusiones. El análisis mostró con relativo éxito la relación entre algunas de las variables (criterios) que conforman el ISCP con los hechos de tránsito. En muchos casos, esto muestra coherencia teórica. En otros, abre preguntas de investigación

Abstract:

Objective. Propose a pedestrian crosswalk safety rating (PCSR) for primary roads in Mexico City in order to rate crosswalk safety at intersections with a traffic light and then compare the PCSR with traffic accidents so as to empirically determine any association between the quality of the crosswalk and the traffic accident rate. Methods. Identify criteria for the rating system through a state-of-the art review; weight the criteria to create a rating system through multicriterion analysis; design a stratified random sample of crosswalks (n = 490); and evaluate the data set. Results. Through the PCSR, 91.3% of the crosswalks evaluated in Mexico City were found not to offer the conditions required to protect pedestrian safety; the “access” macro-criterion received the worst scores. The modelling shows that mixed land use and the length of the crosswalk are the most important variables in predicting traffic accidents. Conclusions. The analysis was relatively successful in showing the relationship between some variables (criteria) of the PCSR and traffic accidents. In many cases, this shows theoretical coherence; in others, research questions are raised

Resumo:

Objetivo. Propor um índice de segurança de travessia de pedestres (ISTP) para as principais vias públicas na Cidade do México para classificar as travessias de pedestres semaforizadas e comparar o ISTP com os dados de trânsito para comprovar empiricamente se existe associação entre a qualidade dos locais de travessia e a taxa de acidentes. Métodos. Foram identificados os critérios do índice com uma revisão do conhecimento atual e os critérios para gerar o ISTP foram ponderados com uso do método de análise de decisão multicritério e delineamento e avaliação de uma amostra aleatória estratificada de travessias de pedestres (n = 490). Resultados. Com respeito à avaliação das travessias com o uso do ISTP, verificou-se que 91.3% das travessias avaliadas na Cidade do México não têm condições ideais para resguardar a segurança dos pedestres, com o macrocritério “acessibilidade” com a pior qualificação. Quanto à modelagem, observou-se que a mescla de usos do solo e a distância da travessia são as variáveis explicativas mais importantes para predizer a ocorrência de acidentes de trânsito. Conclusões. A análise demonstrou com relativo sucesso a relação entre algumas variáveis (critérios) que compõem o ISTP e acidentes de trânsito. Houve uma coerência teórica em muitos casos, porém em outros suscitou dúvidas a serem investigadas
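
Operationally, the index described above is a weighted multicriteria score. A schematic of the scoring step (the criteria names, weights, and scores below are invented placeholders; the real ISCP criteria and weights come from the paper's multicriteria analysis):

```python
# Hypothetical macro-criteria weights and 0-1 scores for one crosswalk.
weights = {"accessibility": 0.35, "visibility": 0.25,
           "signal_timing": 0.25, "geometry": 0.15}
scores = {"accessibility": 0.2, "visibility": 0.7,
          "signal_timing": 0.6, "geometry": 0.5}

iscp = sum(weights[c] * scores[c] for c in weights)
print(f"ISCP = {iscp:.2f} (higher = safer crossing)")
```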

Abstract:

Quick counts are usually based on a pre-specified, stratified, sampling design. To define the specifications of the design, results from previous elections are used and quantification of the estimation error is obtained. However, in practice, information is gathered gradually with no guarantee of obtaining the whole designed sample. If the original strata are also study domains, incomplete samples worsen the problem because there might be strata with no information at all. To produce partial estimates, we must adapt the estimation procedure. A specific technique based on dynamic clusterings plus a credibility level correction is considered to solve the lack of information in a stratified sampling design, where strata are also study domains. In particular, we use a hierarchical clustering method, based on data from previous elections, to create a poststratification to impute the sufficient statistics for those strata without sample observations available. We present the details of the sampling design and illustrate the results of the federal quick count in the 2021 Mexican election
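
The imputation step can be sketched as follows: strata are clustered on past-election results, and a stratum with no sample borrows the statistic of its cluster peers. The data and cluster count below are invented for illustration; the actual design uses richer covariates and a credibility correction.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical past-election vote shares per stratum (rows = strata).
past = np.array([[0.45, 0.35], [0.44, 0.36], [0.20, 0.60],
                 [0.22, 0.58], [0.33, 0.47], [0.31, 0.49]])
clusters = fcluster(linkage(past, method="ward"), t=3, criterion="maxclust")

# Current partial sample: observed shares, NaN = stratum with no sample yet.
observed = np.array([0.47, np.nan, 0.21, np.nan, 0.35, 0.33])

imputed = observed.copy()
for s in np.where(np.isnan(observed))[0]:
    peers = (clusters == clusters[s]) & ~np.isnan(observed)
    imputed[s] = observed[peers].mean()   # borrow strength from the cluster
print(imputed.round(3))
```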

Resumen:

En este trabajo se lleva a cabo un análisis mediante un modelado jerárquico de las emisiones de gases de efecto invernadero (GEI) de los principales sectores industriales de México desde 1999 hasta el 2012, el cual se utiliza para obtener estimadores de eficiencia económico-ambiental para cada sector. El objetivo es cuantificar la relación entre indicadores económicos clave y las series GEI al analizar su comportamiento dinámico. La información es obtenida de los Censos Económicos del Instituto Nacional de Estadística y Geografía y de los inventarios nacionales de emisiones del Instituto Nacional de Ecología y Cambio Climático

Abstract:

This work presents a hierarchical analysis of Greenhouse Gas Emissions (GHG) of the main industrial sectors in Mexico from 1999 to 2012. Hierarchical dynamic models are used to obtain estimates of economic-environmental efficiency indicators for each sector. The objective is to quantify the relationship between key economic indicators and the GHG series by analyzing their dynamic behavior. Information was obtained from the economic censuses of the National Institute of Statistics and Geography (INEGI) and the national emissions inventories of the National Institute of Ecology and Climate Change (INECC)

Abstract:

We examine the relative equilibria of the four vortex problem where three vortices have equal strength, that is Γ1=Γ2=Γ3=1 and Γ4=m, where m is a parameter. We study the problem in the case of the classical logarithmic vortex Hamiltonian and in the case where the Hamiltonian is a homogeneous function of degree α. First we study the bifurcations emanating from the equilateral triangle configuration with the fourth vortex at the barycenter. Then, in the vortex case, we show that all the convex configurations contain a line of symmetry. We also give a precise count of the number of concave asymmetric configurations. Our techniques involve a combination of analysis and computational algebraic geometry
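
For reference, the logarithmic vortex Hamiltonian the abstract refers to, together with one common power-law generalization; normalization conventions vary across the literature, so take this as a transcription of the standard forms rather than the paper's exact notation:

```latex
% Classical logarithmic N-vortex Hamiltonian and a homogeneous
% power-law counterpart of degree alpha (one common normalization):
H = -\frac{1}{2\pi} \sum_{1 \le i < j \le N} \Gamma_i \Gamma_j \log \lVert q_i - q_j \rVert,
\qquad
H_\alpha \propto \sum_{1 \le i < j \le N} \frac{\Gamma_i \Gamma_j}{\lVert q_i - q_j \rVert^{\alpha}}
```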

Abstract:

We consider the n–body problem defined on surfaces of constant positive curvature modelled by the unit sphere S2. For the 5 and 7–body symmetrical problem where all particles are located on the same geodesic (collinear configuration), we analyze all possible configurations of the masses with respect to the unit circle, and show when there are, and when there are not, masses which generate relative equilibria for each given configuration. Given positions for which relative equilibria exist, there are infinitely many values of the masses that generate such solutions. We give explicitly the values of the masses in terms of the initial positions
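
For context, the potential usually adopted for the n-body problem on the unit sphere is the cotangent potential, in the convention popularized by Diacu and coauthors; we restate it here as background, not as this paper's notation:

```latex
% Curved n-body potential on the unit sphere S^2, with d_ij the
% geodesic distance between bodies i and j (cos d_ij = q_i . q_j):
U(q) = \sum_{1 \le i < j \le n} m_i m_j \cot d_{ij},
\qquad q_i \in S^2
```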

Abstract:

We consider the n–body problem defined on surfaces of constant negative curvature. For the five and seven body cases with a symmetric configuration, we give conditions on initial positions for the existence of collinear hyperbolic relative equilibria, where collinear means that the bodies are on the same geodesic. If the positions satisfy the above conditions, then we give explicitly the values of the masses, in terms of the positions, that can lead to this type of relative equilibrium. The set of parameters that lead to these solutions has positive Lebesgue measure. For the general case of n equal masses, we prove the non-existence of hyperbolic relative equilibria with a regular polygonal shape. In particular, the Lagrangian hyperbolic relative equilibria do not exist

Abstract:

We consider (n + 1) bodies moving under their mutual gravitational attraction in spaces with constant Gaussian curvature K. In this system, n primary bodies with equal masses form a relative equilibrium solution with a regular polygon configuration; the remaining body of negligible mass does not affect the motion of the others. We show that the singularity due to binary collision between the negligible mass and the primaries can be regularized locally and globally through suitable changes of coordinates (Levi-Civita and Birkhoff type transformations)

Abstract:

We consider three point positive masses moving on S2 and H2. An Eulerian relative equilibrium is a relative equilibrium where the three masses are on the same geodesic. In this paper we analyze the spectral stability of this kind of orbit, where the mass in the middle is arbitrary and the masses at the ends are equal and located at the same distance from the central mass. For the case of S2, we found a positive measure set in the set of parameters where the relative equilibria are spectrally stable, and we give a complete classification of the spectral stability of these solutions, in the sense that, except on an algebraic curve in the space of parameters, we can determine whether the corresponding relative equilibrium is spectrally stable or unstable. On H2, in the elliptic case, we prove that generically all Eulerian relative equilibria are unstable; in the particular degenerate case when the two equal masses are negligible, we get that the corresponding solutions are spectrally stable. For the hyperbolic case we consider the system where the mass in the middle is negligible; in this case the Eulerian relative equilibria are unstable

Abstract:

Using the techniques of equivariant bifurcation theory we prove the existence of non-stationary periodic solutions of Γ-symmetric systems q̈(t) = −∇U(q(t)) in any neighborhood of an isolated orbit of minima Γ(q0) of the potential U. We show the strength of our result by proving the existence of new families of periodic orbits in the Lennard-Jones two- and three-body problems and in the Schwarzschild three-body problem

Abstract:

In this article, using an infinite-dimensional equivariant Conley index, we prove a generalization of the Liapunov center theorem for symmetric potentials. Consider a system (*) q̈ = −∇U(q), where U(q) is a Γ-invariant potential and Γ is a compact Lie group acting linearly on Rn. If system (*) possesses a non-degenerate orbit of stationary solutions Γ(q0) with trivial isotropy group, such that there exists at least one positive eigenvalue of the Hessian ∇²U(q0), then in any neighborhood of Γ(q0) there is a non-stationary periodic orbit of solutions of system (*)

Abstract:

We study the relative equilibria in the 4-vortex problem when two of the vorticities are equal to 1 and the other two are equal to m, with m sufficiently small. We prove that for m > 0 there is a unique concave kite relative equilibrium. We also prove that there is a unique convex planar relative equilibrium having two pairs of equal vorticities located at adjacent vertices of the configuration, and that it is an isosceles trapezoid

Abstract:

We examine in detail the relative equilibria of the 4-vortex problem when three vortices have equal strength, that is, Γ1 = Γ2 = Γ3 = 1, and Γ4 is a real parameter. We give the exact number of relative equilibria and bifurcation values. We also study the relative equilibria in the vortex rhombus problem

Abstract:

This paper shows that active risk management policies lead to an increase in firm value. To identify the effect of hedging and to overcome endogeneity concerns, we exploit the introduction of weather derivatives as an exogenous shock to firms' ability to hedge weather risks. This innovation disproportionately benefits weather-sensitive firms, irrespective of their future investment opportunities. Using this natural experiment and data from energy firms, we find that derivatives lead to higher valuations, investments, and leverage. Overall, our results demonstrate that risk management has real consequences on firm outcomes

Abstract:

I use data from chief executive officer (CEO) successions to examine the impact of inherited control on firms' performance. I find that firms where incoming CEOs are related to the departing CEO, to a founder, or to a large shareholder by either blood or marriage underperform in terms of operating profitability and market-to-book ratios, relative to firms that promote unrelated CEOs. Consistent with wasteful nepotism, lower performance is prominent in firms that appoint family CEOs who did not attend "selective" undergraduate institutions. Overall, the evidence indicates that nepotism hurts performance by limiting the scope of labor market competition

Abstract:

The increasing necessity of delivering software applications in unstable business environments with rapidly changing requirements has made a review of the role of traditional software development methodologies relevant. This paper presents one author's experience using Agile Methodologies inside an airline to quickly solve a planning problem after the events of September 11th. It also shows the importance of “hybrid” personnel who master both the information technology and business aspects, as a determining factor in achieving the company's objectives

Abstract:

This paper presents a new model of political competition in which candidates belong to factions. Before elections, factions compete to direct local public goods to their local constituencies. The model of factional competition delivers a rich set of implications relating the internal organization of the party to the allocation of resources. In doing so, the model provides a unified explanation of two prominent features of public resource allocations: the persistence of (possibly inefficient) policies and the tendency of public spending to favor incumbent party strongholds over swing constituencies

Abstract:

The aim of this commentary is to discuss the findings of Ying Huang and Minjie Huang (2022) in "CEO Incentive Alignment: Intensity and Time Horizon." Drawing on agency theory, the authors argue that incentives can lead to unintended negative consequences, mainly due to goal incongruence and information asymmetry, while, on the contrary, incentive intensity strengthens the alignment between CEO incentives and firm performance. In order to provide a significantly more complete framework for understanding incentive alignment, the authors include the CEO incentive time horizon, defined as the degree to which performance is evaluated and aligned with firm performance over a longer time period. Huang and Huang's (2022) findings suggest (a) that firms with longer CEO incentive time horizons exhibit significantly greater total shareholder returns, ROA growth, and reduced financial misconduct, (b) that a longer investor time horizon strengthens the relationship between CEO incentive time horizon and financial misconduct, as well as the relationship between CEO incentive time horizon and long-term performance, and (c) that higher information asymmetry strengthens the relationship between CEO incentive time horizon and financial misconduct, as well as the relationship between CEO incentive time horizon and long-term performance

Abstract:

In recent years, an interdisciplinary effort between archaeologists and computer vision experts has emerged to provide image retrieval tools that facilitate and support cultural heritage preservation. The performance of these tools largely depends on the hieroglyph representation quality. In the literature, the most successful hieroglyph representation for retrieval following the BoVW model includes a hieroglyph-thinning process and selects interest points through uniform random sampling. However, thinned hieroglyphs could have noise or redundant information, and a random set of interest points could include non-useful interest points that are different in each iteration. In this article, we propose improving this hieroglyph representation by pruning thinned hieroglyphs and introducing an improved interest-point selection. Our experiments show that our proposal significantly improves the hieroglyph retrieval results of state-of-the-art methods
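
As a rough illustration of the kind of representation involved, the sketch below (Python, using scikit-image) thins a toy binary glyph and takes the skeleton pixels as a deterministic candidate set of interest points; the image, the pruning heuristics, and the selection rule are placeholders, not the method evaluated in the article.

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary glyph (hypothetical stand-in for a real hieroglyph image).
glyph = np.zeros((64, 64), dtype=bool)
glyph[16:48, 28:36] = True          # a single vertical stroke
glyph[28:36, 16:48] = True          # crossed by a horizontal stroke

# Thinning: reduce the shape to a one-pixel-wide skeleton.
skeleton = skeletonize(glyph)

# Deterministic interest-point candidates: the skeleton pixels themselves,
# instead of uniform random sampling over the shape.
ys, xs = np.nonzero(skeleton)
interest_points = np.column_stack([xs, ys])
print(len(interest_points), "candidate points")
```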

Abstract:

The authors present and test a theory about the effects of political competition on the sources of economic growth. Using Mankiw, Romer and Weil's model of economic growth and data for roughly 80 countries, the authors show that political competition decreases the rate of physical capital accumulation and labor mobilization but increases the rate of human capital accumulation and (less conclusively) the rate of productivity change. The results suggest that political competition systematically affects the sources of growth, but those effects are cross-cutting, explaining why the overall effect of democracy may be ambiguous. These findings help clarify the debate about regime type and economic performance and suggest new avenues for research

Abstract:

We establish new facts about the way consumers allocate debt among their credit cards using data for a representative sample of cardholders in Mexico. We find that relative prices are weak predictors of the allocation of debt, purchases, and payments. Consumers allocate a large fraction of their debt to high-interest cards, incurring a cost of 31 percent above the minimum. Using an experiment, we find that consumers do not substitute on the price margin, although they respond to salient temporary low-interest offers. We conclude that limited attention and mental accounting best rationalize our results and discuss implications for the market
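
To make the flavor of that cost statistic concrete, here is a toy calculation with hypothetical balances and rates (the 31 percent figure comes from the paper's data, not from these numbers):

```python
# Hypothetical: $10,000 of debt split evenly across two cards with different
# monthly interest rates, versus the cost-minimizing allocation.
debt, r_low, r_high = 10_000, 0.02, 0.045
optimal_cost = debt * r_low                          # all debt on the cheap card (ignoring limits)
actual_cost = debt * (0.5 * r_high + 0.5 * r_low)    # half of the debt on each card
print((actual_cost - optimal_cost) / optimal_cost)   # 0.625, i.e. 62.5% above the minimum
```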

Abstract:

We examine combinations of inflation forecasts in Mexico based on biweekly surveys of experts. Consumer price inflation is measured twice each month, employing several combination methods. We advise using dimension-reduction techniques, which are comparable with different benchmark methods, including the simplest forecast based on the average. Missing values in the database are imputed by two different methods, and the results are essentially robust to the choice of method. A preliminary analysis was based on the panel structure of the data and showed the potential usefulness of dimension-reduction techniques for combining the experts' forecasts. The main conclusions are: the first monthly forecasts are best combined by means of the first principal component of the available predictions; the best second forecast is obtained by computing the median projection, and it is more accurate
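
A minimal sketch of the two combination schemes singled out above, written in Python with simulated survey data (the data layout, the imputation choice, and the weighting rule are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer

# Simulated panel: rows = survey rounds, columns = experts, ~10% missing.
rng = np.random.default_rng(0)
truth = rng.normal(4.0, 0.5, size=60)
forecasts = pd.DataFrame(truth[:, None] + rng.normal(0, 0.3, size=(60, 12)),
                         columns=[f"expert_{i}" for i in range(12)])
forecasts = forecasts.mask(rng.random(forecasts.shape) < 0.1)

X = SimpleImputer(strategy="mean").fit_transform(forecasts)   # impute missing answers

median_combo = np.median(X, axis=1)                           # median combination
w = np.abs(PCA(n_components=1).fit(X).components_[0])         # first-PC loadings ...
pc_combo = X @ (w / w.sum())                                  # ... normalized into weights
```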

Abstract:

This study investigates the relationships of two job stressors (work overload and interpersonal conflict) with organizational citizenship behavior (OCB), as well as the potential mediating effect of organizational commitment and the moderating effect of social interaction in these relationships. Multisource data from employees and their supervisors in a Mexican-based organization reveal that organizational commitment fully mediates the relationship between work overload and OCB. Empirical support also emerges for a direct negative relationship between interpersonal conflict and OCB, as well as a partial mediation effect of organizational commitment in this relationship. Further, social interaction moderates the negative effects of the two job stressors on organizational commitment, such that the relationships are attenuated when social interaction increases. Finally, the results indicate support for the presence of moderated mediation, in that the indirect effects of work overload and interpersonal conflict on OCB are attenuated at higher levels of social interaction. Human resource professionals aiming to instill OCB among their employees thus can counter the inherent pitfalls of stressful work conditions by promoting social relationships among their employees

Abstract:

Believable artificial opponents, for example, believable virtual drivers, are fundamental to engage players and make (car racing) video games more entertaining. This paper lays the foundations for the design of believable virtual drivers by proposing a methodology for profiling players using the Open Racing Car Simulator (TORCS). Data collected from 125 players about their driving behaviors and personality traits give insights into how personality traits should model the behavior of believable virtual drivers. The data analysis was conducted using a correlation analysis and the J48 decision tree algorithm. Empirical evidence shows that goal-oriented driving behaviors can be used to determine personality traits of players. In addition, this work also (i) gives preliminary insights into the relationship between the driving behavior and personality of racing game players and actual car drivers; and (ii) presents evidence of the relevance of gender as a predictor of personality traits of racing game players

Abstract:

This paper lays the foundations for the design of believable virtual drivers by proposing a methodology for profiling players using the Open Racing Car Simulator. Data collected from fifty-nine players about their driving behaviors and personality traits give insights into how personality traits should affect the behavior of believable virtual drivers. The data analysis was conducted using the J48 decision tree algorithm. Empirical evidence shows that goal-oriented driving behaviors can be used to determine whether players are introverts or extraverts
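
J48 is Weka's implementation of the C4.5 decision-tree learner; the sketch below approximates the same pipeline in Python with scikit-learn's entropy-based tree on simulated driving-behavior features (the features and labels are hypothetical, not the study's data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Simulated per-player driving statistics: e.g. average speed, overtakes,
# off-track events, lap-time consistency (hypothetical features).
rng = np.random.default_rng(1)
X = rng.normal(size=(59, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=59) > 0).astype(int)   # 1 = extravert (toy label)

tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5)
print(cross_val_score(tree, X, y, cv=5).mean())             # cross-validated accuracy
```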

Abstract:

The cross entropy method (CEM) was initially developed to estimate rare event probabilities through simulation, and has been adapted successfully to solve combinatorial optimization problems. In this research we explore the viability of using cross entropy methods for the capacitated vehicle routing problem, where we want to find an optimal routing of vehicles with limited capacity to satisfy a set of customers' demands served from a single depot. We present various CEM-based implementations for the VRP and provide extensive computational results to evaluate these approaches, discussing their advantages and drawbacks. The innovative cluster-generation method developed in this research may be applied in other settings. We also suggest improvements to the convergence of the general cross entropy method
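
For readers unfamiliar with CEM, the following is a bare-bones version of the optimization loop on a toy binary problem (the objective is a stand-in, not the paper's VRP encoding or its cluster-generation scheme): sample candidates from a parametric distribution, keep an elite fraction, and move the distribution toward the elite.

```python
import numpy as np

def cem_binary(score, n_bits, n_samples=200, elite_frac=0.1, iters=50, alpha=0.7):
    """Generic cross-entropy loop over independent Bernoulli probabilities."""
    rng = np.random.default_rng(0)
    p = np.full(n_bits, 0.5)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = (rng.random((n_samples, n_bits)) < p).astype(int)
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]       # best candidates
        p = alpha * elite.mean(axis=0) + (1 - alpha) * p     # smoothed update
    return p

# Toy objective standing in for a routing cost: reward weight, penalize capacity.
weights = np.random.default_rng(1).random(20)
p = cem_binary(lambda s: s @ weights - 2.0 * max(0, s.sum() - 10), n_bits=20)
```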

Abstract:

We evaluate a model to optimize the bicycle inventory logistics of the Mexico City bike-sharing system, Ecobici. A satisfaction function for a given inventory level is defined in terms of the probability of finding available bicycles to take out, and the number of vacant lockers to return them. We estimate this function with real data for each station. We then linearize this objective function and propose a mixed integer linear programming model that optimizes the inventory policy (removal or deposit) of bicycles at each station as well as the routing of the vehicle within a time limit to minimize a weighted sum of the travel time and the sum of satisfactions over all stations. We present results of the model and compare it with the performance of the system on a particular day. We find that the application of the model may improve the customer satisfaction with the system
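
A drastically simplified sketch of the rebalancing decision (in Python with PuLP): it keeps only integer deposit/removal variables and a linear deviation-from-target objective, omitting the satisfaction estimation, the vehicle routing, and the time limit of the actual model; all numbers are hypothetical.

```python
import pulp

stations = ["A", "B", "C"]
current = {"A": 2, "B": 18, "C": 10}    # bikes currently docked (hypothetical)
target = {"A": 8, "B": 10, "C": 10}     # satisfaction-maximizing levels (hypothetical)
cap = 12                                # bikes the service vehicle can carry

prob = pulp.LpProblem("rebalancing", pulp.LpMinimize)
move = pulp.LpVariable.dicts("move", stations, lowBound=-cap, upBound=cap, cat="Integer")
dev = pulp.LpVariable.dicts("dev", stations, lowBound=0)

for s in stations:
    # dev[s] >= |current + move - target| via two linear constraints
    prob += dev[s] >= (current[s] + move[s]) - target[s]
    prob += dev[s] >= target[s] - (current[s] + move[s])
prob += pulp.lpSum(move[s] for s in stations) == 0   # bikes are only redistributed
prob += pulp.lpSum(dev[s] for s in stations)         # objective: total deviation
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(move[s].value()) for s in stations})
```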

Abstract:

We study aircraft departure scheduling optimization with a focus on optimal sequencing on the runway and apply it to data from the Mexico City Airport. Our model considers runway layout as well as operational and safety restrictions. We evaluate solution methods, including dynamic programming and a beam search approach, providing computational results and comparisons. Our proposed methods run in reasonable times for implementation in an on-line decision system. The results also provide insight into the impact of runway layout and scheduling on reducing aircraft total throughput time and waiting times
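
As an illustration of the beam-search component, here is a generic sketch in Python: it grows partial departure sequences and keeps only the best `beam_width` at each depth; the toy separation cost is a placeholder for the paper's runway and safety restrictions.

```python
import heapq

def beam_search_sequence(flights, cost_of, beam_width=50):
    """Keep the `beam_width` cheapest partial sequences at each depth."""
    beam = [((), 0.0)]
    for _ in range(len(flights)):
        candidates = []
        for seq, _ in beam:
            for f in (f for f in flights if f not in seq):
                new_seq = seq + (f,)
                candidates.append((new_seq, cost_of(new_seq)))
        beam = heapq.nsmallest(beam_width, candidates, key=lambda c: c[1])
    return beam[0]

# Toy wake-separation cost: a heavy ('H') followed by a light ('L') needs
# extra spacing (hypothetical values).
flights = ["H1", "L1", "L2", "H2"]
def cost_of(seq):
    return sum(2.0 if a[0] == "H" and b[0] == "L" else 1.0
               for a, b in zip(seq, seq[1:]))
print(beam_search_sequence(flights, cost_of))
```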

Abstract:

We report results on an approach to teaching linear algebra using models. In particular we are interested in analyzing the use of two theories of mathematics education, namely, Models and Modeling and APOS in the design of a teaching sequence that starts with the proposal of a “real life” decision making problem to the students. We briefly illustrate the possibilities of this methodology through the analysis and description of our classroom experience on a problem related to traffic flow that elicits the use of a system of linear equations and different parameterizations of this system to answer questions on traffic control. We describe cycles of students’ work on the problem and discuss the advantages of this approach in terms of students’ learning and the possibilities of extending it to other problems and linear algebra concepts

Abstract:

Start-up companies are a vital ingredient in the success of a globalised networked world economy. We believe that such companies are interested in maximising the chance of surviving in the long term. We present a Markov decision model to analyse survival probabilities of start-up manufacturing companies. Our model examines the implications of their operating decisions, in particular how their inventory strategy is influenced by purchasing, shortage, transportation and ordering costs, as well as loans to the firm. It is shown that although the start-up company should be more conservative in its component purchasing strategy than if it were a well-established company it should not be too conservative. Nor is its strategy monotone in the amount of capital available
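
The following toy value-iteration sketch conveys the shape of such a model, with survival probability as the value function; the state space, transitions, and actions are synthetic placeholders rather than the paper's inventory dynamics.

```python
import numpy as np

# States = capital levels (0 = bankruptcy, absorbing; top state = safe),
# actions = purchasing policies; P[s, a, s'] are toy transition probabilities.
n_states, n_actions = 20, 3
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

V = np.zeros(n_states)
V[-1] = 1.0                                   # survival assured at the top state
for _ in range(500):                          # first-passage value iteration
    Q = np.einsum("sap,p->sa", P, V)          # survival prob. of each action
    V[1:-1] = Q[1:-1].max(axis=1)             # boundary states stay fixed
policy = np.einsum("sap,p->sa", P, V).argmax(axis=1)
```

A non-monotone optimal strategy of the kind reported above would show up here as `policy` failing to be ordered in the capital level.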

Abstract:

In this chapter, the authors examine the turnover of employees in Latin America, with a particular focus on Mexico. Employee turnover is important in Latin America and in Mexico, as it is in many other places, because the cost of labor typically accounts for 70% of a firm’s operating cost. When employees leave, it requires that the employer replaces the workers through human resource management processes that include recruiting, selection, orientation, and training. These costs are a significant expense to firms that they could avoid if turnover was lower. The authors identify cultural, economic, legal, and other factors that could influence employee turnover. The authors also summarize many managerial practices that can help employers to effectively manage employee turnover. Finally, the authors provide insights for future research on employee turnover in this important region of the world

Resumen:

Considerando que los negocios se enfrentan a una mayor turbulencia en el entorno, en este estudio nos basamos en la investigación previa para proponer un modelo comprehensivo que ayude a mejorar la educación en las escuelas de administración en Latinoamérica

Abstract:

To address the increasingly turbulent environments that businesses face, the purpose of this study is to build on prior research to propose a comprehensive model aimed at enhancing business school education in Latin America

Resumo:

Considerando o ambiente turbulento que o mundo dos negócios vive, este estudo, baseado em investigação prévia, propõe um modelo compreensivo para melhorar a educação nas escolas de administração da América Latina

Abstract:

This article focuses on Latin American constitutionalism with two goals in mind. The first goal is to identify narratives of constitutional mixity or hybridity that have been influential in Latin America, something that enables a comparative analysis with references to mixed or hybrid constitutionalism in other scenarios. One narrative underlines the combination of U.S.-inspired constitutionalism with background civil law systems. Another narrative highlights the way classic regional constitutional designs feature a liberal-conservative hybridization that, some claim, continues to influence constitutional dynamics up to this day. And the third narrative is associated with the establishment in regional countries of hybrid or mixed systems of judicial review. The second goal of the article, more methodological in kind, is to explore what Latin American narratives on mixity can teach us about the nature and implications of exercises in constitutional taxonomy. As it emerges, constitutional taxonomy is a way of organizing and generating knowledge which is useful for describing, but most meaningful when addressed to provide an explanation, which is often just an antechamber to normative evaluation. Whether a particular exercise of constitutional taxonomy makes sense in a particular scenario, however, is a different matter, and the article suggests that describing, explaining and evaluating Latin American constitutions as hybrid is illuminating but also problematic in a number of ways

Resumen:

Asumiendo que la igualdad en sus distintas vertientes está en el centro de la agenda interamericana del siglo XXI, este texto participa del seguimiento crítico de la jurisprudencia a partir del análisis del caso Fábrica de fuegos contra Brasil, que extiende o profundiza de varias maneras los criterios existentes, dando un lugar distintivo a la discriminación estructural, la discriminación interseccional, la discriminación por pobreza y la igualdad sustantiva. Se destacan sus aportes, pero también el uso todavía impreciso de las categorías de interseccionalidad y discriminación estructural, así como la complejidad de trasladar a los resolutivos encuadres interseccionales y estructurales simultáneos

Abstract:

Assuming equality in its different dimensions is at the center of the Inter-American agenda for the 21st century, this text joins in the collective follow-up and critical analysis of the case-law by focusing on Fábrica de fuegos v. Brazil. In this case the Court deepens or expands in various ways existing criteria, giving a distinctive place to structural discrimination, intersectional discrimination, poverty-based discrimination, and substantive equality. The contributions of the ruling are emphasized, but so is the still not entirely precise use of the notions of structural discrimination and intersectionality, as well as the complexity of simultaneously translating structural and intersectional framings into remedies

Abstract:

This article explores and develops a comprehensive set of arguments demonstrating the unconstitutionality of drug prohibition policies in Mexico. Some of these arguments have been used to validate recreational and therapeutic uses of marijuana, while others remain unused and unexplored. There are more than ten constitutional framings that are useful to evaluate the unconstitutionality of prohibitionist drug policies. These framings can be grouped into two subcategories: rights-centered and non-rights-centered. Rights-centered framings are grounded on equality, health, and the free development of the personality. Non-rights-centered framings include federalism, market regulation and the preservation of basic rule of law guarantees. This paper details resources that may be helpful for judges, policy makers, and civil society organizations interested in a human-rights-based or constitutional approach to drug policy. An in-depth case study of Mexico reveals useful arguments to evaluate the legal status of drug policy in many other countries

Abstract:

Within the conventional mapping of different modalities of constitutional change – replacement, amendment, and interpretation – Mexico exemplifies the reformist strategy taken to an extreme: its 1917 Constitution has been amended, sometimes radically, 706 times. It also illustrates a scenario in which constitutional amendment and democratisation appear to have gone hand in hand: while the rate of amendments was at first moderate, it dramatically accelerated over the last three decades, when political plurality took hold after 70 years of hegemonic party rule. Through continuous, piecemeal reform, the country has progressively incorporated rights, institutions, and regulatory solutions that are part and parcel of the characteristic contemporary Latin American constitutional 'kit', such as a long and robust declaration of rights, instruments of direct democracy, openness to international sources of law, or a multi-faceted system of judicial review

Resumen:

El artículo hace un análisis crítico de dos sentencias recientes de la Suprema Corte que abordan esquemas regulatorios de la convivencia de hecho. La primera declara constitucional una norma de Chiapas que regula de manera distinta en matrimonios y parejas de hecho el reparto de los bienes al momento de la ruptura. La segunda analiza la regulación de la adopción por parte de ciertas parejas de hecho en Campeche, declarando inconstitucional la prohibición de adopción legalmente prevista. La Corte centra sus argumentos en el derecho a la no discriminación por estado marital y presenta por primera vez esta categoría como categoría sospechosa. El artículo examina la construcción y el uso en las sentencias de varias categorías de derecho antidiscriminatorio, en particular el sistema de escrutinios. Aunque señala que la Corte decide bien en cuanto al resultado final y provee herramientas analíticas valiosas, identifica equivocidades y lagunas en el plano argumentativo que deberán decantarse con mayor claridad en el futuro. Echando una mirada al tratamiento de la discriminación por estado marital en el derecho comparado, el artículo identifica con precisión los puntos de ambigüedad y sugiere posibles direcciones de evolución

Abstract:

This essay critically analyses two recent rulings by the Mexican Supreme Court on the regulation of non-marital unions. The first validates a Chiapas statutory scheme that treats differently the economic consequences of breaking up in marital and non-marital unions. The second declares unconstitutional the banning of adoption in the case of certain non-marital unions in Campeche. The Court decides both cases on the right not to be discriminated against by reason of marital status and presents for the first time marital status as a "suspect ground" of classification. The article examines how the Court crafts and uses several anti-discrimination categories, particularly the tiers of scrutiny. Though it holds that the Court decides the bottom-line issues well and provides valuable analytical tools, it identifies ambiguities and lacunae in the arguments regarding aspects that will require further clarification in the future. Peering at marital-status anti-discrimination criteria in the comparative scenario, the essay identifies ambiguity points and suggests possible directions of evolution

Resumen:

En este texto voy a analizar una sentencia de la Primera Sala mexicana que a mi juicio amerita un análisis cuidadoso. No solo porque se centra en problemas -el sexismo, el edadismo, la discriminación sistémica, el abuso de poder en el mundo empresarial, los estereotipos laborales- que son centrales en toda América Latina, sino porque los razonamientos que en ella se vierten participan de una empresa que, entre todas las que reposan sobre la mesa del derecho constitucional contemporáneo, se ha revelado distintivamente desafiante: la construcción jurídica de la noción de igualdad sustantiva

Abstract:

In this article I will analyze a judgment of the First Chamber of the Mexican Supreme Court that I believe deserves careful analysis. Not only because it focuses on problems -sexism, ageism, systemic discrimination, abuse of power in the business world, labor stereotypes- that are central throughout Latin America, but because the arguments it sets out participate in an undertaking that, among all those on the table of contemporary constitutional law, has proved distinctly challenging: the legal construction of the notion of substantive equality

Resumen:

El artículo analiza una sentencia reciente de la Suprema Corte mexicana según la cual los daños derivados del uso de lenguaje discriminatorio -en ciertos casos- deben contrapesar el ejercicio de la libertad de expresión. La sentencia avanza tesis en tres niveles distintos de la teoría constitucional; primero, expresa una visión del tipo de ejercicio que la Corte debe desplegar cuando revisa sentencias en la vía de amparo. En un segundo y tercer nivel despliega, con diferentes grados de concreción, la lectura constitucional específica propuesta para la revisión del caso. En este punto y atendiendo al contexto histórico-constitucional relevante, el autor considera claramente acertada la decisión de la Corte de contrapesar la libertad de expresión mediante consideraciones derivadas del paradigma anti-discriminatorio, aunque en el plano de las subreglas de decisión usadas para aterrizar la lectura general señale tanto aciertos como desaciertos

Abstract:

The article discusses a recent decision by the Mexican Supreme Court whereby damage resulting from the use of discriminatory language may in certain cases appropriately counterweight freedom of speech. The ruling expresses theses at three different levels, all of them relevant from the viewpoint of constitutional theory. First, it expresses a vision of the kind of exercise the Court should deploy when reviewing sentences in amparo: it is a maximizing vision that the author considers to be fundamentally correct. At a second and third level, with different degrees of specificity, it proposes a particular constitutional reading for the revision of the case. On this point, and attending to the relevant historic-constitutional context, the article celebrates the Court's willingness to counterweight free speech with antidiscrimination-based considerations, though in terms of the sub-rules of decision used to pin down the general reading it identifies both successes and failures

Resumen:

La Suprema Corte de México resolvió en 2007 varios amparos relativos a militares expulsados de las Fuerzas Armadas por ser portadores de VIH. La autora identifica las principales cuestiones bajo discusión y los argumentos centrales de la Corte y enfatiza tres razones por las cuales estos amparos merecen ser destacados: en sentido positivo, por reforzar el uso del principio de proporcionalidad como herramienta para identificar normas discriminatorias y por abrir las puertas al uso de conocimiento científico especializado en la corte; en sentido negativo, por la ausencia de argumentos basados en la eficacia normativa directa del derecho a la salud

Abstract:

In 2007 the Mexican Supreme Court issued several opinions dealing with military personnel dismissed from the Army because of their being HIV-positive. The author describes the main questions under discussion and the core arguments developed by the Court, and stresses three reasons why these cases deserve close attention: positively, because they reinforced the use of the proportionality principle as a tool for identifying discriminatory norms and because they opened the door to the use of specialized scientific knowledge in constitutional adjudication; negatively, because they failed to build on the direct normative efficacy of the right to health

Abstract:

Authoritarian constitutionalism is a distinct phenomenon that involves an intriguing mixture of a regime type commonly known for its tendency to abuse power with a centuries-old lineage of theories and practices seeking precisely to place limits on how power is used. This chapter provides a conceptual and analytical framework that addresses both dimensions of authoritarian constitutionalism, and in so doing, it discusses the theoretical and empirical advantages and disadvantages of distinct conceptualizations of this term. It then illustrates the different categories in a conceptual map with examples drawn from Latin American countries. The chapter concludes with some promising avenues for research in this interesting and vibrant area

Resumen:

A partir de la literatura sobre instituciones formales e informales, este artículo ofrece una tipología del papel que las instituciones informales tienen en la eficacia o ineficacia de las instituciones formales. Específicamente, analizamos si, y por qué, la independencia judicial de jure diverge o converge con la independencia judicial de facto. Distinguimos cuatro relaciones entre las instituciones informales y las formales: competencia, refuerzo, traslape y redundancia. Ilustramos cada relación con ejemplos de América Latina. En México observamos ejemplos de refuerzo y competencia, mientras que en Brasil nos enfocamos en la independencia judicial interna. En Uruguay y Paraguay miramos la independencia judicial externa y mostramos la redundancia y el traslape, respectivamente

Abstract:

Building on the literature on formal and informal institutions, this paper offers a typology of the roles informal institutions play in the efficacy or inefficacy of formal institutions; in particular, we analyze whether and why de jure and de facto judicial independence converge or diverge. We distinguish four roles that informal institutions can play: competing, reinforcing, overlapping, and redundant. To illustrate our typology, we include examples of each role from Latin American countries. In Brazil we show that a norm of professionalism reinforces internal judicial independence, whereas in Mexico a norm of patronage competes with it. In Uruguay we show that a norm of legislative cooperation is redundant with a high level of de jure external judicial independence, whereas in Paraguay a norm of militant subordination overlaps with a low level of it

Abstract:

Lawyers play a fundamental role in the institutional resolution of conflicts and in the functioning and reproduction of a social and governmental system founded on laws. The regulation of the main spheres of the legal world -such as legal education, access to the legal profession, professional practice, and the organization of the bar- is key to guaranteeing the quality of the training and of the services offered by legal professionals. In Mexico, unlike other countries in the region and mainly in contrast with European countries and the United States, the regulation of these spheres is minimal or simply nonexistent. Ironically, in Mexico legal professionals are trained and practice their profession amid a legal vacuum that fosters the predominance of structural inequalities and discrimination. One consequence of the lack of regulation of legal education is an exponential increase in the number of higher education institutions (IES, by their Spanish acronym) offering the degree of "Licenciado en Derecho" (which, in fact, goes under 73 different denominations). By 2018 the number of private IES teaching this degree had reached 1,338, and that of public IES 62, for a total of 1,400. In contrast, in Germany there are 44 institutions offering a law degree (only 2 of them private) and in the United States 203 (58% of them private). Nevertheless, public and private IES issue almost the same number of professional licenses (around 90 thousand in each sector during the years 2015-2018). The uncontrolled growth of private IES and the massification of public IES are accompanied by a generalized, very low quality of legal education and by the emergence of islands of quality concentrated in a few private and public IES to which very few have access

Abstract:

The Mexican Constitution of 1917 granted the Supreme Court the power to handpick lower court judges and oversee their careers. For almost eight decades this capacity was not regulated. To fill this void, the justices began to take turns filling vacancies which developed into an informal institution - the so-called 'Gentlemen's Pact'. Using original archival data, we document and describe the birth and development of this practice and argue that it consolidated into an informal institution as the judiciary increased in size. We uncover the workings of this social norm that established a patronage model of judicial selection. Our analysis period ends in 1994, when a constitutional reform created a judicial council with the explicit aim of ending patronage and corruption within the judiciary

Abstract:

This paper contributes to the debate on Europe's modern economic growth using the statistical concept of long-range dependence. Different regimes, defined as periods between two successive endogenously estimated structural shocks, matched episodes of pandemics and war. The most persistent shocks occurred at the time of the Black Death and the twentieth century's world wars. Our findings confirm that the Black Death often resulted in higher income levels but reject the view of a uniform long-term response to the Plague. In fact, we find a negative impact on incomes in non-Malthusian economies. In the North Sea Area (Britain and the Netherlands), the Plague was followed by positive trend growth in output per capita and population, heralding the onset of modern economic growth and the Great Divergence in Eurasia
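
One standard diagnostic behind long-range dependence analyses is the Hurst exponent; a classical rescaled-range (R/S) estimator is sketched below as background (it is illustrative and not the estimation strategy of the paper):

```python
import numpy as np

def rs_hurst(x):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent H."""
    x = np.asarray(x, dtype=float)
    ns = np.unique(np.logspace(3, np.log2(len(x) // 2), 10, base=2).astype(int))
    rs = []
    for n in ns:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        rs.append(((z.max(axis=1) - z.min(axis=1)) / chunks.std(axis=1)).mean())
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope    # ~0.5 for short memory; >0.5 under long-range dependence

print(rs_hurst(np.random.default_rng(4).normal(size=4096)))   # close to 0.5
```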

Abstract:

This paper contributes to the debate on the origins of modern economic growth in Europe from a very long-run perspective using econometric techniques that allow for a long-range dependence approach. Different regimes, defined by endogenously estimated structural shocks, coincided with episodes of pandemics and war. The most persistent shocks occurred at the time of the Black Death and the twentieth century's world wars. Our findings confirm that the Black Death often resulted in higher income levels, but reject the view of a uniform long-term response to the Plague, as we evidence a negative reaction in non-Malthusian economies. Positive trend growth in output per head and population took place in the North Sea Area (Britain and the Netherlands) since the Plague. A gap between the North Sea Area and the rest of Europe, the Little Divergence, emerged between the early seventeenth century and the Napoleonic Wars, lending support to Broadberry-van Zanden's interpretation

Abstract:

Financial crises in emerging economies are accompanied by a large fall in total factor productivity. We explore the role of financial frictions in exacerbating the misallocation of resources and explaining this drop in TFP. We build a two-sector model of a small open economy with a working capital constraint on the purchase of intermediate goods. The model is calibrated to Mexico before the 1995 crisis and subjected to an unexpected shock to interest rates. The financial friction generates an endogenous fall in TFP and output and can explain more than half of the fall in TFP and 74 percent of the fall in GDP per worker

Abstract:

We build a partial equilibrium model of firm dynamics under exchange rate uncertainty. Firms face idiosyncratic productivity shocks and observe the current level of the real exchange rate each period. Given their current level of capital stock, firms make their export decisions and choose how much to invest. Investment is financed through one period loans from foreign lenders. The interest rate charged by each lender is set to satisfy an expected zero-profit condition. The model delivers a distribution of firms over productivity, capital stocks and debt portfolios, as well as an exit rule. We calibrate the model using data from a panel of Mexican firms, from 1989 to 2000, and analyze the effect of the 1994 crisis on these variables. As a result of the real exchange rate depreciation, the model predicts: (i) an increase in the debt burden, (ii) an increase in exports and (iii) a large decline in investment. These real effects are consistent with the evidence for the Mexican crisis

Abstract:

In this paper, I explain two "puzzles" that have been observed in firm level data. First, firms that display a high sensitivity of investment to cash flow (commonly believed to be an indicator of liquidity constraints) usually have large unutilized lines of credit which, presumably, could be used to overcome the shortage of funds. Second, firms that are perceived to be extremely liquidity constrained actually show very little sensitivity of investment to cash flow. I show how a dynamic model of firm investment with liquidity constraints and non-convex costs of adjustment of capital can explain these facts. These two features together imply that firms need to have a certain threshold level of financial resources before they can afford to increase investment. Once they cross this threshold, firms' investment will be positively correlated with their financial resources until they reach their desired level of capital stock. However, even if investment is sensitive to cash flow, firms may borrow below their credit limit to guard against future bankruptcy or binding liquidity constraints

Abstract:

In this paper we characterize and estimate the degree to which liquidity constraints affect real activity. We set up a dynamic model of firm investment and debt in which liquidity constraints enter explicitly into the firm's maximization problem, so that investment depends positively on the firm's financial position. The optimal policy rules are incorporated into a maximum likelihood procedure to estimate the structural parameters of the model. We identify liquidity constraints from the dynamics of a firm's evolution, as formalized by the dynamic estimation process, and find that they significantly affect the investment decisions of firms. Firms' ability to raise equity is about 73% of what it would have been under free capital markets. If firms can finance investment by issuing fresh equity, rather than with internal funds or debt, average capital stock is about 6% higher over a period of 20 years. Transitory interest rate shocks have a sustained impact on capital accumulation, which lasts for several periods

Abstract:

This is an invited commentary for the Active Living Research (ALR) special issue. The commentary focuses on the lessons that can be learned from Latin America regarding obesity prevention. Examples from Brazil, Mexico, and Colombia that may inform US policy are described

Abstract:

This paper describes a computational system designed to imitate human inspiration during musical composition. The system is called MIS (Musical Inspiration Simulator). The MIS system is inspired by media to which human beings are exposed daily (visual, textual, or auditory) to create new musical compositions based on the emotions detected in said media. After building the system we carried out a series of evaluations with volunteer users who used MIS to compose music based on images, texts, and audio files. The volunteers were asked to judge the harmoniousness and innovation in the system's compositions. An analysis of the results points to the difficulty of computational analysis of the characteristics of the media to which we are exposed daily, as human emotions have a subjective character. This observation will direct future improvements in the system

Abstract:

Kelsen's Grundnorm, the well-known basic norm, constitutes the most important and at the same time most fragile piece of Kelsen's theory of law. It is the norm that imposes obedience to the first factual constituent power and the norm that enables that power to create the first legal norm; its ambiguity has generated endless interpretations and criticisms. In this paper I will focus on the last version offered by Kelsen, according to which the basic norm is the normative meaning of an act of will that is not real, a fictitious act. I will try to show that, with this last version, Kelsen seems to doubly defy our naturalistic intuitions and to invite a genuine act of faith. On the one hand, the basic norm is opposed to nature precisely because it is, to use an expression of Levinas, otherwise than being; on the other hand, it defies the relation it maintained with being: the basic norm is not only an ought, that is, something categorially distinct from being, but, as an ought, it necessarily is not; that is, it is an ought necessarily related to a non-existent act of will. This last move seems to crown the sacred character of the normativity of law, impossible to ground, yet absolutely necessary to give meaning to our human social practices

Abstract:

A crucial Kelsenian thesis was the criticism of the dualist thesis according to which the State exists before the law. It is against this theory that he wrote his first great book, Hauptprobleme der Staatsrechtslehre. In the Foreword to the second printing, Kelsen recognises that his main intuitions had been perfectly developed by Cohen. Cohen's ethics helps to understand the distinction between Sein and Sollen. Kelsen explicitly said that such a distinction cannot be the object of conceptual analysis; they are elementary concepts. What I will suggest is that such elementary concepts can be analysed on the basis of Cohen's and Lévinas' ethics. By virtue of the contribution of Lévinas, it is possible to keep the distinction out of the realm of neo-Kantian postulates, and to give the Sollen an ethical dimension. The Sollen is not only a category necessary to describe positive law in normative terms; it is also central—as otherwise than being—in order to explain the concept of humanity. International law, conceived as Sollen, does not need to exist. International law can be conceived as a divine command. Only when individuals assume their responsibility does humanity emerge, not only in the pure ethical relation with the Other, but also in social terms. My main point is that the concepts of Sollen and Humanity are inseparable, to the extent that a human being, in his biological dimension, makes himself a human subject when he responds to (and does not just conceive) the natural world in normative terms, a world in which he/she is a responsible subject

Resumen:

¿Quién es el mejor razonador moral? ¿El juez o el legislador? El propósito de este trabajo es refinar esta pregunta distinguiendo entre diferentes asunciones metaéticas. Si las asunciones metaéticas de los que construyen argumentos morales son incompatibles o si su objetivo institucional consiste en establecer algún tipo de verdad, de ninguna forma podrá generarse una actividad argumentativa constructiva. Mi tesis es que sólo cuando se renuncia a toda pretensión epistémica y se desarrolla empatía respecto de los argumentos de los demás, pueden las instituciones mejorar la calidad de sus argumentos tanto judiciales como democráticos, y por consiguiente incrementar su autoridad

Abstract:

Who is the best moral reasoner, the judge or the legislator? The aim of this paper is to refine this question, by distinguishing between different metaethical assumptions. If the meta-ethical assumptions of arguers are incompatible or if their institutional goal is to establish some truth, there is no way of entering into a constructive argumentative activity. My claim is that only when arguers renounce any epistemic temptation and feel empathy with respect to others’ arguments, can institutions improve the quality of their judicial and democratic arguments, and therefore gain authority

Resumen:

El punto de partida de este trabajo es doble: desde una perspectiva teórica se cuestiona la relación entre las jerarquías normativas y las teorías sobre las relaciones entre sistemas jurídicos, y desde una perspectiva práctica se reconstruye la postura de la Suprema Corte de Justicia de la Nación respecto de la siguiente cuestión: ¿qué deben hacer los jueces cuando se enfrentan a un conflicto entre una norma constitucional restrictiva de derechos y otra internacional protectora de derechos? Para formular una respuesta que no obedezca a meras ideologías se deben hacer varias distinciones: entre jerarquía y supremacía, fuentes y normas, normas y posiciones normativas, reglas y principios, y tomar en serio la ambigüedad de la noción de norma restrictiva de derechos. Dada la complejidad del cuadro que resulta de la combinación entre todos los significados posibles de las nociones clave, no sorprende que, más allá de las tesis publicadas, la pregunta toral no tenga respuestas correctas ni desde el punto de vista de la Constitución ni desde una perspectiva internacional. Sólo habrá, caso por caso, decisiones más o menos bien fundamentadas

Abstract:

The starting point for this work is two-fold: from a theoretical standpoint, the relationship between the hierarchy of norms and (monist or pluralist) theories regarding the relationship between legal systems is questioned; from a practical standpoint, the Supreme Court of Justice's posture on the following issue is reconstructed: what should judges do when they are faced with a conflict between a constitutional norm restricting human rights and an international norm protecting them? To provide an answer that is not merely ideological, several distinctions must be made: between hierarchy and supremacy, between sources and norms, between norms and normative positions, and between rules and principles; the ambiguity of the notion of a rights-restricting norm must also be taken seriously. Given the complexity that results from the combination of all the possible meanings of the key concepts, it is not surprising that, beyond the Supreme Court's published theses, neither the constitutional nor the international point of view provides a correct answer to the central question. There will only be, on a case-by-case basis, more or less well-founded decisions

Abstract:

The main argument of this article is based on a functional disanalogy between what I shall call "international humanity-based law" constituted by human rights and criminal law on the one hand, and domestic rule of law on the other. If we adopt a functionalist approach, for the purpose of dealing with indeterminacy, our attention has to focus both on the pragmatic objective of the rule of law, i.e., reasonable stability, and on its means, i.e., formalism and legality. Do international key players share these values embedded in the political project of the rule of law? Does humanity-based international law fulfil the requirements of the rule of law? The conclusion of this paper is that the institutions and mechanisms which legal scholars usually refer to when they state that a legal order is a rule of law are almost absent from humanity-based international law. This implies that radical indeterminacy is, in issues of humanity, too formidable an obstacle to achieving the ideal of the rule of law

Resumen:

El constitucionalismo global propuesto en Principia Iuris plantea serias dudas en cuanto a su compatibilidad con el principio de estricta legalidad y el pluralismo. Por un lado, la decisión semántica de distinguir entre guerra y uso regulado de la fuerza, sólo parece justificada si se comparten a la vez la idea ferrajoliana según la cual la guerra es por definición la negación del derecho y la fe en el carácter legal, por lo menos en el plano normativo, de las intervenciones de la ONU. Por otro lado, un análisis más profundo del derecho internacional parece demostrar que, a pesar del optimismo de Ferrajoli, éste se caracteriza más por el dominio de la excepción que por el respeto de la legalidad

Abstract:

The global constitutionalism proposed in Principia Iuris raises serious doubts regarding its compatibility with the principles of strict legality and pluralism. On the one hand, the semantic decision to distinguish between war and the regulated use of force only seems justified in the light of the idea that war is by definition the negation of law and that UN interventions are, at least from a normative perspective, perfectly legal. On the other hand, a deeper analysis of international law seems to show that, despite Ferrajoli's optimism, it is characterized more by the dominance of the exception than by strict compliance with legality

Resumen:

En este trabajo intentaré demostrar que la tesis monista epistemológica, la única que Kelsen afirma poder defenderse en un plano teórico, es parasitaria de la tesis normativa de la primacía del derecho internacional. Aunque para Kelsen es perfectamente equivalente, desde un punto de vista epistemológico, cuál tesis normativa se adopte, no es para nada equivalente, desde un punto de vista ideológico, cuál tesis epistemológica -entre el monismo y el pluralismo- se adopta. La elección epistemológica del monismo no es, a pesar de la pureza de la teoría, ni inocente ni neutral. Si, como cuestión de hecho, resulta que los fenómenos normativos nacionales e internacionales se caracterizan por la diversidad y no por la homogeneidad, la tesis kelseniana de la unidad solo puede explicarse como condición necesaria de su proyecto político pacifista

Abstract:

In this paper I will attempt to show that the epistemological monistic thesis, i.e., the only one that, according to Kelsen, can be defended on a theoretical level, is parasitic on the normative thesis of the primacy of international law. Although for Kelsen it is perfectly equivalent, from an epistemological point of view, which normative thesis is adopted, I will suggest that it is not at all equivalent, from an ideological standpoint, which epistemological thesis –monism or pluralism– is adopted. The epistemological choice of monism, despite the purity of the theory, is neither innocent nor neutral. If, as a matter of fact, national and international normative phenomena are characterized by diversity and not by homogeneity, Kelsen's thesis of unity can only be explained as a necessary condition of his pacifist political project

Abstract:

In this paper we provide the first solution to the challenging problem of designing a globally convergent estimator for the parameters of the standard model of a continuous stirred tank reactor. Because of the presence of non-separable exponential nonlinearities in the system dynamics that appear in Arrhenius law, none of the existing parameter estimators is able to deal with them in an efficient way and, in spite of many attempts, the problem remained open for many years. To establish our result we propose a novel procedure to obtain a suitable nonlinearly parameterized regression equation and introduce a radically new estimation algorithm, derived by applying the Immersion and Invariance methodology, that is applicable to these regression equations. A further contribution of the paper is that parameter convergence is exponential and is guaranteed under weak excitation requirements. To achieve this remarkable property we rely on a recently introduced parameter estimator that seamlessly combines a classical least-squares search with the dynamic regressor extension and mixing estimation procedure
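
To give a sense of the DREM idea referenced here, the sketch below applies it to a deliberately simple static linear regression in Python: a delayed copy of the regressor builds a square extended system, multiplying by its adjugate decouples the parameters into scalar regressions, and each parameter is then estimated by an independent gradient update. The CSTR model, the nonlinear parameterization, and the I&I construction of the paper are far more involved.

```python
import numpy as np

# Unknown parameters of a static regression y = phi^T theta (toy problem).
theta = np.array([1.5, -0.7])
theta_hat = np.zeros(2)
gamma, dt, T = 50.0, 1e-2, 2000

phi_prev = y_prev = None
for t in range(T):
    phi = np.array([np.sin(0.1 * t), np.cos(0.23 * t)])   # exciting regressor
    y = phi @ theta
    if phi_prev is not None:
        Phi = np.vstack([phi, phi_prev])                  # extension by delay
        adj = np.array([[Phi[1, 1], -Phi[0, 1]],
                        [-Phi[1, 0], Phi[0, 0]]])         # adjugate of Phi
        Yd = adj @ np.array([y, y_prev])                  # mixing: Yd_i = Delta * theta_i
        Delta = np.linalg.det(Phi)                        # common scalar regressor
        theta_hat += dt * gamma * Delta * (Yd - Delta * theta_hat)
    phi_prev, y_prev = phi, y

print(theta_hat)   # approaches [1.5, -0.7] when Delta is sufficiently exciting
```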

Abstract:

In this paper we are interested in the problem of adaptive state observation of linear time-varying (LTV) systems where the system and the input matrices depend on unknown time-varying parameters. It is assumed that these parameters satisfy some known LTV dynamics, but with unknown initial conditions. Moreover, the state equation is perturbed by an additive signal generated from an exosystem with uncertain constant parameters. Our main contribution is to propose a globally convergent state observer that requires only a weak excitation assumption on the system, which is satisfied in the present context of regulation

Abstract:

In this article, we address the problems of flux and speed observer design for voltage-fed induction motors with unknown rotor resistance and load torque. The only measured signals are stator current and control voltage. Invoking the recently reported Dynamic Regressor Extension and Mixing-Based Adaptive Observer (DREMBAO), we provide the first global solution to this problem. The proposed DREMBAO achieves asymptotic convergence under an excitation condition that is strictly weaker than persistent excitation. If the latter condition is assumed the convergence is exponential

Abstract:

An open-circuit fault-detection strategy is proposed here for single-phase DC/AC conversion. The power converter under consideration consists of an H-bridge and a capacitor with parallel resistance and current source on its DC side; these last two stand for the unknown system load and the energy injection from renewable resources, respectively. An inductor filter is also included as a coupling element to the AC network. When an open-circuit fault occurs in the H-bridge, the resulting AC output waveform is asymmetric and injects DC and harmonic components into the network. Hence, by using an additive fault model, the fault signature can be expressed by a constant term fdc and a fluctuating signal. The sign of fdc makes it possible to determine the pair of faulty switches in the H-bridge. In this work, a DREM-based identification scheme is proposed to estimate fdc. Through the sign of its estimate, it is possible to detect the pair of faulty switches. To assess our approach, simulation results are included
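
The detection logic reduces to recovering the sign of a constant buried in a periodic signal. The sketch below does this with a plain first-order gradient (low-pass) estimator on synthetic data; the actual paper uses a DREM-based scheme, and the waveform and numbers here are illustrative only.

```python
import numpy as np

fs, f0 = 10_000, 50                                  # sample rate, grid frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
fault_dc = 0.4                                       # hypothetical f_dc > 0
y = np.sin(2 * np.pi * f0 * t) + fault_dc + 0.2 * np.sin(4 * np.pi * f0 * t)

fdc_hat, gamma = 0.0, 20.0                           # slow gain rejects the harmonics
for yk in y:
    fdc_hat += (gamma / fs) * (yk - fdc_hat)         # gradient update toward the mean
print(np.sign(fdc_hat), round(fdc_hat, 2))           # positive sign flags one switch pair
```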

Abstract:

This paper is devoted to the application of a recently proposed globally convergent adaptive position observer to non-salient permanent magnet synchronous motors. Following the Dynamic Regressor Extension and Mixing Based Adaptive Observer (DREMBAO) approach, a new finite-time robust observer is presented that adaptively tracks the rotor position by measuring only the currents and voltages and without knowledge of the mechanical, electrical and magnetic parameters. A numerical example for the case with different rotor speeds and time-varying load torque is considered to reveal the advantages of the proposed approach in comparison with other existing methods. Experimental studies of the proposed robust nonlinear observer implementation are presented to illustrate the efficiency of the new design technique in different speed modes, together with adaptive estimation of unknown parameters

Abstract:

This article explores a model of firm-specific training in a job search environment with labor turnover. The main substantive finding is a positive association between training and wages (when wages are dispersed). The article then precisely characterizes how both wage dispersion and firm profitability depend on the flow value b ≥ 0 of workers' unmatched time. It is shown that: (i) for all high values of b, no equilibrium exists; (ii) for intermediate values of b, multiple equilibria arise, where firms earn zero profits and choose from a general wage distribution; (iii) for all lower values of b, there is a unique equilibrium, with firms earning positive profits and choosing from an atomless set of wages

Abstract:

In the insurance sector, the assignment of business transactions to operational roles in the business process is a complex task that usually requires human intervention due to the lack of efficient and powerful automated decision-making tools. A number of mechanisms and methods have been proposed to improve workload allocation in core business processes in the insurance industry. This paper describes a novel solution, the iDispatcher, to efficiently assign and balance business transactions among line operators, in particular in the processes of underwriting and issuing policies. It is based on the use of a business rule engine that evaluates the intrinsic properties of a particular transaction and the specific abilities and technical skills of all available operators to find the best match. The iDispatcher benefits key business results by generating a more efficient operation based on the correct assignment of business transactions using a flexible and dynamic solution. Insurance companies will be able to make the right decision at the right moment for every incoming business request, and this represents a strategic difference in the market
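
In spirit, the matching step can be pictured as scoring operators against a transaction's requirements and picking the best; the toy sketch below uses a hypothetical schema and a two-key score (skill coverage, then load), whereas iDispatcher itself relies on a full business rule engine.

```python
# Hypothetical schema: each operator advertises skills and a current workload.
operators = [
    {"name": "op1", "skills": {"underwriting", "auto"}, "load": 3},
    {"name": "op2", "skills": {"underwriting", "life"}, "load": 1},
]
transaction = {"required": {"underwriting", "life"}}

def score(op):
    coverage = len(op["skills"] & transaction["required"])  # matched skills
    return (coverage, -op["load"])                          # break ties by low load

best = max(operators, key=score)
print(best["name"])   # op2: covers both required skills and is least loaded
```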

Abstract:

We propose a model to generate electrocardiogram signals based on a discretized reaction-diffusion system to produce a set of three nonlinear oscillators that simulate the main pacemakers in the heart. The model reproduces electrocardiograms from healthy hearts and from patients suffering various well-known rhythm disorders. In particular, it is shown that under ventricular fibrillation, the electrocardiogram signal is chaotic and the transition from sinus rhythm to chaos is consistent with the Ruelle-Takens-Newhouse route to chaos, as experimental studies indicate. The proposed model constitutes a useful tool for research, medical education, and clinical testing purposes. An electronic device based on the model was built for these purposes
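
A much-simplified sketch of the idea in Python: three unidirectionally coupled van der Pol-type oscillators stand in for the SA node, the AV node, and the His-Purkinje network, and a weighted sum of their states gives a synthetic ECG-like trace. The parameters, couplings, and output weights are illustrative and do not reproduce the paper's discretized reaction-diffusion model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def heart(t, s, k=(0.6, 0.6), mu=(1.0, 1.5, 2.0), w=(1.0, 0.5, 0.25)):
    """Three coupled van der Pol-type oscillators (toy pacemaker chain)."""
    x1, v1, x2, v2, x3, v3 = s
    a1 = mu[0] * (1 - x1**2) * v1 - w[0]**2 * x1
    a2 = mu[1] * (1 - x2**2) * v2 - w[1]**2 * x2 + k[0] * (x1 - x2)
    a3 = mu[2] * (1 - x3**2) * v3 - w[2]**2 * x3 + k[1] * (x2 - x3)
    return [v1, a1, v2, a2, v3, a3]

sol = solve_ivp(heart, (0, 100), [0.1, 0, 0.1, 0, 0.1, 0], max_step=0.01)
ecg = 0.1 * sol.y[0] + 0.05 * sol.y[2] + 0.4 * sol.y[4]   # composite trace
```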

Resumen:

Este ensayo aborda el análisis de la evolución del constitucionalismo mexicano al cumplirse cien años de vigencia de la Constitución Mexicana, que fue emitida en la Ciudad de Querétaro el 5 de febrero de 1917. El ensayo sostiene que la longevidad de esta Constitución se debe a la intensa actividad de los órganos legislativos mexicanos que ejercen la función de reforma constitucional, lo que ha aportado una gran plasticidad al texto escrito, favoreciendo la adaptación entre la norma fundamental y la evolución de la sociedad mexicana. Al mismo tiempo, el desarrollo del constitucionalismo mexicano, que históricamente había privilegiado la mirada política propia de los mencionados órganos legislativos, ha arribado a un reforzamiento del entendimiento jurídico de la Constitución. Esto conduce a admitir -en un nuevo entendimiento del constitucionalismo mexicano- la coexistencia entre el tradicional entendimiento político, que actúa a través de adaptaciones formales al texto constitucional, junto a una creciente presencia de los órganos judiciales que imprimen un sello claramente jurídico a la constitución mexicana, creando con ello un entorno de diálogos constitucionales entre las funciones de reforma constitucional y el control de la regularidad constitucional

Abstract:

In the light of the Mexican Constitution's centennial, a hundred years after it was first published in the city of Queretaro on February 5th, 1917, this essay analyzes the evolution of Mexican constitutionalism. It sustains that the longevity of the Constitution is due to the intense activity of the legislative power in charge of reforms and constitutional amendments, which has made possible the adaptation of the Constitution vis-à-vis the evolution of Mexican society. At the same time, this has led Mexican constitutionalism to a better understanding of the Constitution from a formal legal perspective that goes together with the political one. Therefore, in this new understanding of Mexican constitutionalism, the coexistence of a political perspective -whereby the amendments formally adapt the Constitution to a certain reality- and a growing judicial and legal perspective -whereby the judicial power is in charge of the legal interpretation of the Constitution- creates an environment of constitutional dialogue between the amendment function and judicial review, between legislators and judges

Resumo:

Este ensaio trata da análise da evolução do constitucionalismo mexicano ao se completar cem anos de vigência da Constituição mexicana, que foi promulgada na cidade de Queretaro em 5 de Fevereiro de 1917. O ensaio argumenta que a longevidade desta Constituição se deve à intensa atividade dos órgãos legislativos mexicanos no exercício da função de reforma constitucional, que tem contribuído para uma grande plasticidade do texto escrito, favorecendo a adaptação entre a norma fundamental e a evolução da sociedade mexicana. Ao mesmo tempo, o desenvolvimento do constitucionalismo mexicano, que historicamente privilegiava o olhar político dos órgãos legislativos acima mencionados, levou a um fortalecimento de entendimento jurídico da Constituição. Isto leva a admitir - em uma nova compreensão do constitucionalismo mexicano - a coexistência entre o entendimento político tradicional, que age por meio de ajustes formais ao texto constitucional, com uma presença crescente dos órgãos judiciais que imprimem viés claramente legal para a Constituição Mexicana, criando com ela um ambiente de diálogos constitucionais entre as funções da reforma constitucional e o controle da regularidade constitucional

Resumen:

En el presente ensayo se aborda la evolución del constitucionalismo mexicano tomando como eje explicativo la regulación, la importancia y el comportamiento del llamado “órgano revisor de la Constitución”, cuya eficacia es considerada central en la historia jurídica de México. El autor identifica diferentes periodos de vida de esta institución, a los cuales denomina “edades del constitucionalismo mexicano”. Los criterios utilizados para la periodización propuesta consisten en la identificación de los factores jurídicos y de otros sociopolíticos de índole histórica, sin los cuales sería imposible calificar la eficacia del órgano reformador. Se identifican cuatro edades del constitucionalismo mexicano: del constitucionalismo ineficaz, un constitucionalismo admitido, un constitucionalismo sometido y un constitucionalismo balanceado

Abstract:

In this essay, the evolution of Mexican constitutionalism is addressed, taking as an explanatory axis the importance and the regulation of the constitutional amendment function, which is of remarkable importance in the legal history of Mexico. In addition, the author identifies different periods in the way the amendment function has emerged and evolved, called the “ages of Mexican constitutionalism”. The criteria for the proposed periodization consist in the identification of legal and sociopolitical factors of a historical nature. Without these elements, the author says, it would be impossible to assess the effectiveness of the amendment function. Thus, four ages of Mexican constitutionalism are determined: an age of ineffective constitutionalism, an age of admitted constitutionalism, a submitted constitutionalism, and, finally, a balanced constitutionalism

Abstract:

The purpose of this study was to test a workplace social exchange network model of employee eco-initiatives in which high-quality relationships with the organization, the supervisor, and the coworkers, influence suggestions for constructive change toward the environment. Data were obtained from 449 university-educated Mexican employees working in the service industry. In contrast with recent research, we found that social exchanges with the organization and the supervisor were not linked to eco-initiatives, at least not directly, when controlled for social exchange with the coworkers. However, the results indicate that the quality of peer relationships mediates influences of the broader social and psychological context represented by the organization and the supervisor. These findings and their implications for theory and practice are discussed

Abstract:

This paper examines minimum advertised price (MAP), a vertical restraint that is observed in manufacturer-retailer interactions. Under MAP, the manufacturer announces that it will reimburse retailers for a fraction of their advertising expenditures if retailers do not advertise the product below a specified price. MAP can be considered a combination of resale price maintenance (RPM) and a cooperative advertising subsidy. Current antitrust law treats RPM as illegal per se, whereas MAP is judged according to a rule of reason. A framework is presented within which neither a minimum retail price nor a cooperative advertising subsidy is individually sufficient to enable maximization of profits in the complete manufacturer-retailer structure, but the two instruments together are. MAP is therefore a sufficient instrument for the maximization of joint profits. We argue that MAP can also be designed as a second-best instrument that replicates RPM

Resumen:

Collaborative work is growing in organizations and, in the global economy, this work is international in nature. The availability of information and communication technologies means that much collaborative work is supported by electronic media, which presents opportunities and challenges for organizations. This article identifies some factors related to success in electronically mediated international teaching collaborations. We study an academic experience between two groups of graduate students, one from the United States and the other from Mexico. Factors such as the percentage of time devoted to defining the project scope, the level of effort carried out within each country, and the number of years of work experience had a significant positive impact on the perceived success of the collaborative project. The results indicate that participants overestimated the importance of some country-specific contextual factors in international collaboration

Abstract:

In worldwide organizations, the share of collaborative work is increasing steadily and, in the current global economy, collaborative work is international in nature. With the availability and increased use of information and communication technologies, most international collaborative work is electronically mediated. This exploratory empirical research is an attempt to identify some factors that relate to success in electronic-mediated international collaboration (EMIC). We studied an EMIC experience between two groups of graduate students, one working from the U.S. and the other from Mexico. Process issues such as the percentage of total project time spent on defining the project scope and the level of effort on intra-country activities had a significant positive effect on perceived project success. The number of years of full-time work experience was an individual factor that had a significant positive impact. Results indicate that participants tend to overestimate the impact on project success of some differences in characteristics between groups from different countries

Abstract:

Lead recycling is very important for reducing environmental pollution risks and damages. Liquid lead is recovered from spent batteries inside stirred batch reactors, where the melt must be cleaned during the process. Nevertheless, it is necessary to establish parameters for evaluating mixing to improve the efficiency of industrial practices. Computational fluid dynamics (CFD) has become a powerful tool to analyze industrial processes for reducing operating costs, avoiding potential damages, and improving equipment performance. Thus, the present work is focused on simulating the fluid hydrodynamics inside a lead-stirred reactor, monitoring the distribution of an injected tracer in order to find the best injection point. Different injection points are placed on a control plane and evaluated one by one by monitoring the tracer concentration at a group of points inside the batch. The analyzed reactor is a symmetrical, vertical batch reactor with two geometrical sections: a cylindrical body and a semi-spherical bottom. One impeller with four flat blades on a shaft is used for lead stirring. The tracer concentration at the monitoring points is measured and averaged to evaluate the efficiency inside the tank reactor. Hydrodynamics theory and a comparison between the concentration profiles and tracer distribution curves are used to demonstrate both methods' similarities. The invariability of the tracer concentration at the monitoring points is then adopted as the main parameter to evaluate the mixing, and the best injection point is found as a function of the shortest mixing time. Additionally, the influence of the impeller rotation speed is analyzed as an additional control parameter to improve industrial practices
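
The invariance criterion in the preceding abstract lends itself to a simple post-processing step. The sketch below, in C++ as used elsewhere in these works, estimates the mixing time from stored tracer histories; the ±5% tolerance band, the data layout, and all names are our illustrative assumptions, not parameters reported by the authors.

    // Sketch: estimate mixing time from tracer histories at monitoring points.
    // Assumption (not from the paper): mixing is complete once every monitored
    // concentration stays within +/-5% of its final value thereafter.
    #include <vector>
    #include <cmath>
    #include <cstdio>

    // histories[p][t] = tracer concentration at monitoring point p, time step t.
    double mixingTime(const std::vector<std::vector<double>>& histories,
                      double dt, double band = 0.05) {
        const size_t T = histories[0].size();
        size_t tMix = 0;
        for (const auto& h : histories) {
            double cFinal = h.back();               // asymptotic concentration
            size_t lastOut = 0;
            for (size_t t = 0; t < T; ++t)
                if (std::fabs(h[t] - cFinal) > band * cFinal)
                    lastOut = t + 1;                // still outside the band
            if (lastOut > tMix) tMix = lastOut;     // slowest point dominates
        }
        return tMix * dt;                           // convert steps to seconds
    }

    int main() {
        std::vector<std::vector<double>> h = {{0.0, 0.4, 0.9, 1.0, 1.0, 1.0},
                                              {0.0, 0.2, 0.6, 0.9, 1.0, 1.0}};
        std::printf("mixing time: %.1f s\n", mixingTime(h, 0.5));
    }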

Abstract:

This work focuses on an analysis of hydrodynamics to improve the efficiency of a batch reactor for lead recycling. The study is based on computational fluid dynamics (CFD) methods, which are used to solve the Navier-Stokes and Fick equations (continuity and momentum equations for understanding the hydrodynamics, and a concentration equation for understanding the distribution). The reactor analyzed is a tank with a dual geometry: a cylindrical body and a hemispherical bottom. This reactor is vertically symmetrical, and a shaft with four blades is used as an impeller to provide motion to the resident fluid. The initial resident fluid is static, and a tracer is initialized in an interior volume to measure mixing efficiency, as is done in laboratory and industrial practices. An evaluation of the mixing is then performed by studying the tracer concentration curves at different evolution times. In order to understand the fluid flow hydrodynamic behavior, with the purpose of identifying zones with rich and poor tracer concentrations, the tracer concentration was measured at monitoring points placed all around a defined control plane of the tank. Moreover, this study is repeated independently to evaluate different injection points and determine the best one. Finally, it is shown that the selection of an appropriate injection point can reduce working times for mixing, which is an economically attractive motivation for proposals to improve industrial practices
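
For reference, the governing equations named in this abstract take the following standard forms, here written assuming an incompressible melt and a passive tracer (assumptions on our part):

    \nabla \cdot \mathbf{u} = 0,
    \qquad
    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
      = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
    \qquad
    \frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c = D\,\nabla^{2} c

where u is the velocity field, p the pressure, μ the dynamic viscosity, c the tracer concentration, and D the diffusion coefficient of Fick's law.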

Abstract:

Simulation of the grain growth process, as a function of steel heat transfer conditions, is helpful for predicting grain structures of continuously cast steel products. Many authors have developed models based on numerical methods to simulate grain growth during metal solidification. Nevertheless, the anisotropic nature of grain structures makes it necessary to employ new mathematical methods such as chaos theory, fractals, and probabilistic and stochastic theories of simulation. The problem is significant for steelmakers, who must avoid defects in products and control the steel microstructure during the continuous casting process. This work discusses the influence of nodal solidification times and computer algorithms on the dynamic formation of the chill, columnar, and equiaxed zones, including physical phenomena such as nucleation and grain growth. Moreover, the model incorporates pre-nucleation and pre-growth routines in the original algorithm. The influence of the mathematical parameters, criteria, and probabilities on the grain morphology obtained after solidification is described. Finally, an analysis of these algorithms elucidates the differences between these structures and those obtained from models considering only solidification

Abstract:

The current automation of steelmaking processes is capable of complete control through programmed hardware. However, many metallurgical and operating factors, such as heat transfer control, require further study under industrial conditions. In this context, computer simulation has become a powerful tool for reproducing the effects of industrial constraints on heat transfer. This work reports a computational model to simulate heat removal from billet strands in the continuous casting process. The model deals with the non-symmetric cooling conditions of a billet caster, which frequently occur due to plugged nozzles in the secondary cooling system (SCS). The model simulates the steel thermal behavior for casters with a non-symmetric distribution of the sprays in the SCS, using different boundary conditions to show possible heat transfer variations. Finally, the results are compared with actual temperatures from different casters to demonstrate the predictive capacity of this algorithm's approach

Abstract:

Simulation of a continuous casting process (CCP) is very important for improving industrial practices, reducing working times, and ensuring safe operating conditions. The present work is focused on the development of a computational simulator to calculate and analyze heat removal during the continuous casting of steel; routines for reading the geometrical configuration and operating conditions were developed for easy management. Here, a finite difference method is used to solve the steel thermal behavior using a 2D computational array. Conduction, radiation, and forced convection equations are solved to simulate heat removal according to the steel's position along the continuous casting machine. A graphical user interface (GUI) was also developed to display virtual sketches of the casting machines; moreover, computational facilities were programmed to show results such as temperature and solidification profiles. The results are analyzed and validated by comparison with industrial trials; finally, the influence of some industrial parameters such as casting speed and quenching conditions is analyzed to provide recommendations that help guarantee safe operating conditions
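
The position-dependent heat removal described above can be pictured as a zone switch. The following C++ fragment is a minimal sketch of that logic; the zone limits and coefficients are invented for illustration and are not plant data from this work.

    #include <cmath>
    #include <cstdio>

    // Boundary heat flux (W/m^2) selected by strand position. The zone
    // limits and heat transfer coefficients below are assumed values.
    double surfaceHeatFlux(double z, double Ts) {            // z: distance from meniscus [m], Ts: surface temperature [K]
        const double sigma = 5.670e-8;                       // Stefan-Boltzmann constant
        const double eps = 0.8, Tw = 313.0, Tamb = 300.0;    // assumed emissivity and sink temperatures
        if (z < 0.8)      return 1500.0 * (Ts - Tw);         // mold zone: effective convection to cooling water
        else if (z < 5.0) return  600.0 * (Ts - Tw);         // spray zone: forced convection
        else return sigma * eps * (std::pow(Ts, 4) - std::pow(Tamb, 4)); // free radiation zone
    }

    int main() {
        std::printf("flux at z = 6 m, Ts = 1400 K: %.0f W/m^2\n", surfaceHeatFlux(6.0, 1400.0));
    }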

Abstract:

This work shows the use of some basic animation techniques to develop a computer simulator capable of displaying the heat transfer of steel during the continuous casting process, the most popular method for producing steel in large volumes. Continuous casting involves the interaction of complex heat transfer phenomena such as conduction, radiation, and forced convection, so appropriate algorithms with graphical tools must be developed to display thermal and solidification profiles according to temperature values. Here a 2D model was applied for calculation purposes; nevertheless, the animation techniques employed allowed us to display a 3D representation. Furthermore, additional enhancements were created to show the internal thermal behavior. The mathematical model provided a good approximation for process simulation, and the animation procedures were successfully tested to provide a good visual display

Abstract:

The present work explains the development of a computational model for simulating the grain structure formed in a square billet produced by continuous casting. During steel solidification, three different grain structures are formed as a function of the heat removal conditions. The solidification times previously calculated using a finite difference method were used as input data in order to simulate dynamically the evolution of grain formation. Computational routines for grouping, counting, and classifying have been programmed to show the influence of the solidification speed on the resulting grain morphology. Criteria based on solidification speed and time in the mushy zone are used to establish the transition zones between chill, columnar, and equiaxed grains. Routines to simulate grain nucleation and growth based on chaos theory have been included to create a graphical cellular automaton on the computer screen to animate the grain structure formation

Abstract:

This work shows the importance of developing software to create computer simulators for production planning. The simulator shown here was developed to reproduce the dynamics and operation of the continuous casting process in the steelmaking industry. These plants work 24 hours a day, 7 days a week without interruption, using basic metallurgical procedures to produce large volumes of steel. Activities must therefore be organized to avoid delays, and probable risks or undesirable and unexpected situations must be evaluated to minimize their impact; thus, engineers and workers must have appropriate training. The simulator was successfully tested in reproducing many casting activities; moreover, it was also employed to train personnel on how to take the best decisions and re-program activities when necessary

Abstract:

The present work demonstrates the importance of practical lessons with real tools and machines for technical and engineering students. A sample of 24 students in 4 groups participated in the evaluation of the practical lessons. These students take a course named manufacturing of mechanical components (MMC), in which they are instructed in the use of some tools and machines and in how to perform real manufacturing in small workshops and laboratories to create special mechanical components. At the end of the course the students agreed that practice is absolutely necessary in technician and engineering courses, and they concurred that it is also a good way to reinforce theory and recognize its importance for better training

Abstract:

This work shows the importance of computer simulation in the teaching-learning process for engineering students. There are courses with technical topics on statics, mechanics, and mechatronics that students must take during their formation as engineers at universities; nevertheless, these courses include the solution of complex problems with numerical methods, whose understanding is difficult for students because the solutions require the appropriate use of many mathematical and physical concepts and, often, a great deal of imagination. Nowadays, computer simulators have been used to improve teaching methodologies. The software used during the course in the present work is ADAMS, provided by MSC and PACE, which was successfully used to teach the mechanics of solids course. The use of this software improved the students' performance and motivated their imagination to explore physical and mechanical concepts further

Abstract:

Computer science and programming have developed and expanded widely; many different applications have arisen, opening new areas and bringing the need for specialization. The present work demonstrates the importance of good agreement and sequencing among the programming courses for computer engineering students. Some modifications to the original courses have been made in order to keep them updated and to improve student performance, orienting the computer courses toward a specific area; moreover, new courses have had to be created for other engineering students in order to improve their training. After five years of application, the improvement in the students' abilities has been notable; nevertheless, the need to review the courses again remains

Abstract:

This study investigated the influence of increased practice and the inclusion of applied examples (exercises with a direct application or related to real problems) in programming and numerical analysis courses. The results obtained from this experiment demonstrate that examples related to the students' engineering majors have a very positive impact on learning and awaken student interest in programming courses, in comparison with students who just follow examples from a textbook as in an ordinary course. The conclusions of this work were obtained from the analysis and comparison of statistical information on the students' performance during 2 programming courses and an additional mathematical course (numerical analysis). Finally, a questionnaire about the teaching methods used was applied at the end of the courses in order to gather the students' opinions

Abstract:

The present work illustrates how a computer algorithm based on the generation of random numbers can be used successfully to simulate the grain structures present in metallic materials. Here the random functions are programmed to create vertices or nucleation nodes which can then grow; both algorithms are explained. Moreover, the influence of these procedures on the grain size and morphology is also explained. After testing, the best fit was obtained by simulating the two principal processes involved in grain formation (nucleation and growth), because complex grain morphologies can be simulated, creating columnar and equiaxed grains in two-dimensional samples, and because this model allows the grain formation to be represented dynamically as a function of time during solidification
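
A minimal version of the two-stage algorithm (random nucleation, then isotropic growth) can be written as a grid sweep. The C++ sketch below uses an invented grid size, seed count, and a first-occupied-neighbor rule; the routines in the work above are richer, but the structure is the same two nested stages.

    // Sketch: random nucleation followed by isotropic growth on a 2D grid.
    #include <vector>
    #include <random>
    #include <cstdio>

    int main() {
        const int N = 200, seeds = 60;                              // assumed grid size and seed count
        std::vector<std::vector<int>> g(N, std::vector<int>(N, 0)); // 0 = liquid
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> pos(0, N - 1);
        for (int s = 1; s <= seeds; ++s) g[pos(rng)][pos(rng)] = s; // nucleation stage
        bool grew = true;
        while (grew) {                                              // growth sweeps
            grew = false;
            auto next = g;
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j) {
                    if (g[i][j] != 0) continue;
                    const int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
                    for (int k = 0; k < 4; ++k) {
                        int a = i + di[k], b = j + dj[k];
                        if (a >= 0 && a < N && b >= 0 && b < N && g[a][b] != 0) {
                            next[i][j] = g[a][b];                   // cell joins neighboring grain
                            grew = true;
                            break;
                        }
                    }
                }
            g = next;
        }
        std::printf("grid filled (up to %d grains)\n", seeds);
    }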

Abstract:

The present work illustrates the procedure to simulate computationally the continuous casting process of metals, the most popular method for producing metal ingots without interruption; it is useful for casting metals and alloys such as steel and aluminum. The algorithms have been developed to provide appropriate and efficient tools for reading data and displaying the simulation on the screen using a graphical user interface (GUI). Analysis of the industrial processing allows probable situations to be tested and corrected, avoiding problems and risks; moreover, computer simulation allows critical aspects of the kinematic procedure to be identified and productivity to be evaluated

Abstract:

This work evaluates the simulation of heat removal in the continuous casting process of steel, which is the most widely used method to produce large amounts of steel. Understanding the steel's thermal behavior is very important in order to control the industrial processing and guarantee steel quality; different mechanisms are involved, such as radiation, forced convection, and conduction. A finite difference method that is easy to program is used to solve the simulation. This calculation is also coupled with a virtual kinematics model in order to create a more realistic simulation. Finally, computer tools were programmed to save and display graphically the steel's thermal behavior

Abstract:

A description of a mathematical algorithm for simulating grain structures with straight and hyperbolic interfaces is shown. The presence of straight and hyperbolic interfaces in many grain structures of metallic materials is due to different solidification conditions, including different solidification speeds, growth directions, and delays in the nucleation times of each nucleated node. Grain growth is a complex problem to simulate; therefore, computational methods based on chaos theory have been developed for this purpose. Straight and hyperbolic interfaces appear between columnar and equiaxed grain structures or in transition zones. The algorithm developed in this work involves random distributions of temperature to assign preferential probabilities to each node of the simulated sample for nucleation according to previously defined boundary conditions. Moreover, more than one nucleation process can be established in order to generate hyperbolic interfaces between the grains. The appearance of new nucleated nodes is declared in sequences with a particular number of nucleated nodes and a number of steps for execution. This input information directly influences the final grain structure (grain size and distribution). Preferential growth directions are also established to obtain equiaxed and columnar grains. The simulation is done using routines for nucleation and growth nested inside the main function. Here, random numbers are generated to place the coordinates of each new nucleated node at each nucleation sequence according to a solidification probability. Nucleation and growth routines are executed as a function of nodal availability in order to know whether a node will be part of a grain. Finally, this information is saved in a two-dimensional computational array and displayed on the computer screen by placing color pixels at the corresponding positions, forming an image as is done in cellular automata

Abstract:

Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals, and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions, and random walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures may be ill defined as a result of computational errors and numerical inconsistencies in the mathematical methods. Due to the finite representation of numbers and the numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain a good measurement of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure, and characterize grain structures obtained from cellular automata. An appropriate measurement of grain size and the correct identification of interfaces and their lengths are very important topics in materials science because they allow the representation and validation of mathematical models against real samples. The developed algorithms are tested and proved to be appropriate and efficient in eliminating the errors and characterizing the grain structures

Abstract:

This work analyzes the heat removal phenomena in the simulation of the continuous casting of square steel sections; continuous casting is a useful method to produce large amounts of steel and some other metals such as aluminum. Radiation, forced convection, and thermal conduction are the physical phenomena involved. A finite difference method is used to solve the heat removal and distribution, and the steel is discretized using a regular square mesh. Here a 3D problem is simplified to 2D, in which every node represents a 3D steel volume, but the assumptions are appropriately justified in order to reduce computing time without sacrificing calculation precision. The efficiency of the method is also evaluated, and alternatives to improve the approach and results are discussed

Abstract:

The development of computational algorithms based on cellular automata is described to simulate the structures formed during the solidification of steel products. The algorithms take as input the steel thermal behavior and heat removal previously calculated using a simulator developed by the present authors in a previous work. The stored solidification times are used for displaying the steel transition from liquid to mushy and solid, and also to drive computational subroutines that reproduce nucleation and grain growth. These routines are programmed in the C++ language and are based on a simultaneous solution of numerical methods (stochastic and deterministic) to create a graphical representation of the different grain structures formed. The resulting grain structure is displayed on the computer screen using a graphical user interface (GUI). Chaos theory and random number generation are included in the algorithms to simulate the heterogeneity of grain sizes and morphologies

Abstract:

The factors involved in simulating the continuous casting process of steel and their effects on the thermal behavior were investigated. The numerical methods and the influence of some assumptions were also analyzed, such as the number of nodes used to discretize the steel (array size) and the computing time needed to obtain good approximations. The results show that some of these factors are related to the design of the continuous casting plant (CCP), such as the geometrical configuration, and to the operating conditions, such as water flow rate, heat removal coefficient in the mold, casting times, and casting speed in the strand, all of which affect the heat removal conditions and hence the temperature and solidification profiles

Abstract:

This work is focused on the development of computational algorithms to create a simulator for solving the heat transfer during the continuous casting process of steel. The temperature and solid shell thickness profiles were calculated and displayed on the screen for a billet passing through a defined continuous casting plant (CCP). The algorithms developed to calculate billet temperatures involve the solutions of the corresponding equations for the heat removal conditions, such as radiation, forced convection, and conduction, according to the billet position through the CCP. This is done by a simultaneous comparison with the kinematic model previously developed. A finite difference method known as Crank-Nicolson is applied to solve the two-dimensional computational array (2D model). The enthalpy H(I,J) and temperature T(I,J) in every node are updated at each time step. The routines to display the results have been developed using a graphical user interface (GUI) in the programming language C++. Finally, the results obtained are compared with those of industrial trials for the surface temperature of three steel casters with different plant configurations under different casting conditions
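
To make the named scheme concrete, the C++ sketch below applies Crank-Nicolson steps to the one-dimensional heat equation with a Thomas (tridiagonal) solver; the reduction to 1D, the fixed-temperature boundaries, and all numerical values are our simplifying assumptions relative to the 2D enthalpy formulation of the paper.

    #include <vector>
    #include <cstdio>

    // One Crank-Nicolson step for the 1D heat equation, r = alpha*dt/(2*dx*dx),
    // with Dirichlet (fixed-temperature) boundaries. Solved by the Thomas algorithm.
    void crankNicolsonStep(std::vector<double>& T, double r) {
        const int n = (int)T.size();
        std::vector<double> a(n, -r), b(n, 1 + 2*r), c(n, -r), d(n);
        for (int i = 1; i < n - 1; ++i)                    // explicit half (right-hand side)
            d[i] = r*T[i-1] + (1 - 2*r)*T[i] + r*T[i+1];
        b[0] = b[n-1] = 1; c[0] = a[n-1] = 0;              // boundary rows keep T fixed
        d[0] = T[0]; d[n-1] = T[n-1];
        for (int i = 1; i < n; ++i) {                      // forward elimination
            double m = a[i] / b[i-1];
            b[i] -= m * c[i-1];
            d[i] -= m * d[i-1];
        }
        T[n-1] = d[n-1] / b[n-1];                          // back substitution
        for (int i = n - 2; i >= 0; --i)
            T[i] = (d[i] - c[i]*T[i+1]) / b[i];
    }

    int main() {
        std::vector<double> T(11, 1800.0);                 // hot strand interior [K] (assumed)
        T.front() = T.back() = 1200.0;                     // cooled surfaces (assumed)
        for (int s = 0; s < 100; ++s) crankNicolsonStep(T, 0.25);
        std::printf("centerline T = %.1f K\n", T[5]);
    }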

Abstract:

Computational modeling of grain structures is a very important topic in materials science. In this work, the development of the computational algorithms for a mathematical model to predict grain nucleation and grain growth is presented. The model places a number of nucleation points randomly in a liquid pool according to the solid and liquid fractions (X(sol) and X(liq)) of the metal and the local temperature distribution (SS(I,J)). These points then grow isotropically until a grain structure with straight interfaces is obtained. Different grain morphologies, such as columnar and equiaxed, can be obtained as a function of the temperature distributions and growth directions

Abstract:

Recently, great interest has been focused on investigations of transport phenomena in disordered systems. One of the most treated topics is fluid flow through anisotropic materials, due to its importance in many industrial processes like fluid flow in filters, membranes, walls, oil reservoirs, etc. This work describes the formulation of a 2D mathematical model to simulate fluid flow behavior through a porous medium (PM), based on the solution of the continuity equation as a function of Darcy's law for a percolation system, which was reproduced computationally using a random distribution of the porous medium properties (porosity, permeability, and saturation). The model displays the filling of a partially saturated porous medium with a newly injected fluid, showing the non-defined advance front and the dispersion-of-fluids phenomena
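
The model described combines Darcy's law with mass conservation; in standard form (our notation), per cell of the percolation lattice:

    \mathbf{q} = -\frac{k(x,y)}{\mu}\,\nabla p,
    \qquad
    \frac{\partial (\phi S)}{\partial t} + \nabla\cdot\mathbf{q} = 0

where q is the volumetric flux, p the pressure, μ the fluid viscosity, and the permeability k, porosity φ, and saturation S are drawn from the random distributions mentioned in the abstract.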

Resumen:

This work illustrates the development of simulators for industrial processes; as an example, a simulation system was built to represent the continuous casting process, which is the most common process for the production of steel sections. The continuous casting simulation system (SSCC) was programmed by the present authors in C++; however, the mathematical procedures, algorithms, and flow charts illustrated in this work can be programmed in any programming language. This first part of the work focuses on developing calculation tools to represent what physically happens during the continuous casting process

Abstract:

This work illustrates the development of computer simulators for production processes. As an example, a simulator for the continuous casting process was built; this is the most popular method to produce steel products like billets and slabs. The simulator (SSCC) was programmed by the present authors in C++, but the mathematical procedures, algorithms, and flow charts shown here can be used as a basis in any other programming language. This first part of the work is focused on the development of computational tools to describe the real continuous casting process

Resumen:

The study of the thermal behavior of steel is of great importance for controlling the quality of products such as cast sections. The present work shows the coupling of a subroutine that simulates the heat extraction conditions occurring during the continuous casting process to the process simulation routine described by the present authors in a previous work [1]. As a result, the steel temperature profiles and surface temperature curves are obtained, and the system is then validated against data from real operating conditions

Abstract:

Understanding the thermal behavior of steel is very important in order to safeguard the quality of products like billets and slabs. This work shows the coupling of a subroutine that simulates the heat transfer conditions during the continuous casting process to the model for simulating the process described by the present authors in a previous work [1]; the results are the temperature profiles and surface temperature graphics of the steel, which are then compared with data obtained from real operating conditions

Resumen:

This work shows the use of Monte Carlo methods and random number generation, in combination with data obtained from the previously described simulation system (SSCC) for thermal behavior, to computationally reproduce the solidification process of steel and simulate the formation of cast structures step by step

Abstract:

This work illustrates the use of Monte Carlo methods and random number generation, in combination with the information from the previously described thermal behavior simulation system, in order to reproduce on a computer the solidification process of steel and simulate the formation of cast structures step by step

Abstract:

The evolution of microstructure is a very important topic in materials science and engineering because the solidification conditions of steel billets during the continuous casting process directly affect the properties of the final products. In this paper a mathematical model is described to simulate dendritic growth using data from real casting operations; here a combination of deterministic and stochastic methods is used as a function of the solidification time of every node in order to reconstruct the morphology of the cast structures

Abstract:

This paper proposes an alternate, efficient, and scalable modeling framework to simulate large-scale bike-sharing systems using discrete-event simulation. This study uses the model to evaluate several initial bike inventory policies inspired by the operation of the bike-sharing system in Mexico City, which is one of the largest in the world. The model captures the heterogeneous demand (in time and space), and this paper analyzes the trade-offs between the performance of bike pickups and returns. This study also includes a simulation-optimization algorithm to determine the initial inventory and presents a method to deal with the bias caused by dynamic rebalancing on observed demand
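
At its core, a discrete-event model of this kind advances time from event to event. The C++ sketch below simulates one station with Poisson pickups and returns and a finite dock; the rates, capacity, and single-station scope are illustrative assumptions rather than Mexico City data.

    // Sketch of a discrete-event model of one bike station.
    #include <random>
    #include <cstdio>

    int main() {
        std::mt19937 rng(7);
        const double lamTake = 1.0, lamRet = 0.9;   // events per minute (assumed)
        const int capacity = 30;                    // docks (assumed)
        int bikes = 15, lostTakes = 0, lostReturns = 0;
        double t = 0.0, horizon = 8 * 60.0;         // one 8-hour day in minutes
        std::exponential_distribution<double> next(lamTake + lamRet);
        std::bernoulli_distribution isTake(lamTake / (lamTake + lamRet));
        while ((t += next(rng)) < horizon) {        // next-event time advance
            if (isTake(rng)) { if (bikes > 0) --bikes; else ++lostTakes; }
            else             { if (bikes < capacity) ++bikes; else ++lostReturns; }
        }
        std::printf("lost pickups: %d, lost returns: %d\n", lostTakes, lostReturns);
    }

Sweeping the initial value of bikes and re-running this loop is the simplest form of the inventory evaluation the abstract describes.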

Abstract:

Port terminals consist of two interfaces for transferring cargo among transport modes: (1) the seaside or quayside interface and (2) the landside interface. At the seaside interface, cargo is loaded and unloaded from the vessels and stored temporarily at the yard. Landside operations consist of receiving and dispatching cargo from external trucks and rail. The increasing volumes of international trade are demanding more efficient cargo handling throughout the port logistic chain and coordination with the hinterland, hence attracting more attention from both practitioners and researchers on the landside interface of ports. Due to the high variability of truck arrivals with a significant concentration at peak hours, congestion at the access gates of ports and an unbalanced utilization of the resources occur. Truck appointment systems (TAS) have already been implemented in some ports as a coordination mechanism to reduce congestion at ports, balance demand and capacity, and reduce truck turnaround times. Based on the current situation faced by the Port of Arica, Chile, this paper aims to analyze potential configurations of a TAS and evaluate its impacts on yard operations, specifically in the reduction of container rehandles, as well as truck turnaround times. For this, a discrete-event simulation model and a heuristic procedure are proposed and experimentation is performed using historical data from the port terminal. Results indicate that implementing a TAS may significantly benefit yard operations in terms of reducing container rehandles as well as truck waiting times

Abstract:

Output analysis methods for steady-state simulations have been studied extensively to evaluate their performance when estimating the mean. However, less effort has been devoted to evaluating the performance of these methods for estimating the variance and quantiles. In this paper, we empirically evaluate the performance of output analysis methods based on multiple replications and batches to estimate the mean, variance, and quantiles with the same set of data. The evaluation of the performance of the methods is based on the empirical coverage of the true value using confidence intervals, the average bias, the relative error, and the mean squared error. The methods are applied to estimate the mean, variance, and quantiles of the waiting time in an M/M/1 queue. The results show that the methods based on non-overlapping batches perform consistently well on all the metrics. The performance of the other methods varies depending on the metric and the parameters of the simulation. In addition, we provide another example of a non-geometrically ergodic Markov chain to show that asymptotically valid confidence intervals for quantiles can be obtained using batches and replications
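
As a concrete reference for the batching approach, the C++ sketch below computes a nonoverlapping-batch-means confidence interval for the mean; the batch count, the i.i.d. synthetic data in main, and the t critical value for ten batches are illustrative assumptions.

    #include <vector>
    #include <cmath>
    #include <random>
    #include <cstdio>

    // Nonoverlapping batch means: split the output series into k batches,
    // average each batch, and build a t-based interval from the batch means.
    void batchMeansCI(const std::vector<double>& y, int k) {
        const int m = (int)y.size() / k;                   // batch size
        std::vector<double> bm(k, 0.0);
        for (int b = 0; b < k; ++b) {
            for (int i = 0; i < m; ++i) bm[b] += y[b*m + i];
            bm[b] /= m;                                    // batch mean
        }
        double mean = 0, s2 = 0;
        for (double v : bm) mean += v;
        mean /= k;
        for (double v : bm) s2 += (v - mean) * (v - mean);
        s2 /= (k - 1);                                     // sample variance of batch means
        double h = 1.833 * std::sqrt(s2 / k);              // t(0.95, 9 df), valid for k = 10
        std::printf("90%% CI: %.4f +/- %.4f\n", mean, h);
    }

    int main() {
        std::vector<double> y(10000);
        std::mt19937 rng(3);
        std::exponential_distribution<double> waits(1.0);
        for (double& v : y) v = waits(rng);                // i.i.d. stand-in for simulator output
        batchMeansCI(y, 10);
    }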

Abstract:

Dispensing of mass prophylaxis can be critical to public health during emergency situations and involves complex decisions that must be made in a short period of time. This study presents a model and solution approach for optimizing point-of-dispensing (POD) location and capacity decisions. This approach is part of a decision support system designed to help officials prepare for and respond to public health emergencies. The model selects PODs from a candidate set and suggests how to staff each POD so that average travel and waiting times are minimized. A genetic algorithm (GA) quickly solves the problem based on travel and queuing approximations (QAs) and it has the ability to relax soft constraints when the dispensing goals cannot be met. We show that the proposed approach returns solutions comparable with other systems and it is able to evaluate alternative courses of action when the resources are not sufficient to meet the performance targets
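
A fitness evaluation inside such a GA needs a fast estimate of waiting time. One common choice, sketched below in C++, is the Erlang-C formula for an M/M/s queue; treating each POD as M/M/s is our simplifying assumption, not necessarily the approximation used by the authors.

    #include <cstdio>

    // Erlang-C mean waiting time Wq for an M/M/s queue.
    // lambda: arrivals/hour, mu: services/hour per staff member, s: staff count.
    double erlangCWait(double lambda, double mu, int s) {
        double A = lambda / mu;                            // offered load
        if (A >= s) return 1e9;                            // unstable: heavy penalty in the GA
        double term = 1.0, sum = 1.0;                      // k = 0 term of the series
        for (int k = 1; k < s; ++k) { term *= A / k; sum += term; }
        double last = term * A / s;                        // A^s / s!
        double pWait = last * s / (s - A) / (sum + last * s / (s - A));
        return pWait / (s * mu - lambda);                  // mean wait Wq (hours)
    }

    int main() {
        std::printf("Wq = %.3f h\n", erlangCWait(90.0, 12.0, 10)); // hypothetical POD
    }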

Abstract:

Ambulance diversion (AD) is used by emergency departments (EDs) to relieve congestion by requesting ambulances to bypass the ED and transport patients to another facility. We study optimal AD control policies using a Markov Decision Process (MDP) formulation that minimizes the average time that patients wait beyond their recommended safety time threshold. The model assumes that patients can be treated in one of two treatment areas and that the distribution of the time to start treatment at the neighboring facility is known. Assuming Poisson arrivals and exponential times for the length of stay in the ED, we show that the optimal AD policy follows a threshold structure, and explore the behavior of optimal policies under different scenarios. We analyze the value of information on the time to start treatment in the neighboring hospital, and show that optimal policies depend strongly on the congestion experienced by the other facility. Simulation is used to compare the performance of the proposed MDP model to that of simple heuristics under more realistic assumptions. Results indicate that the MDP model performs significantly better than the tested heuristics under most cases. Finally, we discuss practical issues related to the implementation of the policies prescribed by the MDP.
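
The threshold structure can be observed numerically with value iteration on a small model. The C++ sketch below uses a two-action birth-death chain (diversion stops ambulance arrivals only, with a fixed cost standing in for the neighboring facility's congestion); this toy model and all its parameters are our assumptions, not the authors' formulation.

    #include <vector>
    #include <algorithm>
    #include <cstdio>

    int main() {
        const int N = 20;                        // ED census cap (assumed)
        const double lamAmb = 0.4, lamWalk = 0.3, mu = 0.8, divertCost = 2.0;
        const double gamma = 0.99;               // discount factor
        const double U = lamAmb + lamWalk + mu;  // uniformization constant
        std::vector<double> V(N + 1, 0.0), Vn(N + 1);
        std::vector<int> act(N + 1);
        for (int it = 0; it < 5000; ++it) {      // value iteration
            for (int n = 0; n <= N; ++n) {
                double dep = (n > 0 ? mu : 0.0);
                auto Q = [&](double lam, double extra) {   // one-step cost-to-go
                    double up = (n < N ? lam : 0.0);
                    double stay = U - up - dep;
                    return n + extra + gamma / U *
                           (up * V[std::min(n + 1, N)] + dep * V[std::max(n - 1, 0)] + stay * V[n]);
                };
                double accept = Q(lamAmb + lamWalk, 0.0);  // ambulances accepted
                double divert = Q(lamWalk, divertCost);    // only walk-ins arrive
                act[n] = divert < accept;
                Vn[n] = std::min(accept, divert);
            }
            V = Vn;
        }
        for (int n = 0; n <= N; ++n)
            if (act[n]) { std::printf("divert for census >= %d\n", n); break; }
    }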

Abstract:

Child obesity is a public health problem that concerns several countries around the world. Long-term effects of child obesity include the prevalence of chronic diseases, such as diabetes and heart-related illnesses. This paper presents an agent-based simulation framework to analyze the evolution of obesity in school-age children. In particular, we evaluate the impact of physical activity on the prevalence of child obesity using an agent-based simulation model. Simulation results suggest that the fraction of overweight and obese children at the end of elementary school can be reduced by doing physical activity with moderate intensity

Abstract:

Most research efforts on the analysis of steady-state simulation focus on the estimation of the mean. However, analysts may be interested in other types of measures (e.g., measures of risk) such as the variance and quantiles. In this paper, we present an empirical comparison of the multiple replications, non-overlapping batches, and spaced-batches methodologies to estimate the mean, variance, and 90%-quantile of an M/M/1 system. These methods are compared with respect to the empirical coverage of 90% confidence intervals through multiple experiments. The results show that spaced batches exhibit poor coverage of the mean compared with replications and non-overlapping batches. The three methods perform similarly for estimating the variance and the 90%-quantile. However, estimating the variance requires more observations to obtain the expected coverage than estimating the mean and the quantile

Resumen:

This article presents an empirical comparison among the methods of independent replications, non-overlapping batches, and spaced batches for estimating the mean, the variance, and the 90%-quantile of the steady-state waiting time in an M/M/1 queue. The comparison is based on the empirical coverage of the 90% confidence intervals across 1000 independent replications of the estimates. Besides confirming the asymptotic validity of the confidence intervals produced by the three methods, the experimental results show that the spaced-batches method exhibits poorer coverage than the other two methods when estimating the mean, while the three methods perform similarly when estimating the variance and the 90%-quantile. In addition, an example of a Markov chain with continuous state space is presented in which valid confidence intervals are obtained even though the chain is not geometrically ergodic

Abstract:

Simulation has been successfully used for estimating performance measures (e.g., mean, variance, and quantiles) of complex systems, such as queueing and inventory systems. However, parameter estimation using simulation may be a difficult task under some conditions. In this paper, we present a counterexample for which traditional simulation methods do not allow us to estimate the accuracy of the point estimators for the mean and risk performance measures in steady state. The counterexample is based on a Markov chain with continuous state space and nongeometric ergodicity. The simulation of this Markov chain shows that neither multiple replications nor batch-based methodologies can produce asymptotically valid confidence intervals for the point estimators

Abstract:

Ambulance diversion (AD) is often used by emergency departments (EDs) to relieve congestion. When an ED is on diversion status, the ED requests ambulances to bypass the facility; therefore ambulance patients are transported to another ED. This paper studies the effect of AD policies on the average waiting time of patients. The AD policies analyzed include (i) a policy that initiates diversion when all the beds are occupied; (ii) a policy obtained by using a Markov Decision Process (MDP) formulation, and (iii) a policy that does not allow diverting at all. The analysis is based on an ED that comprises two treatment areas. The diverted patients are assumed to be transported to a neighboring ED, whose average waiting time is known. The results show significant improvement in the average waiting time spent by patients in the ED with the policy obtained by MDP formulation. In addition, other heuristics are identified to work well compared with not diverting at all

Abstract:

Ambulance Diversion (AD) has been an issue of concern for the medical community because of the potential harmful effects of long transportation; however, AD can be used to reduce the waiting time in Emergency Departments (EDs) by redirecting patients to less crowded facilities. This paper proposes a Simulation-Optimization approach to find the appropriate parameters of diversion policies for all the facilities in a geographical area to minimize the expected time that patients spend in non-value added activities, such as transporting, waiting and boarding. In addition, two destination policies are tested in combination with the AD policies. The use of diversion and destination policies can be seen as ambulance flow control within an emergency care system. The results of this research show significant improvement in the flow of emergency patients in the system as a result of the optimization of destination-diversion policies compared to not using AD at all

Abstract:

The assumption that all susceptible individuals are equally likely to acquire the disease during an outbreak (by direct contact with an infective individual) can be relaxed by bringing into the disease spread model a contact structure between individuals in the population. The structure is a random network or random graph that describes the kind of contacts that can result in transmission. In this paper we use an approach similar to the approaches of Andersson (Ann Appl Probab 8(4):1331–1349, 1998) and Newman (Phys Rev E 66:16128, 2002) to study not only the expected values of final sizes of small outbreaks, but also their variability. Using these first two moments, a probability interval for the outbreak size is suggested based on Chebyshev’s inequality. We examine its utility in results from simulated small outbreaks evolving in simulated random networks. We also revisit and modify two related results from Newman (Phys Rev E 66:16128, 2002) to take into account the important fact that the infectious period of an infected individual is the same from the perspective of all the individual’s contacts. The theory developed in this area can be extended to describe other “infectious” processes such as the spread of rumors, ideas, information, and habits
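
The probability interval referred to presumably follows the standard Chebyshev form: if the final size S has mean μ_S and standard deviation σ_S obtained from the first two moments, then

    P\big(|S - \mu_S| \ge k\,\sigma_S\big) \le \frac{1}{k^{2}}

so the interval [μ_S − kσ_S, μ_S + kσ_S] covers the final size with probability at least 1 − 1/k²; for example, k = 2 guarantees at least 75% coverage without any distributional assumption.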

Abstract:

In this paper we discuss the SIMID tool for simulation of the spread of infectious disease, enabling spatio-temporal visualization of the dynamics of influenza outbreaks. SIMID is based on modern random network methodology and implemented within the R and GIS frameworks. The key advantage of SIMID is that it allows not only for the construction of a possible scenario for the spread of an infectious disease but also for the assessment of mitigation strategies, variation and uncertainty in disease parameters and randomness in the progression of an outbreak. We illustrate SIMID by application to an influenza epidemic simulation in a population constructed to resemble the Region of Peel, Ontario, Canada

Resumen:

This paper analyzes, in comparative form, an econometric model of economic voting. Using the Comparative Study of Electoral Systems database and the various official statistical sources of the eight countries included in the sample, a model is developed that considers variables measuring individuals' satisfaction with democracy and with the economy, in order to measure their impact on support for the incumbent party in each election. The main conclusion is that the effect of economic voting is highly related to the ease with which voters can identify who is responsible for the government's actions, which in turn depends on the institutional environment of the political system, the electoral system, and the party system. In addition, this paper shows that although voters are seldom mistaken in their evaluation of economic performance, the impact of punishing or rewarding differs according to the degree of responsibility and the ideology of the government in question

Abstract:

In this work we study the channel capacity from the point of view of a secondary user that shares the bandwidth of the channel with a primary user using dynamic spectrum access in cognitive radio. The secondary user sees bandwidth fluctuations (i.e., at any given time the bandwidth may or may not be available) that impact its channel capacity. We study the outage capacity for the secondary user considering two scenarios: a single-carrier modulation for the case in which the bandwidth fluctuates over the complete transmission band, and a multicarrier modulation for the case in which the bandwidth fluctuations occur over various transmission subbands. We derive expressions for the outage capacity of the secondary user for both single carrier and multicarrier. Results show that: (1) the outage capacity for single carrier can be higher than for multicarrier, but with a higher outage probability for single carrier than for multicarrier; in fact, a low value of outage probability for single carrier requires a duty cycle for the secondary user close to one, but this leaves a very short duty cycle for the primary user; (2) although for the secondary user the outage capacity for multicarrier is smaller than for single carrier, for multicarrier lower values of the outage probability can be achieved even for small values of the duty cycle of the secondary user, allowing larger duty cycle values for the primary user; (3) for multicarrier, the outage capacity is more sensitive to changes in the duty cycle than to changes in the outage probability; obtaining a larger outage capacity with low values of both the outage probability and the duty cycle requires the use of a large number of subbands
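
One simple way to formalize the multicarrier case, which we offer as an illustration rather than the authors' exact model, is to let each of N subbands be free with probability p (the secondary duty cycle), so that with K free subbands

    C = \frac{K}{N}\,B\log_{2}(1+\gamma),
    \qquad K \sim \mathrm{Binomial}(N,p),
    \qquad
    P_{\mathrm{out}}(R) = P(C < R)

With a single carrier (N = 1) this collapses to P_out(R) = 1 − p for any positive target rate R, which mirrors the abstract's observation that low single-carrier outage forces the secondary duty cycle toward one.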

Abstract:

We study ultra wideband (UWB) communications over dense multipath channels with M-ary frequency shift keying (FSK) data modulation and low complexity receivers. We model the signal transmission and reception taking into account the effects of both the frequency response of the antenna system and the frequency selectivity of the multipath channel. We derive an expression for the bit error rate (BER) when detection is achieved using a pair of passband filters. While other low complexity binary receivers reported in the literature present a large signal-to-noise ratio (SNR) degradation with respect to binary coherent detection, with our signal design and filter-based detection the loss with respect to binary coherent detection is only 3.5 dB for a binary receiver with no channel estimation
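
For orientation, the 3.5 dB figure is measured against binary coherent detection; the standard AWGN baselines for orthogonal binary FSK (not the paper's multipath expressions) are

    P_{b}^{\mathrm{coh}} = Q\!\left(\sqrt{E_{b}/N_{0}}\right),
    \qquad
    P_{b}^{\mathrm{noncoh}} = \tfrac{1}{2}\exp\!\left(-\frac{E_{b}}{2N_{0}}\right)

for coherent and noncoherent detection, respectively.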

Abstract:

Both detect and avoid (DAA) and continuous transmission with no DAA (NDAA) are mechanisms defined to allow an ultrawideband (UWB) terminal (UWBT) to coexist with other non-UWB terminals (nUWBTs). This work studies the dynamic multiple access (DMAC) efficiency E of DAA and NDAA when the UWBT coexists with several nUWBTs that use Aloha as a medium access control (MAC). The efficiency E is studied in terms of the spectrum utilization factor F of the nUWBTs and the signal-to-noise ratio (SNR) of both the nUWBTs and the UWBT. The nUWBTs and the UWBT share a wireless channel with fading effects due to multipath, in which the nUWBTs experience noncorrelated fading and the UWBT experiences correlated fading. We consider two scenarios: a DMAC scenario in which both the nUWBTs and the UWBT coexist, and a pure MAC scenario in which only the nUWBTs access the channel using Aloha. Results show that: 1) for low values of F, the increase in E in a DMAC scenario is by a factor of about 5 with respect to the E in a pure MAC scenario; however, as F gets larger we get a diminishing increase in E for DAA; 2) for low values of F, the E using DAA is about 80% of the E using NDAA; hence, the loss in E using DAA with respect to NDAA represents less than 1 dB for low values of F; 3) furthermore, we found that for a pure MAC scenario the E in AWGN is very close to the E in multipath, but for a DMAC scenario the E in AWGN is higher than the E in multipath

Abstract:

We study ultrawideband (UWB) communications over dense multipath channels using M-ary frequency shift keying (FSK) data modulation with both coherent and noncoherent detection. We present an M-ary FSK signal design that is able to balance the energy and correlation variations due to the frequency response of the antenna system and the frequency selectivity of the multipath channels. We calculate the bit error rate (BER) for different values of M and channel conditions. Numerical results show that large values of coding gains can be achieved with large values of M with a coherent detector

Abstract:

We study ultra wideband (UWB) communications over dense multipath channels using M-ary frequency shift keying (FSK) data modulation with both coherent and noncoherent detection. We present an M-ary FSK signal design to balance energy and correlation variations due to the frequency selectivity of the multipath channels and the frequency response of the antenna system. We evaluate the bit error rate (BER) taking into account the frequency domain effects of the antenna and the channel. We calculate the signal-to-noise ratio (SNR) improvement for different values of M and channel conditions. Using the binary coherent receiver as a baseline, we provide numerical results showing that a coding gain larger than 6 dB is possible for large M using a coherent receiver with just two correlators

Abstract:

This paper describes various block waveform encoded (M-ary) signal designs using pulse position modulation (PPM) that are useful in pulse-based ultra wideband (UWB) communications over wireless, cable, and twisted-pair wire channels, and in other systems based on PPM (not necessarily of UWB nature). This work is focused on four interesting M-ary PPM signal designs: Orthogonal, Equicorrelated, N-orthogonal, and Generic Correlation designs. The designs are based on algebraic constructions with favorable correlation properties, mapping the algebraic constructions into sequences of time shifts to get PPM signals with good correlation properties. For each signal design, the normalized correlation properties are described, the design method is given, and examples of the designs are presented
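
The mapping idea can be illustrated compactly: rows of an algebraic construction become per-frame pulse slots, and, for non-overlapping pulses (our assumption), the normalized correlation of two PPM signals reduces to the fraction of frames in which they use the same slot. The C++ sketch below uses a hypothetical 4-ary Latin-square codebook, which yields an orthogonal design.

    #include <vector>
    #include <cstdio>

    // Normalized correlation of two PPM codewords under the non-overlapping
    // pulse assumption: fraction of frames sharing the same time slot.
    double ppmCorrelation(const std::vector<int>& a, const std::vector<int>& b) {
        int match = 0;
        for (size_t i = 0; i < a.size(); ++i) match += (a[i] == b[i]);
        return (double)match / a.size();          // 1 = identical, 0 = orthogonal
    }

    int main() {
        // Hypothetical example: each codeword lists the pulse slot per frame.
        std::vector<std::vector<int>> code = {{0,1,2,3},{1,0,3,2},{2,3,0,1},{3,2,1,0}};
        for (size_t i = 0; i < code.size(); ++i)
            for (size_t j = i + 1; j < code.size(); ++j)
                std::printf("rho(%zu,%zu) = %.2f\n", i, j, ppmCorrelation(code[i], code[j]));
    }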

Abstract:

In this work we explore flexible modulations to allow demodulation using receivers with different complexity and cost. More specifically, we study pulse-based ultra wideband (UWB) communications over channels with dense multipath effects using frequency shift keying (FSK) data modulation. Our aim is to explore the performance tradeoffs between high and low complexity receivers. For this purpose we determine the performance of coherent, non-coherent, and mismatched (e.g., non-coherent detection using templates consisting of a single path) demodulators. We take into account the influence of the frequency response of the antenna system and the effects of the frequency selectivity of the multipath channel. Given a specific channel condition, we derive expressions for the bit error rate for coherent, non-coherent, and mismatched reception of correlated FSK signals with unequal energies, and calculate the signal-to-noise ratio (SNR) degradation for different receiver complexities with a given FSK frequency deviation. We show that UWB-FSK has an SNR degradation similar to other low-complexity receivers studied in the literature

Abstract:

Ultra wideband (UWB) is a modulation technique that uses communication signals with a bandwidth in the order of hundreds of megahertz or even various gigahertz. It is a very promising communications technology under active research, development, regulation, and standardization efforts, with applications in computing and communications, consumer, radar, automotive, and cable markets. UWB presents many design challenges and requires innovative designs for low-cost, high-performance transmitters and receivers. In this article, we discuss selected patents on communications system techniques for UWB

Abstract:

This work describes the functionality of the "User Environment Area Network" (UEAN) and explores some issues related to its practical implementation. A UEAN is a network that connects the electronic devices that belong to a person, regardless of their physical location. The "Functional Convergence" (FC) of personal electronic devices allows the implementation of virtual personal electronic devices, which are implemented by grouping sources, sinks, storage, processors, or gateways that belong to the UEAN. We explore the feasibility of using copper water pipelines as the backbone to establish a UEAN based on the interconnection of several Wireless Personal Area Networks (WPAN) operating in the different rooms of a house. The copper pipeline represents a good candidate to interconnect the multiple WPAN because of its high bandwidth and strong shielding against external electromagnetic noise. Some measurements in the 2.4 GHz frequency band are presented. The converging trends observed in telecommunications technology and markets make us believe that it is important to study and model a network of personal devices in terms of a UEAN

Abstract:

We study ultra wideband (UWB) communications over dense multipath channels using receivers with low complexity and develop a generic framework for determining the performance of non-coherent frequency shift keying (FSK) data modulation. Detection is achieved using a pair of mismatched bandpass filters (i.e., non-coherent detection using templates consisting of a single path). We take into account the effects of both the antenna system and the multipath channel, and find an expression for the bit error rate (BER). We find that the signal-to-noise ratio (SNR) degradation is similar to other low-complexity receivers studied in the literature

Abstract:

We study the statistical nature of the multiuser interference (MUI) from pulse-based ultrawideband (UWB) signals with time-hopping (TH) and pulse position modulation (PPM). We determine the domain of validity of the Gaussian assumption for three different conditions: an ideal propagation channel with perfect and imperfect power control, as well as a multipath channel with "perfect average" power control. Results for imperfect power control suggest that the Gaussian approximation can be used if we consider a large enough number of active users Nu and/or pulses per symbol Ns, e.g., for low data rate systems with a large number of users. Results for multipath suggest that in a dense multipath environment a low number of Nu and/or Ns is enough to reach Gaussianity
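
The flavor of such a Gaussianity study can be reproduced with a toy Monte Carlo: sum Nu*Ns bounded per-pulse interference terms and track the excess kurtosis of the total (near zero suggests Gaussianity). In the C++ sketch below, the uniform per-pulse term is an illustrative stand-in for the actual TH-PPM collision statistics.

    #include <random>
    #include <cstdio>

    int main() {
        std::mt19937 rng(1);
        std::uniform_real_distribution<double> pulse(-1.0, 1.0); // assumed per-pulse term
        for (int NuNs : {4, 32, 256}) {                // users x pulses per symbol
            double m2 = 0, m4 = 0;
            const int trials = 20000;
            for (int t = 0; t < trials; ++t) {
                double z = 0;
                for (int i = 0; i < NuNs; ++i) z += pulse(rng);
                m2 += z * z; m4 += z * z * z * z;      // accumulate 2nd and 4th moments
            }
            m2 /= trials; m4 /= trials;
            std::printf("Nu*Ns = %3d  excess kurtosis = %+.3f\n", NuNs, m4/(m2*m2) - 3.0);
        }
    }

As Nu*Ns grows, the excess kurtosis approaches zero, echoing the abstract's finding that Gaussianity improves with more users and/or pulses per symbol.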

Abstract:

We propose a method to manage transmission power in nodes belonging to a wireless sensor network (WSN). The scenario contemplates uncoordinated communications using impulse radio ultra wideband (IR-UWB). Transmission power is controlled according to the statistical nature of the multiple access interference (MAI) produced by the nodes in the close vicinity of the communicating nodes. The statistical nature of the MAI is a function of the node population density within the area of coverage of the WSN. We show that when the node population density is high enough, transmission power savings are possible

Abstract:

This paper studies pulse-based M-ary ultrawideband communications, analyzing both the symbol error rate (SER) and the information theoretic capacity C for single-link communications over both additive white Gaussian noise and multipath channels (taking into account random variations in both energy and correlation values). In particular, we consider M-ary N-orthogonal pulse-position-modulated (PPM) signals with numerical pulse-position optimization to exploit the negative values in the pulse correlation function. On one hand, the N-orthogonal PPM (N-OPPM) signals can accommodate a larger value of M than the nonoverlapping orthogonal PPM (OPPM) signals for a constant frame size and pulse width, making it possible to increase the symbol transmission rate and/or use error correcting codes. However, N-OPPM requires a receiver with N correlators, while the receiver for OPPM requires one correlator. On the other hand, the pulse positions of the N-OPPM signals can be manipulated to shape the power spectral density and decrease the level of the discrete components. However, this PPM manipulation changes the N-OPPM correlation properties of the signal set. At the same time, N-OPPM signals have equal or better performance than OPPM signals. More specifically, simulations show that for low values of M, N-OPPM has lower SER and higher C than OPPM for the same signal-to-noise ratio (SNR), and that for larger values of M, it achieves similar SER and C. Hence, N-OPPM signals allow a tradeoff between transmission rate, signal performance, and receiver complexity

Abstract:

In this work we study ultra wideband (UWB) communications over dense multipath channels using orthogonal pulse position modulation (PPM) for data modulation and time-hopping (TH) for code modulation. We consider the effects of the multiple access interference (MUI) in asynchronous spread spectrum multiple access (SSMA) based on random TH codes. We use a realistic multipath channel to analyze the effects of the transmission rate on the number of users for different bit error rate (BER) values

Abstract:

We study pulse-based ultra wideband (UWB) communications over multipath channels using asynchronous spread spectrum (SS) multiple access (MA) based on time-hopping (TH) and pulse position modulated (PPM) signals. More specifically, we analyze the signal-to-interference ratio (SIR) degradation in the presence of additive white Gaussian noise (AWGN), multi-user interference (MUI), and dense multipath effects (DME) with line-of-sight (LOS) and non-line-of-sight (NLOS). In particular, we define a degradation margin factor for the combined MUI and multipath effects and also find an expression for the maximum number of simultaneous radio links Nu in terms of the operating SIR, the SS processing gain, and the bit transmission rate Rb. We consider both cases with perfect and imperfect power control

Abstract:

In this work, we study ultra wideband (UWB) communications over dense multipath channels using frequency shift keying (FSK) data modulation. We take into account the effects of both the antenna system and the multipath channel, and calculate the signal-to-noise ratio (SNR) degradation for different frequency deviations

Abstract:

This paper describes general guidelines used by international standardization bodies to incorporate patented technology into the standards and to license this patented technology to other members of the standardization committee. It also discusses some of the issues arising from patent abuse and noncompliance of those guidelines

Abstract:

In this work we study ultra wideband (UWB) communications over dense multipath channels using orthogonal pulse position modulation (PPM) for data modulation and time-hopping (TH) for code modulation. We consider the effects of the multiple access interference (MUI) in asynchronous spread spectrum multiple access (SSMA) based on random TH codes. We consider a realistic multipath channel to analyze the effects of the transmission rate on the number of users for different bit error rate (BER) values

Abstract:

This work computes the information theoretic capacity C of M-ary 2-orthogonal pulse position modulated (2-OPPM) signals for ultra wideband (UWB) communications over both additive white Gaussian noise (AWGN) and multipath channels (taking into account random variations in both energy and correlation values). The capacity C is increased by exploiting the negative values in the pulse correlation function. Simulations show that for low values of M the 2-OPPM signals have a smaller signal-to-noise ratio (SNR) threshold to achieve C than non-overlapping orthogonal PPM (OPPM), and that for larger values of M they achieve the same C for the same SNR

Abstract:

In this work we compute the information theoretic capacity C of binary orthogonal pulse position modulated (PPM) signals for ultra wideband (UWB) communications over multipath channels. We consider binary PPM signals with random energy and correlation values. Numerical examples are given to illustrate the capacity results

Abstract:

This work studies the performance of N-orthogonal pulse position modulated (PPM) signals for ultra wideband (UWB) communications over both additive white Gaussian noise (AWGN) and multipath channels. We use a pulse position optimization that reduces the detection error probability by exploiting the negative values in the pulse correlation function. Numerical examples are given to illustrate the advantages of N-orthogonal over orthogonal signals
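
The benefit of exploiting negative correlation values can be seen in the standard binary AWGN error probability for equal-energy signals with correlation coefficient $\rho$ (a textbook expression, not the paper's multipath result):

$$P_e = Q\!\left(\sqrt{\frac{E(1-\rho)}{N_0}}\right),$$

so shifting pulse positions from orthogonality ($\rho = 0$) to points where the pulse autocorrelation is negative ($\rho < 0$) increases the effective distance between signals and lowers the error probability.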

Abstract:

Wireless spread spectrum multiple access (SSMA) using time hopping and block waveform encoded (M-ary) pulse position modulated (PPM) signals is analyzed. For different M-ary PPM signal designs, the multiple-access performance in free-space propagation conditions is analyzed in terms of the number of users supported by the system for a given bit error rate, signal-to-noise ratio, bit transmission rate, and number of signals in the M-ary set. Processing gain and number of simultaneous users are described in terms of system parameters. Tradeoffs between performance and receiver complexity are discussed. Upper bounds on both the maximum number of users and the total combined bit transmission rate are investigated. This analysis is applied to ultrawideband impulse radio modulation. In this modulation, the communications waveforms are practically realized using subnanosecond impulse technology. A numerical example is given that shows that impulse radio modulation is theoretically able to provide multiple-access communications with a combined transmission capacity of hundreds of megabits per second at bit error rates in the range 10^(-4) to 10^(-7) using receivers of moderate complexity

Abstract:

In this article, our goal is to describe mathematically and experimentally the gray-intensity distributions of the fore- and background of handwritten historical documents. We propose a local pixel model to explain the observed asymmetrical gray-intensity histograms of the fore- and background. Our pixel model states that, locally, the gray-intensity histogram is the mixture of gray-intensity distributions of three pixel classes. Following our model, we empirically describe the smoothness of the background for different types of images. We show that our model has potential application in binarization. Assuming that the parameters of the gray-intensity distributions are correctly estimated, we show that thresholding methods based on mixtures of lognormal distributions outperform thresholding methods based on mixtures of normal distributions. Our model is supported with experimental tests that are conducted with images extracted from the DIBCO 2009 and H-DIBCO 2010 benchmarks. We also report results for all four DIBCO benchmarks
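
A minimal sketch of the kind of lognormal-mixture thresholding this model motivates (illustrative only; the two-class structure, estimator, and threshold rule below are simplifications, not the authors' method). Fitting a Gaussian mixture to log-intensities is equivalent to fitting a lognormal mixture on the original gray scale:

```python
# Sketch: lognormal-mixture binarization of a grayscale image.
# A Gaussian mixture on log-intensities is a lognormal mixture on
# the intensities themselves.
import numpy as np
from sklearn.mixture import GaussianMixture

def lognormal_mixture_threshold(gray):
    # gray: 2-D array of intensities in [0, 255]; shift to avoid log(0).
    x = np.log(gray.astype(float).ravel() + 1.0).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(x)
    # Scan a grid of candidate log-thresholds and take the boundary
    # where the most likely component changes.
    grid = np.linspace(x.min(), x.max(), 512).reshape(-1, 1)
    labels = gm.predict(grid)
    change = np.nonzero(np.diff(labels))[0]
    t_log = grid[change[0], 0] if change.size else grid[len(grid) // 2, 0]
    return np.exp(t_log) - 1.0  # back to the original gray scale

# Usage (assuming darker pixels are foreground ink):
#   binary = gray <= lognormal_mixture_threshold(gray)
```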

Abstract:

In this paper, we will present a mathematical analysis of the transition proportion for the normal threshold (NorT) based on the transition method. The transition proportion is a parameter of NorT which plays an important role in the theoretical development of NorT. We will study the mathematical forms of the quadratic equation from which NorT is computed. Through this analysis, we will describe how the transition proportion affects NorT. Then, we will prove that NorT is robust to inaccurate estimations of the transition proportion. Furthermore, our analysis extends to thresholding methods that rely on Bayes rule, and it also gives the mathematical bases for potential applications of the transition proportion as a feature to estimate stroke width and detect regions of interest. In the majority of our experiments, we used a database composed of small images that were extracted from the DIBCO 2009 and H-DIBCO 2010 benchmarks. However, we also report evaluations using the original (H-)DIBCO benchmarks

Abstract:

We propose a cross-level perspective in which individual-level perceived camaraderie and organizational-level camaraderie climate interact to predict employee perceptions of innovativeness. Additionally, we propose that organizational gender diversity weakens this cross-level interaction. We tested our hypotheses by conducting a multi-level study with 39,574 employees working in 53 firms in Central America and the Caribbean. The positive link between perceived camaraderie and perceived innovativeness was stronger in firms with a higher organizational camaraderie climate, and this interaction was moderated in firms with more gender diversity. We discuss the international human resource management implications of social context for the organizational gender diversity and innovation literature in an often-overlooked region

Abstract:

This paper presents an alternative strategy to evaluate the stability of tunnels during the design and construction stages based on a hybrid system composed of neural, neuro-fuzzy and analytical solutions. A prototype of this system is designed using a database formed by 261 cases, 45 real and the rest synthetic. This system is capable of reproducing the displacements induced at the periphery of the tunnel before and after support installation. The stability of the excavation process is evaluated using a criterion that considers dimensionless parameters based on the shear strength of the media, the induced deformation level in the ground, the plastic radii and the advance of excavation without support. The efficiency and validity of the prototype are verified with two examples of actual tunnels, one included in the database used to train the system and the other not included. The results of both examples show a better approximation than other commonly used techniques

Abstract:

This paper is divided into two parts. In the first one I distinguish between weak and strong Anti-Archimedeanisms, the latter being the view that metaethics, just as any other discipline attempting to work out a second-order conceptual, metaphysical (semantic, etc.) non-committed discourse about the first-order discourse composing normative practices, is conceptually impossible or otherwise incoherent. I deal in particular with Ronald Dworkin's famous exposition of the view. I argue that strong Anti-Archimedeanism constitutes an untenable philosophical stance, therefore making logical space for the practice of a discipline such as metaethics, conceived as ethically neutral. This makes space, concurrently, for neutral conceptual jurisprudence. In the second part of the article, I attempt to show two things. On the one hand, that Dworkin's widely discussed 'challenge of disagreements' to legal positivism (the latter being precisely an instantiation of conceptual jurisprudence) is founded upon strong Anti-Archimedeanism. On the other hand, that having rejected strong Anti-Archimedeanism we should consequently reject the challenge as a serious challenge to positivism. This move, of course, does not thereby imply that accounting for legal disagreements is not an important jurisprudential task. But it shows, contra Dworkin, that there is no principled or a priori impossibility of doing so within a positivist framework

Abstract:

This paper offers a focused analysis of Cristina Redondo's latest book, Positivismo jurídico "interno." I first point out the lack of a discussion regarding what a participant (in the legal practice) is. I then emphasize that Redondo's distinctions between the internal and external points of view, which she offers in order to shape her favoured form of positivistic metatheory, are incompatible with an expressivistic rendition of first-order legal language. Since what constitutes the best rendition of first-order legal language is a controversial theoretical matter, a metatheoretical framework that does not prejudge such a matter would be, in principle, preferable to others. Finally, I suggest an alternative strategy to arrive at a metatheoretical model similar to Redondo's, but one which does not incur that particular problem

Abstract:

Coercion and the Nature of Law, by Kenneth Himma, claims to be an essay in descriptive conceptual analysis, and there are good reasons to also take it as an essay in legal positivism. Both characterizations amount to and imply certain methodological commitments. In this article I explore the compatibility between such commitments and Himma’s elaboration of the conceptual relation between law and coercion. The result is that those commitments are not thoroughly honoured, as Himma’s argument moves from the descriptive to the normative when making the case for law’s “conceptual” function being peacekeeping and when fleshing out what sort of reasons for action the law provides

Resumen:

En el número XIV de Discusiones se produjo un debate acerca del rol de los hechos sociales como fundamento ontológico de la práctica jurídica. En las líneas que siguen intentaré reconstruir los elementos centrales de aquel debate sometiéndolos a escrutinio crítico y, sobre todo, intentaré poner en relación las cuestiones allí discutidas con algunos avances que se han dado en la literatura especializada con posterioridad. De esta manera, la finalidad principal del trabajo es la de sugerir algunas vías con las cuales continuar y profundizar la discusión sobre el tema

Abstract:

In issue XIV of Discusiones, a debate took place about the role social facts play as the ontological grounds of legal practice. In what follows, I will try to reconstruct the central tenets of that debate, critically scrutinizing them and, more importantly, I will try to trace a link between the issues then discussed and some developments worked out in the specialized literature afterwards. Thus, the main goal of this paper is that of suggesting some lines along which to continue and sharpen the debate on the subject

Resumen:

Uno de los temas de más profundo debate en la filosofía del derecho de los últimos años ha sido el de las maneras en que dar cuenta del fenómeno del desacuerdo entre operadores jurídicos y entre juristas a la hora de desentrañar el contenido del derecho y, por ende, de dar con la respuesta jurídica para controversias particulares. A partir del trabajo de Ronald Dworkin, el tema se ha convertido en un instrumento de intenso análisis crítico del positivismo jurídico y consecuentemente ha generado diversos enfoques que intentan dar cuenta del fenómeno de manera armónica con las tesis positivistas centrales. Uno de estos enfoques, en particular, intenta afrontar este desafío ligando al positivismo con un análisis expresivista del discurso jurídico de primer orden; análisis análogo al que importantes autores abocados a la metaética contemporánea han propuesto para el discurso moral. En el presente trabajo presento los lineamientos centrales de este enfoque iusfilosófico, indagando tanto en algunas de sus virtudes como en algunas de sus mayores dificultades

Abstract:

One of the most profoundly debated issues in recent legal philosophy has been that of the ways in which to account for the phenomenon of disagreement among legal practitioners and among jurists when it comes to elucidating the content of the law and to, therefore, finding legal answers to particular controversies. Deriving from Ronald Dworkin's work, this issue has become the instrument for an intense critical re-examination of legal positivism and has consequently spurred several jurisprudential approaches trying to account for the phenomenon in a way consistent with the central positivistic tenets. One of those approaches, in particular, intends to face the challenge by linking legal positivism to an expressivist analysis of first-order legal discourse; an analysis analogous to that proposed for moral discourse by important authors in contemporary metaethics. In this paper I introduce the main features of this jurisprudential approach, inquiring into both some of its virtues and some of its major difficulties

Abstract:

This paper introduces an original and perhaps not very widely known theoretical approach to the topic of legal disagreements - a topic much discussed since Ronald Dworkin presented it in the context of an attack on Hartian legal positivism. The approach under examination is intended as a response to Dworkin's challenge, drawing heavily on both Hart's original framework and Gibbard's contemporary metaethical theory: Hart is thus read as an expressivist in his analysis of internal legal statements, and in turn - with some amendments to his theoretical picture - an explanation of fundamental legal disagreements is given, with the promise of thus overcoming the Dworkinian challenge. The introduction to this approach is critical, so I offer sketches of several objections to it. Nevertheless, the paper also seeks to highlight some possible major contributions of the examined approach to the current philosophical debate on legal disagreements

Resumo:

Esse artigo apresentará a abordagem de Kevin Toh à temática dos desacordos jurídicos; uma abordagem teórica original e talvez não tão amplamente conhecida à temática muito discutida desde que apresentada por Ronald Dworkin no contexto de ataque ao positivismo jurídico hartiano. A abordagem em análise é uma tentativa de resposta à crítica feita por Dworkin e parte fortemente do modelo original de Hart e da teoria metaética contemporânea de Gibbard: Hart aparece como um expressivista na sua análise das afirmações jurídicas do ponto de vista interno e, em troca - com alguns ajustes ao seu quadro teórico - uma explicação do desacordo jurídico fundamental é oferecida, com a promessa de, assim, superar o desafio dworkiniano. A apresentação a essa abordagem é crítica, e, por isso, serão oferecidos esboços de diversas objeções. No entanto, o artigo também busca sublinhar alguma possível importante contribuição da proposta sob exame ao debate filosófico contemporâneo sobre os desacordos jurídicos

Resumen:

De acuerdo con Juan Carlos Bayón, el positivismo jurídico incluyente es inconsistente, por cuanto sus presupuestos alegadamente convencionalistas serían incompatibles con las consecuencias de la incorporación de principios morales como criterios de validez jurídica que esta corriente teórica asume como posibles. Concretamente, el positivismo incluyente enfrentaría un dilema: si sostiene que el derecho remite al juez a la moral positiva para la identificación de contenidos normativos, colapsa con la versión excluyente; si en cambio sostiene que el derecho remite al razonamiento moral sustantivo, se ve obligado a abandonar sus presupuestos convencionalistas, en favor del realismo moral. En este trabajo analizaré esta crítica de Bayón, e intentaré proponer un argumento con el cual afrontar dicho cuestionamiento

Abstract:

According to Juan Carlos Bayón, inclusive legal positivism is inconsistent, insofar as its allegedly conventionalist assumptions seem to be incompatible with the consequences of the incorporation of moral principles as criteria of legal validity that this theory acknowledges as possible. In particular, inclusive positivism would face a dilemma: if it holds that the law directs the judge to resort to positive morality in order to identify legal contents, it collapses into the exclusive version of positivism; if, on the other hand, it holds that the law appeals to substantive moral reasoning, it is obliged to withdraw its conventionalist assumptions in favor of moral realism. In this paper I will analyze Bayón's critique and try to offer an argument to counter that objection

Abstract:

The servitization of network resources leads to new challenges for optical networks. For instance, to provide on-demand lightpaths as a service while keeping the probability of packet loss (PPL) low, issues such as lightpath setup, resource reservation and load balancing must be addressed. We present a self-adaptive framework to process lightpath requests on packet-switching optical networks that considers and handles the aforementioned issues. The framework is composed of a dimensioning phase that adds new resources to an initial topology and a learning phase based on reinforcement learning that provides self-adaptation to tolerate traffic changes. The framework is tested on three realistic mesh topologies, achieving a PPL between 1 × 10^-1 and 1 × 10^-6 for different traffic loads. Compared to fixed multi-path routing strategies, our framework reduces PPL by between 19% and 80%. Furthermore, zero packet loss can be achieved for traffic loads equal to or lower than 0.4
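
As a hedged illustration of the general idea behind such a learning phase (a generic tabular Q-learning update over candidate paths; the states, actions, rewards and hyperparameters below are ours, not the framework's actual design):

```python
# Generic tabular Q-learning for choosing among candidate lightpaths.
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)               # Q[(state, path)] -> value estimate

def choose_path(state, candidate_paths):
    if random.random() < EPS:                       # explore
        return random.choice(candidate_paths)
    return max(candidate_paths, key=lambda p: Q[(state, p)])  # exploit

def update(state, path, reward, next_state, candidate_paths):
    best_next = max(Q[(next_state, p)] for p in candidate_paths)
    Q[(state, path)] += ALPHA * (reward + GAMMA * best_next - Q[(state, path)])

# Reward sketch: e.g., a positive reward for each served lightpath
# request and a large negative penalty per lost packet, so the learned
# policy favors load balancing that keeps the PPL low.
```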

Abstract:

Commercial services are of utmost importance for the economy. Due to the widespread use of information and communication technologies, many of these services may be delivered online by means of service value networks. To automate this delivery, however, issues such as composition, integration, and operationalization need to be addressed. In this paper, the authors share their long-term vision on composition of service value networks and describe relationships with fields such as cloud computing and enterprise computing. As a demonstration of the state of the art, capabilities and limitations of e(3)-service are described and research challenges are defined

Abstract:

While transparent optical networks become more and more popular as the basis of the Next Generation Internet (NGI) infrastructure, such networks raise many security issues because they lack the massive use of optoelectronic monitoring. To increase these networks’ security, they will need to be equipped with proactive and reactive mechanisms to protect themselves not only from failures and attacks but also from ordinary reliability problems. This work presents a novel self-healing framework to deal with attacks on Transparent Optical Packet Switching (TOPS) mesh networks. Contrary to traditional approaches, which deal with attacks at the fiber level, our framework makes it possible to overcome attacks at the wavelength level and to understand how they impact the network’s performance. The framework has two phases: the dimensioning phase (DP) dynamically determines the optical resources for a given mesh network topology, whereas the learning phase (LP) generates an intelligent policy to gracefully overcome attacks in the network. DP uses heuristic reasoning to engineer the network while LP relies on a reinforcement learning algorithm that yields a self-healing policy within the network. We use a Monte Carlo simulation to analyze the performance of the framework not only under different types of attacks but also using three realistically sized mesh topologies with up to 40 nodes. We compare our framework against shortest path (SP) and multiple path routing (MPR), showing that the self-organized routing outperforms both, leading to a reduction in packet loss of up to 88% with average packet loss rates of 1 × 10^-3. Finally, some conclusions are presented as well as future research lines

Abstract:

This paper presents a set of methods for building masquerade attacks. Each method takes into account the profile of the user to be impersonated, thus capturing an intruder strategy. Knowledge about user behavior is extracted from several statistics, including the frequency at which a user types a specific group of commands. It is then expressed as rules, which are applied to synthesize computer sessions that disguise the attack as ordinary user behavior. The masquerade attack datasets have been validated by making a set of Intrusion Detection Systems (IDS) try to detect user impersonation, thereby showing the capabilities of each masquerade synthesis method for evading detection. Results demonstrate that better-performing masquerade attacks can be obtained by using methods based on behavioral rules rather than those based only on a single statistic. Summing up, masquerade attacks constitute a good strategy for bypassing an IDS
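
As a hedged sketch of the simplest, single-statistic variant mentioned above (command-frequency sampling; the rule-based methods in the paper are richer, and the data here are placeholders):

```python
# Sketch: synthesize a masquerade session by sampling commands from a
# victim's observed command-frequency profile (single-statistic method).
import random
from collections import Counter

def synthesize_session(observed_commands, session_length=100, seed=0):
    freq = Counter(observed_commands)
    commands, weights = zip(*freq.items())
    rng = random.Random(seed)
    return rng.choices(commands, weights=weights, k=session_length)

history = ["ls", "cd", "ls", "vim", "gcc", "ls", "cd", "make"]
print(synthesize_session(history, session_length=10))
```

A frequency-only synthesizer ignores command ordering, which is one reason the paper's rule-based methods evade detection better.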

Abstract:

In this article I inquire into the effects initial wealth has on black-white differences in early employment careers. I set up a dynamic model in which individuals simultaneously search for a job and accumulate wealth, and fit it to data from the National Longitudinal Survey (1979 cohort). Regime changes and decompositions of racial differences reveal that differences in the labor market environment and in preferences fully account for racial gaps in wealth and in wages persisting several years after high school graduation. Differences in initial wealth partially explain differences in early employment careers

Abstract:

In this paper, I measure the contribution of knowing Catalan to finding a job in Catalonia. In the early 1980s, a drastic language policy change (normalització) promoted the learning and use of Catalan in Catalonia and managed to reverse the falling trend of its relative use vs Castilian (Spanish). Using census data for 1991 and 1996, I estimate a significant positive Catalan premium: the probability of being employed increases by between 3 and 5 percentage points if individuals know how to read and speak Catalan; it increases by between 2 and 6 percentage points for writing Catalan

Abstract:

This article examines the relationship between wealth accumulation and job search dynamics. It proposes a model in which risk-averse individuals search for jobs, save, and borrow to smooth their consumption. One motivation for accumulating wealth is to finance voluntary quits in order to search for better jobs. Using data on men from the National Longitudinal Survey (1979 cohort), I estimate the individual's dynamic decision problem. The results show that borrowing constraints are tight and reinforce the influence of wealth on job acceptance decisions, namely that more initial wealth and access to larger amounts of credit increase wages and unemployment duration

Resumen:

Haciendo uso del modelado en Dinámica de Sistemas y simulación, se explora la coordinación de dos procesos logísticos (aprovisionamiento y producción) de la cadena de suministro de una alcoholera. En este sentido, la evaluación de tres escenarios de producción permite identificar: a) el movimiento del inventario de acuerdo a las políticas actuales de inventario, y b) las variables críticas que afectan la coordinación de estos dos procesos. Dado que el objetivo principal de la empresa es satisfacer la demanda del cliente, se incorpora un pronóstico de ventas, y cuatro indicadores de desempeño para evaluar el estado de los procesos: 1) el porcentaje promedio de la satisfacción de la demanda, 2) la cantidad máxima de etanol en exceso, 3) el etanol a disponer al finalizar el año, y 4) los costos de inventario. Para modelar el caso de estudio, se considera el cambio en el rendimiento de producción y las restricciones particulares de la cadena. Los resultados de la simulación muestran que la Dinámica de Sistemas puede utilizarse para observar los efectos de las políticas sobre el inventario, y la satisfacción de la demanda en un sistema real, igualmente, permite definir la coordinación para una cadena de suministro y proporcionar información para mejorarla. El modelo creado utiliza el software STELLA® para simular los procesos logísticos y para realizar la evaluación utilizando los indicadores de desempeño

Abstract:

This paper uses System Dynamics modeling and process simulation to explore coordination in two logistic processes (procurement and production) of the supply chain of an ethanol plant. In that sense, three production scenarios are evaluated to identify: a) stock movement according to current inventory policies, and b) the critical variables affecting the coordination of these two processes. Since the main goal of the company is to meet customer demand, this research incorporates sales forecasting and four performance indicators to evaluate the state of the processes: 1) average percentage of demand satisfaction, 2) maximum amount of ethanol in excess, 3) available ethanol at the end of the year, and 4) inventory costs. To model the case study, the change in production yield and specific constraints of the chain are considered. The simulation results show that System Dynamics modeling can be used to observe the effects of policies on inventory and demand satisfaction in a real system; it can also define the coordination for a supply chain and provide information to improve it. The developed model uses STELLA® software to simulate the logistic processes and to perform the evaluation using the performance indicators

Abstract:

I provide new results concerning dynamics for a version of the Kiyotaki-Wright model (1989) in which strategies (either mixed or pure) are restricted so that agents play the same strategy for each opportunity set. My results demonstrate the importance of examining stability in such models, because they show that many steady states focused on in the literature are not stable. Furthermore, I exhibit examples of two-period-convergent equilibria in which agents are indifferent among media of exchange. Consequently, their endogenous transaction pattern is analogous to the coexistence of assets whose acceptability or “liquidity” varies inversely with their rates of return

Abstract:

This work presents the application of the Mahalanobis–Taguchi System (MTS) to a dimensional problem in the automotive industry. The combinatorial optimization problem of variable selection is solved by applying a recent version of the binary ant colony optimization algorithm. Moreover, a comparison with the binary particle swarm optimization algorithm is also presented, and a discussion regarding the numerical results is given
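
In the MTS, each candidate binary vector selects a subset of variables, and the subset is scored through Mahalanobis distances computed from the "normal" group's statistics. A minimal sketch of that inner computation (the ant-colony search itself is omitted, and the data and mask are synthetic placeholders):

```python
# Sketch: Mahalanobis distances for one candidate variable subset,
# i.e., the quantity a binary metaheuristic (ACO or PSO) would score
# when searching over subsets in the MTS.
import numpy as np

def mahalanobis_sq(X_normal, X_test, mask):
    Xn, Xt = X_normal[:, mask], X_test[:, mask]
    mu = Xn.mean(axis=0)
    cov = np.atleast_2d(np.cov(Xn, rowvar=False))
    cov_inv = np.linalg.pinv(cov)        # pseudo-inverse for stability
    d = Xt - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)  # squared distances

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(50, 6))           # "healthy" reference group
X_test = rng.normal(loc=2.0, size=(10, 6))    # abnormal observations
mask = np.array([True, False, True, True, False, True])
print(mahalanobis_sq(X_normal, X_test, mask))
```

Large distances for abnormal observations (and small ones for the reference group) indicate a useful variable subset.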

Abstract:

Recent empirical evidence from the United States indicates a high degree of persistence in earnings across generations. Designing effective public policies to increase social mobility requires identifying and measuring the major sources of persistence and inequality in earnings. We provide a quantitative model of intergenerational human capital transmission that focuses on three sources: innate ability, early education, and college education. We find that approximately one-half of the intergenerational correlation in earnings is accounted for by parental investment in education, in particular early education. We show that these results have important implications for education policy

Abstract:

We construct a panel for the price of aggregate investment over consumption and report the following observations. (1) Relative price differences across countries are large over the entire sample period, and this conclusion is not affected by excluding non-tradable consumption goods. (2) Relative price dispersion has decreased during the sample period. (3) Relative price changes are not persistent across periods, while average price levels are. Moreover, the persistence in relative price levels is higher for countries at low relative prices. We show that the relative price of investment is negatively correlated with investment rates in a cross section of countries. The standard one-sector growth model is a natural framework to study quantitatively the effects of barriers to capital accumulation since there is a very simple mapping between barriers to investment and relative prices in this environment. We simulate a calibrated version of the model, in which barriers follow a stochastic process common to all countries and estimated using relative price data, and obtain statistics for investment rates that closely resemble what we observe in the data. In particular, the model accounts for 90% of the 1985 Gini index of relative investment rates and the decline in investment rate dispersion over time. The model has two limitations as a theory of development. First, it cannot account for the income disparity in the data unless we assume unmeasured capital with barriers affecting the broad measure of capital. Second, even under these extreme assumptions, the model cannot account for the evolution of income disparity over time
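
The "very simple mapping" invoked above can be made explicit (a standard formulation consistent with the abstract, not necessarily the paper's exact notation): if converting output into investment is subject to a barrier $\pi_t$, the economy's resource constraint reads

$$c_t + \pi_t x_t = y_t,$$

so in equilibrium the relative price of investment in consumption units equals the barrier $\pi_t$ itself, and observed relative price panels identify the barrier process directly.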

Abstract:

Financial statements are issued to show the financial position and results of a public or private entity, as well as to provide information on the economic situation of those entities. This article highlights the relevance of including ESG disclosures in these documents and the need to define and implement strategies to achieve the purposes required in this area

Resumen:

Este texto es una respuesta al artículo de Daniel Garber. Se discute la idea de que la dinámica es un proyecto paralelo e incompatible con el proyecto monadológico. Para criticar esta posición, presento la importancia de la conceptualización de la acción en la Dynamica de potentia (1690) y sus implicaciones para la propia metafísica monadológica

Abstract:

This text is an answer to Daniel Garber's article. In the reply, I challenge the idea that the dynamics is a parallel project, incompatible with the monadological project. To criticize this position, I present the importance of the conceptualization of action from Dynamica de potentia (1690) and its implications for the monadological metaphysics itself

Abstract:

This paper proposes a new approach to minimise inventory levels and their associated costs within large geographically dispersed organisations. For such organisations, attaining a high degree of agility is becoming increasingly important. Linear regression-based tools have traditionally been employed to assist human experts in inventory optimisation endeavours; recently, Neural Network (NN) techniques have been proposed for this domain. The objective of this paper is to create a hybrid framework that can be utilised for analysis, modelling and forecasting purposes. This framework combines two existing approaches and introduces a new associated cost parameter that serves as a surrogate for customer satisfaction. The use of this hybrid framework is described using a running example related to a large geographically dispersed organisation

Abstract:

The Internet boom has led to the image of a Global Village, a totally communicated planet where national borders have no meaning and where the Internet and the Information Highway have become the path to solve any problem. Technological and scientific research has been conducted in order to exchange information through telephone lines, which provide a solid platform for the new services related to the Internet such as e-mail, electronic commerce and Internet telephony. Non-technical issues should be taken into account to notice that not all the people on earth are aware of the Internet and the new technological society. Many persons are not educated for those new technologies that are out of their reach. Indeed, even if they were able to access a specific communications network, the new services would not be affordable for them economically and/or intellectually. This article presents indicators that reveal the magnitude of the problem: the gap between the rich and the poor, the educated and non-educated, the Internet literate and illiterate. Reflections on several important issues are presented, and ultimately, the Global Village is questioned as a reality or just a new myth

Abstract:

This paper analyzes the influence key scientists have in the development of a science and technology system. In particular, this work appraises the influence that star scientists have on the productivity and impact of young faculty, as well as on the likelihood that these young researchers become leading personalities in science. Our analysis confirms previous results that eminent scientists have a prime role in the development of a scientific system, especially within the context of an emerging economy like Mexico. In particular, in terms of productivity and visibility, this work shows that between 1984 and 2001 the elite group of physicists in Mexico (approximately 10% of all scientists working in physics and its related fields) published 42% of all publications, received 50% of all citations and bred 18% to 26% of new entrants. In addition, our work shows that scientists who entered the system by the hand of a highly productive researcher increased their productivity on average by 28%, and those who did so by the hand of a highly visible scientist received on average 141% more citations, vis-à-vis scholars who did not publish their first manuscripts with an eminent scientist. Furthermore, scholars who entered the system by the hand of a highly productive researcher were on average 2.5 times more likely to also become a star

Abstract:

Considering that modern science is conducted primarily through a network of collaborators who organize themselves around key researchers, this research develops and tests a characterization and assessment method that recognizes the particular endogenous, or self-organizing, characteristics of research groups. Instead of establishing an ad-hoc unit of analysis and assuming an unspecified network structure, the proposed method uses knowledge footprints, based on backward citations, to measure and compare the performance and productivity of research groups. The method is demonstrated by ranking research groups in Physics, Applied Physics/Condensed Matter/Materials Science and Optics in the leading institutions in Mexico; the results show that the understanding of the scientific performance of an institution changes with a more careful account of the unit of analysis used in the assessment. Moreover, evaluations at the group level provide more accurate assessments since they allow for appropriate comparisons within subfields of science. The proposed method could be used to better understand the self-organizing mechanisms of research groups and to provide a better assessment of their performance

Abstract:

The accurate classification of galaxies in large-sample astrophysical databases of galaxy clusters depends sensitively on the ability to distinguish between morphological types, especially at higher redshifts. This capability can be enhanced through a new statistical measure of association and correlation, called the distance correlation coefficient, which has more statistical power to detect associations than does the classical Pearson measure of linear relationships between two variables. The distance correlation measure offers a more precise alternative to the classical measure since it is capable of detecting nonlinear relationships that may appear in astrophysical applications. We showed recently that the comparison between the distance and Pearson correlation coefficients can be used effectively to isolate potential outliers in various galaxy data sets, and this comparison has the ability to confirm the level of accuracy associated with the data. In this work, we elucidate the advantages of distance correlation when applied to large databases. We illustrate how the distance correlation measure can be used effectively as a tool to confirm nonlinear relationships between various variables in the COMBO-17 database, including the lengths of the major and minor axes, and the alternative redshift distribution. For these outlier pairs, the distance correlation coefficient is routinely higher than the Pearson coefficient since it is easier to detect nonlinear relationships with distance correlation. The V-shaped scatter plots of Pearson versus distance correlation coefficients also reveal the patterns with increasing redshift and the contributions of different galaxy types within each redshift range
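
Distance correlation is straightforward to compute from pairwise distance matrices; a compact reference implementation of the standard Székely–Rizzo definition (variable names and the synthetic example are ours):

```python
# Distance correlation: detects nonlinear association that the
# Pearson coefficient can miss.
import numpy as np

def distance_correlation(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double centering of each distance matrix.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = x**2 + 0.05 * rng.normal(size=500)    # purely nonlinear relation
print(np.corrcoef(x, y)[0, 1], distance_correlation(x, y))  # ~0 vs. large
```

The example mirrors the point made above: Pearson correlation is near zero for a symmetric nonlinear relation, while distance correlation remains high.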

Abstract:

Lobbying dominates corporate political spending, but comprehensive studies of the benefits accrued are scarce. Using a dataset of all U.S. firms with publicly available financial statements, we delve into the tax benefits obtained from lobbying. Firms that spend more on lobbying in a given year pay lower effective tax rates in the next year. Increasing registered lobbying expenditures by 1% appears to lower effective tax rates by somewhere in the range of 0.5 to 1.6 percentage points for the average firm that lobbies. While individual firms amass considerable benefits, the costs of lobbying-induced tax breaks appear modest for the government

Abstract:

Autonomous driving is a trending topic enabled by communication between devices. Under this scope, ROS is a useful tool for running multiple processes in a graph architecture, where each node may receive and post messages that are consumed by other nodes for their own needs. In this tutorial chapter we discuss a solution for autonomous path planning using a Rapidly-exploring Random Tree (RRT) and a simple control scheme based on PIDs to follow that path. The control uses internal sensors and an external camera that works as an "eye in the sky". This is implemented with the help of ROS version 1.12.13 using the Kinetic distribution. Results are validated using the Gazebo multi-robot simulator, version 7.0.0. The robot model used corresponds to the AutoNOMOS mini developed by Raúl Rojas, while the "eye in the sky" is a simple artificial RGB camera created in Gazebo for research purposes. The Rviz package is used to monitor the simulation. The repository for this project can be found at https://github.com/Sutadasuto/AutoNOMOS_Stardust. (The original model for the AutoNOMOS mini was retrieved from https://github.com/EagleKnights/EK_AutoNOMOS_Sim)
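
As a hedged, simulator-free sketch of the RRT planner described above (a plain 2-D point robot with rectangular obstacles; workspace bounds, step size and obstacle format are our assumptions, not the repository's ROS node):

```python
# Minimal 2-D RRT: grow a tree from the start toward uniform random
# samples until a node lands near the goal, then backtrack through
# the parent map to recover the path.
import math, random

def rrt(start, goal, obstacles, step=0.5, iters=5000, seed=0):
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        px, py = nodes[i]
        d = math.dist((px, py), sample)
        if d > step:  # steer at most one step toward the sample
            sample = (px + step * (sample[0] - px) / d,
                      py + step * (sample[1] - py) / d)
        if any(x0 <= sample[0] <= x1 and y0 <= sample[1] <= y1
               for x0, y0, x1, y1 in obstacles):
            continue  # new node falls inside an obstacle, discard it
        parent[len(nodes)] = i
        nodes.append(sample)
        if math.dist(sample, goal) < step:  # goal region reached
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

waypoints = rrt((1, 1), (9, 9), obstacles=[(4, 0, 5, 8)])
print(len(waypoints), "waypoints; a PID controller would track these")
```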

Resumen:

Existe una tensión inherente entre el populismo y el constitucionalismo que se materializa, una vez que el populismo está en el poder, en la necesidad de cooptar o al menos neutralizar el poder judicial y los órganos del sistema de justicia que tienen el papel de interpretar la constitución. El poder judicial puede resistir y contribuir a que el populismo revitalice la democracia en lugar de erosionarla. Sin embargo, las posibilidades de hacerlo dependen de su legitimidad entre la ciudadanía y de la composición partidista de los poderes ejecutivo y legislativo. Estos argumentos se ilustran a partir del análisis del caso de México, en particular el primer trienio (2018-2021) de la administración del presidente Andrés Manuel López Obrador y su Movimiento de Regeneración Nacional (MORENA)

Abstract:

The inherent tension between populism and constitutionalism tends to crystallize in the judicial power, which populist leaders in power need to coopt or at least neutralize in order to implement their agenda free of obstacles. The judiciary can contribute to limiting the authoritarian impulses of populism and, by taking up its just demands, help populism revitalize democracy instead of eroding it. The likelihood that the judiciary plays this role depends on its legitimacy amongst the citizens and on the partisan composition of the executive and legislative powers. These arguments are illustrated in the case of Mexico, specifically during the first three years (2018-2021) of the administration of President Andrés Manuel López Obrador and his Movement of National Regeneration (MORENA)

Abstract:

Constitutional democracy recognizes majorities but limits their excesses. Faced with the so-called "Ley Zaldívar", the Supreme Court must choose to steer Mexico away from the capture of the judiciary and keep the country on its democratic course

Abstract:

Facing pressure from the Mexican government, the Chief Justice proposed constitutional reforms that concentrate power in the Mexican Supreme Court and the Judicial Council. While the reforms may enable the reining in of corruption and nepotism, they pose significant threats to the independence of judges from the highest echelons of the judiciary. The concentration of power in the Supreme Court could encourage political and other efforts to capture the Supreme Court, ultimately undermining its status and independence

Abstract:

In countries such as Hungary, Poland and Turkey, popular leaders with large legislative majorities have taken measures to limit the independence of the judiciary, thereby eroding democracy. Medina Mora's resignation, in a context of hostile presidential rhetoric and aggressive laws and initiatives, brings us closer to that possibility

Abstract:

What role do justice institutions play in autocracies? We bring together the literatures on authoritarian political institutions and on judicial politics to create a framework to answer this question. We start from the premise that autocrats use justice institutions to deal with the fundamental problems of control and power-sharing. Unpacking "justice institutions," we argue that prosecutors and ordinary courts can serve, respectively, as "top-down" and "bottom-up" monitoring and information-gathering mechanisms helping the dictator in the choice between repression and cooptation. We also argue that representation in the Supreme Court and special jurisdictions enables the dictator and his ruling coalition to solve intra-elite conflicts, facilitating coordination. We provide several examples from Mexico under the hegemonic system of the PRI and from Spain under Francisco Franco, as well as specific illustrations from other countries around the world. We conclude by reflecting on some of the potential consequences of this usage of justice institutions under autocracy for democratization

Resumen:

¿Qué explica la variación en los niveles de independencia judicial de los estados de México? Este artículo vincula de forma positiva, teórica y empíricamente, la competencia electoral y la independencia judicial de jure. Distingue entre varias lógicas y medidas que ligan la competencia electoral con la creación de reformas judiciales. También explora explicaciones alternativas, como la lógica del seguro, la difusión y la ideología política. El análisis se hace con una base de datos original que cubre las reformas a los mecanismos de nombramiento, remoción y duración de los magistrados del tribunal superior en todos los estados de 1985 a 2014

Abstract:

What explains variation in levels of judicial independence across Mexican states? This article establishes a positive link, both theoretical and empirical, between electoral competition and de jure judicial independence. It distinguishes between various logics and measures associating electoral competition with the creation of judicial reforms. It also explores alternative explanations, such as the logic of insurance, diffusion and political ideology. The analysis is conducted using an original database on amendments to the mechanisms for the appointment, removal and tenure of judges of the higher court in every state from 1985 to 2014

Resumen:

El propósito de este texto es describir casi un siglo de trayectoria institucional de diversos aspectos de los poderes judiciales en los estados de México. A partir de una base de datos original de las constituciones de los 31 estados mexicanos y todas sus reformas desde 1917 hasta 2014, describimos sistemáticamente aspectos relativos a los tribunales superiores de justicia, los consejos de la judicatura y los tribunales constitucionales de los estados, el presupuesto del poder judicial y las jurisdicciones especiales. El hallazgo más importante es la interesante diversidad en la arquitectura institucional de los poderes judiciales en los estados durante casi cien años, que incluyen periodos tanto de un régimen autoritario como de uno democrático a nivel nacional

Abstract:

The purpose of this article is to describe almost one century of the institutional trajectory of diverse aspects of the judicial power in the states of Mexico. Drawing on an original database of the constitutions of the 31 Mexican states and all their reforms from 1917 to 2014, we systematically describe aspects relating to the higher courts of justice, the judicial councils and the constitutional courts of the states, the budget of the judicial power, and special jurisdictions. The most important finding is the interesting diversity in the institutional architecture of the judicial powers in the states over almost one hundred years, spanning periods of both an authoritarian and a democratic regime at the national level

Resumo:

O propósito deste texto é descrever quase um século de trajetória institucional de diversos aspectos dos poderes judiciais nos estados do México. A partir de uma base de dados original das constituições dos 31 estados mexicanos e todas suas reformas desde 1917 até 2014, descrevemos sistematicamente aspectos relativos aos tribunais superiores de justiça, aos conselhos do sistema judicial e aos tribunais constitucionais dos estados, ao orçamento do poder judicial e às jurisdições especiais. O achado mais importante é a relevante diversidade na arquitetura institucional dos poderes judiciais nos estados durante quase cem anos, que incluem períodos tanto de um regime autoritário quanto de um democrático no âmbito nacional

Abstract:

We provide a conceptual map of judicial independence and evaluate the content, construct, and convergent validity of 13 cross-national measures. There is evidence suggesting the validity of extant de facto measures, though their proper use requires attention to correlated patterns of measurement error and missing data. The evidence for the validity of extant de jure measures is weaker. Among other findings, we do not observe a strong and direct link between the rules that allegedly promote judicial independence and independent behavior. The results suggest that while the measurement of both de jure and de facto judicial independence requires a careful strategy for measuring latent concepts, the way that scholars should address this issue depends on whether they are targeting the incentives for independent behavior induced by formal rules or independent behavior itself

Abstract:

This essay explores whether the design of justice system institutions helps control corruption. Applying the basic logic of checks and balances to intrabranch institutional design, the main argument is that any justice system where judges and prosecutors, of different ranks and levels, are unchecked actors generates incentives for them to abuse their positions. In other words, while generally judges and prosecutors are considered organs that oversee other branches of government, they may also constitute sources of corruption if left unchecked. The essay offers preliminary evidence on the specific hypotheses derived from the general argument from samples of eighteen Latin American countries and two case studies on Chile and Mexico

Abstract:

Mexico has undergone a peculiar transition to a democracy that in some aspects and places still exhibits traits of the authoritarian past. The combination of authoritarian shades and democratic glares, rich diversity in socioeconomic conditions across the country, and the recent availability of a wealth of information and systematized data make for a great deal of research opportunities for sociolegal scholarship. This article reviews recent sociolegal studies on courts and judicial behavior, public security and the criminal justice system, and legal culture, pointing to several empirical puzzles and open questions that are crying out for explanations and systematic empirical analysis

Abstract:

When and why can constitution-making processes be expected to produce an institutional framework that formally serves constitutionalism? Based on a simple and general typology of constituent processes that captures their legal/political character and dynamic nature, constitution-making processes controlled by one cohesive and organized political group (unilateral) can be distinguished from processes controlled by at least two different political groups (multilateral). A sample of eighteen Latin American countries from 1945 to 2005 shows that multilateral constitution making tends to establish institutional frameworks consistent with constitutionalism

Resumen:

¿Qué explica el nivel de protección judicial de los derechos? En América Latina, por ejemplo, mientras la Corte Constitucional de Colombia o la Cuarta Sala de la Corte de Costa Rica han sido altamente activas en términos de la protección de derechos, la Corte Suprema de México o el Tribunal Constitucional de Chile no lo han sido. ¿Por qué, entonces, solo ciertas cortes constitucionales trabajan activamente en la garantía de derechos? En este trabajo se exploran posibles explicaciones del porqué algunas cortes constitucionales defienden más activamente los derechos establecidos en las constituciones que otras. Estas explicaciones se articulan en torno a tres dimensiones: socio-política, individual, e institucional. Además, se discute y explora la relación que puede darse entre estas tres variables, presenta datos nacionales y sugiere varias hipótesis para futuras investigaciones dentro de este campo de estudio

Abstract:

What explains the level of judicial protection of rights? In Latin America, for example, while the Colombian Constitutional Court or the Costa Rican Sala Cuarta have been highly active in the protection of rights, the Mexican Supreme Court or the Chilean Constitutional Tribunal have not. Why, then, do only some constitutional courts actively engage in upholding rights? This paper classifies different answers to this question along three dimensions: socio-political, personal or ideological, and institutional. The paper discusses and explores relations among variables across the three dimensions, presents some cross-national data, and suggests some hypotheses for future research

Abstract:

Legal reforms that make judges independent from political pressures and empower them with judicial review do not make an effective judiciary. Something has to fill the gap between institutional design and effectiveness. When the executive and legislative powers react to an objectionable judicial decision, the judiciary may be weak and deferential; but coordination difficulties between the elected branches can loosen the constraints on courts. This article argues that the fragmentation of political power can enable a judiciary to rule against power holders' interests without being systematically challenged or ignored. This argument is tested with an analysis of the Mexican Supreme Court decisions against the PRI on constitutional cases from 1994 to 2002. The probability of the court's voting against the PRI increased as the PRI lost the majority in the Chamber of Deputies in 1997 and the presidency in 2000

Abstract:

This article offers a comparative perspective on judicial involvement in policy change in Latin America during the last decade and a half. Drawing on the literature on new institutionalism and the judicialisation of politics, and on case studies from Latin America's two largest countries, we propose a comparative framework for analysing the judicialisation of policy in the region. On the basis of this framework, we argue that institutional structure is a primary determinant of patterns of the judicialisation of policy. In particular, institutional characteristics of the legal system affect the way political actors fight to achieve their policy objectives and the kinds of public justifications used to defend policy reform

Abstract:

This paper studies a lot-sizing and scheduling problem to maximize the profit of assembled products over several periods. The setting involves a plastic injection production environment where pieces are produced using auxiliary equipment (molds) to form finished products. Each piece may be processed in a set of molds with different production rates on various machines. The production rate varies according to the piece, mold and machine assignments. The novelty lies in the problem definition, where the focus is on finished products. We developed a two-stage iterative heuristic based on mathematical programming. First, the lot sizes of the products are determined together with the mold-machine assignments. The second stage determines whether there is a feasible schedule of the molds with no overlapping. If unsuccessful, the heuristic returns to the first stage and restricts the number of machines that a mold can visit, until a feasible solution is found. This decomposition approach allows us to deal with a more complex environment that incorporates idle times and assembly line considerations. We show the advantages of this methodology on randomly generated instances and on data from real companies. Experimental results show that our heuristic converges to a feasible solution within few iterations, obtaining solutions that the companies find competitive both in terms of quality and running times

Abstract:

In developing national epidemiological control strategies, responsible health decision makers must understand the environment in which an epidemic develops, the complex interrelationships of the relevant variables, and their resulting behavior in order to develop comprehensive, effective policies. Systemic decision models can help managers understand the impact of alternative strategies for addressing disasters such as national epidemics. This paper discusses an interactive, systemic decision model developed in the Secretariat of Health of Mexico, at the advisory level, highlighting how the change in decision-making perspective provided valuable insight into strategically managing the control of dengue, a potentially catastrophic epidemic

Abstract:

Purpose - To understand the influence of family ownership and governance mechanisms on Corporate Social Responsibility (CSR) scores through the principal-principal perspective. Design/methodology - Using a random-effects model on a sample of 2,100 board members at 101 Mexican public companies from 2008 to 2020. Findings - Board independence and board committees are positively related to CSR scores. Practical and social implications - The results suggest that stronger governance institutions can enhance CSR: board independence and board committees can be a counterbalancing mechanism serving stakeholders who wish to improve CSR scores. Originality/value - Most CSR research has focused on the determinants and outcomes of CSR. We analyze an unexplored aspect of the governance-CSR relationship: the potential influence of family ownership and governance mechanisms on CSR

Abstract:

Purpose - The purpose of this paper is to understand the influence of family ownership and governance mechanisms on corporate social responsibility (CSR) scores through the lens of the principal-principal (PP) perspective. Design/methodology/approach - Using a random-effects model, the authors sample 2,100 board members across 101 listed Mexican companies from 2008 to 2020. Findings - The paper finds that board independence and board committees are positively related to CSR scores. Practical implications - Results of this paper suggest that stronger governance can enhance CSR: board independence and committees can be a counterbalancing mechanism serving stakeholders aiming to improve CSR scores. Originality/value - Most CSR research has focused on determinants and outcomes of CSR. The authors analyze an unexplored aspect of the corporate governance (CG) and CSR relationship: the potential influence of family ownership and governance mechanisms on CSR

Abstract:

Purpose - To analyze board networks of family and nonfamily firms in Chile, Mexico and Peru. Methodology - Social network analysis allowed us to examine the position of family firms within the network structure at the local and transnational levels. Findings - Family firms have a higher level of interlocks with other firms, especially family firms. In addition, family firms are more likely to occupy brokerage positions in national network structures. Finally, they also have more interlocks with other domestic firms and with firms in nearby geographic regions, thereby creating transnational networks. Originality - We find evidence supporting the three pillars of the interorganizational familiness literature (Lester and Cannella, 2006). Family firms are part of national and international networks more than other types of firms through interlocking board positions

Abstract:

Purpose- The purpose of this study is to analyze interlocking directorate (ID) networks of family and nonfamily firms (FFs) in Chile, Mexico and Peru. Design/methodology/approach- Social network analysis methodology allowed us to analyze the position of FFs within the structure of IDs at the local and transnational levels. Findings- FFs tend to have a higher proportion of board interlocks with other firms, especially FFs. In addition, FFs are more likely to occupy a brokerage position in national ID structures. Finally, they also have a higher proportion of interlocks with other domestic firms and with firms in nearby geographic areas. Thus, they create transnational networks. Originality/value- This paper finds evidence that supports three of the premises of the interorganizational familiness literature (Lester and Cannella, 2006). FFs are part of national as well as international corporate networks more than other types of firms, through interlocking directorships

Abstract:

Purpose - To analyze director networks of family and nonfamily firms in Chile, Mexico and Peru. Methodology - Social network analysis allowed us to examine the position of family firms in the network structure at the local and transnational levels. Findings - Family firms have a higher level of interlocks with other firms, especially family firms. Moreover, family firms are more likely to occupy brokerage positions in national network structures. Finally, they also have more interlocks with other domestic firms and with firms in nearby geographic regions, and so they create transnational networks. Originality - We find evidence supporting the three pillars of the interorganizational familiness literature (Lester and Cannella, 2006). Family firms are part of national and international networks more than other types of firms through the interlocking of board positions

Abstract:

Purpose - To understand whether there are board characteristics that help attract CEO directors. Design/methodology - A panel data model with fixed effects using individual- and firm-level information from 450 public companies in Argentina, Brazil, Chile, Colombia, Mexico and Peru. Findings - Higher levels of foreign master's degrees, board ties, government experience and foreign members are negatively related to the appointment of new CEO directors. Originality/value - The use of a non-performance variable such as CEO experience in the family-firm context of Latin America

Abstract:

Purpose - To understand whether certain board traits can contribute to attracting CEO directors. Design/methodology/approach - Panel data model with firm fixed effects of individual- and firm-level attributes from 450 public firms in Argentina, Brazil, Chile, Colombia, Mexico and Peru. Findings - Higher levels of masters abroad, board ties, government experience and foreign members are all negatively related to the appointment of CEO directors. Originality/value - The use of a non-performance outcome variable, such as CEO experience, in the family-led emerging environment of Latin America

Abstract:

Purpose - To analyze the differential effects that institutions have on IPO (initial public offering) activity at the country level. Design/methodology - On a sample of 64 countries over a 15-year period (2000-14), we test the variables rule of law, uncertainty avoidance and masculinity on subsamples of developed (27) and emerging (37) countries to explore their influence on the number of IPOs at the country level. Findings - In developed countries, uncertainty avoidance and masculinity are significant. Within emerging countries, uncertainty avoidance and rule of law are significant

Abstract:

Purpose. The purpose of this paper is to analyze the differential effects that institutions have on country IPO activity. Design/methodology/approach. With a sample of 64 countries over a 15-year period (2000-2014), the authors test the variables rule of law, uncertainty avoidance and masculinity on subsamples of developed (27) and emerging (37) countries to explore their influence on domestic IPO activity level. Findings. For developed countries, only uncertainty avoidance and masculinity are significant. Within emerging countries, it is uncertainty avoidance and rule of law that are significant

Abstract:

Purpose - To analyze the differential effects that institutions have on country IPO (initial public offering) activity. Design/methodology/approach - On a sample of 64 countries over a 15-year period (2000-14), we test the variables rule of law, uncertainty avoidance and masculinity on subsamples of developed (27) and emerging (37) countries to explore their influence on the level of domestic IPO activity. Findings - For developed countries, only uncertainty avoidance and masculinity are significant. In emerging countries, uncertainty avoidance and rule of law are significant

Abstract:

To explore the association between subsidiary risk and the board-level attributes of international experience, government experience and independence (outsiders)

Abstract:

This paper aims to explore the association between firm subsidiary risk and the board composition attributes of international experience, government experience and independence (outsiders)

Abstract:

To explore the relationship between the risk of subsidiary firms and board composition characteristics relating to international experience, government experience and the independence of board members

Abstract:

The purpose of this paper is to test the relationship between board and top management team (TMT) members’ international experience and CEO multinationality, with their firm’s degree of internationalization. Through the lenses of upper echelon theory, on a sample of 108 European and US firms, the author tests the variables “international experience” and “CEO multinationality”, at the board and at the TMT levels

Abstract:

Through the lens of resource-dependence theory, we use the boards of 108 US and European firms to test the following variables: percentage of women, outsiders, government experience, and international members on firm internationalization. We find that the percentage of females and international members both have a positive effect on firm international diversification. Contrary to our expectations, we find that it is the percentage of insiders – and not outsiders as hypothesized – that is related to internationalization

Abstract:

Using a mixed US and European sample where both the board and TMT units are analyzed at the same time, we test the variables age, tenure and functional background. We extend the information/decision-making perspective (Williams & O'Reilly, 1998) to both TMTs and boards by analyzing them as separate but related entities. In line with our hypotheses, we find a positive effect on internationalization for functional background diversity of both boards and TMTs. Contrary to our expectations, longer TMT tenure and younger board age also have a positive effect on internationalization

Abstract:

Drawing on the Upper Echelons perspective, we find that international experience and the average tenure of the board are positively related and the average age of the directors is negatively related to internationalization. We find no relationship between the functional background and former government experience of the board and internationalization

Abstract:

Having in mind some applications in operator theory, we extend the class of shearlet-based non-homogeneous Triebel–Lizorkin spaces, originally introduced in Vera (Appl Comput Harmon Anal 12:130–150, 2013). The extension consists in allowing the value p = ∞ as one of the parameters describing the spaces. We also establish some basic properties of these new spaces, including the fact that for a particular set of parameters one gets a properly larger space than the space of local bounded mean oscillation functions from Goldberg (Duke Math J 46:27–42, 1979)

Abstract:

We prove that a condition of L^q-regularity of a Dirichlet problem associated to second order divergence form parabolic equations implies the A_∞ property of the corresponding parabolic measure, over a class of non-cylindrical domains

Abstract:

The author reviews the implications and influence that Wittgenstein's works have had, and continue to have, on philosophy and the social sciences, through an analysis of the "first" and "second" Wittgenstein and of his change of approach with respect to language and communication

Abstract:

The author reflects on the implications and influences that Wittgenstein's work has had, and continues to have, on Philosophy and Social Sciences. This is accomplished by analyzing both the "first" and the "second" Wittgenstein, as well as his change in focus regarding language and communication

Abstract:

The objective of our investigation was to design a formal mentoring program for novice professors who come from another culture and are recent graduates of a doctoral program. We studied a sample of eight international novice professors in the program to demonstrate its effectiveness. What distinguishes this program from others is that it offers mentoring to help professors both improve the quality of their instruction and adapt to the culture of the country and of the university. The methodology used was a case study with a pre-test, intervention, post-test design. The professors who participated in the mentoring program demonstrated an improvement of 0.95 points (on a five-point scale) in their student evaluations. The principal causes of deficient teaching performance among the novice international faculty were the absence of pedagogical knowledge and the lack of teaching experience, as well as a lack of familiarity with the country's culture and the organizational culture of the university. Our study shows that a mentoring program can help improve low student evaluations of novice professors who come from another culture and are recent graduates of a doctoral program

Abstract:

What accounts for income per capita and total factor productivity (TFP) differences across countries? We study resource misallocation across heterogeneous production units in a general equilibrium model where establishment productivity and size are affected by policy distortions. We solve the model in closed form and show that policy distortions have a substantial negative effect on establishment productivity growth, average establishment size, and aggregate productivity. Calibrating a distorted benchmark economy to U.S. data, we find that empirically reasonable variations in distortions generate reductions in aggregate TFP of more than 24 percent while slightly increasing concentration in the establishment size distribution. If distortions in addition lower the exit rate of incumbent establishments, as supported by some empirical evidence, the aggregate TFP loss doubles to 48 percent

Abstract:

We study the impact of firing costs on aggregate total factor productivity (TFP) in a dynamic general-equilibrium framework where the evolution of establishment-level productivity is not invariant to the policy. Firing costs not only generate static factor misallocation, but also distort the selection of establishments' growth by size, contributing to larger aggregate TFP losses. Numerical experiments indicate that firing costs equivalent to 5 years' wages imply a reduction in TFP of more than 20%. Factor misallocation accounts for 20% of the productivity loss, whereas the remaining 80% arises from distorted selection in the productivity process

Abstract:

Unwanted catches can be reduced by improving effectiveness in selecting target species and sizes, and by banning their sale for human consumption. The landing obligation introduced by the European Union (EU) can be understood as a combination of both types of measures. The aim of this article is to analyse the effects of these two types of policies applied to the Southern Iberian Hake fishery. To this end, the reference points associated with a mixed fishery were computed for the two policies as the steady-state solution of a dynamic optimal management problem. Our results show that measures that improve fishing selectivity produce better outcomes than sales bans, increasing yield and stock while reducing discards. Specifically, we find that reducing the selectivity parameters by 90% for the three youngest ages multiplies hake yield by almost six, while lowering the discard rate by more than 20 percentage points. In turn, our results also show that banning the sale of the two youngest ages increases hake yield by 21% while also raising the discard rate by 7 percentage points

Abstract:

Unwanted catches can be reduced by improving fishing effectiveness in targeting species and sizes and by banning their sale for human consumption. The landing obligation introduced by the European Union can be seen as a combination of these two measures, and the aim of this paper is to analyse its effects on the Southern Iberian Hake Stock fishery. To this end, reference points for a mixed fishery are computed under the two measures as the steady-state solution of a dynamic optimal management problem. Our results show that measures that improve selectivity obtain better results than sales ban strategies in terms of increasing yields and stocks and reducing discards. In particular, we find that reducing the selectivity parameters by 90% for the three early ages leads to an almost six-fold increase in the hake yield and lowers the discard rate by more than 20 percentage points. Banning the sale of the two youngest ages also increases hake yield by 21% and the discard rate by 7 percentage points

Abstract:

The paper develops and analyses a dynamic general equilibrium model with heterogeneous agents that can be used for assessment of the economic consequences of fish stock-rebuilding policies within the EU. In the model, entry and exit processes for individual plants (vessels) are endogenous, as well as output, employment and wages. This model is applied to a fishery of the Mediterranean Sea. The results provide both individual and aggregate data that can help managers in understanding the economic consequences of rebuilding strategies. In particular, this study shows that, for the application presented, all aggregate results improve if the stock rebuilding strategy is followed, while individual results depend on the indicator selected

Abstract:

A methodology that endogenously determines catchability functions linking fishing mortality with contemporaneous stock abundance is presented. We consider a stochastic age-structured model for a fishery composed of a number of fishing units (fleets, vessels or métiers) that optimally select the level of fishing effort to be applied, taking total mortalities as given. The introduction of a balance constraint, which guarantees that total mortality is equal to the sum of the individual fishing mortalities optimally selected, enables total fishing mortality to be determined as a combination of contemporaneous abundance and the stochastic processes affecting the fishery. In this way, future abundance can be projected as a dynamic system that depends on contemporaneous abundance. The model is generic and can be applied to several issues of fisheries management. In particular, we illustrate how to apply the methodology to assess the floating band target management regime for controlling fishing mortalities, which is inspired by the new multi-annual plans. Our results support this management regime for the Mediterranean demersal fishery in Northern Spain

Abstract:

Concerns over the re-distributive effects of individual transferable quotas (ITQs) have led to restrictions on their tradability. We consider a general equilibrium model with firm dynamics to evaluate the redistributive impact of changing the tradability of ITQs. A change in tradability would happen, for example, if permits are allowed to be traded as a separate asset from ownership of an active firm. If the property right is associated with ownership of an active firm, the permit can be leased in each period but it is not possible to exit the industry and keep the right. However, allowing the permits to be traded as a separate asset has two effects. First, it leads to a greater concentration of production in the industry. Second, it directly converts a non-tradable asset into a tradable one, which is equivalent to giving a lump-sum transfer to all firms. The first effect implies a concentration in revenues, while the second implies a redistribution of wealth. We calibrate our model to match the observed increase in revenue inequality in the Northeast Multispecies (Groundfish) U.S. Fishery. We show that although observed revenue inequality, measured by the Gini coefficient, increases by 12%, wealth inequality is reduced by 40%
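
For reference, the Gini coefficient used above as the inequality measure can be computed from a vector of revenues (or wealth) with the standard sorted-rank formula; this is generic code, not taken from the paper.

```python
def gini(values):
    """Gini coefficient of a nonnegative sample, via the sorted-rank formula:
    G = 2 * sum_i i * x_(i) / (n * sum x) - (n + 1) / n, x sorted ascending."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * sum(xs)) - (n + 1.0) / n

print(round(gini([1, 1, 1, 1]), 3))   # 0.0: perfect equality
print(round(gini([0, 0, 0, 10]), 3))  # 0.75: one firm holds everything
```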

Abstract:

This paper analyzes the impact of reducing fisheries subsidies in a general equilibrium model for a fishery with heterogeneous vessels. It considers the impact of the stock effect, which determines the participation of vessels under a likely increase in stock abundance. In equilibrium, the productivity of the fleet is endogenous, as it depends on the stock of fish along the equilibrium path. The model concludes that the impact of a subsidy drop will depend on the stock effect. If that effect is large, fishing firms will benefit from the stock recovery and the elimination of the subsidy will increase future returns on investment. The model is applied to industrial shrimp fisheries in Mexico. It is shown that the complete elimination of a subsidy increases biomass, capitalisation, marginal productivity, and consumption, and reduces inequality when the effect of the induced increase in the stock is considered. However, if that effect is not considered, capital and consumption decrease, and inequality, and hence the social costs of a subsidy drop, increase

Abstract:

This paper explores the benefits of including age structure in the harvest control rule (HCR) when decision makers regard their (age-structured) models as approximations. We find that introducing age structure into the HCR reduces both the volatility of the spawning biomass and that of the yield. Although the benefits are smaller at fairly imprecise assessment levels, there are still major advantages at the actual precision with which the case study is assessed. Moreover, we find that when age structure is included in the HCR, the relative ranking of different policies in terms of variance in biomass and yield does not change. These results are shown both theoretically and numerically by applying the model to the Southern Hake fishery
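
For context, here is what a purely biomass-based HCR looks like before age structure is introduced: the generic "hockey-stick" advice rule, not the paper's age-structured rule, with illustrative parameter values.

```python
def hockey_stick_hcr(biomass, f_target, b_trigger, b_lim):
    """Classic biomass-based harvest control rule: full target fishing
    mortality above b_trigger, a linear ramp down between b_lim and
    b_trigger, and fishery closure below b_lim."""
    if biomass >= b_trigger:
        return f_target
    if biomass <= b_lim:
        return 0.0
    return f_target * (biomass - b_lim) / (b_trigger - b_lim)

for b in (10, 40, 80, 120):  # illustrative biomass levels
    print(b, round(hockey_stick_hcr(b, f_target=0.3, b_trigger=100, b_lim=25), 3))
```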

Abstract:

International fisheries agencies recommend exploitation paths that satisfy two features. First, for precautionary reasons, exploitation paths should avoid high fishing mortality in those fisheries where the biomass is depleted to a degree that jeopardises the stock's capacity to produce the Maximum Sustainable Yield (MSY). Second, for economic and social reasons, captures should be as stable (smooth) as possible over time. In this article we show that a conflict between these two interests may occur when seeking optimal exploitation paths using an age-structured bioeconomic approach. Our results show that this conflict can be overcome by using non-constant discount factors that value future stocks according to their relative intertemporal scarcity

Abstract:

This article presents a model in which agents must make decisions about formalizing property rights over land. The model shows the existence of an equilibrium in which agents producing on open-access land coexist with agents producing on privately owned land. This equilibrium is inefficient because the effort devoted to formalizing land is lower than necessary

Abstract:

We characterize the stationary competitive equilibrium in a model in which private decisions have to be made to define effective property rights on land. We show that there is an interior competitive equilibrium in which there will be some agents producing in free access lands and others in private property land. This equilibrium is inefficient because too few plots of land are enclosed

Abstract:

Large variation in stock dynamics affects the accuracy of stock estimates, which fisheries managers rely on when determining quotas and other regulations. The purpose of this paper is to investigate the implications of uncertainty for pulse fishing. We show that as the variance of the random natural fluctuations increases, the optimal pulse length decreases and converges toward a constant-escapement policy. Hence, in fisheries with large natural variability, a constant-escapement policy is a good approximation of the optimal policy
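
A toy simulation contrasting the two policies discussed above, constant escapement versus pulse harvesting, under random natural fluctuations; the logistic growth model, noise levels, and policy parameters are illustrative assumptions, not the paper's calibration.

```python
import random

random.seed(1)

def grow(stock, noise_sd):
    """Logistic growth with multiplicative noise (toy model, r=0.6, K=1)."""
    next_stock = stock + 0.6 * stock * (1 - stock)
    return max(1e-6, next_stock * (1 + random.gauss(0, noise_sd)))

def constant_escapement(stock, target=0.5):
    """Harvest everything above the escapement target."""
    return max(0.0, stock - target)

def pulse(stock, t, period=4, frac=0.8):
    """Harvest a fixed fraction of the stock every `period` years."""
    return frac * stock if t % period == 0 else 0.0

for noise_sd in (0.05, 0.4):  # low vs high natural variability
    s_ce = s_p = 0.5
    h_ce = h_p = 0.0
    for t in range(1, 201):
        c = constant_escapement(s_ce); s_ce = grow(s_ce - c, noise_sd); h_ce += c
        c = pulse(s_p, t);             s_p = grow(s_p - c, noise_sd);  h_p += c
    print(f"noise={noise_sd}: escapement yield={h_ce:.1f}, pulse yield={h_p:.1f}")
```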

Abstract:

Global warming of the oceans is expected to alter the environmental conditions that determine the growth of a fishery resource. Most climate change studies are based on models and scenarios that focus on economic growth, or they concentrate on simulating the potential losses or cost to fisheries due to climate change. However, analysis that addresses model optimisation problems to better understand the complex dynamics of climate change and marine ecosystems is still lacking. In this paper, a simple algorithm to compute transitional dynamics in order to quantify the effect of climate change on the European sardine fishery is presented. The model results indicate that global warming will not necessarily lead to a monotonic decrease in the expected biomass levels. Our results show that if the resource is exploited optimally, then in the short run, increases in the surface temperature of the fishery ground are compatible with higher expected biomass and economic profit

Abstract:

Reaction-diffusion systems arise in many different areas of the physical and biological sciences, and traveling wave solutions play special roles in some of these applications. In this paper, we develop a variational formulation of the existence problem for the traveling wave solution. Our main objective is to use this variational formulation to obtain exact and approximate traveling wave solutions with error estimates. As examples, we look at the Fisher equation, the Nagumo equation, and an equation with a fourth-degree nonlinearity. Also, we apply the method to the multi-component Lotka-Volterra competition-diffusion system
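
For the Fisher equation u_t = u_xx + u(1 - u), the ansatz approach is known to produce a closed-form traveling wave; the classical Ablowitz-Zeppetella solution is quoted here for illustration rather than taken from the paper:

```latex
u(x,t) = \left[\, 1 + C\, e^{(x - ct)/\sqrt{6}} \,\right]^{-2},
\qquad c = \frac{5}{\sqrt{6}}, \quad C > 0,
```

a front moving at the special speed 5/√6 that connects the state u = 1 behind the wave to u = 0 ahead of it.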

Abstract:

In this paper, we address the problem of recovering the local volatility surface from option prices consistent with observed market data. We revisit the implied volatility problem and derive an explicit formula for the implied volatility together with bounds for the call price and its derivative with respect to the strike price. The analysis of the implied volatility problem leads to the development of an ansatz approach, which is employed to obtain a semi-explicit solution of Dupire’s forward equation. This solution, in turn, gives rise to a new expression for the volatility surface in terms of the price of a European call or put. We provide numerical simulations to demonstrate the robustness of our technique and its capability of accurately reproducing the volatility function
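
A generic way to recover implied volatility numerically, consistent with the monotonicity of the call price in the volatility that such analyses exploit; this bracketing inversion is standard practice, not the authors' explicit formula or ansatz.

```python
from math import log, sqrt, exp
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(s, k, r, t, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm.cdf(d1) - k * exp(-r * t) * norm.cdf(d2)

def implied_vol(price, s, k, r, t):
    """Invert the call price for sigma; the price is strictly increasing
    in sigma, so a bracketing root-finder is robust."""
    return brentq(lambda sig: bs_call(s, k, r, t, sig) - price, 1e-6, 5.0)

p = bs_call(100, 105, 0.02, 0.5, 0.25)
print(round(implied_vol(p, 100, 105, 0.02, 0.5), 6))  # recovers 0.25
```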

Abstract:

In this article, we use a Mellin transform approach to prove the existence and uniqueness of the price of a European option under the framework of a Black–Scholes model with time-dependent coefficients. The formal solution is rigorously shown to be a classical solution under quite general European contingent claims. Specifically, these include claims that are bounded and continuous, and claims whose difference with some given but arbitrary polynomial is bounded and continuous. We derive a maximum principle and use it to prove uniqueness of the option price. An extension of the put-call parity which relates the aforementioned two classes of claims is also given

Abstract:

We show that the problem of recovering the time-dependent parameters of an equation of Black-Scholes type can be formulated as an inverse Stieltjes moment problem. An application to the problem of implied volatility calculation in the case when the model parameters are time varying is provided and results of numerical simulations are presented

Abstract:

In this note we provide a simple derivation of an explicit formula for the price of an option on a dividend-paying equity when the parameters in the Black–Scholes partial differential equation (PDE) are time dependent. With the aid of general transformations, the option value is expressed as a product of the Black–Scholes price for an option on a non-dividend-paying equity with constant parameters, the ratio of the strike price in the time-varying case to the strike price in the constant-parameter case, and a modified discount factor containing a parametrised time variable
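
In the simplest no-dividend case, the reduction that such transformations deliver is the textbook averaging identity below; the note's full formula additionally carries the strike-price ratio and the modified discount factor needed for the dividend-paying case.

```latex
\bar{r} = \frac{1}{T - t} \int_{t}^{T} r(s)\, ds,
\qquad
\bar{\sigma}^{2} = \frac{1}{T - t} \int_{t}^{T} \sigma^{2}(s)\, ds,
\qquad
V(S, t) = V_{\mathrm{BS}}\!\left(S, t;\, \bar{r}, \bar{\sigma}\right),
```

that is, the price with time-varying coefficients equals a constant-parameter Black–Scholes price evaluated at the averaged rate and variance.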

Abstract:

In this paper, we consider the class of equations u_t = [F(x, u)u_x + G(x, u)]_x + H(x, u). Using hodograph and dependent variable transformations, we determine sufficient conditions on F, G, and H such that this equation is linearizable. We also derive a general quasilinear equation, which includes the Clarkson-Fokas-Ablowitz equation (SIAM J. Appl. Math. 49 (1989), 1188-1209), that can be transformed into semilinear form

Abstract:

The KPP–Fisher equation was proposed by R. A. Fisher as a model to describe the propagation of advantageous genes. Subsequently, it was studied rigorously by Kolmogorov, Petrovskii, and Piskunov. In this paper, we study the dynamics of the KPP–Fisher equation in bounded domains by giving bounds on its solution. The bounding functions satisfy nonlinear equations which are linearizable to the heat equation. In addition to describing the dynamics of the KPP–Fisher equation, we also recover some previous results concerning its asymptotic behavior. We perform numerical simulations to compare the solution of the Fisher equation and the bounding functions

Abstract:

We study the dynamics of fronts arising in the KPP-Fisher equation, proposed by Fisher in 1936 to model the propagation of a mutant gene and subsequently studied rigorously in the seminal work of Kolmogorov, Petrovskii, and Piskunov. The approach is via a comparison theorem, where the comparison functions satisfy equations which are linearizable to the heat equation. In some sense, we have obtained a "linearization" of the KPP-Fisher equation

Abstract:

In this paper, we consider a two-component competition-diffusion system of Lotka-Volterra type which arises in mathematical ecology. By introducing an appropriate ansatz, we look for exact travelling and standing wave solutions of this system

Abstract:

We re-examine the representative agent’s optimal consumption and savings under uncertainty in the presence of investment constraints using martingale representation and convex analysis techniques. This framework allows us to explicitly quantify precautionary savings which induces a higher average growth rate than in a certainty setup. We provide a closed form solution for a Cobb–Douglas economy. The effect of uncertainty on portfolio selection is analyzed. Consumption growth rate and risk free interest rate exhibit a U-shaped relationship. Uncertainty negatively affects expected consumption growth rate; such a result seems to be supported by empirical evidence

Abstract:

It is usually assumed that populism is hostile towards international law. However, this assumption is based on a particular version of populism (far-right) and of international law itself (that associated with the liberal international order). A closer look at the many manifestations of populism actually reveals that different approaches to international law have been present across time and space. Taking Latin America as the case study, this chapter shows that governments which have been labelled 'populist' have proactively advocated certain conceptions of international law. Conscious of their status in the semi-periphery, Peronismo in Argentina promoted a Cold War doctrine of equidistance from the major powers based on efforts of regional legal integration, while the government of Echeverría in Mexico became the major force behind the New International Economic Order (NIEO). Both represent early Third World Approaches to International Law (TWAIL). Based on Andean indigenous ontology, the government of Evo Morales in Bolivia has advanced an emerging global law of nature. Hence, reducing populism to a parochial ideology unfriendly to international law is partly grounded in an outdated vision of international law that has not yet learned, or does not want to recognise, that there are several possible versions of it

Abstract:

This paper presents an analytical pathological appraisal of elite thinking and mobilization in Mexico after the Trump administration withdrew from TPP and forced Mexico to engage in NAFTA renegotiations. It examines the strategies developed by political and economic elites in response to the threat of trade war coming from Mexico’s most important trade partner. This analysis shows that, although Mexican elites have developed sophisticated heuristics in order to confront the immediate challenge from the government in Washington DC, they are not engaging in what should be a very important debate about Mexico’s role in the reconfigurations of global trade and order. This is a missed opportunity for Mexico which could affect its role in the ongoing reconfigurations of global trade and law, and thus its future stature in world politics

Abstract:

Many scenarios of conflict in the global south are economically driven but have experienced such extreme forms of violence that commentators have reviewed the law of non-international armed conflict (NIAC) in light of the 'war on drugs' in several countries of Latin America, and elsewhere. This article also addresses this issue, but with a view to better understanding the relationship between law and violence. International Humanitarian Law (IHL) is not understood here as a mere set of neutral normative hypotheses which may or may not apply to factual situations, but rather as an important means through which war is constructed. This relationship is reviewed in the framework of the struggle of the Mexican government against powerful drug cartels, as well as among the latter, in particular during the administration of President Calderón, from 2006 to 2012. It does so by analysing the legal narratives that have structured the discourse about Mexico's violence. By simplifying its complexity, these narratives facilitate the qualification of the situation as internal war. This is contestable as a matter of lex lata. Moreover, it is counterproductive, as it reinforces the strategic shifts between the law of war and the law of peace. The article concludes by arguing that acknowledging the complexities of these conflicts is of the utmost importance for IHL, since they bring to light the contingent character of some of its structuring categories. This sounds self-defeating, but it is necessary to prevent this body of law from reinforcing what it seeks to contain

Abstract:

Non-permanent members of the United Nations Security Council experience clear and well-known limits. Yet, there are certain tools at their disposal which, beyond lucky political constellations, allow them to exercise a more systemic influence on the Council’s work and outcomes. These tools are of a juridical nature, often established and developed through the organ’s practice, but their efficient use depends primarily on diplomatic expertise and imagination channeled through informal venues. The present article shows how said tools have been used in the case of the promotion of the ‘international rule of law’. However contested the concept and restricted its practical consequences on the organ’s functions, the evolution of its promotion within the Security Council is both a demonstration of and a further vehicle for non-permanent members’ influence on this body. That this in turn serves to legitimate the Council under its current configuration can be seen critically. However, it seems important to underline that the UN Security Council’s efficiency depends ever more on the legitimacy that non-permanent members can best imprint on it. In a non-polar world, this tendency can be expected to increase

Abstract:

The conclusion of the International Law Commission's work on "fragmentation" did not exhaust the debate. To date, the discussion remains divided between those who see in it a risk to the coherence of the international legal order, and those who speak of a phenomenon in keeping with the times we live in, one that allows international law to respond in a more focused way to new global risks. Given this, what should our view be in Mexico? After presenting a general overview of the different positions that exist on the matter, this article offers some reflections from a Mexican perspective

Abstract:

The debate on “fragmentation” has not been exhausted with the conclusion of the International Law Commission’s work on the subject. To date, discussions are still divided between those who see in it a threat to the coherence of the international legal order, and those who rather speak of a phenomenon that goes along with our times, allowing international law to deal with global risks in a more focused way. But what do we have to say to all this in Mexico? After rendering a general overview of different existing positions, the present article offers some reflections from a Mexican perspective

Abstract:

The fact that the International Law Commission has concluded its work on the fragmentation of international law does not mean that the debate on this subject is closed. Even now, the debate pits those who think that fragmentation represents a risk to the coherence of the international legal order against those who consider it a phenomenon of our times that allows international law to react more precisely to global threats. In this context, what position should Mexico take? After a general overview of the different positions adopted on this subject, this article offers some reflections from a Mexican perspective

Abstract:

The last war in Iraq brought to light the phenomenon known as the coalition of the willing. However, the ad hoc military alliance led by the United States of America was not its first manifestation, nor is it the only type of coalition acting on the international scene. Although the first precedents can be traced to the beginning of the twentieth century, coalitions continue to evolve as demanded by the new global issues they confront. This article is an attempt to identify their essential properties. At the same time, it starts from the need to place them in context, in order to show that, despite their political nature, they have a decisive impact on the system of sources of international law and on national legislation

Abstract:

The last war in Iraq has brought to light the phenomenon called the coalition of the willing. Nevertheless, the ad hoc military alliance led by the United States was not its first manifestation, nor is it the only type of coalition acting on the international scene. Although their first appearances can be found at the beginning of the twentieth century, coalitions continue to evolve as demanded by the new global issues they face. This article is an attempt to identify their essential properties. At the same time, it endeavours to put them into context in order to demonstrate that, despite their political nature, coalitions have a concrete impact on the sources of international law, as well as on national legislation

Abstract:

The last war in Iraq brought to light the phenomenon called the coalition of the willing. However, this ad hoc military alliance under the leadership of the United States was not its first manifestation; moreover, it is not the only type of coalition that exists on the international scene. While the first manifestations of the phenomenon can be traced to the beginning of the twentieth century, coalitions have evolved in response to the new issues on the global agenda that they seek to confront. This article attempts to identify their essential characteristics. It also places the different coalitions in their own context, which shows that, despite their political nature, coalitions have real effects on the system of sources of international law as well as on national legislation

Abstract:

A fractionally integrated panel data model with multi-level cross-sectional dependence is proposed. Such dependence is driven by a factor structure that captures comovements between blocks of variables through top-level factors, and within these blocks by non-pervasive factors. The model can include stationary and non-stationary variables, which makes it flexible enough to analyze relevant dynamics that are frequently found in macroeconomic and financial panels. The estimation methodology is based on fractionally differenced block-by-block cross-sectional averages. Monte Carlo simulations suggest that the procedure performs well in typical sample sizes. This methodology is applied to study the long-run relationship between energy consumption and economic growth. The main results suggest that the estimates in some empirical studies may have positive biases caused by neglecting the presence of non-pervasive cross-sectional dependence and long-range dependence processes
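
The fractional differencing step underlying such estimators can be sketched with the expanding binomial weights of the operator (1 - L)^d; this is illustrative code, not the authors' implementation.

```python
def frac_diff(x, d):
    """Apply (1 - L)^d to a series using the expanding binomial weights
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    # Convolve the weights with the available past of the series.
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(n)]

series = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0]
print([round(v, 3) for v in frac_diff(series, d=0.4)])
```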

Abstract:

The nature and novelty of crypto markets have given rise to speculative bubbles, which have permeated almost all cryptocurrencies. This paper shows that the log-periodic model with conditional heteroscedasticity structures has predictive capability for estimating the most likely crash date of cryptocurrency bubbles. We use the 2017 bitcoin bubble to perform the primary analysis and date a potential crash just four days before the price peak. We detect the crash date a month before Bitcoin prices reach their highest value. The bitcoin price fell 30% two weeks after reaching its maximum value. Robustness exercises include the Ether bubble in 2021 and others in Bitcoin's history to show that the model can be helpful to crypto investors
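
The log-periodic structure in question is usually written in the Johansen-Ledoit-Sornette form below; the paper layers the conditional heteroscedasticity on top of this deterministic component.

```latex
\ln p(t) = A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m} \cos\!\bigl(\omega \ln(t_c - t) - \phi\bigr),
\qquad t < t_c,
```

where t_c is the most likely crash date, 0 < m < 1 produces the super-exponential trend, and ω controls the frequency of the log-periodic oscillations.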

Abstract:

This paper tests whether air pollution serves as a carrier for SARS-CoV-2 by measuring the effect of daily exposure to air pollution on its spread, using panel data models that incorporate a possible commonality between municipalities. We show that contemporaneous exposure to particulate matter is not the main driver behind the increasing number of cases and deaths in the Mexico City Metropolitan Area. Remarkably, we also find that the cross-dependence between municipalities in the Mexican region is highly correlated with public mobility, which plays the leading role behind the rhythm of contagion. Our findings are particularly revealing given that the Mexico City Metropolitan Area did not experience a decrease in air pollution during COVID-19 induced lockdowns

Abstract:

This paper studies long economic series to assess the long-lasting effects of pandemics. We analyze whether periods that cover pandemics show a change in the trend and persistence of growth, and in the level and persistence of unemployment. We find that there is an upward trend in the persistence level of growth across centuries. In particular, shocks originated by pandemics in recent times seem to have a permanent effect on growth. Moreover, our results show that the unemployment rate increases and becomes more persistent after a pandemic. In this regard, our findings support the design and implementation of timely counter-cyclical policies to soften the shock of the pandemic

Abstract:

We introduce a novel multilevel factor model that allows for the presence of global and pervasive factors, local factors and semi-pervasive factors, and that captures common features across subsets of the variables of interest. We develop a model estimation procedure and provide a simulation experiment addressing the consistency of our proposal. We complete the analyses by showing how our multilevel model might explain the commonality across CDS premiums at the global level. In this respect, we cluster countries by either the Debt/GDP ratio or by sovereign ratings. We show that multilevel models are easier to interpret than factor models based on principal component analysis. Finally, we examine how the multilevel model might allow the recovery of the risk contribution due to the latent factors within a basket of country CDS

Abstract:

The increased presence of Canadians in Mexico has been apparent during the last two decades, especially in certain localities. The evolution and the most recent characteristics of this population were analyzed through a comparative and complementary analysis of data from census information and immigration administrative records. The study results confirm the steady increase of Canadians in Mexico, as well as the diversification of this population in terms of migratory status and the areas and territories where it is present. Three main groups are identified: a) the employed population; b) retirees; and c) children under 16 years of age. Also notable are long-stay tourists, known as snowbirds. It is concluded that these immigration patterns are essentially associated with the integrationist model in North America (NAFTA), the opening to foreign investment and tourism in Mexico, the increase in the retired population in Canada, and the historical ties between the Mennonite communities in the two countries

Abstract:

We present background on the validation of a cognitive model for learning the vector space R2. As a finding, we highlight the role played by associating a pair of real numbers with a homogeneous linear equation (in two unknowns) in inducing an algebraic structure on its solution set. In addition, we provide evidence of how the use of a parameter to write a solution of a homogeneous linear equation is an important factor that highlights the scaling of a solution by a scalar as an operation associated with the solution set of a homogeneous linear equation. All of the above is closely related to the construction of the vector space R2

Abstract:

We present background information on the validation of a cognitive model for the learning of the vector space R2. As a result, we highlight the role played by the association of a pair of real numbers with a homogeneous linear equation (with two unknowns) in inducing an algebraic structure on the solution set. Furthermore, we present evidence of how the use of a parameter to write a solution of a homogeneous linear equation is an important factor that highlights the product of a solution by a scalar as an operation that is associated with the solution set of a homogeneous linear equation. All of this has a close connection with the construction of the vector space R2

Abstract:

We present the background of a cognitive model for learning the vector space R2 and its validation. As a contribution, we highlight the role played by associating a pair of real numbers with a homogeneous linear equation (with two unknowns) in inducing an algebraic structure on its solution set. We also present evidence on the use of a parameter to write a solution of a homogeneous linear equation as an important factor that highlights the scaling of a solution by a real number as an operation associated with the solution set of a homogeneous linear equation. These results are closely related to the construction of the vector space R2

Abstract:

We present background on the validation of a cognitive model for learning the vector space R2. We highlight the role played by associating a pair of real numbers with a homogeneous linear equation (in two unknowns) in inducing an algebraic structure on its solution set. We also show evidence of how the use of a parameter to write a solution of a homogeneous linear equation is an important factor that highlights the scaling of a solution by a scalar as an operation associated with the solution set of a homogeneous linear equation. All of the above is closely related to the construction of the vector space R2

Abstract:

The Mexico City Metrobus is one of the most popular forms of public transportation in the city, and since its opening in 2005 it has become a vital piece of infrastructure, playing a crucial role in moving millions of passengers every day; this is why the optimal functioning of the system is of key importance to Mexico City. This paper presents a model to simulate Line 1 of the Mexico City Metrobus, which can be adapted to simulate other bus rapid transit (BRT) systems. We give a detailed description of the model development so that the reader can replicate our model. We developed various response variables to evaluate the system's performance, focused on passenger satisfaction: they measure the maximum occupancy that a passenger experiences inside the buses, as well as the time spent in the queues at the stations. The results of the experiments show that it is possible to increase passenger satisfaction by considering different combinations of routes while maintaining the same fuel consumption. It was shown that, with an appropriate combination of routes, average passenger satisfaction could surpass the satisfaction levels obtained by a 10% increase in total fuel consumption
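
A deliberately simplified, time-stepped sketch of the two response variables described (queue waiting time and maximum on-board occupancy); headway, capacity, and arrival rates are made-up values, not calibrated to Metrobus Line 1.

```python
import random

random.seed(7)

N_STOPS, CAPACITY, HEADWAY, HORIZON = 8, 160, 5, 240  # stops, pax, min, min
ARRIVAL_MAX = 8  # up to 8 passengers join each queue per minute (toy rate)

queues = [0] * N_STOPS
person_minutes_waiting = boarded = max_occupancy = 0

for minute in range(HORIZON):
    for s in range(N_STOPS):
        queues[s] += random.randint(0, ARRIVAL_MAX)
    person_minutes_waiting += sum(queues)  # queued passengers wait one minute
    if minute % HEADWAY == 0:
        # A bus sweeps the whole line (travel time ignored in this toy model).
        load = 0
        for s in range(N_STOPS):
            load -= random.randint(0, load)         # some passengers alight
            take = min(queues[s], CAPACITY - load)  # board up to capacity
            queues[s] -= take
            load += take
            boarded += take
            max_occupancy = max(max_occupancy, load)

print("passengers boarded:", boarded)
print("max on-board occupancy:", max_occupancy)
print("avg wait per boarded passenger (min):",
      round(person_minutes_waiting / max(boarded, 1), 2))
```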

Abstract:

A major goal in various fields has been the development of believable, intelligent, and social Autonomous Agents (AAs) whose behavior is influenced by affective signals. This endeavor has promoted the development of cognitive architectures for AAs that incorporate processes that imitate those of human cognition and emotions. However, there is still a need for appropriate environments in such agent architectures for the modeling of the interaction between emotional and cognitive components. In this paper, we address the following research question: how to model the interaction of emotion and cognition in agent architectures so that AAs are capable of generating consistent emotional states and displaying believable emotional behaviors. We address this problem from the perspective of the development of Computational Models of Emotions (CMEs). In particular, we propose an integrative framework for constructing CMEs whose design is focused on two main aspects: (1) the modeling of the underlying mechanisms of emotions, and (2) the incorporation of input and output interfaces that facilitate the interaction between affective processes implemented in CMEs and cognitive processes implemented in agent architectures

Abstract:

As in many Latin–American countries, in Mexico many older adults live alone as a result of the migration of one or more of their relatives, mostly to the USA. Thus, not only do they live alone, but they might seldom see these relatives for long periods, even though they often depend on them financially. With the goal of designing appropriate communication technology for seniors and their relatives experiencing this situation, we conducted interviews and evaluated scenarios and prototypes to reveal the practical ways they maintain emotional ties despite the distance. Based on those findings, we envisioned a communication system through which seniors and their relatives can maintain close social ties by sharing information, personal reminiscences and stories. We found that older adults perceived the system as a richer, natural form of communication with their relatives that could facilitate their integration into the networks that currently connect members of their families

Abstract:

The aging of the population is a phenomenon faced by most nations. Growing old is often accompanied by the loss of close companionship, which has been shown to potentially aggravate the cognitive impairment of elders. From a qualitative study, key issues emerged regarding elders' unmet communication needs, which we propose to address with an agent-based communication system. This is a family newspaper through which seniors and their relatives not only maintain close social ties by sharing information, personal reminiscences and cultural stories, but also exercise their minds through its entertainment section, which can help delay the cognitive decline that elders experience as they become older. The system provides elders with a richer form of communication with their relatives and facilitates their integration into the networks that currently connect members of their families who use email and IM systems to keep in touch with each other. To facilitate information capture, several autonomous agents help the user interact with the system, which can be accessed from any electronic display with a touch screen, such as a Tablet PC. By means of autonomous agents, we have incorporated a reminder mechanism to enable elders and their relatives to preserve and strengthen their relationships. In this paper, we present the study that motivated the development of the electronic family newspaper, and describe its functionality

Abstract:

This note presents a new passivity-based controller that ensures asymptotic stability for quadrotor position without solving partial differential equations or performing a partial dynamic inversion. After a resourceful change of coordinates, a pre-feedback controller, and a backstepping stage on the yaw angle dynamics, it is possible to identify new quadrotor cyclo-passive outputs. Then, a simple proportional-integral controller of these cyclo-passive outputs completes the design. The cyclo-passive outputs allow the construction of an energy-based Lyapunov function that includes five out of six quadrotor degrees of freedom and guarantees asymptotic stability of the desired equilibrium. Moreover, the constant velocity reference tracking problem is solved with a slight modification to the proposed controller. Finally, the approach is validated through simulations and real-time experimental results

Abstract:

The aim of this work is to analyze how a system of economic incentives and recognition for research activities in Mexico has affected the scientific productivity of higher education institutions (HEIs) at the national level. This analysis uses a database with information on 27,667 Mexican researchers who have been members of the Sistema Nacional de Investigadores (SNI) for at least one year during the period 1991-2011, and includes 122,406 articles published by these scientists in ISI journals. To determine the quantity and quality of HEI productivity, scientific journal articles and citations were taken as units of measurement. The results indicate that only 28% of the HEIs analyzed produced one or more articles during the study period. Additionally, our results show that ten universities account for both the largest number of published articles and the largest number of citations. We also show that peak productivity occurs at age 58 for female researchers and at 57 for male researchers. Finally, one of the contributions of this work is to show that collaboration among researchers is continuously growing

Abstract:

The aim of this paper is to analyze how a system of economic stimulus and recognition for research activities in Mexico has impacted the scientific productivity of Higher Education Institutions (HEI). This analysis uses a database with information on 27,667 Mexican researchers who have benefited from the Sistema Nacional de Investigadores (SNI) for at least one year during the period 1991-2011, and includes 122,406 articles published by these researchers in ISI journals. In order to determine the quantity and quality of the productivity of HEI, we considered publications and citations as units of measurement. The results indicate that only 28% of the HEI analyzed produced one or more publications in the study period. Additionally, our results show that 10 universities account for both the largest number of publications and the largest number of citations. Moreover, it is shown that the maximum age of productivity is 58 years in the case of female researchers and 57 years for male researchers. Finally, one of the main contributions of this work is that it shows that collaboration activities are continuously growing among researchers

Abstract:

The purpose of this paper is to analyze the effects of networks on research output and impact. The analysis was done using a database of 2150 Mexican engineers who have been members of the National System of Researchers. Results show that although social network analysis theory offers several methods to measure centrality and structure, not all variables show the same impact on performance. Our results suggest that the centrality measures that show a positive effect on publications and citations are degree and closeness; betweenness is only significant for publications, and eigenvector has a negative effect on publications. Regarding the measures of network structure, our analysis suggests that both structural holes and density have a positive effect on research output and impact, confirming that there are networks in which closure and brokerage can benefit the performance of the members of the network

Abstract:

We present a reactive multi-agent push system for a large set of objects. The behavior of the pushing agents consists of: 1) selecting and updating an object set to push, 2) reaching positions near the objects to start influencing them, 3) pushing the objects along a path to the goal region, and 4) regrouping when needed to ensure the group is packed tightly enough. The emergent properties of the behavior allow us to test how effectively a group of agents can push a set of objects through the environment with different strategies

Abstract:

We study the effects of a social security reform in a large overlapping generations model where markets are incomplete and households face uninsurable idiosyncratic income shocks. We depart from the previous literature by assuming that, because of lack of commitment in the credit market, the borrowing constraint in the unique asset is endogenously determined by individuals' incentives to default on previous debts. In our model, after the reform the incentives to default are lower and consequently households face more relaxed borrowing limits, leading to an increase in debt and a reduction in the size of precautionary savings. However, the quantitative impact of this mechanism on stationary aggregate savings is small. Computing the transitional dynamics for the basic model following the social security reform, we obtain important welfare gains for workers at the bottom of the income distribution (equivalent to 1.3% of consumption each period) associated with the relaxation of the endogenous borrowing constraints, which are missed in an environment with fixed borrowing limits

Abstract:

This article argues that the MLCBI and its rules of interpretation, by stating that the main insolvency proceeding must take place in the debtor's COMI, fail to achieve some of the goals they were intended to accomplish. They fail to consider that the insolvency systems of many jurisdictions have deficiencies that may put the survival of certain companies at risk. Accordingly, when these companies challenge the system by filing in a jurisdiction other than their COMI for a better outcome, they do not seek recognition of such main proceeding in their local jurisdiction for fear that it may not be recognized

Abstract:

This text begins by using statistical data to demonstrate the small number of commercial insolvency proceedings (concursos mercantiles) in Mexico, and then analyzes some of the causes to which their scarce use is attributed, in order to propose solutions, such as the implementation of special procedures for micro and small enterprises, necessary amendments to the Ley de Concursos Mercantiles (LCM), the need to publicize the paradigm shift in insolvency law, and a call to awaken the interest of the Consejo de la Judicatura Federal (CJF) in the judicial proceedings that can accelerate the country's economic recovery

Abstract:

Mexico City is the first and the largest city in the world outside the USA to approve an innovative regulation for Transport Network Companies (TNCs). These regulations imposed an original approach by categorizing TNCs' services as private transportation services, as opposed to the public transportation services provided by taxis. This has resulted in a different treatment of the services rendered by TNCs. This chapter looks at the legislative and adjudicatory developments to examine whether this different treatment is justified, and identifies certain loopholes in this regulation which lead to the conclusion that the purported differences are inexistent

Abstract:

This article seeks to raise awareness of the need to bring the current legal framework of the civil insolvency proceeding (concurso civil) in line with modern insolvency systems, which give the parties incentives to renegotiate debts, seek to reconcile their interests, and could be a tool to help solve the economic problems arising from COVID-19. The article explains why the current regulation of the civil insolvency proceeding is not suited to solving the debtor's economic problems; it also shows that the current procedure has rarely been used, and without success. A regulatory proposal for the civil insolvency proceeding is presented, and the problems caused by the articles intended to be added to give preference to the federal tax authorities are discussed

Abstract:

This paper discusses why the current regulation of the bankruptcy procedure for non-business persons in Mexico does not help the parties solve their economic problems. It further demonstrates that the procedure is not regularly used, and that when used it does not lead to a good outcome. The paper examines the features of modern insolvency systems and presents the incentives to reconcile the parties' interests and to renegotiate the debtor's indebtedness as a potential tool to deal with the economic problems arising from COVID-19. The paper makes a regulatory proposal and illustrates the problems that may arise if the articles awarding a preference to the federal tax authorities contained in the bill for the National Civil Procedures Code are passed

Abstract:

This contribution uses statistics to demonstrate the enormous lag of Mexico in insolvency proceedings relative to other, even much smaller, economies. The pandemic caused by the coronavirus disease (Covid-19) has not only affected the health of millions of people around the world, but has also caused multiple economic losses and the closure of many productive businesses. For this reason, countries on every continent have set out to reform their insolvency proceedings to allow faster access in order to protect the merchant's business and assets, and to develop tools that allow more agile negotiations and prevent the loss of jobs and productive businesses. This article addresses some of the benefits that a commercial insolvency proceeding can bring to a company in overcoming its financial difficulties if it is initiated in time, and describes several of the reforms that various countries have made to their insolvency proceedings to benefit their companies and contribute to economic recovery

Abstract:

This article focuses on two of the qualities that are desirable in judges, emphasizing the need for specialization, and analyzes why the current system for selecting and appointing federal judges in Mexico hinders a substantive evaluation of their performance and does not help to earn the trust of the governed, which is fundamental for the justice administration system to become more efficient

Abstract:

This article will focus on two of the qualities that are desirable in judges, emphasizing expertise, and will analyze why the current system of selection and appointment of federal judges in Mexico makes it difficult to evaluate their performance in depth and does not contribute to gaining the people's trust, which is essential for the justice administration system to be more efficient

Abstract:

The benefits of commercial insolvency proceedings have been acknowledged globally. However, commercial insolvency proceedings are driven only by economic concerns. When the insolvent is a human being, other concerns must be considered, such as satisfaction of the basic needs of debtors and their families, relief from the stress and health problems caused by over-indebtedness, and incentives to remain productive and to maintain debtors in, or help them re-enter, the economic market. Many international organizations have developed recommendations and principles for effective business insolvency regimes, but insolvency for non-business natural persons was left behind. However, the 2008 recession called attention to the dangers of excessive household debt. Accordingly, international organizations recommended revisions to insolvency laws dealing with natural persons. In Latin America only a few countries have implemented a special insolvency system for individuals. This paper will compare and analyze key features of the insolvency proceedings for natural persons implemented by Colombia and Chile. The analysis will be made by examining the statistics each country has provided, and by discussing the weight that (a) access requirements, (b) renegotiation terms, (c) repayment terms and requirements for approval of repayment plans, and (d) costs have played in the results. The renegotiation proceedings implemented by Colombia and Chile have a special feature that has played a key role in achieving restructuring agreements with creditors: failure to reach an agreement during the negotiation stage leads to bankruptcy, and bankruptcy grants a discharge. However, statistics show that there are around 35% more filings in Chile than in Colombia, and that around 30% more repayment plans have been approved in Chile than in Colombia. Access requirements and costs have played an important role in these results

Abstract:

This article answers the following questions: Under what circumstances could a person hold two or more policies insuring the same risk? Does the law allow holding two or more policies insuring the same risk? What does the proper use of insurance consist of? Finally, what is the most efficient way to use, in a complementary manner, two or more major medical expense insurance contracts that are in force?

Abstract:

This article answers the following questions: Under what circumstances can a person have two or more policies insuring the same risk? Does the law allow two or more policies insuring the same risk? What does the proper use of insurance consist of? Finally, what is the most efficient way to use, in a complementary way, two or more insurance contracts for major medical expenses that are in force?

Abstract:

Over the last twenty years, insolvency proceedings have acquired certain special characteristics that have been spreading across all continents. The purpose of this article is to introduce the reader to the objectives pursued by the set of rules that has been implemented internationally to deal with companies facing insolvency problems, and to analyze, in general terms, some of its benefits and consequences

Abstract:

In the last twenty years, insolvency proceedings have acquired certain special characteristics that have spread across all continents. The purpose of this article is to introduce the reader to the objectives of the set of rules that has been implemented at the international level to deal with companies with insolvency problems, and to analyze, in general terms, some of its benefits and consequences

Abstract:

This study aims to address some of the lights and shadows of the new Ley de Concursos Mercantiles (LCM), published in the Diario Oficial de la Federación on May 12, 2000, which entered into force the day after its publication. It is not the intention of this article to repeat what has been written on the subject or to enumerate the main points of the LCM. I will confine myself to a few important points that have so far received insufficient attention

Abstract:

This article presents the mathematical modeling and control of an air levitation system, as well as the construction of a prototype for experimental validation of the control scheme. The air levitator system consists of a vertical cylinder with a fan at its base that generates a force to propel a round object vertically according to a reference signal provided by the user. A linear state feedback control scheme (position and velocity) is designed to control the position of the mass around a constant reference. However, the sensor used only allows direct measurement of position, so, taking position as the output, a full-order Luenberger observer is proposed to estimate the velocity and thus avoid the noise introduced by numerical differentiation. The overall controller is an output feedback scheme. To design the controller as described above, a mathematical model representing the behavior of the system is required. The model is obtained using the Euler-Lagrange equations, since this energy-based approach makes it possible to identify the physical nature of the forces that influence the behavior of the system. Given the nonlinear nature of the system, an incremental model around each desired position is used, obtained by approximate linearization applying Taylor's theorem. Experimental results are included to show the performance of the controller

Abstract:

This paper shows the mathematical model and control of an air levitation system, as well as the construction of a prototype for experimental validation of the control scheme. The air levitator system consists of a vertical cylinder with a fan at its base that generates the force that moves a round object vertically according to a reference signal given by the user. A linear state feedback control scheme is designed to stabilize the position of the mass around a constant reference. Nevertheless, the implemented sensor only allows measuring position directly, so, taking position as the output, a full-order Luenberger observer is proposed to estimate the velocity and avoid the noise that numerical differentiation generates. The overall controller is an output feedback scheme. To design the controller as mentioned before, a mathematical model is required to represent the behavior of the system. The model is obtained using the Euler-Lagrange equations, since this energy-based method allows identifying the physical nature of the forces acting on the system. Given that the system is nonlinear, an incremental model around each desired position is used, obtained by approximate linearization applying Taylor's theorem. Experimental results are included to show the controller's performance
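
As a rough illustration of this kind of output-feedback design, the following Python sketch simulates a linearized second-order plant under state feedback driven by a full-order Luenberger observer. The matrices, gains, and initial conditions are hypothetical placeholders, not the model identified in the paper:

import numpy as np

# Hypothetical linearized model x' = A x + B u, y = C x (only position measured)
A = np.array([[0.0, 1.0], [0.0, -0.5]])   # assumed aerodynamic damping term
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = np.array([[4.0, 2.8]])                # state-feedback gains (chosen by hand)
L = np.array([[8.0], [15.0]])             # observer gains (faster than the controller)

dt, T = 1e-3, 5.0
x = np.array([[0.2], [0.0]])              # true state: position offset, velocity
xh = np.zeros((2, 1))                     # observer state estimate

for _ in range(int(T / dt)):
    y = C @ x                             # measured output: position only
    u = -K @ xh                           # certainty-equivalent state feedback
    x = x + dt * (A @ x + B @ u)          # Euler integration of the plant
    xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))  # Luenberger correction

print("final position error:", x[0, 0], "estimation error:", (x - xh)[0, 0])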

Abstract:

This article presents a method for implementing an ERP in a private higher education institution (IPrES), as a form of self-consultancy carried out by the organization's own staff. The work is the result of deep reflection by the authors after starting a similar project under chaotic conditions that almost caused it to fail. While structuring the method, the following critical factors were identified: 1) almost no one knows in detail the end-to-end operation of the organization, 2) no one knows which are the essential processes of the IPrES, much less their explicit model, 3) there is a false idea that implementing an ERP means implementing each and every module, and 4) the culture of processes and of responsibilities at work is almost nonexistent. Based on these critical factors, the method is structured in six stages: Planning, Preparation, Design, Development, Operation, and Maintenance. Planning provides the high-level picture to guide the projects, while Maintenance deals with day-to-day support once the ERP goes into production. The other four stages correspond to the classic Requirements, Analysis, Design, and Construction stages found in numerous information systems methods

Abstract:

We prove that nuclei in orthomodular lattices, and more generally in quantic lattices are completely determined by the central elements

Abstract:

The category of quantic lattices is defined. All the multiplicative lattices, such as residuated lattices and orthomodular lattices, turn out to be objects of this new category

Abstract:

Purpose. In this paper we propose an iterative approach for the deployment of rural telecommunication networks. Methodology/approach/design. This approach relies heavily on the concept of locality, prioritizing small 'cells' with a considerable population density, and exploits the natural nesting of the distribution of rural communities, focusing on communities which are populous enough to justify the investment required to provide them with connectivity, and whose sheer size promotes the formation of 'satellite' communities that could benefit from the initial investment at a marginal expense. For this approach, the concept of 'cells' is paramount; these are constructed iteratively based on the contour of a Voronoi tessellation centered on the community of interest. Once the focal community has been 'connected' with the network of the previous layer, the process is repeated with less populous communities at each stage until a coverage threshold has been reached. One of the main contributions of this methodology is that it makes every calculation based on 'street distance' instead of Euclidean distance, giving a more realistic approximation of the length of the network and hence the amount of the investment. To test our results, we ran our experiments on two segregated communities in one of the most complicated terrains, due to the mountain chains, in the state of Chiapas, Mexico. The results suggest that the use of 'street distance' and a local approach leads to the deployment of a remarkably different network than the standard methodology would imply. We hope that this might lead to a significant reduction in the costs associated with these kinds of projects and therefore make the democratization of connectivity a reality. In order to make our results reproducible, we make all our code open and publicly available on GitHub
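
A minimal sketch of the greedy, population-prioritized connection loop described above, assuming communities are given as planar coordinates with populations. The Voronoi-based cell construction is not reproduced here, and the street_distance stub (Euclidean distance scaled by an assumed detour factor) merely stands in for the authors' road-network computations:

import numpy as np

def street_distance(p, q, detour=1.4):
    # Placeholder: a real street distance would come from a road network;
    # here Euclidean distance is scaled by an assumed detour factor.
    return detour * float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

# Hypothetical communities: (x, y, population)
communities = [(0, 0, 5000), (3, 1, 1200), (1, 4, 900), (6, 5, 300), (2, 2, 150)]
communities.sort(key=lambda c: -c[2])             # most populous first

connected = [communities[0][:2]]                   # seed the network at the largest hub
total_length, covered = 0.0, communities[0][2]
target = 0.95 * sum(c[2] for c in communities)     # assumed coverage threshold

for x, y, pop in communities[1:]:
    if covered >= target:
        break
    # Link each new community to its nearest already-connected node,
    # mimicking one layer of the cell-growing iteration.
    d = min(street_distance((x, y), node) for node in connected)
    connected.append((x, y))
    total_length += d
    covered += pop

print(f"network length ~{total_length:.2f}, covered population {covered}")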

Abstract:

From a pragmatic point of view, this article considers the meaning of theology within the edifice of the sciences, and whether it can merely be tolerated or must make an indispensable contribution to their integrity, which is what we aim to demonstrate here. After a brief look at the history of the origin of the university in Europe, some decisive moments in the history of the founding of the University of Berlin are examined, which remain important in German-speaking countries. Against this background, John Henry Newman's most noteworthy statements on the university are investigated, despite the difficulty of putting them into practice

Abstract:

This article analyzes, from a pragmatic perspective, the meaning of theology within the edifice of the sciences, that is, whether it can merely be tolerated or whether it must make an indispensable contribution to their integrity, which is actually what we want to demonstrate. After a brief look at the history of the origin of the university in Europe, we examine some decisive moments in the history of the founding of the University of Berlin, which continue to be important in German-speaking countries. Against this background, John Henry Newman's most noteworthy statements on the university are investigated, in spite of the difficulty of putting them into practice

Abstract:

In the context of the Multimedia Automatic Misogyny Identification (MAMI) competition 2022, we developed a framework for extracting lexical-semantic features from text and combining them with semantic descriptions of images, together with image content representations. We enriched the text modality description by incorporating word representations for each object present within the images. Images and text are then described at two levels of detail, globally and locally, using standard dimensionality reduction techniques for images, in order to obtain 4 embeddings for each meme. These embeddings are finally concatenated and passed to a classifier. Our results outperform the baseline by 4%, falling behind the best performance by 12% for Sub-task B
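
A toy sketch of this kind of late fusion, assuming the four per-meme embeddings (global and local, text and image) have already been computed. The dimensions, synthetic data, and logistic-regression classifier are illustrative stand-ins, not the competition pipeline:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # hypothetical number of memes

# Stand-ins for the four embeddings per meme (global/local x text/image).
text_g, text_l = rng.normal(size=(n, 64)), rng.normal(size=(n, 64))
img_g, img_l = rng.normal(size=(n, 32)), rng.normal(size=(n, 32))
labels = rng.integers(0, 2, size=n)  # misogynous vs. not (synthetic here)

# Late fusion: concatenate the four embeddings and train a single classifier.
X = np.concatenate([text_g, text_l, img_g, img_l], axis=1)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy on synthetic data:", clf.score(X, labels))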

Abstract:

This work proposes the use of deep learning architectures, and in particular Convolutional Autoencoders (CAEs), to incorporate an explicit orthogonality component into the computation of local image descriptors. For this purpose we present a methodology based on the computation of dot products among the hidden outputs of the center-most layer of a convolutional autoencoder, that is, the dot product between the responses of the different kernels of the central layer (sections of a latent representation). We compare this dot product against an indicator of orthogonality which, in the presence of non-orthogonal hidden representations, back-propagates a gradient through the network, adjusting its parameters to produce new representations that will be closer to orthogonality in future iterations. Our results show that the proposed methodology is suitable for the estimation of local image descriptors that are orthogonal to one another, which is often a desirable feature in many pattern recognition tasks
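
A compact sketch of an orthogonality penalty in this spirit, where the channels of a latent tensor play the role of the central-layer kernel responses. The tiny random code and the penalty form are hypothetical placeholders for the actual CAE:

import torch

# Hypothetical latent code: batch of 8, 4 channels ("sections"), 6x6 maps.
z = torch.randn(8, 4, 6, 6, requires_grad=True)

flat = z.flatten(2)                          # (batch, channels, 36)
gram = flat @ flat.transpose(1, 2)           # pairwise dot products per sample
off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=1, dim2=2))

# Penalize non-zero dot products between different sections: the gradient
# pushes the representations toward mutual orthogonality.
ortho_loss = (off_diag ** 2).mean()
ortho_loss.backward()
print("orthogonality penalty:", float(ortho_loss))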

Abstract:

This work presents a methodology for dimensionality reduction of images with multiple occurrences of multiple objects, such that they can be placed on a 2-dimensional plane under the constraint that nearby images are similar in terms of visual content and semantics. The first part of this methodology adds inductive capabilities to the well-known t-SNE method used for visualization, thus making possible its generalization to unseen data, as opposed to previous extensions with only transductive capabilities. This is achieved by pairing the base t-SNE with a Deep Neural Network. The second part exploits semantic information to perform supervised dimensionality reduction, which results in better separability of the low-dimensional space; that is, it better separates images with no relevance while retaining the proximity of those images with partial relevance. Since dealing with images having multiple occurrences of multiple objects requires the consideration of partial relevance, we additionally present a definition of partial relevance for the evaluation of classification and retrieval scenarios on images, or other documents, that share contents at least partially

Abstract:

We propose a solution to the full consensus-based formation problem for torque-controlled nonholonomic mobile robots under time-varying communication delays and without velocity measurements. Under the assumptions of static and undirected communication topologies and of smooth and bounded delays, we propose a distributed, smooth, time-varying control law which achieves the goal of acquiring a global formation pattern around a common consensus point for both position and orientation. The controller is a combination of a simple Proportional-plus-Damping scheme and a simple globally exponentially convergent velocity observer. Consensus is reached independently of all initial conditions. Realistic simulations in the Gazebo-ROS environment illustrate the effectiveness of the proposed algorithm and its robustness with respect to measurement noise, non-smooth delays, and uncertain dynamics

Abstract:

In this letter, we present a novel adaptive observer for nonholonomic differential-drive robots to simultaneously estimate the system's angular and linear velocities, along with its external matched disturbances. The proposed method is based on the immersion and invariance technique and makes use of a dynamic scaling factor. The stability and convergence proofs for the velocity and disturbance estimation errors are carried out using a strict Lyapunov function. We present a detailed simulation study to validate the performance of our approach

Abstract:

Motivated by its application in gain scheduling and event-triggered controllers, we propose in this paper an adaptive observer for linear time-varying (LTV) systems with unknown switching parameters but known switching time instants. To construct the observer we propose an extension to the generalized parameter estimation-based observers (GPEBO), which we combine with the recently proposed least-squares plus dynamic regression extension and mixing (LS+DREM) estimator. We prove that the system state and the parameters converge in finite time inside each switching sub-interval, under the weak assumption of interval excitation (IE). The excellent performance of the proposed observer is illustrated with a practical and an academic example, both reported in the literature

Abstract:

It is widely recognised that the existing parameter estimators and adaptive controllers for robot manipulators are extremely complicated, stymieing their practical use, in particular for robots with many degrees of freedom. This is mainly due to the fact that the existing parameterisation includes the complicated signal and parameter relations introduced by the Coriolis and centrifugal forces matrix. In an insightful remark of their seminal paper, Slotine and Li suggested using the parameterisation of the power balance equation, which avoids these terms, yielding significantly simpler designs. To the best of our knowledge, such an approach was never actually pursued in on-line implementations, because the excitation requirements for the consistent estimation of the parameters are 'very high'. In this paper, we use a recent technique for the generation of 'exciting' regressors developed by the authors to overcome this fundamental problem. The result is applied to general Euler-Lagrange systems, and the fundamental advantages of the new parameterisation are illustrated with comprehensive simulations of a 2 degrees-of-freedom robot manipulator

Abstract:

In this work, we report a novel controller to solve (globally) the position and orientation consensus-based formation problem of multiple nonholonomic vehicles modelled as differential drive robots affected by external disturbances. The controller has the structure of a simple-to-implement Proportional Integral Derivative (PID) scheme. In order to satisfy the well-known Brockett stabilisation theorem for nonholonomic systems, we also incorporate a time-varying persistency of excitation term in the angular velocity dynamics. Our proposal reports the first solution to the robust formation problem using a smooth time-varying controller. Simulations are shown to evidence the theoretical findings of this work

Abstract:

In this note, we present a constructive procedure to solve the orbital stabilization problem for a class of nonlinear underactuated mechanical systems with n degrees of freedom and underactuation degree one using the Immersion and Invariance technique. We define sufficient conditions to solve explicitly the partial differential equations arising in the Immersion and Invariance methodology, so that mechanical systems with gyroscopic terms can be considered. At the end, we use two practical systems as examples to illustrate the design procedure, as well as to validate performance via simulations and experiments

Abstract:

In this paper we present the design of a robust controller for a nonholonomic differential wheeled mobile robot moving on the plane and subject to constant unknown disturbances. The proposed controller is smooth, time-varying, and has a (simple) PID-like structure. Also, as analytically proved, it achieves global asymptotic convergence to zero of the position and orientation (regulation) errors despite the perturbations. Realistic simulations in the Gazebo-ROS environment validate the effectiveness of our approach

Abstract:

We propose a solution to the adaptive speed observation problem for a class of mechanical systems containing cross terms in the velocity (momenta). We focus our attention on mechanical systems with unknown constant input disturbances and an unknown Coulomb friction matrix, which generate products of unmeasurable velocities and unknown friction parameters. The proposed adaptive speed observer is based on the well-known Immersion and Invariance technique. We ensure global convergence of the speed observer, while the friction parameter estimates remain bounded. Our approach is validated through simulations and experimental results

Abstract:

In this article, we present a new passivity-based controller which ensures asymptotic stability of the desired equilibrium point for a quadrotor with a cable-suspended load. The control synthesis procedure consists of two steps: the first step uses a new coordinate transformation on the position coordinates, together with a partial feedback controller, to tailor a new dynamical model in which the translational and rotational dynamics become coupled via a new control input. Since the new dynamical model preserves the Lagrangian structure and verifies the conditions imposed in Donaire et al. [5] and Romero et al. [13] for designing energy-shaping controllers without solving a Partial Differential Equation, the second step designs a proportional-integral controller around the two cyclo-passive outputs of the new dynamical model. Using Lyapunov theory, a strict analysis ensuring asymptotic convergence to the desired equilibrium point is presented. Besides, we show that a simple modification to the proposed controller solves the problem of constant velocity tracking in the translational coordinates. The performance of the controller is evaluated using numerical simulations

Abstract:

In this paper we report a novel output feedback controller for a class of mobile robots that exhibit nonholonomic restrictions. The control objective is to regulate the robot position and orientation to a given desired point. The controller is a (nonautonomous) smooth certainty-equivalent PD scheme, and the velocity observer is designed using the Immersion and Invariance principle. Simulations are presented to corroborate our theoretical findings

Abstract:

This paper considers three different tracking control schemes for a distributed energy storage system. The system is made up of a battery and a synchronous boost converter which, connected to a dc bus, allows bidirectional power flow, enabling charging and discharging operation modes of the battery. First, we derive the controllers assuming all parameters and variables to be known. Then, by designing an adaptive observer, the three approaches are extended to the case in which the current is not measured and the internal converter resistances are unknown. Stability of the resulting closed-loop system is proved for both cases. The approaches are compared, tested, and validated in a setting consisting of a dc grid with renewable sources connected to it

Abstract:

An adaptive speed observer for mechanical systems that are linear in the velocity (momenta) after a change of coordinates (Venkatraman et al. [2010]) was recently reported in the literature (Romero and Ortega [2015]). The velocity observer considers input disturbances and some unknown friction parameters in the design. In this note, we relax some assumptions imposed in that work so that the class of systems is enlarged, and quadratic terms in the velocity (momenta) and all the friction parameters are included in the design of the velocity observer. Simulation results using the robotic-leg system illustrate the performance of the proposed adaptive observer

Abstract:

This article proposes a new control algorithm to solve the regulation problem for a quadrotor vehicle using the passivity-based control method, without solving partial differential equations or performing a partial dynamic inversion. After a resourceful change of coordinates, it is possible to identify new quadrotor cyclo-passive outputs. Then, a nonlinear proportional-integral controller in terms of these cyclo-passive outputs completes the design. The proposed controller is evaluated using numerical simulations

Abstract:

In this note we identify a class of underactuated mechanical systems whose desired constant equilibrium position can be globally stabilised with the ubiquitous PID controller. The class is characterised via some easily verifiable conditions on the system's inertia matrix and potential energy function, which are satisfied by many benchmark examples. The design proceeds in two main steps: first, the definition of two new passive outputs whose weighted sum defines the signal around which the PID is added; second, the observation that it is possible to construct a Lyapunov function for the desired equilibrium via a suitable choice of the aforementioned weights and the PID gains
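
A schematic illustration of the design idea, a PID wrapped around a weighted sum of two measured outputs of a toy simulated system. The dynamics, the weights alpha1 and alpha2, and the gains are invented for illustration and do not correspond to the passive outputs or the Lyapunov-based tuning of the note:

import numpy as np

# Toy two-output plant (not the paper's model): two coupled integrators.
def plant_step(x, u, dt=1e-3):
    x1, x2, v = x
    return np.array([x1 + dt * v, x2 + dt * 0.5 * v, v + dt * (u - 0.2 * v)])

alpha1, alpha2 = 1.0, 0.5          # weights of the two outputs (assumed)
kp, ki, kd = 6.0, 1.0, 2.5         # PID gains (assumed)

x = np.array([1.0, -0.5, 0.0])
integral, prev_s = 0.0, None
dt = 1e-3
for _ in range(10000):
    s = alpha1 * x[0] + alpha2 * x[1]        # weighted sum of the outputs
    integral += s * dt
    ds = 0.0 if prev_s is None else (s - prev_s) / dt
    u = -(kp * s + ki * integral + kd * ds)  # PID around the combined signal
    prev_s = s
    x = plant_step(x, u, dt)

# The loop regulates the combined signal s to zero.
print("final weighted output:", alpha1 * x[0] + alpha2 * x[1])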

Abstract:

In this paper an observer for a vehicle system considering roll dynamics is presented. The observer design is based on the well-known Immersion and Invariance (I&I) technique to estimate the lateral velocity, roll angle, and roll velocity of the vehicle from measurements of the longitudinal/lateral accelerations, longitudinal velocity, yaw rate, and steer angle. Furthermore, under practical considerations, global exponential convergence can be assured. To assess the performance of the observer, an observer-based control using the super-twisting algorithm has been derived and validated through numerical simulations

Abstract:

In this note we identify a class of underactuated mechanical systems whose desired constant equilibrium position can be globally stabilised with the ubiquitous PID controller. The class is characterised via some easily verifiable conditions on the system's inertia matrix and potential energy function, which are satisfied by many benchmark examples. The design proceeds in two main steps: first, the definition of two new passive outputs whose weighted sum defines the signal around which the PID is added; second, the observation that it is possible to construct a Lyapunov function for the desired equilibrium via a suitable choice of the aforementioned weights and the PID gains. The results reported here follow the same research line as Donaire et al. (2016a) and Romero et al. (2016a), bridging the gap between the Hamiltonian and Lagrangian formulations used, correspondingly, in those papers

Abstract:

In a recent contribution, it was shown that a class of mechanical systems, which contains many practical examples, can be stabilized via energy shaping without solving partial differential equations. The proposed controller consists of two terms: a partially linearizing state feedback and a linear PID loop around two new passive outputs. In this brief note we prove that the first, admittedly non-robust, step can be obviated, leaving only the linear PID. A second contribution of the note is to propose a slight modification to the controller to go beyond regulation tasks, making it able to follow ramp references in the actuated coordinates

Abstract:

Money laundering is not a victimless crime. Under certain circumstances, it may lead to significant criminal violence. We analyze the specific case of money laundering in local economies. Criminal organizations invest dirty money in legal local businesses, which may lead to short-term improvements in the economy that benefit the population. Authorities with access to local information may (purposely) fail to report suspicious economic activities to the specialized agencies in charge of fighting money laundering because it is politically and economically convenient. The economic windfall generated from illicit money can eventually attract additional criminal organizations to the community, or may fragment the dominant criminal organization, endogenously increasing violence. The violence generated in no way compensates for the previous economic growth. We develop theoretical insights on the conditions under which this mechanism exists, and empirically test its incidence and the magnitude of its effects, using Mexican municipalities as units of analysis

Abstract:

The insecurity crisis that Mexico is going through encompasses various types of crime and violence. The drug prohibition regime and a security policy biased toward the coercive combat of drug trafficking, to the detriment of other crimes of higher incidence, have proven ineffective in reducing the problem. This essay reviews the existing knowledge on these issues and outlines three concrete actions to improve the situation

Abstract:

The crisis of insecurity that Mexico is going through encompasses different types of crimes and violence. The drug prohibition regime and a security policy biased towards the coercive combat of drug trafficking, to the detriment of other crimes of greater incidence, have proven to be ineffective. This essay analyzes the existing knowledge on these issues and outlines three concrete actions to improve the situation

Abstract:

A central task of a state is to establish order within its territory. The mere creation of the state is justified by this statement. A proper order that allows citizens to develop in all dimensions is based upon the rule of law. An insecure environment (i.e. disorder) signals a weakness in the rule of law, which is further weakened as crime and violence unfold. Mexico has been suffering an insecurity crisis since 2007 - which is not unique, or even the worst in its recent history. It is, however, the worst insecurity crisis since the country’s democratization process began. The combination of a deficient rule of law, an ineffective war on drugs, a fiscal system disconnected from citizens’ preferences, and the democratization process itself, has provided fertile ground for a dramatic increase in crime and violence. Under these circumstances, strengthening the rule of law is paramount, yet conditions are such that improving it will prove problematic

Abstract:

To fight criminal organizations effectively, governments require support from significant segments of society. Citizen support provides important leverage for executives, allowing them to continue their policies. Yet winning citizens' hearts and minds is not easy. Public security is a deeply complex issue. Responsibility is shared among different levels of government; information is highly mediated by mass media and individual acquaintances; and security has a strong effect on peoples' emotions, since it threatens to affect their most valuable assets—life and property. How do citizens translate their assessments of public security into presidential approval? To answer this question, this study develops explicit theoretical insights into the conditions under which different dimensions of public security affect presidential approval. The arguments are tested using Mexico as a case study

Abstract:

Drug-related crime and violence have become increasingly worrisome phenomena in many countries around the world. This situation threatens citizens' lives and property, but, perhaps more importantly, it also threatens the survival of democratic states in developing countries (United Nations Office on Drugs and Crime, 2012)

Abstract:

Presidents should prefer to be positively remembered in history for improving their country's conditions rather than to be hated for generations. Few, however, succeed. Why? The inquiry goes beyond historical accounts or mere intellectual curiosity; it is a key part of understanding presidential decision making. We answer this question using data from an expert survey on the Mexican Presidency, the first of its kind for Mexico. Problem-solving capacities and presidents' ability to change existing institutions are the main determinants of success. Corruption is barely punished by experts. Negative remembrance in history is associated with authoritarianism and economic crises

Abstract:

One of the main problems facing developing countries is the growing presence of organized crime, which in many cases has weakened the structure of the State. Some governments choose to fight these illegal organizations, while others decide to ignore them or collude with them. We would expect that, as with any public policy, governments' decisions to fight organized crime would depend to a large extent on the expected response of public opinion. Unlike other determinants, such as the economy, little research has been done on the effect of anti-crime policies on the evaluation of presidents. Using data from the LAPOP 2010 survey for the case of Mexico, I examine the impact of the different dimensions of public security on presidential approval. I find that the evaluation of security performance has an important influence on approval; with a relatively lower but significant impact on approval are agreement with the way crime is fought, quasi-direct exposure to insecurity, and the perception that insecurity is the main problem. Somewhat surprisingly, I find no significant effects of being the victim of a crime or of being afraid of insecurity

Abstract:

One of the main problems that developing countries face is the increasing presence of organized crime; a presence that, in many cases, has weakened the structure of the State. Some governments choose to combat these illegal organizations while others decide to either ignore them or collude with them. We would expect that, as occurs with any other public policy, governmental decisions to fight organized crime would greatly depend on the anticipated response of public opinion to this type of public policy. Unlike other determinants (the economy, for instance), few studies have been done on the effects of anti-crime policies on the assessment of presidents. Using data from the LAPOP 2010 survey for the case of Mexico, I study the impact of several dimensions of the public safety theme on presidential approval. I find that the assessment of public safety performance has an important impact on approval. On the other hand, I also find that public support for a specific approach to fighting crime, the quasi-direct effect generated by lack of safety, and the perception of public safety as the central problem have a relatively lower, but nonetheless meaningful, impact on approval. It was somewhat surprising not to discover meaningful effects on approval from being a victim of crime or fearing insecurity

Abstract:

Many of the pre-election polls published in 2010 for the gubernatorial elections in Mexico were off the mark. This has reinforced many doubts about polling, ranging from the ability of the method to measure preferences to the integrity of the pollsters. For 2010, there is no systematic evidence that the differences between polls and election results are due to election effects, but there is evidence of polling house effects. In this note I explore different hypotheses using the available data from the pre-election polls published for the twelve states that held gubernatorial elections in 2010. I also discuss other determinants that cannot be verified for now due to the absence of public data and thus remain for future research

Abstract:

Many of the pre-election polls published in 2010 that centered on the elections for governor in Mexico were incorrect. This has strengthened many of the misgivings that already existed about public opinion polls, which bring into question the effectiveness of the methods used to measure preferences or the honesty of the pollsters. For 2010, there is no systematic evidence that suggests that the discrepancies between the polls and the election results are due to election effects. On the other hand, there is evidence of polling house effects. In this note, I explore several hypotheses based on the available data from pre-election polls published for the twelve entities with elections for governors during 2010. I also mention other determinants that, because of the absence of any public data, cannot be substantiated at this time and thus must be left for future research

Abstract:

When the president in a democracy does not compete for reelection, the electoral connection does not operate so directly. In these cases it is not obvious how the evaluation of the incumbent transfers to the vote for his party's candidate, nor how electoral campaigns modify that relationship. The argument of this paper is that the evaluation of the executive, proxied by presidential approval, works as an information shortcut about the future performance of the candidate; as campaigns generate information about the contenders, approval of the outgoing president matters relatively less in the vote decision. These hypotheses are tested econometrically for the 2006 Mexican presidential election using four national surveys conducted between October 2005 and June 2006

Abstract:

When presidents do not run for reelection in democracies, the electoral connection is not straightforward. It is not obvious in this case how voters' evaluations of leaders get transformed into votes for political parties, nor how political campaigns affect such a relationship. The argument is that voters' evaluations of presidents, which can be measured by presidential approval ratings, work as an information shortcut about the future performance of the candidate; as campaigns generate information about contenders, approval of the retiring president matters relatively less in the voting decision. Based on econometric analysis and four nationwide polls conducted from October 2005 to June 2006, the article tests these hypotheses using the case of the 2006 Mexican presidential election

Abstract:

This article accounts for the role that partisan divisions played in shaping variation in mass preferences for market-oriented policies in Latin America during the 1990s. Most of the existing studies on attitudes toward market reforms have focused on issues such as the timing of reforms, the presence of economic crises, and how economic performance shaped citizens' preferences. Fewer studies have investigated whether partisan cleavages translated into divergent preferences toward market reforms. Were there systematic differences between left- and right-wing voters in their preferences toward market reforms? Did left-wing voters oppose these policies and right-wing voters favor them? Which of these structural transformations, state retrenchment or trade liberalization, witnessed greater mass polarization along partisan lines? This article answers these questions with the use of a mass survey on public opinion about market reforms conducted by Mori International in eleven Latin American countries in 1998

Abstract:

We review the main political events of Mexico in 2006. The most closely contested presidential election in Mexican history pushed far into the background the other elections held during the year (to renew both chambers of the federal Congress, 6 governorships, 12 state congresses, and 566 municipal governments in 12 states) and kept many public policy issues off the agenda. And although the defeated candidate's refusal to recognize his adversary's victory would seem to reflect poorly on the nascent Mexican democracy, there are many more indicators of a democracy that, while perfectible, works. Since this is the first installment of what will be an annual exercise, we have included in the review, because of its relevance, information prior to January 1, 2006

Abstract:

We review the principal political events of Mexico in 2006. The most competitive presidential election in Mexican history left the other elections that took place during the year (to renew both chambers of the federal Congress, 6 governorships, 12 state congresses, and 566 county councils in 12 states) with little coverage and removed many policy issues from the agenda. Even if the failure of the defeated candidate to concede victory to his adversary in a nasty post-election dispute would appear to leave the nascent Mexican democracy in a bad condition, there are many more indicators of a healthy and working democratic system. Because this is the first review in what will become a yearly exercise, we include some relevant information previous to January 1st, 2006

Abstract:

Outsourcing the management of ninety-three randomly-selected government primary schools in Liberia to eight private operators led to learning gains of 0.18σ after one year, but these effects plateaued in subsequent years (reaching 0.2σ after three years). Beyond learning gains, the programme reduced corporal punishment (by 4.6 percentage points from a base of 51%), but increased dropout (by 3.3 percentage points from a base of 15%) and failed to reduce sexual abuse. Despite facing similar contracts and settings, some providers produced uniformly positive results, while others presented trade-offs between learning gains, access to education, child safety, and financial sustainability

Abstract:

We use a large-scale randomized experiment (across 1,198 public primary schools in Mexico) to study the impact of providing schools directly with high-quality managerial training by professional trainers versus providing it through a cascade-style “train the trainer” model. The training focused on improving principals' capacities to collect and use data to monitor students' basic numeracy and literacy skills and to provide feedback to teachers on their instruction and pedagogical practices. After two years, the direct training improved schools' managerial capacity by 0.13σ (p-value 0.018) relative to “train the trainer” schools, but had no meaningful impact on student test scores (we can rule out an effect greater than 0.08σ at the 95% level)

Abstract:

In 2016, the Liberian government delegated management of 93 randomly selected public schools to private providers. Providers received US$50 per pupil, on top of US$50 per pupil annual expenditure in control schools. After one academic year, students in outsourced schools scored 0.18σ higher in English and mathematics. We do not find heterogeneity in learning gains or enrollment by student characteristics, but there is significant heterogeneity across providers. While outsourcing appears to be a cost-effective way to use new resources to improve test scores, some providers engaged in unforeseen and potentially harmful behavior, complicating any assessment of welfare gains

Abstract:

Almost a third of the world's forest area is under communal management. In principle, this arrangement could lead to a "tragedy of the commons" and therefore more deforestation. But monitoring outsiders' deforestation may be easier if the owner is a community rather than an individual. We study the effect of communal titling on deforestation in Colombia using a difference-in-discontinuities strategy that compares areas just outside and inside a title, before and after titling. We find that deforestation decreased in communal areas after titling. Interestingly, we find evidence of positive spillovers of reduced deforestation in nearby areas

Abstract:

In this paper, the QFD methodology is adapted to accommodate the assessment and selection of supply chain configurations. This new methodology incorporates a series of factors including supply chain needs, technical specifications, relationships among specifications, and potential synergies. Consequently, the original contribution of this methodology is the selection of a supply chain scenario that balances the desire to fulfill a series of needs with the ability of a supply chain to deliver according to specification. This balance between need fulfilment and supply chain capability is particularly difficult for early-stage companies to strike. Accordingly, this methodology was applied to the analysis and selection of four different supply chain scenarios considered for the production and final delivery of a large number of customer orders, placed for the Tesla Roadster vehicle, which needed to be fulfilled in record time. In all cases, scenarios were placed in the context of the company's mission so that business goals and operating decisions were aligned. The application was successful, and a specific supply chain design scenario was selected. The proposed methodology could serve as a roadmap for designing incipient supply chains
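
A minimal numeric sketch of the QFD-style scoring such an adaptation implies: supply chain needs receive importance weights, a relationship matrix links needs to technical specifications, and each candidate configuration is rated against the specifications. All numbers below are made up for illustration:

import numpy as np

# Assumed importance weights for four supply chain needs
# (e.g., speed, cost, quality, flexibility).
need_weights = np.array([0.4, 0.3, 0.2, 0.1])

# Relationship matrix linking needs (rows) to 3 technical specs (columns),
# on the conventional 1/3/9 QFD scale.
R = np.array([
    [9, 3, 1],
    [3, 9, 3],
    [1, 3, 9],
    [3, 1, 3],
])

spec_importance = need_weights @ R               # derived spec priorities

# How well each of 4 candidate supply chain scenarios delivers each spec (0-5).
scenario_scores = np.array([
    [5, 2, 3],
    [3, 4, 2],
    [2, 5, 4],
    [4, 3, 5],
])

totals = scenario_scores @ spec_importance
print("scenario totals:", totals, "-> pick scenario", int(np.argmax(totals)) + 1)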

Abstract:

Governments across the globe have adopted different programs to deal with increasing amounts of municipal solid waste (MSW), including recycling, waste prevention programs, and waste-to-energy (WTE) technologies such as gasification. Deciding on a specific WTE technology involves understanding a complex blend of factors including location, haul distance, regulations, capital costs, feedstock availability, tipping fees, taxes, electricity price, and incentives, which do not necessarily behave linearly. Therefore, the business feasibility of gasification technologies is still unclear. This paper develops a model that combines the aforementioned factors in the context of a potential gasification plant in the USA. The model concluded that location is the most sensitive factor in most cases. The authors include a geographical analysis which may be used, in combination with the model, to decide on regional energy options and new business opportunities

Abstract:

Industrial energy is the largest segment of consumption in the United States, accounting for ~33% of total energy use in the country. Production output keeps rising to match increasing consumer demand. Companies are constantly searching for ways to increase productivity and save costs. Energy efficiency projects are a response to this challenge. This chapter provides an introduction to energy efficiency in the manufacturing sector. Drivers of and barriers to the adoption of projects are presented and serve as a reference for a project selection framework proposed by the authors. The breadth of potential projects is reduced to four major energy efficiency opportunities: lighting, efficient HVAC systems, improved motor systems, and building envelope projects. A final section provides context on the stakeholders involved in these projects and relevant business models. Most of these models have proved successful and are used as reference criteria for the adoption of energy efficiency projects. In the long run, energy efficiency initiatives can only succeed when adopted by operations executives and integrated into a company's culture

Abstract:

Over 90% of Fortune 75 companies publish corporate social responsibility (CSR) reports to highlight their efforts to tackle sustainability challenges, and while a vast majority of these reports cite improved waste management initiatives, less than a quarter of these firms derive profit from good solid waste management practices or circular economy strategies. Unfortunately, many firms instead implement waste management purely to enhance a corporate image. This article both outlines a framework to guide companies on their journey from a linear consumption pattern to a holistic, circular approach and illustrates that waste management practices can be cost-saving and revenue-generating opportunities. Collectively defined as circular economy initiatives, these opportunities create new value streams from materials previously discarded. Circular economy principles extend beyond traditional waste management enhancement practices. These principles emphasize improved design and production practices to eliminate the traditional concept of waste and repurposing resources from products at the end of their life cycle back as raw material inputs to create new products. Though there are surface challenges to adopting circular economy principles, this article presents a framework for all companies to start on a path to maximizing the value of corporate waste streams

Abstract:

This case illustrates the key factors that Tim Travis, the maintenance lead for Legrand North American, must consider in a capital investment decision involving the implementation of a new lighting system. Legrand, a France-based multinational company that sells electrical wiring accessories worldwide, is considering whether it should implement its own internal energy efficiency program at its Fort Mills, South Carolina, warehouse to achieve cost and environmental savings on electrical usage and to meet the new standards established under the U.S. Department of Energy's Better Buildings Challenge. The case examines the key components of the capital investment decision, how tax policies promoting energy efficiency can influence the decision, and how to measure the return on capital investment through three different methodologies: the payback method, internal rate of return, and net present value. A breakdown of a cost/savings analysis on energy usage is also examined
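
The three appraisal methods named in the case can be computed directly. Here is a minimal sketch in plain Python with invented cash flows; the case's actual figures are not reproduced.

```python
# Payback, IRR and NPV for a conventional investment (cash flows invented).
cashflows = [-100000, 30000, 30000, 30000, 30000, 30000]  # year 0..5

def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=-0.99, hi=1.0, tol=1e-8):
    # bisection on NPV(rate) = 0; NPV is decreasing in the rate here
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cfs) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cfs):
    cum = 0.0
    for t, cf in enumerate(cfs):
        cum += cf
        if cum >= 0:
            return t
    return None

print(f"NPV @ 8%: {npv(0.08, cashflows):,.0f}")
print(f"IRR: {irr(cashflows):.1%}")
print(f"Payback: {payback_years(cashflows)} years")
```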

Abstract:

This case study focuses on a corporate sustainability initiative that aims to strengthen a photovoltaic business and to understand consumer preferences. The case analyzes how a photovoltaic panel company decides to look beyond its manufacturing facility, understand the potential environmental impact of its supply chain, and improve its sustainability performance. It illustrates how renewable energy companies (such as a solar photovoltaic company) can revise their operations, lower their carbon footprint, and incorporate sustainability criteria into their strategy. Students are asked to evaluate the carbon footprint of Han Solar's manufacturing and distribution operations and to decide on further actions considering other factors such as cost and delivery time

Abstract:

Background, aim, and scope. This paper presents a waste management analysis of the packaging systems for soft drinks in Mexico, with emphasis on polyethylene terephthalate (PET) containers. The work is part of a project sponsored by a consortium of Mexican industries that participate in the PET market, such as resin producers, bottle manufacturers, soft drink producers, distributors, and plastic recyclers. Two different life cycle assessments (LCAs) were elaborated to provide insight into waste management scenarios and waste product comparisons, respectively. The first LCA described the actual PET market and PET waste treatment in Mexico. In the second LCA, three systems were analyzed: PET bottles, aluminum cans, and glass bottles. These results are currently used in Mexico as a basis for environmental policy. Materials and methods. PET bottles' share of the market has increased substantially in recent years, and this growth is forecasted to continue, raising concerns about the environmental implications of PET usage. To analyze the waste management of PET bottles in Mexico, an LCA and a series of sensitivity analyses were conducted to understand: (1) the effect of different collecting distances on environmental impacts, (2) the effect of different recycling rates on environmental impacts, (3) the effect of different collecting rates on environmental impacts, and (4) the effect of different collecting rates, with their associated distances, on environmental impacts. Results and conclusions. An optimal degree of PET waste collection, at which environmental impact is minimized, was identified by considering different collecting rates and distances. Beyond this point, collecting higher amounts of waste requires traveling longer distances and an excessive increase in the environmental resources consumed

Abstract:

The purpose of this paper is to illustrate a business process modelling approach based on: the incorporation of the best practices in the industry; higher reliability standards for operation; real-time settlement; improved security; and transparency in the process and information handling

Abstract:

Background. The analysis of a wastewater treatment technology under an expanded-boundaries system, which includes both the technology and the inputs required for its operation, quantifies the overall environmental impact that may result from the treatment of a wastewater stream. This is particularly useful for environmental policy makers, since an expanded-boundaries system tends to provide a holistic view. This view can be greatly enriched with the use of process engineering tools, such as mathematical process modelling, process design, performance assessment and cost-optimised models. Main features. The traditional approach used to assess waste treatment technologies is contrasted with a life cycle analysis (LCA) approach. The optimal design of a granular activated carbon (GAC) adsorption process is used as a model system to demonstrate the advantages of LCA approaches over traditional approaches. Further sections of the paper describe a mathematical framework for the assessment of technologies, design considerations applied in the cost-optimised carbon adsorption model, and the use of LCA techniques to perform an inventory of all emissions associated with the process system and some of its environmental impacts. Results. Economic and environmental considerations regarding the optimum process design are introduced as a basis for decisions on the selection and operating conditions of wastewater treatment technologies. Moreover, the use of LCA revealed that the environmental burden associated with the wastewater treatment may produce a higher environmental impact than that caused by the untreated discharges. Conclusion. The paper highlights the strong advantages that environmental policy makers may gain by combining LCA and process engineering tools. Furthermore, this approach can be incorporated into other existing treatment processes or used by process designers

Abstract:

A traditional approach used to evaluate clean-up technologies, in which only plant discharges are considered, is contrasted with a sustainability assessment. The sustainability of any technology can be assessed from three complementary points of view: economic, environmental and social. As such, this paper presents a comprehensive scheme that can be applied to any process, product or technology. In addition, the use of chemical engineering tools such as process design, process modelling and simulation represents a baseline for the sustainability assessment of technologies, as presented in a case study. The optimal granular activated carbon adsorption process design is used as a model system to demonstrate the advantages of sustainability approaches over traditional approaches. A mathematical model that describes the performance of the process under various design options was developed. This model includes cost equations that were used to estimate the total cost of each alternative under different plant designs and two waste scenarios (a benzene and a 1,2-dichloroethane discharge). Life Cycle Assessment tools were applied to generate an inventory of emissions and the impact assessment, measured as Photochemical Ozone Creation (POC) and Global Warming Potential (GWP). The model examined trade-offs between pollutants discharged into the atmosphere and pollution associated with the adoption and operation of the technology. One of the main results from the technology assessment is that the environmental impact, measured in terms of GWP, proved to be higher for the technology operation than for the untreated waste streams themselves, suggesting that the streams should not be treated. However, the social impact evaluation (measured as risk assessment) conducted as part of this work proved that it was morally and legally mandatory to treat them due to the adverse effects on human health that they may represent

Abstract:

Environmental impacts of industrial production processes are usually estimated by considering the emissions leaving the process. These emissions are often reduced using abatement processes, such as wastewater treatment technologies, in the belief that reducing emissions will reduce the environmental impact. Typical legislation focuses on reducing discharge levels, without considering the impact on the environment of the additional inputs required by the abatement process to achieve this reduction. This leads to the possibility that some waste streams may be over-treated. In other words, industry might be devoting increased resources to reducing discharges while at the same time worsening the environment. This paper presents a framework for the analysis of wastewater treatment technologies from an economic and environmental point of view. The work examines trade-offs in abatement processes between higher inputs (energy consumption and raw material) and lower discharge quantities (pollutant flow). As a result, an optimal degree of pollution abatement (ODPA), at which environmental impact is minimized, is identified. This value could act as a guideline to legislators who are setting discharge limits and to chemical engineers with waste discharge responsibilities. Case studies on two different abatement technologies, steam stripping and pervaporation, are presented to illustrate this framework.
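
The ODPA logic can be sketched numerically: total impact is the sum of a treatment-impact term that grows with the abatement level and a discharge-impact term that shrinks with it, and the optimum sits where the trade-off balances. The functional forms and constants below are invented for illustration only.

```python
# Sketch of the optimal degree of pollution abatement (ODPA) idea with
# invented functional forms: treatment impact rises with abatement level x,
# while the impact of the residual discharge falls.
from scipy.optimize import minimize_scalar

def total_impact(x, k_treat=2.0, k_disc=5.0):
    treatment = k_treat * x / (1.0 - x + 1e-9)   # inputs grow steeply near 100%
    discharge = k_disc * (1.0 - x)               # residual pollutant impact
    return treatment + discharge

res = minimize_scalar(total_impact, bounds=(0.0, 0.999), method="bounded")
print(f"ODPA ~ {res.x:.1%} abatement, minimum impact {res.fun:.2f}")
```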

Resumen:

Se presenta la metodología y herramientas computacionales utilizadas para la optimización de una cadena de suministro internacional en la industria automotriz. Una versión modificada de la metodología para el desarrollo de un producto es empleada mediante el Quality Function Deployment (QFD) y la casa de la calidad, utilizados para mejorar el diseño y proceso de selección de los diferentes actores dentro de una cadena de suministro bajo un corto periodo de tiempo. Se propone una versión modificada del QFD para el análisis de distintos escenarios de cadena de suministro según las necesidades del mercado y la empresa así como las especificaciones de los proveedores y la cadena de suministro. La información obtenida de este análisis muestra la correlación entre necesidades y especificaciones así como el punto en el que ambas son optimizadas. El resultado muestra un refinamiento al proceso de selección de la cadena de suministro tomando en cuenta la estrategia corporativa y misión de la empresa

Abstract:

The methodology and computational tools deployed for the optimization of an international supply chain in the automotive industry are presented. A modified version of the product development methodology is used through Quality Function Deployment (QFD) and the house of quality matrix, both employed to improve the design and selection process of the different actors within the supply chain under a short time frame. A modified version of the QFD is proposed to analyze different supply chain scenarios according to the market and company needs, as well as the supplier and supply chain specifications. The data obtained through this analysis show the correlation between needs and specifications, as well as the point at which both are optimized. The result is a refined supply chain selection process that takes into account the corporate strategy and the company's mission

Abstract:

The present research analyses the environmental implications of the manufacturing, production and shipping of similar physical goods to the California market. In terms of corporate social responsibility, we address the implications of designing sustainable supply chains. The project focuses on single-crystalline and polycrystalline silicon-based panels. The analysis takes into account mass and energy flows over the whole production process, from silica extraction to final panel assembly. Information from the International Energy Agency was gathered to obtain the respective energy mixes of Mexico, China and Germany. The functional unit used in this project is 1 m2 of solar panel; the overall energy requirement is 2298 MJ/m2 (assuming wafers of 0.2 mm thickness). A multidisciplinary analysis such as the one presented can clearly aid decision-makers with a full perspective of complex supply chains

Abstract:

While there are no simple or immediate solutions to global warming, energy consumption requirements are changing to achieve sustainable development, mainly in the production of electricity. In Mexico, the industrial sector registers the highest consumption, but in the residential sector, the second-highest consumer of energy, more people might come to appreciate the relevance of taking care of the planet. Now that different options might provide energy to Mexican homes, there is a tangible opportunity to develop residential energy management and efficiency models, as well as models of energy generation, transmission and consumption [mainly solar panels, self-generation societies, or buying power from the nearest generation plant]. Home automation may provide a means to achieve this. Careful planning is fundamental to creating a clear concept of a solution; therefore, an attainable conceptual design of home-automation solutions for energy conservation may emerge from this proposal

Abstract:

The present paper is concerned with an integrative approach relating to (i) the use of modern CAD and CAE tools for early, effective product design; (ii) an environmental management framework that evaluates the environmental performance of the product along its whole life cycle (raw materials selection, materials transformation, production processes, transportation, use, re-use, recycling, retirement/decommissioning); and (iii) a logistics assessment framework that evaluates the impact along the supply chain of modifications in the early stages of design. The main result is a conceptual, research-based framework for multidisciplinary life cycle management. The presented research provides a high-end technical solution to determine the best product alternative in terms of market expectations, product-process specifications, and long-term economic and environmental impact, that is, along the whole life of a product. The integration of this framework is oriented towards the rapid generation and evaluation of innovative products

Abstract:

In this work we tackle a production planning problem for a transnational company partially dedicated to the manufacture of soaps. The production environment consists of several lines in parallel, each a flowshop in which the optimization of resource use is desired. Two formulations of the optimization problem are introduced and evaluated through several heuristics, in order to improve production times and to evaluate the possibility of making the lines more flexible, obtaining significant savings for the company
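
The abstract does not name the specific heuristics evaluated; as one representative example for this class of problem, the following is a minimal NEH-style constructive heuristic for the permutation flowshop makespan, with invented processing times.

```python
# Minimal NEH-style constructive heuristic for a permutation flowshop
# (a standard heuristic, shown for illustration; processing times invented).
def makespan(seq, p):                    # p[job][machine]
    m = len(p[0])
    c = [0.0] * m                        # completion time per machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))  # by total work
    seq = []
    for j in jobs:
        # try j at every position, keep the best partial sequence
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = [[5, 3, 4], [2, 6, 3], [4, 4, 2], [3, 2, 5]]   # 4 jobs x 3 machines
order = neh(p)
print(order, makespan(order, p))
```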

Abstract:

A methodology is presented for obtaining optimal forecasts with exponential smoothing (ES) techniques when additional information, other than the historical record of a time series, is available. Such information is usually given as linear restrictions on the future values of the series and may come from: (i) expert judgments, (ii) alternative forecasting methods or (iii) scenarios to be portrayed. Appropriate usage of the additional information improves the forecasts' accuracy and precision. Here we provide closed expressions for the restricted forecasts obtained with the most frequently employed ES methods and emphasize the potential usefulness of the proposed methodology in practice
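
The paper derives closed-form expressions for specific ES methods; as a generic illustration of the idea, the sketch below produces simple exponential smoothing forecasts and then projects them, in the least-squares sense, onto a linear restriction A f = b (all numbers invented).

```python
import numpy as np

# Sketch: simple exponential smoothing forecasts adjusted to satisfy a
# linear restriction A @ f = b by a least-squares projection.
def ses_forecast(y, alpha, h):
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return np.full(h, level)             # flat SES forecast

y = np.array([102, 98, 105, 110, 107, 111], dtype=float)
f = ses_forecast(y, alpha=0.3, h=3)

# Additional information: the three future values must sum to 330
A = np.ones((1, 3))
b = np.array([330.0])
# Closest point to f (Euclidean distance) in the set {f*: A f* = b}
f_restricted = f + A.T @ np.linalg.solve(A @ A.T, b - A @ f)
print(f, f_restricted, f_restricted.sum())
```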

Abstract:

Local politicians are often expected to mobilize voters on behalf of copartisan candidates for national office. Yet this requirement is difficult to enforce because the effort of local politicians cannot be easily monitored and the promise of rewards in exchange for help is not fully credible. Using a formal model, we show that the incentives of local politicians to mobilize voters on behalf of their party depend on the proportion of copartisan officials in a district. Having many copartisan officials means that the party is more likely to capture the district, but the effort of each local politician is less likely either to be noticed by higher-level officials or to make a difference to the election outcome, thus discouraging lower-level officials from exerting effort. We validate these claims with data from federal elections in Mexico between 2000 and 2012. In line with the argument, the results show that political parties fail to draw great mobilization advantages from simultaneously controlling multiple offices

Resumen:

La filosofía se ha desarrollado, a lo largo del tiempo, bajo un intercambio constante con otras áreas de la cultura. No obstante, en el mundo moderno, donde el dinero lo es todo, dicha disciplina debe reflexionar sobre tal realidad. Por ello, en este trabajo se propone la consideración de una filosofía del dinero en cuanto a su impacto en la vida del hombre. Se retoma y analiza el pensamiento de Georg Simmel, quien en 1900 publicó la Filosofía del dinero en donde plasma que el dinero revela claramente la estructura de todo ser y lo coloca en el centro de la reflexión filosófica

Abstract:

Over time, philosophy has been developed under a constant exchange with other areas of culture. However, in the modern world, where money signifies everything, such discipline must reflect upon that reality. This paper proposes the consideration of a philosophy of money in terms of its impact on the life of the human being. The thought of Georg Simmel, who in 1900 published the Philosophy of Money, is taken up and analyzed. In this book he showed that money clearly reveals the structure of all being and places it at the center of philosophical reflection

Abstract:

Global development goals increasingly rely on country-specific estimates for benchmarking a nation's progress. To meet this need, the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2016 estimated global, regional, national, and, for selected locations, subnational cause-specific mortality beginning in the year 1980. Here we report an update to that study, making use of newly available data and improved methods. GBD 2017 provides a comprehensive assessment of cause-specific mortality for 282 causes in 195 countries and territories from 1980 to 2017

Abstract:

Covid-19 arrived in Latin America in a context of ideological polarisation among national governments and internal socio-political turmoil, which boded ill for the region's numerous and overlapping regional groupings. However, in a manner suggested by functionalist theories, some of them "found refuge" in technical/expert cooperation and side-stepped political paralysis with different degrees of success. With institutions and previous experience in health issues, CARICOM and SICA played a significant role in the management of the pandemic, while CELAC revived itself by promoting technical cooperation. In contrast, MERCOSUR failed to overcome the political rift among its members, so technical cooperation occurred yet remained limited

Abstract:

In settings with inefficient public provision, expansions in low-cost private-market healthcare delivery may be welfare-improving by increasing access, but may sacrifice quality. I study the introduction of retail clinics at private pharmacies in Mexico. I find that entry led to large declines in public-sector emergency room visits and a small but significant reduction in public clinic visits for relatively mild respiratory infections. I also find a significant increase in public clinic visits for chronic conditions and a slight decline in emergency room visits, consistent with better disease management. However, I estimate a sizable association between retail clinics and a shift toward stronger antibiotics in private-market sales. Hence, although retail clinics improve access to healthcare, they may be overselling antibiotics to their patients

Abstract:

For several years, the digitalization of production systems and the usage of simulation tools have been growing, and Digital Twins have become an important tool to improve and optimize production. In this paper, we present the implementation of a digital twin of an educational production cell. Combining different simulation tools, the process times are analyzed to find the best possible production sequence. The Digital Twin includes the machine tools, a six-degree-of-freedom industrial robot, and conveyors

Abstract:

With the increasing computational power of modern small computers, components in manufacturing have moved from simple parts to intelligent, smart systems that identify their working environment and are able to react to different situations. This interaction is possible thanks to the concept of Cyber-Physical Production Systems (CPPS), one of the key concepts in Industry 4.0. Artificial vision systems can enrich manufacturing systems by providing the ability to detect components in a working area. Based on Machine Learning and Deep Learning concepts, these systems are able to identify components, such as raw material in a storage box, and inform a robot system about the next piece to take. The learning process, however, requires a large amount of information to create a stable and functional vision system. In order to reduce the time needed to create the example pictures for the training and testing procedure, an automated procedure is developed and presented in this paper. This procedure allows the creation of teaching pictures of different materials and their positions. These pictures are used as training and testing input data for an Artificial Neural Network (ANN) in order to identify each piece and its position
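
A minimal sketch of such an automated picture-generation step, assuming hypothetical file names: paste a part image onto a background at a random position and record the position as the label.

```python
# Sketch of automated training-image generation (file names are invented):
# paste a part image onto a background at a random position and log the label.
import json
import random
from PIL import Image

def generate(background_path, part_path, n, out_prefix):
    labels = []
    for i in range(n):
        bg = Image.open(background_path).convert("RGB")
        part = Image.open(part_path).convert("RGBA")
        x = random.randint(0, bg.width - part.width)
        y = random.randint(0, bg.height - part.height)
        bg.paste(part, (x, y), part)      # alpha-aware paste
        name = f"{out_prefix}_{i:04d}.png"
        bg.save(name)
        labels.append({"image": name, "x": x, "y": y})
    with open(f"{out_prefix}_labels.json", "w") as fh:
        json.dump(labels, fh)

# generate("storage_box.png", "raw_part.png", n=500, out_prefix="train")
```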

Abstract:

The presented paper describes the reconstruction of process force signals using control-internal data, control parameterisation and mechanical compliance characteristics. The analysis considers the controller parameterisation of the feed drive axes, their control frequency response and parasitic frequency response, together with the static and dynamic compliance of the machine axes. The analysis is used to reconstruct the force and torque values at the tool (process). To reconstruct these values in real time, the transfer behaviour is approximated by the "Vector Fitting" method presented in [Gustavsen, B.; Semlyen, A.]

Abstract:

Research has shown that family firms differ from their non-family counterparts in terms of strategic behavior. Socio-emotional wealth (SEW) is a homegrown theory in this context explaining differences in decision-making by acknowledging the unique connection between a family and their business. This paper contributes to the ongoing research related to the theory of socio-emotional wealth by investigating individual, family and family business values as antecedents and underlying motivators for SEW behavior, influencing strategic decision-making in family firms, directly and as a mediator via SEW. A qualitative study was performed to analyze this connection and the effects on strategic decisions made in family firms. The outcomes show that individual and collective family values are the main drivers of SEW behavior, changing over time and leading to a different focus on the dimensions of SEW, which is then represented in the strategic decisions made in the family business

Resumen:

Se presentan algunos rasgos biográficos de los autores a estudiar y puntos sobresalientes de su aportación a la filosofía del derecho. Ambos se destacaron por el planteamiento integral del derecho como sistema racional de normas sociales de conducta, que regula con autoridad las relaciones del hombre con sus semejantes y soluciona los problemas planteados por la realidad histórica. De igual forma, por la clarificación de la teoría del derecho y sus consecuencias prácticas en la estructuración del Departamento de Derecho de la Universidad Iberoamericana (Miguel Villoro). Analizamos el analogado principal del concepto de derecho y postulamos que éste se da en la facultad moral de la persona sobre lo suyo. Esta concepción personalista del derecho puede aplicarse en la realización de la persona y la sociedad (Efraín González Morfín)

Abstract:

Some biographical features of the authors under study and salient points of their contributions to the Philosophy of Law are presented. Both stood out for their integral approach to law as a rational system of social norms of conduct, which authoritatively regulates man's relations with his fellows and solves the problems posed by historical reality; likewise, for the clarification of the theory of law and its practical consequences in the structuring of the Department of Law of the Universidad Iberoamericana (Miguel Villoro). We analyze the principal analogate of the concept of law and postulate that it is found in the moral faculty of the person over what is his own. This personalistic conception of law can be applied to the fulfilment of the person and society (Efraín González Morfín)

Resumen:

Mediante el análisis cronológico de sus obras, se aborda el pensamiento de Zygmunt Bauman con relación al tema de las migraciones en la modernidad líquida. Según el autor, el proceso de modernización ha transformado la dinámica social, ya que ha adquirido la propiedad de producir "vidas desperdiciadas". Esta "población excedente" es considerada como un peligro para algunos sectores de la sociedad, por lo que el gobierno tiene la obligación de protegerla

Abstract:

Through a chronological analysis of his works, the thought of Zygmunt Bauman is discussed in relation to the subject of migration in liquid modernity. According to the author, the process of modernization has transformed social dynamics, since it has acquired the property of producing "wasted lives". This "surplus population" is considered a danger for some sectors of society, so the government has the obligation to protect it

Resumen:

En el trabajo se profundiza la doctrina de Santo Tomás de Aquino relativa al hombre y a la sociedad, destacando la importancia de la justicia en la estructura social. La justicia, empero, implica no tan solo el cumplir con aquello que es estrictamente exigible, sino que también se requieren otras perfecciones para crear una sociedad auténticamente humana: tales son las virtudes que aquí se estudian

Abstract:

In this article, we focus in depth on Saint Thomas Aquinas' doctrine regarding man and society, highlighting the significance of justice in the social structure. Justice, however, implies not only fulfilling what is strictly required; other perfections are also needed in order to create an authentically human society: those virtues are studied in this article

Abstract:

Violence presents itself as a constant and disquieting phenomenon. As far as we know, it has existed since the most remote times of humanity and appears as an easy temptation in the life of man in society. Its reality has given rise to various studies, which have considered it from the standpoint of ethnology (comparing the phenomenon of violence in the human world and in animal behaviour), of political philosophy, of law, of philosophy, and of religion itself. The following reflections attempt to establish some considerations from the standpoint of political philosophy, anthropology, morality, law, and Christian social doctrine

Resumen:

Se analiza el fenómeno de la violencia desde el punto de vista de la filosofía política, la antropología, el derecho constitucional e internacional y la doctrina social cristiana. Se abordan las diversas manifestaciones de la pobreza económica en la llamada "modernidad líquida" y se llega a conclusiones relativas a la violencia y la pobreza en la sociedad contemporánea

Abstract:

In this article, we analyze the phenomenon of violence from various points of view, namely political philosophy, anthropology, constitutional and international law, and Christian social doctrine. We point out the different manifestations of economic poverty in the so-called "liquid modernity" and offer some conclusions regarding both phenomena in contemporary society

Resumen:

Las "aventuras jesuíticas" de fines del siglo XVII: Francisco Xavier, Pedro Páez, Antonio Passevino, las 'reducciones' del Paraguay y Mateo Ricci. India, Japón, Etiopía, Rusia, Sudamérica y China, lugares de misión a los que los jesuitas llevaron el mensaje evangélico, pero no sólo eso

Abstract:

This article details the "Jesuit adventures" of Francisco Xavier, Pedro Páez, Antonio Passevino and Mateo Ricci, as well as the 'reductions' of Paraguay, at the end of the seventeenth century. These Jesuits not only brought the evangelical message to India, Japan, Ethiopia, Russia, South America, and China, but much more

Resumen:

Las migraciones constituyen uno de los fenómenos más importantes del mundo contemporáneo. Pueden tener carácter interno o internacional, al igual que caracterizarse por su condición voluntaria o forzada. De los movimientos forzados de personas, se analiza el fenómeno de los migrantes económicos, de los refugiados, del tráfico y trata de personas, que desembocan en formas actuales de esclavitud. Se señalan los intentos de respuesta jurídica internacional que se dan a estas últimas preocupantes situaciones

Abstract:

Migrations are among the most important phenomena of the contemporary world. They may be internal or international, as well as voluntary or forced in nature. Regarding forced movements of people, we analyze the situation of economic migrants, refugees, and the smuggling and trafficking of persons, which lead to contemporary forms of slavery. We point out the international legal attempts to address these troubling situations

Abstract:

This case focuses on two Mexican entrepreneurs whose secondhand product retail company has reached a point where they must decide whether to sell an equity stake to a venture capital fund (VCF) or request an equity contribution from existing shareholders. The VCF would ensure opening 40 stores in 2 years and a dramatic boost in the business, whereas contributions from existing shareholders would slow the company's growth but would avoid equity dilution. Is this the right moment for venture capital funding, or is it better to wait to improve the company's financial performance? What are the risks of growing slowly?

Abstract:

This paper introduces a variable rate of capital utilization and depreciation into a modified Ramsey-type neoclassical growth model via the well-known concept of pure user cost. The optimal utilization rate is found to be determined by the opportunity cost of holding capital or the net real interest rate. As a consequence, this rate may vary in the short run, so total services of capital become a control rather than a state variable. Furthermore, the introduction of a variable utilization rate yields a slower rate of convergence toward the steady state, inducing more persistence in the transitional dynamics. To illustrate how the endogenous choice of utilization acts on the system, some simulations are carried out, including the transition period when there is a temporary fall in the exogenous real interest rate
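
As an illustrative sketch of the mechanism, with notation assumed here rather than taken from the paper: let the depreciation rate δ(u) be increasing and convex in the utilization rate u, and let r be the net real interest rate. The pure-user-cost logic then equates the marginal depreciation from extra use with the opportunity cost of holding capital:

```latex
% Assumed notation, for illustration only (not the paper's exact setup).
\[
  \delta'(u^{*}) \;=\; r ,
\]
% so a temporary fall in the exogenous real interest rate r lowers the chosen
% utilization rate u*, making total capital services a short-run control variable.
```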

Abstract:

Subjective orderings on random variables are typically represented by an expectation functional, where a subjective probability measure is built using either Savage's original construction or some analogous model such as Anscombe and Aumann's. This amounts to assigning a projection functional onto the one-dimensional space of constants. Using this notion, we generalize the representation to partial preorders that can be represented by a conditional expectation, or in geometric terms, by a projection onto a probability space. Some interesting conclusions about the behavioral implications are drawn and further extensions are proposed

Abstract:

Given a quantum logic (L, L), a measure of noncommutativity for the elements of L was introduced by Roman and Rumbos. For the special case when L is the lattice of closed subspaces of a Hilbert space, the noncommutativity between two atoms of L was related to the transition probability between their corresponding pure states. Here we generalize this result to the case where one of the elements of L is not necessarily an atom

Abstract:

An adequate locale is found to define the Lambek sheaf for symmetric modules. We use the L-set approach (where L is a locale) to sheaf representation, so as to give an alternative description of this particular sheaf

Abstract:

For any ring R (associative with 1) we associate a space X of prime torsion theories endowed with Golan's SBO-topology. A separated presheaf L(-,M) on X is then constructed for any right R-module M(R), and a sufficient condition on M is given such that L(-,M) is actually a sheaf. The sheaf space (E, p, X) determined by L(-,M) represents M in the following sense: M is isomorphic to the module of continuous global sections of p. These results are applied to the right R-module R(R), and it is seen that semiprime rings satisfy the required condition for L(-,R) to be a sheaf. Among semiprime rings two classes are singled out, fully symmetric semiprime and right noetherian semiprime rings; these two kinds of rings have the desirable property of yielding "nice" stalks for the above sheaf

Abstract:

In this study we analyse the relationship between Mexican students’ enrolment in the International Baccalaureate (IB) Diploma Programme (DP) and their college preparedness using a case-study methodology. We found that from the Mexican schools that offer the IB DP, most IB students are fairly successful in their college applications, such that the majority enrols at among the most well-regarded post-secondary institutions in Mexico. The possibility that IB DP grades and/or examination records might help boost students’ college admissions options does not seem to be a primary motivating factor for students’ IB DP enrolment. Rather, we found that students enrol in the IB DP because they think it will help prepare them to successfully handle college-level work. Students and educators believe that various aspects of the IB DP prepare them for college-level work, including the Theory of Knowledge course, the Extended Essay and the Creativity, Action and Service programme

Abstract:

Achieving a fair distribution of resources is one of the goals of fiscal policy. To this end, governments often transfer tax resources from richer to more marginalized areas. In the context of mining in Colombia, we study whether lower transfers to the locality where the taxed economic activity takes place dampen local authorities' incentives to curb tax evasion. Using machine learning predictions on satellite imagery to identify mines allows us to overcome the challenge of measuring evasion. Employing difference-in-differences strategies, we find that reducing the share of revenue transferred back to mining municipalities leads to an increase in illegal mining. This result highlights the difficulties inherent in adequately redistributing tax revenues

Abstract:

The purpose of this paper is twofold. First, to introduce a multiple-inflated version of the Conway-Maxwell-Poisson model that can be used flexibly to model count data when some values have high frequency along with over- or under-dispersion; this model includes the Poisson, Conway-Maxwell-Poisson (COMP), zero-inflated Poisson (ZIP), multiple-inflated Poisson, and zero-inflated Conway-Maxwell-Poisson (ZICOMP) models as special cases. Second, to estimate the future distribution from the multiple-inflated Conway-Maxwell-Poisson model under the Kullback-Leibler difference (loss) function. The model is fitted to the number of penalties scored in the Premier League's 2019-20 season, and its future distribution is estimated using Bayes and plug-in methods
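
A minimal sketch of the model's pmf (notation assumed): with probability p_k the count equals an inflated value k, and otherwise it comes from a COMP(λ, ν) distribution, whose normalizing constant is truncated numerically here.

```python
import numpy as np

# Sketch of a multiple-inflated Conway-Maxwell-Poisson pmf (assumed notation):
# P(Y = y) = p_y * 1[y inflated] + (1 - sum_k p_k) * COMP(y; lam, nu).
def comp_pmf(y, lam, nu, ymax=200):
    ys = np.arange(ymax + 1)
    # COMP: pmf proportional to lam^y / (y!)^nu; nu = 1 recovers Poisson
    log_fact = np.array([np.sum(np.log(np.arange(1, t + 1))) for t in ys])
    log_terms = ys * np.log(lam) - nu * log_fact
    log_terms -= log_terms.max()               # numerical stabilization
    z = np.exp(log_terms).sum()                # truncated normalizing constant
    return np.exp(log_terms[y]) / z

def multiple_inflated_comp_pmf(y, lam, nu, inflate):   # inflate: {value: prob}
    base = (1.0 - sum(inflate.values())) * comp_pmf(y, lam, nu)
    return base + inflate.get(y, 0.0)

# e.g. inflation at 0 and 1 (a ZICOMP-like model with an extra spike at 1)
print(multiple_inflated_comp_pmf(0, lam=2.0, nu=1.2, inflate={0: 0.2, 1: 0.1}))
```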

Abstract:

This paper proposes a Bayesian predictive density estimator of time to goal in a hockey game, using ancillary information such as performance in the past, points, and specialists’ opinions about teams. To be more specific, we model time to r-th goal as a gamma distribution. The proposed Bayesian predictive density estimator using the ancillary information belongs to an interesting new version of a weighted beta prime distribution and it outperforms the other estimators in the literature such as the one that does not incorporate this information as well as the plug-in estimator. The efficiency of our estimator is evaluated using frequentist risk along with measuring the prediction error from the old dataset, 2016–2017, to the season 2018–2019 of the National Hockey League
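
As a Monte Carlo illustration of the general construction (the model and prior below are simplified assumptions, not the paper's exact weighted beta prime result): time to the r-th goal is Gamma with shape r and rate λ, and ancillary information enters through a conjugate Gamma prior on λ.

```python
import numpy as np

# Monte Carlo sketch of a Bayesian predictive density for time to the r-th
# goal. Model: T | lam ~ Gamma(shape=r, rate=lam); prior: lam ~ Gamma(a, b),
# with (a, b) encoding ancillary information (past rates, expert opinion).
rng = np.random.default_rng(1)
r, a, b = 3, 8.0, 2.0                    # 3rd goal; prior mean rate a/b = 4/h
obs_times = np.array([0.9, 0.7, 1.1])    # observed times to the r-th goal (h)

# Conjugate update for a Gamma likelihood with known shape r:
a_post = a + r * len(obs_times)
b_post = b + obs_times.sum()

lam = rng.gamma(a_post, 1.0 / b_post, size=100_000)   # posterior rate draws
t_pred = rng.gamma(r, 1.0 / lam)                      # predictive draws
print(f"predictive mean time to goal {r}: {t_pred.mean():.2f} h")
```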

Resumen:

Este artículo presenta diferentes enfoques para buscar la distribución bayesiana predictiva de una variable aleatoria con un valor inflado Kε N conocido como el modelo KIP. Se explora como usar una fuente de información adicional para encontrar el estimador. Específicamente, se busca un estimador Bayesiano de la densidad futura de una variable aleatoria Y1, basada en una variable observable X1 a partir del modelo K1IP(p1, λ1), con y sin el supuesto de que existe otra variable aleatoria X2 del modelo K2IP(p2, λ2), independiente de X1, si λ1 ≥ λ2, y se compara su desempeño usando un método de simulación

Abstract:

This paper addresses different approaches to finding the Bayesian predictive distribution of a random variable from a Poisson model that can handle count data with an inflated value K ∈ N, known as the KIP model. We explore how another source of additional information can be used to find such an estimator. More specifically, we find a Bayesian estimator of the future density of a random variable Y1, based on an observable X1 from the K1IP(p1, λ1) model, with and without assuming that there exists another random variable X2, from the K2IP(p2, λ2) model, independent of X1, provided λ1 ≥ λ2, and compare their performance using a simulation method

Abstract:

In this paper, we introduce a multiple-inflated Poisson distribution that can handle count data with multiple inflated values. We explore a Bayes predictive distribution for a future observation under the Kullback-Leibler loss function and a class of shrinkage priors, along with plug-in-type pmf estimators. To illustrate how well the proposed pmf estimators perform, we provide both a simulation study and a real example analyzing a dataset of National Hockey League (NHL) shootout losses in 2017/18

Abstract:

In this article, we propose to take the challenges raised by populism to procedural democracy seriously. In particular, we critically assess the populist claim that infringing democracy's procedures is necessary to bring about socioeconomic equality in places of acute inequality like Latin America. Is it true that there is a trade‐off between the constraints of procedural democracy and substantive equality, such that populist governments that loosen or get rid of procedures are better at delivering redistribution? If so, under what conditions should defenders of procedural democracy consider such a trade‐off normatively legitimate?

Abstract:

Purpose– The aim of this paper is to develop a theory of sharecropping with cost sharing after allowing for an explicit role of a creditor. In the tenancy literature, the prevalence of sharecropping has remained an important issue. While most contributions have focussed only on output sharing, very few have studied the issue of cost sharing. Besides, the existing models have considered interactions only between a landowner and a tenant. The purpose of this paper is to extend this setup to a third player – creditor. Design/methodology/approach– The authors adopt a static contract approach with full information and no uncertainty and model possible credit-cum-tenancy arrangements among a money-lender, a landowner and a tenant under the restrictions that the money-lender cannot charge a lump-sum fee and the input choices are left with the tenant. Findings– It is shown that all Pareto optimal arrangements between a creditor, a landowner and a tenant must involve interest rate discrimination between the tenant and the landowner and a share tenancy with cost sharing, or a fixed rent tenancy with cost sharing, or a mixture of the two. None of the polar contracts – wage or rent – is possible. Lending schemes that feature credit rationing or credit delegation can implement some Pareto efficient outcomes. Originality/value– The model developed in the paper presents a framework for studying various tripartite arrangements observed in rural economies of developing countries. Also, it provides a benchmark for studying contracts under asymmetric information and uncertainty

Abstract:

Network processors are a rapidly growing and evolving area in the design and manufacture of switching nodes for communication networks. This article examines the architecture and programming model of the Intel IXP1200 processor. To evaluate one of its features, the flexible module-based programming of applications, a network switch was designed and implemented that includes functionality to detect and prevent attacks that alter the ARP tables of end hosts. The network processor architecture was found to be an excellent alternative for building sophisticated network services. It can offer the same performance as devices implemented with application-specific hardware (ASICs) and allows great flexibility in the definition of functionality thanks to its programmability

Abstract:

The need to rethink legal education, marked by interpretive formalism and student passivity, was accelerated by the COVID-19 crisis. The traditional student role was compounded by various problems in the transition to online education. Teachers faced new challenges, ranging from how to assess and foster the acquisition of knowledge at a distance to how to contribute to students' well-being and mental health. To overcome these challenges, we developed a trial-simulation methodology that combines learning tools and best teaching practices in the context of the pandemic. The proposal promotes an active role for students through their participation in different roles in a simulation of oral trials; at times they act as a party to a trial and at times as judges. Over four semesters during the pandemic, this methodology has proven effective in solving the problems raised, and through it we have learned lessons about the possibility of assessing the acquisition of knowledge at a distance, encouraging students' creativity, and building human bonds in the midst of a social and health crisis

Abstract:

We are studying the existence and implications of classical electromagnetic waves with frequencies below 3 Hz, also referred to as Under Extremely Low Frequency (UELF) waves. As their extreme wavelength (larger than planet Earth) makes them very difficult to observe, our research aims at finding radiating sources and/or interacting spatial structures in the Universe that can reveal their presence. Considering three well-known communications principles (radiation, waveguide resonance and propagation), in this working paper we look at the composition of the interstellar matter within a galaxy to discuss the feasible propagation of UELF waves that could be radiated by magnetized planets. We argue that their oscillating frequencies could be above the interstellar plasma frequency if the actual cosmic structures filling space are accounted for, instead of considering only elementary charged particles (electrons or ions). This work is a step forward for our research, as it frames and discusses novel ideas for the continued study of UELF waves
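
For scale, the classical electron plasma frequency implied by textbook constants and a typical interstellar electron density (an order-of-magnitude assumption) lies far above 3 Hz, which is why the naive particle-only picture forbids UELF propagation:

```python
import math

# Electron plasma frequency f_p = (1 / 2*pi) * sqrt(n_e e^2 / (eps0 m_e));
# classically, waves below f_p cannot propagate through the plasma.
e = 1.602176634e-19        # C
m_e = 9.1093837015e-31     # kg
eps0 = 8.8541878128e-12    # F/m

def plasma_frequency_hz(n_e_per_m3):
    return math.sqrt(n_e_per_m3 * e**2 / (eps0 * m_e)) / (2 * math.pi)

n_e = 0.03e6               # ~0.03 cm^-3 (order of magnitude) in m^-3
print(f"f_p ~ {plasma_frequency_hz(n_e):.0f} Hz")   # ~1.6 kHz, far above 3 Hz
```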

Abstract:

Classical electromagnetic waves with frequencies below 3 Hz, referred to as Under Extremely Low Frequency (UELF) waves, can be radiated by rotating cosmic structures with an associated magnetic field, such as planets, stars, pulsars and galaxies. Considering the extreme time scales that characterize UELF waves, which can be as long as billions of years, the associated wavelengths are so large that long-range interactions could be produced between extremely distant cosmic structures that may be billions of light years apart. The extreme dimensions, masses and speeds of the radiating cosmic structures, on the other hand, suggest that the distant interaction between the studied cosmic objects may not be only electromagnetic but also gravitational (in spite of the humongous distances involved), so the study of UELF waves needs to consider both effects combined. Gravitational wave theory is the leading line of work trying to explain long-range interactions due to gravity; the present working paper instead draws on gravitomagnetic field theory to build a novel communications model that integrates the classical forces produced by moving electric charges and masses into a single framework. The proposed model allows an integral approach to investigating long-range interactions between the aforementioned distant cosmic structures, and invites reflection and discussion about the nature and implications of UELF radiation

Abstract:

In this paper we analyze the presence and effects of classical electromagnetic waves with frequencies under 3 Hz, further referred to as Under Extremely Low Frequency Band (UELFB) waves. The nature of such waves is extreme, as their wavelengths are larger than Earth's diameter. Because of the lack of useful engineering applications, organizations like the IEEE and the ITU have not given an official classification for this electromagnetic band. On the other hand, classical electromagnetic theory suggests that UELFB waves could be radiated by many astronomical objects, and their interaction with matter (for example, dust, gas, and particles) could then create stationary structures in the Universe. This paper proposes a classification of UELFB waves into frequency sub-bands, comparing their wavelengths to the size of astronomical objects commonly seen in the Universe. The proposed classification of the fraction of the spectrum below the Extremely Low Frequency band contributes to the understanding of EM radiation characterized by frequencies under 3 Hz

Abstract:

The electromagnetic (EM) spectrum classifies all the frequency components of the EM signals that propagate in space, identifying frequency bands with similar characteristics and behavior. The most studied bands of the EM spectrum comprise frequencies above one Hertz, which are relevant for human communications and for other applications within our living environment, and which have been widely exploited. This paper, however, reflects upon the existence, behavior and implications of EM waves with frequencies below one Hertz, in the Under Extremely Low Frequency (UELF) band, which have scarcely been studied. Even if such extremely long waves are not practical for human communications, they are highly relevant because they can carry signals and information pertaining to remote confines of the Universe, as they can be radiated by distant cosmic objects that are much larger than Earth and oscillate with frequencies lower than one cycle per second. By means of basic electromagnetic and radiation theories, this paper presents the underlying concepts for extremely long cosmic UELF EM waves, as the starting point for building a full theory describing how such cosmic waves are created, how they propagate through space interacting with stellar bodies, and how they produce observable stellar structures that contain information about happenings at distant points in the Universe

Abstract:

The benefits of developing telecommunications infrastructure are beyond question; however, challenges remain in small rural communities, which in Mexico encompass ~15% of the population, living in townships of ~800 residents. Previous research indicates that technology concepts like the Internet of Things (IoT) can create economic value in local economies. Thus, this paper presents a model to estimate wireless access demand in such communities. Estimated projections of IoT product adoption are built for a representative scenario, considering publicly available statistical information (accounting for variables like the number of residents, households, cars, commercial establishments, or local economic indicators). A basket of highly probable products, like payment terminals, is constructed and characterized. The adoption process for each considered product is then estimated with an S-curve, to integrate the corresponding broadband/Internet access demand. The telecommunications industry is experiencing exciting times, yet careful analysis is crucial to support well-informed strategic planning processes
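
A minimal sketch of the S-curve step, with invented parameters: a logistic adoption curve for one product in the basket, converted into a traffic estimate by an assumed usage factor.

```python
import numpy as np

# Logistic S-curve sketch for product adoption (all parameters invented):
# N(t) = M / (1 + exp(-k (t - t0))), with M the saturation level of units,
# t0 the inflection year and k the adoption speed.
def s_curve(t, M, k, t0):
    return M / (1.0 + np.exp(-k * (t - t0)))

years = np.arange(0, 11)
terminals = s_curve(years, M=40, k=0.8, t0=5)   # e.g. payment terminals
traffic_gb_month = terminals * 0.5              # assumed 0.5 GB/unit/month
for y, n, g in zip(years, terminals, traffic_gb_month):
    print(f"year {y:2d}: {n:5.1f} units, {g:5.1f} GB/month")
```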

Abstract:

Evolving information and communications technologies (ICT), along with Mexican regulatory changes, raise expectations of increasing the population reach of ICT. Yet bringing infrastructure to more than 4,500 rural communities with fewer than 1,500 residents remains an enormous challenge for Telecommunications Service Providers (TSPs). Therefore, two questions are addressed: is there sufficient economic value in rural communities? Is there a sustainable way to capture such value? To address the first question, the economic activity of a typical rural population is analyzed, and the impact of ICT adoption is estimated. Broadband availability leads to optimistic figures, in contrast to traditional models assuming only personal communications services. To answer the second question, a midsized mobile virtual network enabler (MVNE) is proposed to provide turnkey franchise-type business models and assets, for local entrepreneurs to supply information services within their communities, creating sufficient demand for TSPs to deploy infrastructure. Findings show attractiveness for all stakeholders

Abstract:

This paper describes work in progress towards understanding the feasibility of using a copper pipeline as the backbone to interconnect the Wireless Personal Area Networks (WPANs) that exist in the different rooms of a house, in order to establish a User Environment Area Network (UEAN). The UEAN can interconnect all the electronic devices of a user within their personal environment, without interfering with (or being interfered with by) neighboring UEANs, allowing the exchange of information over multiple high bit-rate channels. With the establishment of a UEAN, all the electronic devices and gadgets at home can exchange voice, data, and video in real time. The copper pipeline represents an excellent way to interconnect the multiple WPANs established within different rooms (Bluetooth and WUSB technologies), because of its high bandwidth and strong shielding against external electromagnetic noise

Abstract:

This paper proposes the design of a microwave communication system for an under-10-kg nanosatellite (NS) that is being designed and built at Instituto de Ingeniería - UNAM, Mexico. The proposed communication system was not contemplated in the original nanosatellite proposal, nor in previous Mexican satellite projects, but it appears to be advantageous because of its narrow radiation pattern and the high frequency at which it operates. As explained in the paper, we believe that the characteristics of such a communications system will benefit the satellite with significant energy savings, high capacity for data transmission, and feedback information for position control. We present preliminary thoughts on the proposed communications system, describing the technology to be used for its implementation and the benefits that such a system brings to the nanosatellite. We also outline the next steps to be followed, so this paper should be considered the starting point towards the design and implementation of the proposed nanosatellite microwave communication system
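
As a back-of-the-envelope check on the energy argument, a Friis-style link budget can be sketched; every value below is an invented placeholder, not the nanosatellite's actual parameters.

```python
import math

# Rough microwave link-budget sketch (all values invented placeholders):
# received power = transmit power + antenna gains - free-space path loss.
f_hz = 10e9               # X-band carrier (assumed)
d_m = 600e3               # 600 km slant range (assumed)
p_tx_dbm = 30.0           # 1 W transmitter (assumed)
g_tx_db, g_rx_db = 15.0, 35.0   # narrow-beam antenna gains (assumed)

c = 299_792_458.0
fspl_db = (20 * math.log10(d_m) + 20 * math.log10(f_hz)
           + 20 * math.log10(4 * math.pi / c))
p_rx_dbm = p_tx_dbm + g_tx_db + g_rx_db - fspl_db
print(f"FSPL: {fspl_db:.1f} dB, received power: {p_rx_dbm:.1f} dBm")
```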

Abstract:

This work describes the functionality of the "User Environment Area Network" (UEAN), considering the Functional Convergence (FC) of personal electronic devices, and explores some issues related to its implementation. A UEAN is a network that connects the electronic devices that belong to a person, regardless of their physical location. FC allows the implementation of virtual personal electronic devices, formed by grouping sources, sinks, storage, processors or gateways that belong to the UEAN. The converging trends observed in telecommunications technology and markets make us believe that it is important to model a network of personal devices as a UEAN, rather than in the traditional way

Abstract:

Coupled theoretical and computational work is presented aimed at understanding and modelling stimulated Raman backscattering (SRBS) relevant to laser-plasma interactions in large-scale, homogeneous plasmas. With the aid of a new code for simulating and studying the nonlinear coupling in space-time of a large number of modes, and a fluid Vlasov-Maxwell code for studying the evolution of large amplitude electron plasma waves, we report results and their interpretations to elucidate the following five observed, nonlinear phenomena associated with SRBS: coupling of SRBS to Langmuir decay instabilities (LDIs); effect of ion-acoustic damping on SRBS; cascading of LDI; stimulated Raman scattering cascades; and stimulated electron acoustic wave scattering

Abstract:

In this article we share the results of an investigation of a classroom experience in which eigenvalues, eigenvectors, and eigenspaces were taught using a modeling problem and activities based on APOS (Action, Process, Object, Schema) Theory. We show how a sample of three students were able to construct an object conception of these difficult concepts in a one-semester course—something that the existing literature had shown to be almost impossible to achieve. Using one team as a case study, we describe the work done by a group of 30 students to show how eigenvectors and eigenvalues emerged in a group discussion. Furthermore, we present evidence of how at least three students were able to construct an object conception, demonstrating a deep understanding of these concepts. Finally, we validate the designed genetic decomposition. In summary, the results show the approach to be promising for the learning of eigenvalues and eigenvectors

Resumen:

En el aprendizaje del Álgebra Lineal se observan problemas debido a que los conceptos resultan a menudo complejos por su alto nivel de abstracción. El tema correspondiente a los valores, vectores y espacios propios es muy abstracto, pero importante dadas sus múltiples aplicaciones. En este artículo se reportan los resultados de una investigación acerca del aprendizaje de los alumnos en un curso en el que estos conceptos se enseñaron usando un diseño didáctico basado en la teoría apoe (Acción, Proceso, Objeto, Esquema). Se presenta la descomposición genética diseñada y el análisis de los resultados del trabajo realizado por los alumnos en relación a los conceptos de interés. Los resultados validan la descomposición genética propuesta y muestran evidencias del aprendizaje de los alumnos. En particular ponen de manifiesto la posibilidad de construir una concepción objeto de los conceptos en estudio y la construcción de la mayoría de los alumnos de una concepción proceso

Abstract:

When teaching linear algebra, several problems arise, given the complexity and high level of abstraction of the concepts involved. Eigenvalues, eigenvectors, and eigenspaces are very abstract concepts, but they are important to learn because of their multiple applications. This paper reports the results of a research project that studies students' learning of these concepts in a course that followed a specific didactical design based on APOS Theory. The results obtained made it possible to validate the designed genetic decomposition. The analysis of students' work related to the above-mentioned concepts shows the possibility of building an object conception of the studied concepts and the construction of a process conception by most students

Abstract:

Emotions influence cognitive processes that underlie human behavior. Whereas experiencing negative emotions may lead to developing psychological disorders, experiencing positive emotions may improve creative thinking and promote cooperative behavior. The importance of human emotions has led to the development of automatic emotion recognition systems based on the analysis of speech waveforms, facial expressions, and physiological signals, as well as text data mining. However, emotions are associated with a context in which they are actually experienced; hence, this work focuses on emotion recognition from contextual information. In this paper, we present a study aimed to assess the feasibility of automatically recognizing emotions from individuals' contexts. In this study, 32 participants provided information using a mobile application about their emotions and the context (e.g., companions, activities, and locations) in which these emotions were experienced. We used machine learning techniques to build individual models, general models, and gender-specific models to automatically recognize the emotions of participants. The empirical results show that individuals' emotions are highly related to their context and that automatic recognition of emotions in real-world situations is feasible by using contextual data
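
A minimal sketch of the kind of pipeline the abstract describes, assuming categorical context features (companion, activity, location) and self-reported emotion labels; the feature names, toy records and the random-forest choice are illustrative assumptions, not the study's exact method:

```python
# Minimal sketch: predicting self-reported emotion from categorical
# context features. Feature names, toy records, and the classifier
# choice are illustrative, not the exact pipeline used in the study.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

records = [
    {"companion": "friends", "activity": "eating",    "location": "restaurant"},
    {"companion": "alone",   "activity": "working",   "location": "office"},
    {"companion": "family",  "activity": "resting",   "location": "home"},
    {"companion": "alone",   "activity": "commuting", "location": "street"},
]
emotions = ["happy", "stressed", "relaxed", "bored"]  # self-reported labels

model = make_pipeline(DictVectorizer(sparse=False),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(records, emotions)
print(model.predict([{"companion": "friends", "activity": "eating",
                      "location": "home"}]))
```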

Abstract:

In the field of affective computing, a major goal has been the development of models to recognize the affective state of individuals. Data related to people, such as physiological signals, facial expressions and speech, enable the analysis and recognition of affective states. Currently, sensors integrated in smart devices (e.g., smartphones and smartwatches) allow the collection of this type of data. In this work, we present a platform composed of RESTful web services to collect data related to users' emotions and their context through smart devices. Because some sensors may be energy-constrained, the platform is provided with an energy-aware data collection mechanism. Four series of experiments were conducted to evaluate both the energy efficiency and the scalability of the platform. The experimental results indicate that the platform is scalable and helps to save energy of data-collection sensors compared with a system unaware of energy consumption
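
A minimal sketch of what one RESTful collection endpoint of such a platform could look like; the route, the payload fields and the battery-based rule are illustrative assumptions, not the platform's actual API:

```python
# Minimal sketch of a RESTful collection endpoint of the kind the
# platform describes; route, payload fields, and in-memory storage
# are illustrative assumptions, not the platform's actual API.
from flask import Flask, request, jsonify

app = Flask(__name__)
samples = []  # in-memory stand-in for persistent storage

@app.route("/emotions", methods=["POST"])
def collect_sample():
    payload = request.get_json()
    # e.g. {"user": "u1", "emotion": "calm", "heart_rate": 72,
    #       "context": {"location": "home"}, "battery": 0.41}
    samples.append(payload)
    # An energy-aware policy could lower the client's sampling rate
    # when the reported battery level is low (illustrative rule).
    rate = "low" if payload.get("battery", 1.0) < 0.2 else "normal"
    return jsonify({"stored": len(samples), "sampling_rate": rate}), 201

if __name__ == "__main__":
    app.run()
```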

Resumen:

En este ensayo se pretende aclarar el cambiante mapa político-militar de México durante el verano y el otoño de 1914. Se busca mostrar la fuerza, las posiciones y los recursos de los grupos revolucionarios durante el proceso de gestación de la ruptura que provocaría la guerra civil en noviembre de 1914, guerra que enfrentó a convencionistas contra constitucionalistas. Además, se intenta cruzar el mapa económico y demográfico con el mapa militar, así como recurrir a fuentes originales para hacer una nueva valoración de esa coyuntura, contraria a las tradicionalmente aceptadas

Abstract:

The aim of this paper is to clarify the changing political and military map of Mexico during the summer and autumn of 1914. I intend to show the strength, positions and resources of the revolutionary groups during the emergence of the rupture that would cause civil war between convencionistas and constitutionalistas in November 1914. Furthermore, I attempt to cross over the economic and demographic map with the military map using original sources to make a new assessment of the situation, contrary to the one traditionally accepted

Resumen:

A través de la batalla librada en la Cuesta de Sayula, estado de Jalisco, en febrero de 1915, el autor se propone una forma de narrar las acciones de armas de la Revolución mexicana y los ejércitos que en ellas participan, poniendo énfasis en el terreno, el armamento, el movimiento de los ejércitos, los planes de guerra y el desarrollo y el resultado de las operaciones

Abstract:

Through the battle of the Cuesta de Sayula, Jalisco, in February 1915, the author proposes a way of narrating the armed actions of the Mexican Revolution and the armies involved, with emphasis on the terrain, the weapons, the movement of the armies, the war plans, and the development and result of the operations

Resumen:

Este artículo pretende mostrar que cuando la historiografía sobre la revolución ha puesto en tela de juicio, en los últimos 35 años, casi todos los aspectos de las versiones anteriores u oficiales de la historia, la versión canónica de la historia militar sobre la guerra civil de 1915, construida por Álvaro Obregón y Juan Barragán, ha salido bien librada y sigue siendo repetida por los historiadores, que explican los resultados de dicha guerra sin poner en tela de juicio ni pasar por el ejercicio de la crítica aquella versión

Abstract:

This article seeks to show that whereas during the last 35 years historiography on the Mexican Revolution has questioned almost all aspects of older or official versions, the canonical version of military history of the 1915 civil war, put together by Álvaro Obregón and Juan Barragán, has gone by untouched and is still repeated by historians, who explain the results of such a war without doubting or analyzing this version

Resumen:

La noción de "izquierda" designa tanto la sustancia de un programa político como el lugar relativo que éste ocupa dentro de un espectro político particular y en un momento histórico determinado; se puede decir que en la última década se registró un giro hacia la izquierda en América Latina. En este artículo se discute la importancia de que dicho vuelco se haya operado por la vía democrática e institucional. Asimismo, se examinan los factores internacionales que contribuyeron a que las izquierdas partidistas alcanzaran el poder en una mayoría de países latinoamericanos

Abstract:

The notion of "the left" refers not only to the essence of a political agenda, but also to its relative position in a specific political spectrum at a particular historical moment. It is generally agreed that in the last decade there has been a shift to the left in Latin America. This article discusses the significance of the fact that this shift took place through democratic and institutional means. Furthermore, we analyze the international factors which contributed to the rise to power of leftist parties in a majority of Latin American countries

Abstract:

The purpose of this article is to assess the effectiveness of the collaboration between stakeholders and scientists in the construction of a bio-economic model to simulate management strategies for the fisheries in Iberian Atlantic waters. For 3 years, different stakeholders were involved in a model development study, participating in meetings, surveys and workshops. Participatory modelling involved the definition of objectives and priorities of stakeholders, a qualitative evaluation and validation of the model for use by decision-makers, and an iterative process with the fishing sector to interpret results and introduce new scenarios for numerical simulation. The results showed that the objectives of the participating stakeholders differed. Incorporating objectives into the design of the model and prioritizing them was a challenging task. We showed that the parameterization of the model and the analysis of the scenario results could be improved by the fishers' input: e.g. ray and skate stocks were explicitly included in the model, and the behavior of fleet dynamics proved much more complex than assumed in any traditional modelling approach. Overall, this study demonstrated that stakeholder engagement through dialogue and many interactions was beneficial for both scientists and the fishing industry. The researchers obtained a final refined model, and the fishing industry benefited from participating in a process that enables them to influence decisions that may affect them directly (to shape), whereas non-participatory processes lead to management strategies being imposed on stakeholders (to be shaped)

Abstract:

We consider variations and generalizations of the initial Dirichlet problem for linear second-order divergence-form equations of parabolic type, with vanishing initial values and non-continuous lateral data, in the setting of Lipschitz cylinders. More precisely, lateral data in suitable adaptations of the Lebesgue classes L^p and of a family of Sobolev-type classes are considered. We also establish some basic connections between estimates related to the solvability of each of these problems. This generalizes some of the well-known work for Laplace's equation, the heat equation and some linear elliptic-type equations of second order
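
For concreteness, the equations in question have the standard divergence form (our notation, stated as a reminder rather than quoted from the paper):

$$ \partial_t u - \operatorname{div}\big(A(x,t)\nabla u\big) = 0 \ \text{ in } \Omega \times (0,T), \qquad u(\cdot,0) = 0, \qquad u = f \ \text{ on } \partial\Omega \times (0,T), $$

with $A$ bounded, measurable and uniformly elliptic, $\Omega$ a Lipschitz domain (so that $\Omega \times (0,T)$ is a Lipschitz cylinder), and lateral data $f$ taken in $L^p$ or in a Sobolev-type class.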

Abstract:

The war against drugs started in Mexico in 2007. During the following years, there was an unprecedented increase in violence and homicides across the country. It is important to identify vulnerable population groups and to quantify the effects of demographic covariates and of spatial and temporal dependence. We present a Bayesian Poisson regression model of the number of homicides with four factors: sex, age, year, and state of occurrence, and two-way interactions of these variables. For the main effects of sex and age, we define independent prior distributions, a dynamic linear prior for temporal effects, and conditionally autoregressive processes for spatial effects and spatial interactions. Identification of vulnerable groups by regions provides a tool to design prevention policies targeted at local levels
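
A generic specification of such a model reads as follows; the notation is ours and is an illustrative sketch, not necessarily the authors' exact formulation:

$$ y_{s,a,x,t} \sim \mathrm{Poisson}(\mu_{s,a,x,t}), \qquad \log \mu_{s,a,x,t} = \alpha + \beta_s + \gamma_a + \delta_t + \theta_x + (\text{two-way interactions}), $$

where $s$ indexes sex, $a$ age group, $t$ year and $x$ state; $\beta_s$ and $\gamma_a$ carry independent priors, $\delta_t$ a dynamic linear (random-walk) prior, and $\theta_x$ and the spatial interaction terms conditionally autoregressive (CAR) priors.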

Abstract:

The presented paper describes the integration of an Artificial Vision System and Artificial Intelligence into a machine tool to find the correct instructions to move chips from an initial state to a goal state. To do so, several technologies, such as OCR, a graph-search control method and G-Code generation, are integrated into one Cyber-Physical Production System (CPPS). The integrated CPPS detects the initial state via a vision system, and the user defines the required goal state. Applying the graph-search control method, the correct sequence of movements to reach the goal state is obtained. The CPPS then converts this sequence to G-Code, which the machine tool executes. The paper describes the different components and summarizes the results
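
A minimal sketch of the described pipeline, assuming a toy state encoding (a tuple of chip positions) and an illustrative G-code template; neither is the system's actual format:

```python
# Minimal sketch of the pipeline: graph search over chip arrangements,
# then translation of the found move sequence into G-code. The state
# encoding and the G-code template are illustrative assumptions.
from collections import deque

def successors(state):
    """Yield (move, new_state): move one chip to any free slot."""
    occupied = set(state)
    for i, pos in enumerate(state):
        for target in range(len(state) + 2):   # a few spare slots
            if target not in occupied:
                new = list(state)
                new[i] = target
                yield (i, pos, target), tuple(new)

def bfs_plan(start, goal):
    """Breadth-first search from the detected start to the goal state."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [move]))
    return None

def to_gcode(plan, slot_xy):
    """Map abstract moves to G-code using a slot -> (x, y) table."""
    lines = []
    for chip, src, dst in plan:
        lines.append(f"G00 X{slot_xy[src][0]} Y{slot_xy[src][1]} ; pick chip {chip}")
        lines.append(f"G00 X{slot_xy[dst][0]} Y{slot_xy[dst][1]} ; place chip {chip}")
    return "\n".join(lines)

slot_xy = {i: (10 * i, 0) for i in range(5)}       # slot -> table coordinates
plan = bfs_plan(start=(0, 1, 2), goal=(2, 1, 0))   # vision gives start; user gives goal
print(to_gcode(plan, slot_xy))
```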

Abstract:

This paper deals with the spatio-temporal dynamics of a pollinator-plant-herbivore mathematical model. The full model consists of three nonlinear reaction-diffusion-advection equations defined on a rectangular region. In view of analyzing the full model, we first consider the temporal dynamics of three homogeneous cases. The first is a model for a mutualistic interaction (pollinator-plant); later on, a sort of predator-prey (plant-herbivore) interaction model is studied. In both cases, the interaction term is described by a Holling response of type II. Finally, by considering that the plant population is the unique feeding source for the herbivores, a mathematical model for the three interacting populations is considered. By incorporating a constant diffusion term into the equations for the pollinators and herbivores, we numerically study the spatio-temporal dynamics of the first two mentioned models. For the full model, constant diffusion and advection terms are included in the equation for the pollinators. For the resulting model, we sketch the proof of the existence, positiveness, and boundedness of the solution for an initial and boundary value problem. In order to see the separate effects of the diffusion and advection terms on the final population distributions, a set of numerical simulations is included. We used homogeneous Dirichlet and Neumann boundary conditions
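
The Holling type II response mentioned above has the standard saturating form, and the spatial mechanisms enter as in the following schematic equation for the pollinator density $p$ (standard notation; the paper's exact reaction terms are more detailed):

$$ g(u) = \frac{c\,u}{a + u}, \qquad \frac{\partial p}{\partial t} = D\,\Delta p - \mathbf{v}\cdot\nabla p + f(p,u), $$

where $D$ is the constant diffusion coefficient, $\mathbf{v}$ the advection velocity, and $f$ collects the interaction terms.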

Abstract:

In this paper we study, both analytically and numerically, the spatio-temporal dynamics of a three interacting species mathematical model. The populations take the form of pollinators, a plant and herbivores; the model consists of three nonlinear reaction-diffusion-advection equations. In view of considering the full model, as a previous step we first analyze a mutualistic interaction (pollinator-plant); later on, a predator-prey (plant-herbivore) interaction model is studied, and finally we consider the full model. In all cases, the purely temporal dynamics is given; meanwhile, for the spatio-temporal dynamics, we use numerical simulations corresponding to those parameter values for which we obtain interesting temporal dynamics

Abstract:

Since the late 1990s, Mexico has undertaken a stabilization program that reduced inflation to rates not seen in more than 30 years. The benefits of this progress have been substantial, including the improvement of living standards and the creation of an environment suitable for economic growth. Nevertheless, monetary policy still faces considerable challenges, among which the most important is reaching price stability, defined by the Bank of Mexico as a permanent annual inflation target of 3 percent. The purpose of this article is to evaluate the achievements of Mexico's current stabilization program and identify its shortcomings in order to recommend measures for improvement. Emphasis is on the crucial role of monetary institutions in the progress attained, in particular, the inflation targeting approach under floating exchange rates and other complementary economic policies. Notwithstanding these achievements, the stagnation of disinflation since 2002 and the subsequent inability of the central bank to meet its inflation targets are identified as a major setback, which has damaged the credibility of monetary policy. Behind the recent lack of development, serious limitations of the monetary strategy emerge, including a protracted tolerance of deviations from targets, particularly a propensity to be more reactive than preemptive in the fight against inflation. I warn of the risk of complacency and propose several basic steps to improve communication and operation of monetary policy to control inflation. Finally, Mexico's failures and successes provide some lessons for China, particularly the need to strengthen its banking system as a prerequisite of any monetary modernization

Abstract:

This article reflects on how the 2013-2014 constitutional reform on political-electoral matters, and the 2014 legal reform derived from it, changed the nature of the electoral bodies, both at the federal and the state level, and on the new model of electoral organization that has existed since then. The first part reviews the main elements of the discussion prior to the constitutional reform. The second part details and reflects on the alternatives that open up after the enactment of the new legal order

Abstract:

Information workers and software developers are exposed to work fragmentation, an interleaving of activities and interruptions during their normal work day. Small-scale observational studies have shown that this can be detrimental to their work. In this paper, we perform a large-scale study of this phenomenon for the particular case of software developers performing software evolution tasks. Our study is based on several thousand interaction traces collected by Mylyn for dozens of developers. We observe that work fragmentation is correlated with lower observed productivity at both the macro level (for entire sessions) and the micro level (around markers of work fragmentation); further, longer activity switches seem to strengthen the effect. These observations are the basis for subsequent studies investigating the phenomenon of work fragmentation
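
As a minimal sketch of one way such fragmentation can be quantified, assuming a toy event format of (timestamp in minutes, activity), which is not Mylyn's actual schema:

```python
# Minimal sketch of quantifying work fragmentation in an interaction
# trace: count activity switches and measure how long each
# uninterrupted run of work on a single task lasts. The event format
# (timestamp, activity) is an illustrative assumption.
from itertools import groupby

trace = [(0, "taskA"), (5, "taskA"), (9, "email"), (12, "taskA"),
         (20, "taskA"), (26, "meeting"), (40, "taskA")]

runs = [(activity, [t for t, _ in events])
        for activity, events in groupby(trace, key=lambda e: e[1])]

switches = len(runs) - 1
focus = [times[-1] - times[0] for activity, times in runs
         if activity.startswith("task")]
print(f"activity switches: {switches}")
print(f"mean uninterrupted run on tasks: {sum(focus) / len(focus):.1f} min")
```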

Abstract:

Determining the style or artistic movement to which a painter belongs is a challenging problem because many factors influence a painting, and these factors are of a qualitative nature, not a quantitative one. This work presents a methodology focused on the quantification of paintings via machine learning methods; we also perform a comparison between computer vision and machine learning techniques in order to understand the particularities and the functionalities of the different methods
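
A minimal sketch of what "quantifying" a painting can mean in practice, using per-channel color statistics as features and an SVM as classifier; both are illustrative choices, not the paper's exact methodology:

```python
# Minimal sketch of quantifying a painting: reduce each image to a
# small numeric feature vector (per-channel color statistics) and
# train a classifier on labeled examples. Features, labels, and the
# classifier are illustrative choices, not the paper's methodology.
import numpy as np
from sklearn.svm import SVC

def color_features(image):
    """image: H x W x 3 array -> per-channel mean and std (6 numbers)."""
    pixels = image.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

rng = np.random.default_rng(0)
# Toy stand-ins for digitized paintings of two styles.
bright = [rng.integers(150, 256, (64, 64, 3)) for _ in range(20)]
dark   = [rng.integers(0, 100,  (64, 64, 3)) for _ in range(20)]

X = np.array([color_features(im) for im in bright + dark])
y = ["impressionist"] * 20 + ["baroque"] * 20   # illustrative labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([color_features(rng.integers(120, 256, (64, 64, 3)))]))
```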

Abstract:

We consider three bodies moving under their mutual gravitational attraction in spaces with constant Gaussian curvature k. In this system, two bodies with equal masses form a relative equilibrium solution; these bodies are known as primaries, and the remaining body, of negligible mass, does not affect the motion of the others. We show that the singularity due to binary collision between the negligible mass and the primaries can be regularized locally and globally using hyperbolic functions. We show some numerical examples of orbits for the massless particle

Abstract:

Proteins are macromolecules essential for living organisms. However, to perform their function, proteins need to achieve their Native Structure (NS). The NS is reached quickly in nature; in silico, by contrast, it is obtained by solving the Protein Folding Problem (PFP), which currently requires long execution times. PFP is computationally an NP-hard problem and is considered one of the biggest current challenges. There are several methods following different strategies for solving PFP. The most successful combine computational methods and biological information: I-TASSER, Rosetta (Robetta server), AlphaFold2 (CASP14 champion), QUARK, PEP-FOLD3, TopModel, and GRSA2-SSP. The first three obtained the highest quality at CASP events, and all of them apply Simulated Annealing or Monte Carlo methods, neural networks, and fragment assembly. In the present work, we propose the GRSA2-FCNN methodology, which assembles fragments applied to peptides and is based on GRSA2 and Convolutional Neural Networks (CNN). We compare GRSA2-FCNN with the best state-of-the-art algorithms for PFP, such as I-TASSER, Rosetta, AlphaFold2, QUARK, PEP-FOLD3, TopModel, and GRSA2-SSP. Our methodology is applied to a dataset of 60 peptides and achieves the best performance of all methods tested, based on TM-score, RMSD, and GDT-TS, the metrics commonly used in the field
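
Of the three reported metrics, RMSD is the simplest to state (standard definition):

$$ \mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert \mathbf{x}_i - \mathbf{y}_i \rVert^2 }, $$

computed over $N$ aligned atom pairs of the predicted structure $\mathbf{x}$ and the native structure $\mathbf{y}$; TM-score and GDT-TS also compare predicted and native coordinates, but are normalized so as to be less sensitive to protein length and to local outliers.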

Abstract:

We analyse the rotation curves and gravitational stability of a sample of six bulgeless galaxies for which detailed images reveal no evidence for strong bars. We explore two scenarios: Newtonian dark matter models and MOdified Newtonian Dynamics (MOND). By adjusting the stellar mass-to-light ratio, dark matter models can match simultaneously both the rotation curve and the bar-stability requirements in these galaxies. To be consistent with stability constraints, in two of these galaxies the stellar mass-to-light ratio is a factor of ~1.5-2 lower than the values suggested by galaxy colours. In contrast, MOND fits to the rotation curves are poor in three galaxies, perhaps because the gas tracer contains noncircular motions. The bar stability analysis provides a new observational test of MOND. We find that most of the galaxies under study require abnormally high levels of random stellar motions to be bar stable in MOND. In particular, for the only galaxy in the sample for which the line-of-sight stellar velocity dispersion has been measured (NGC 6503), the observed velocity dispersion is not consistent with MOND predictions because it is far below the value required to guarantee bar stability. Precise measurements of mass-weighted velocity dispersions in (unbarred and bulgeless) spiral galaxies are crucial to test the consistency of MOND
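
For reference, MOND replaces the Newtonian relation between the true acceleration $a$ and the Newtonian acceleration $a_N$ by the standard prescription

$$ \mu\!\left(\frac{|a|}{a_0}\right) a = a_N, \qquad \mu(x)\to 1 \ (x \gg 1), \quad \mu(x)\to x \ (x \ll 1), $$

with $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m\,s^{-2}}$, so that rotation curves flatten at low accelerations without invoking dark matter; both the rotation-curve fits and the stability analysis above probe this prescription.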

Abstract:

The purpose of this paper is to present an analysis of the difficulties faced by students when working with different representations of vectors, planes and their intersections in R^3. Duval's theoretical framework on semiotic representations is used to design a set of evaluating activities, and later to analyze student work. The analysis covers three groups of undergraduate students taking introductory courses in linear algebra. Different types of treatments and conversions are required to solve the activities. One important result shows that, once students choose a register to solve a task, they seldom make transformations between different registers, even though this could facilitate solving the task at hand. Identifying these difficulties for particular transformations may help teachers design specific activities to promote students' cognitive flexibility between representation registers

Abstract:

In this article, we characterize a new life distribution based on the sinh-normal model. Specifically, we find the density, the distribution function, and the moments of the new model. In addition, we carry out a brief graphical analysis of its density. Furthermore, we derive some properties and transformations related to the new distribution. Moreover, we conduct a study of its hazard rate. Finally, we present an example that illustrates the obtained results and a computational implementation of these results
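
As background, the classical sinh-normal construction on which such models are built (Rieck and Nedelman's formulation; the article's new distribution modifies this baseline) is

$$ Z = \frac{2}{\alpha}\,\sinh\!\left(\frac{Y - \mu}{\sigma}\right) \sim \mathrm{N}(0,1), \qquad f_Y(y) = \phi\!\left(\frac{2}{\alpha}\sinh\!\left(\frac{y-\mu}{\sigma}\right)\right) \frac{2}{\alpha\sigma}\,\cosh\!\left(\frac{y-\mu}{\sigma}\right), $$

where $\phi$ is the standard normal density, $\alpha$ a shape parameter, $\mu$ a location parameter and $\sigma$ a scale parameter.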

Abstract:

We will examine the revolts, begun in October of 2019 and currently developing in Chile, in three conjoined parts. First, we will not try to theoretically 'tame' the revolutionary creature, but rather to plug immanently into the energy of the 'potentia' of the revolutionary event. To this extent, we will highlight the shortcomings of a theoretical enterprise that intends to explain it in traditional terms or that strives for a variant of simple 'reformism'. Second, and consequently, we will describe how the concept of the constitution (and its practice) is the perfect product of coloniality. Hence, the 'constitutional trap' is not something unique to the Chilean experiment; rather, the constitutional idea itself is the main pipeline of coloniality and the most sophisticated product in the containment of democracy. Third, and to encompass and give a particular direction to the previous topics, we will deploy the 'theory of the encryption of power' and the concept of the 'hidden people' (or the people as a synecdoche) to explicate the Chilean phenomenon under a new light, where no past is irrevocable and no future is necessary

Abstract:

The concept of the republic is a complex meta-principle that facilitates and conceals global relations of domination. Specifically, it enables the invisibility of racism as the core of political power. From its very origins the concept of the republic has served to seize constituent power, or politeia. In modernity, as it merges with private property, it serves as the launchpad of a vast colonization project that then evolves into a new form of power in coloniality. The article applies the theory of the "encryption of power" to decrypt the historical concept of the republic. In doing so, we demonstrate the key antidemocratic role it has played at the heart of global coloniality. We bear witness to how, from within the concept of the republic, a refined mechanism of domination arises and expands, constituting the most magnificent and lethal form of power of our times. We exemplify our theory by showing that the concept of the "republic" is the common thread between the law that abolished slavery in Brazil (known as Lei Aurea) and the 1988 constitution. Within this scope, we propose a vital space of reconsideration of political ontology through radical democracy as the only means to build the world from immanent difference

Abstract:

This paper studies the role of foreign employees as a channel for technology transfer in multinational companies (MNCs). We build a simple model of MNC choice between foreign and domestic management as a function of industry characteristics and of institutional quality. We find that foreign employees are a channel for technology transfer within high-tech MNCs. Further, the reliance of MNCs on foreign employees is U-shaped in terms of institutional quality. Our model implies that we should observe the same pattern between technology transfer and institutional quality. We use a unique dataset that links information on technology transfer and the presence of foreign employees in subsidiaries in Mexico with data on judicial efficiency across Mexican states. The evidence is consistent with the implications of the model and difficult to reconcile with alternative hypotheses

Abstract:

In their struggle to improve student learning, many developing countries are introducing school-based management (SBM) reforms that provide cash-grants to school councils. School councils are expected to work collaboratively and decide on the best use of the funds. In this paper, we study the effects of one such program in Mexico on student outcomes. We complement the differences-in-differences analysis by qualitatively exploring program implementation. Results suggest the program had substantial positive effects on third grade Spanish test scores, with most benefits accruing to schools receiving SBM cash grants for the first time. These results are robust to alternative model specifications. The implementation analysis suggests school councils did monitor grant use, but parental participation did not significantly improve in other areas. Our findings suggest that the observed positive program effects are likely to be the result of providing schools with financial resources to meet pressing equipment, material, and infrastructure needs
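
The differences-in-differences comparison has the usual two-way fixed-effects form (our notation, a generic specification rather than the paper's exact one):

$$ y_{ist} = \alpha + \beta\,\mathrm{SBM}_{st} + \gamma_s + \delta_t + \varepsilon_{ist}, $$

where $y_{ist}$ is the test score of student $i$ in school $s$ in year $t$, $\mathrm{SBM}_{st}$ indicates school-years receiving the grant, $\gamma_s$ and $\delta_t$ are school and year effects, and $\beta$ is the program effect of interest.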

Abstract:

In IEEE 802.11 networks a data packet is delivered simultaneously to multiple receivers through the multicast paradigm. The standard defines a simple mechanism that does not implement any error-recovery scheme; thus, the reliability of the service provided to multicast users is penalized. This issue becomes more important as the number of collisions increases due to a large number of active stations and/or a highly loaded network. In this paper we carry out a detailed optimization study of the multicast collision prevention (MCP) mechanism, a highly efficient multicast collision avoidance mechanism for IEEE 802.11 previously introduced by the authors. Besides a more in-depth explanation of MCP, this study includes a comparative performance evaluation of the optimized MCP against the IEEE 802.11 standard. Results show that, through this optimization, the number of collisions in MCP can be made negligible for any network load

Abstract:

In IEEE 802.11 networks a data packet is delivered to multiple receivers through the multicast paradigm. Since multicast has no feedback scheme, the reliability of the service provided to the user is penalized. This issue becomes more important as the network load or the coverage area increases; these two facts cause collisions or corrupted deliveries due to bad channel conditions, respectively. In one of our previous works, we introduced a highly efficient collision avoidance multicast delivery mechanism that tackles only the collision problem of multicast packets. In this paper, we include in our scheme a new Automatic Repeat-reQuest (ARQ) mechanism to also be able to retransmit packets corrupted by bad channel conditions. Through our simulation results, we prove that our feedback mechanism, combined with collision avoidance, is able to properly deliver more multicast packets than other proposals while the network performance is maintained

Abstract:

In this letter, we introduce a novel Multicast Collision Prevention (MCP) mechanism for IEEE 802.11 WLANs. The proposed MCP mechanism has been designed following the operating principles of the IEEE 802.11 standard. Our analysis shows that the proposed MCP mechanism effectively improves the reliability and efficiency of the multicast mechanism defined by the IEEE 802.11 standard. Simulation results also show that our proposal properly interoperates with the 802.11 unicast mechanism

Abstract:

In this paper we develop a discretized version of the dynamic programming algorithm and study its convergence and stability properties. We show that the computed value function converges quadratically to the true value function and that the computed policy function converges linearly, as the mesh size of the discretization converges to zero; further, the algorithm is stable. We also discuss several aspects of the implementation of our procedures as applied to some commonly studied growth models
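
A minimal numerical sketch of such a discretized dynamic programming algorithm, applied to a standard deterministic growth model; the grid size, parameters and log utility are illustrative choices, not the paper's test cases:

```python
# Minimal sketch of discretized dynamic programming for a growth
# model: V(k) = max_{k'} { log(k^a + (1-d)k - k') + b V(k') } on a
# finite capital grid. All parameter values are illustrative.
import numpy as np

alpha, beta, delta = 0.3, 0.95, 0.1
grid = np.linspace(0.1, 10.0, 400)          # mesh over the capital stock

def bellman(V):
    k = grid[:, None]                       # current capital (rows)
    kp = grid[None, :]                      # next-period capital (columns)
    c = k**alpha + (1 - delta) * k - kp     # implied consumption
    util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    Q = util + beta * V[None, :]
    return Q.max(axis=1), Q.argmax(axis=1)

V = np.zeros(len(grid))
for _ in range(1000):
    V_new, policy = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# Halving the mesh size should roughly quarter the value-function
# error (quadratic convergence) and halve the policy error (linear).
print(V[:3], grid[policy[:3]])
```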

Abstract:

This paper provides a fairly systematic study of general economic conditions under which rational asset pricing bubbles may arise in an intertemporal competitive equilibrium framework. Our main results are concerned with the nonexistence of asset pricing bubbles in those economies. These results imply that the conditions under which bubbles are possible, including some well-known examples of monetary equilibria, are relatively fragile

Abstract:

In this paper, we analyze a discretized version of the dynamic programming algorithm for a parameterized family of infinite-horizon economic models, and derive error bounds for the approximate value and policy functions. If h is the mesh size of the discretization, then the approximation error for the value function is bounded by Mh^2, and the approximation error for the policy function is bounded by Nh, where the constants M and N can be estimated from primitive data of the model

Abstract:

Cerebral palsy (CP) is an irreversible disorder that affects the human brain and causes problems with mobility and communication. This paper presents the results of designing, implementing and evaluating easy-to-use, low-cost equipment that seeks to increase the motivation and effort of children with cerebral palsy while they do their physical therapies. Through a validation process with the help of patients and physiotherapists, we designed a set of multimedia applications that were evaluated with children with cerebral palsy during their therapies. Our emphasis was on providing support for muscle distension exercises, as they are identified as the most painful activities that take place in a physical therapy session. The application was based on a Wiimote control, connected to a personal computer via Bluetooth, as a receptor of the infrared light emitted by a simple control made of an infrared LED, a switch and a battery

Abstract:

This paper describes a Human-Robot Interaction subsystem that is part of a robotics architecture, the ViRbot, used to control the operation of service mobile robots. The Human/Robot Interface subsystem consists of three modules: Natural Language Understanding, Speech Generation and Robot's Facial Expressions. To demonstrate the utility of this Human-Robot Interaction subsystem, we present a set of applications that allows a user to command a mobile robot through spoken commands. The mobile robot accomplishes the required commands using an action planner and reactive behaviors. In the ViRbot architecture, the action planner module uses Conceptual Dependency (CD) primitives as the base for representing the problem domain. After a command is spoken, a CD representation of it is generated; a rule-based system takes this CD representation and, using the state of the environment, generates other subtasks, represented by CDs, to accomplish the command. This paper also presents how to represent context through scripts. Using scripts it is easy to make inferences about events for which there is incomplete or ambiguous information. Scripts serve to encode common-sense knowledge; they are also used to fill the gaps between seemingly unrelated events
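
For illustration, PTRANS (physical transfer) and GRASP are two of Schank's CD primitives; the following dictionary encoding of a spoken command and the toy expansion rule are our assumptions, not ViRbot's actual data structures:

```python
# Illustrative sketch of a Conceptual Dependency frame for the spoken
# command "Robot, bring the coffee to the kitchen". PTRANS and GRASP
# are Schank's CD primitives; this dict encoding is our assumption,
# not ViRbot's actual representation.
command_cd = {
    "primitive": "PTRANS",        # physical transfer of an object
    "actor": "robot",
    "object": "coffee",
    "from": "current_location",
    "to": "kitchen",
}

def expand(cd):
    """Toy rule: a PTRANS of an object decomposes into subtasks."""
    if cd["primitive"] == "PTRANS":
        return [
            {"primitive": "PTRANS", "actor": cd["actor"],
             "object": cd["actor"], "to": cd["from"]},   # go to the object
            {"primitive": "GRASP", "actor": cd["actor"],
             "object": cd["object"]},                    # pick it up
            {"primitive": "PTRANS", "actor": cd["actor"],
             "object": cd["object"], "to": cd["to"]},    # carry it to the goal
        ]
    return [cd]

for step in expand(command_cd):
    print(step)
```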

Abstract:

A mobile robot that navigates in closed environments like houses or offices needs to learn the paths that it can use to cross the rooms. A planner based on artificial potential fields is used to plan the movements in advance. One of the main problems with this technique is that the planner can get stuck in a local minimum where the attraction and repulsion forces cancel each other, so that the movement of the robot is zero or it oscillates around a certain path. To eliminate this, an expert system uses its knowledge of known obstacles to intelligently place additional attraction forces, specifically at some of the corners of the obstacles, in places that will take the robot out of the spot where it is stuck. The robot moves between rooms to reach its destination and learns each new path found by the potential-field algorithm
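
A minimal sketch of this idea, assuming a single round obstacle and an auxiliary attractor placed near one of its corners; the gains, geometry and stall test are illustrative:

```python
# Minimal sketch of potential-field navigation with the escape idea
# described above: when the robot stops making progress (a local
# minimum), an auxiliary attraction point near an obstacle corner is
# added to pull it out. Gains and geometry are illustrative.
import numpy as np

goal = np.array([9.0, 9.0])
obstacle = np.array([5.0, 5.0])      # one round obstacle on the way
corner = np.array([6.5, 3.5])        # escape attractor near its corner

def force(pos, extras=()):
    f = 1.0 * (goal - pos)                       # attractive field
    d = pos - obstacle
    dist = np.linalg.norm(d)
    if dist < 3.0:                               # repulsive field nearby
        f += 8.0 * d / dist**3
    for a in extras:                             # auxiliary attractors
        f += 1.5 * (a - pos)
    return f

pos, extras = np.array([1.0, 1.0]), []
history = [pos.copy()]
for step in range(400):
    pos = pos + 0.05 * force(pos, extras)
    history.append(pos.copy())
    stalled = (step > 20 and
               abs(np.linalg.norm(history[-21] - goal)
                   - np.linalg.norm(pos - goal)) < 0.05)
    if stalled and np.linalg.norm(pos - goal) > 0.5:
        extras = [corner]                        # stuck: pull toward corner
    if extras and np.linalg.norm(pos - corner) < 0.5:
        extras = []                              # unstuck: drop the helper
    if np.linalg.norm(pos - goal) < 0.1:
        break
print("final position:", pos.round(2), "steps:", len(history) - 1)
```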

Resumen:

En esta contribución, analizaré las consecuencias de la caída del Muro desde la perspectiva de las relaciones internacionales y en particular desde la geopolítica. Explicaremos cómo, después de la caída del Muro, Europa — que era un continente dividido entre dos potencias y que no era dueña de su destino — regresó a una triple centralidad. Entre 1989 y 1992, Alemania volvió a ocupar el centro geográfico y político de Europa mientras la Unión Europea se perfila de nuevo como un polo de poder no solamente económico sino también diplomático y financiero (con el euro). Insistiremos en particular sobre el desarrollo seguido, pasado por alto por los internacionalistas, respecto al renacimiento de la noción de Europa central como idea tanto histórica como cultural, con un real sentido político e influencia dentro del nuevo proyecto europeo y en particular de sus relaciones con el Este (Rusia) y el Sur (el Medio Oriente). A su vez, este renacimiento de Europa central es una de las manifestaciones de la reemergencia de varios imperios (ruso, chino, otomano) que juntos con el declive de Estados Unidos contribuyen al advenimiento de un mundo multipolar

Abstract:

The article examines the consequences of the fall of the Berlin Wall from a geopolitical and pan-European point of view. We will explain how Europe — a continent divided between two powers and without a hold on its destiny — returned to a "triple centrality" after the fall of the Berlin Wall. Between 1989 and 1992, Germany recovered its place as the geographical and political centre of Europe. Meanwhile, the European Union stands as a new economic, diplomatic, and financial centre since the establishment of the euro and the Maastricht Treaty of 1991. We will pay particular attention to a fact usually overlooked by internationalists: the rebirth of a 'Central Europe' understood not only in accordance with historical and cultural roots, but also as an idea with recognizable political sense and influence inside the construction of a new European project, particularly in its relations with the East (Russia) and the South (the Middle East). This rebirth is to be put in the broader context of the re-emergence of former empires (Russian, Chinese, Ottoman). This fact, added to the decline of the United States as a superpower, contributes to the advent of a multipolar world

Resumen:

El artículo indaga de qué manera Canadá, Estados Unidos y México podrían establecer un régimen lingüístico para su integración, que no descanse en un laissez faire que significaría la imposición del inglés. Es importante para los tres países preservar un espacio para el español (y el francés) en términos de legitimidad y transparencia. Las autoridades mexicanas deberían conocer más la experiencia canadiense en el terreno de la defensa de la lengua nacional en las instancias internacionales, para unir esfuerzos en aras de un mejor resultado en la resistencia frente a una hegemonía dañina para todos

Abstract:

This article investigates how Canada, the United States, and Mexico could establish a linguistic regime for their integration that does not rest on a laissez-faire approach, which would mean the imposition of English. It is important for the three countries to preserve a space for Spanish (and French) in terms of legitimacy and transparency. Mexican authorities should learn more about the Canadian experience in defending the national language in international fora, in order to join forces in the hope of better results in resisting a hegemony harmful to all

Abstract:

Whereas the linguistic governance of the European Union and its institutions is the object of heated political debate, there is no such problem yet in North America. This disregard for a "linguistic balance of power" is likely to be temporary: should North American integration deepen, such a political debate is bound to emerge. In that case the experience of the ongoing debate in Europe could be invaluable. It is a highly political as well as a highly technical debate, which has little to do with purely linguistic considerations and much more with political ones. Although it is often considered inevitable, choosing English as the only regional communication language in Europe or North America is neither neutral nor costless. To frame these discussions, the notion of "soft power" was developed both in the United States and in Europe, although with different approaches. As theorized by Joseph Nye, "soft power" describes the ability of a state to influence directly or indirectly the behavior or the interests of other actors through cultural or ideological means. These ideas are an adaptation to International Relations of A. Gramsci's notion of hegemony, where dominant ideas are particularly powerful because they are assumed as implicit aspects of a more explicit ideology. It is also related to P. Bourdieu's ideas about the symbolic value, and thus domination, of one particular language over others, based on misrecognition (méconnaissance). To us, language is the most concrete, measurable and, one might say, scientific way to observe the diffusion of soft power

Abstract:

In the 1990s, the European Union's foreign policy, until then oriented towards the rest of western Europe, the Mediterranean and the former colonies in Africa, in the Caribbean and in the Pacific, changed radically. Latin America, kept apart during the first decades, came forward. It is quite easy to make the link between this appearance on the European Community's agenda and the entry into the EC, five years earlier, of the two Iberian countries, Spain and Portugal. Indeed, these new memberships gave the impetus that the bilateral relations lacked, and, although the renewal of the bi-regional relations is also due to other factors, Latin America unquestionably constitutes an important element of Spain's European policy. It remains to be seen whether this advantage is real and, above all, whether it is longstanding

Abstract:

The Mexico-EU agreement has already achieved the result of putting Europe back on the Mexican agenda. However, negotiating an agreement with Europe represents a complex task. There are no critical voices against the agreement either in Mexico or in the EU. Yet the EU gave more priority to political and co-operation issues than to economic trade, while Mexico had more interest in a trade agreement. Here, Dr. Sberro reviews the different negotiating strategies involved in the various Mexican trade agreements and their outcomes

Abstract:

Purpose - The purpose of this paper is to explore the cross-cultural efficacy of a gender identity scale commonly used in marketing: the shortened version of the Bem Sex Role Inventory (BSRI) measure developed by Barak and Stern, the Gender Trait Index (GTI). Design/methodology/approach - Data were collected in the USA, Mexico, and Norway, and confirmatory factor analysis was used to assess the cross-cultural equivalence of the GTI. Findings - Configural, metric and partial scalar invariance of a revised 16-item measure were supported. Originality/value - The validated 16-item GTI scale will enhance measurement applications and theory building in cross-cultural research, and further the authors' understanding of the role that gender identity plays in consumer decision making

Resumen:

Este ensayo tiene por objeto presentar de manera ortodoxa la tesis de Kelsen sobre la cláusula alternativa tácita, indicando cuál es el problema jurídico que pretende solucionar con su postulación: la demostración del derecho como un orden normativo unitario, i. e., sin contradicciones frente al hecho observable de la existencia de conflictos entre normas de diverso nivel jerárquico. Se formulan varias críticas a dicha tesis y se concluye que Kelsen incurrió en un error lógico, conocido como el principio de explosión, que expresa que a partir de un enunciado falso (contradictorio) se sigue lógicamente cualquier enunciado (ex falso quodlibet) en cuyo caso se afirma la unidad (no contradicción) entre normas de diverso nivel jerárquico

Abstract:

This essay aims to present, in an orthodox way, Kelsen's thesis of the tacit alternative clause, indicating the legal problem it seeks to solve: demonstrating that law is a unitary normative order, i.e., one without contradictions, in the face of the observable fact of the existence of conflicts between rules of different hierarchical levels. Several criticisms of this thesis are formulated, concluding that Kelsen committed a logical error, known as the principle of explosion, which states that from a false (contradictory) statement any statement logically follows (ex falso quodlibet); in this case, the false statement is the one which asserts the unity (non-contradiction) between rules of different hierarchical levels

Abstract:

'Authority', 'competence' and other related concepts are determined on the basis of the concept of law as a dynamic order of norms. The norms which regulate the processes of norm creation establish empowerments (Ermächtigungen). The material domain of validity of the empowering norm is called 'competence'. The concept of 'person' in relation to empowering norms yields the concepts of ‘organ’ and 'authority'. The spatial domain of the validity of these norms is the spatial or territorial jurisdiction. This paper analyses the basic norm and its legal functions; it considers the irregularity of legal acts and norms, as well as the legal consequences thereof, namely nullity and annulment. Additionally, the Kelsenian 'Tacit Alternative Clause' is criticized and a possible solution to the problem of irregular norms is offered through new definitions of the existence, validity and legitimacy of norms

Abstract:

Control sometimes triggers negative responses. Although there is empirical evidence for such negative reactions and theories that can explain them, it remains to be examined when they occur. We conjecture that these negative responses disappear if control is legitimate, that is, if it averts antisocial behavior. Specifically, we predict that fewer individuals respond negatively to control if control prevents selfishness or theft. We confirm these predictions in an experiment

Abstract:

Testing, contact tracing, and isolation (TTI) is an epidemic management and control approach that is difficult to implement at scale because it relies on manual tracing of contacts. Exposure notification apps have been developed to digitally scale up TTI by harnessing contact data obtained from mobile devices; however, exposure notification apps provide users only with limited binary information when they have been directly exposed to a known infection source. Here we demonstrate a scalable improvement to TTI and exposure notification apps that uses data assimilation (DA) on a contact network. Network DA exploits diverse sources of health data together with the proximity data from mobile devices that exposure notification apps rely upon. It provides users with continuously assessed individual risks of exposure and infection, which can form the basis for targeting individual contact interventions. Simulations of the early COVID-19 epidemic in New York City are used to establish proof-of-concept. In the simulations, network DA identifies up to a factor 2 more infections than contact tracing when both harness the same contact data and diagnostic test data. This remains true even when only a relatively small fraction of the population uses network DA. When a sufficiently large fraction of the population (≳75%) uses network DA and complies with individual contact interventions, targeting contact interventions with network DA reduces deaths by up to a factor 4 relative to TTI. Network DA can be implemented by expanding the computational backend of existing exposure notification apps, thus greatly enhancing their capabilities. Implemented at scale, it has the potential to precisely and effectively control future epidemics while minimizing economic disruption
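
A toy flavor of risk assessment on a contact network, in which each person's infection risk is updated from test results and from time spent near risky contacts; this illustrative update rule is not the paper's data-assimilation scheme:

```python
# Toy flavor of risk assessment on a contact network: risks rise with
# a contact's risk and with exposure time. Illustrative update rule,
# not the network-DA algorithm of the paper.
contacts = {            # hours of proximity between users (assumed data)
    ("ana", "ben"): 3.0,
    ("ben", "cam"): 1.0,
    ("cam", "dee"): 4.0,
}
risk = {"ana": 0.9, "ben": 0.05, "cam": 0.05, "dee": 0.05}  # ana tested positive

beta = 0.05             # illustrative transmission rate per contact-hour
for _ in range(3):      # a few propagation sweeps
    new = dict(risk)
    for (u, v), hours in contacts.items():
        # each side's risk rises with the other's risk and exposure time
        new[u] = min(1.0, new[u] + beta * hours * risk[v])
        new[v] = min(1.0, new[v] + beta * hours * risk[u])
    risk = new

print({k: round(v, 3) for k, v in risk.items()})
```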

Resumen:

La reflexión relativa al llamado imperativo a seguir el dictamen de la conciencia —y no sacrificarla a ningún interés particular ni instancia exterior— debe tomar siempre en cuenta las exigencias planteadas "por la alteridad del mundo" y, sobre todo, por aquellas de la verdad. La palabra debe servir no para justificar ninguna especie de tiranía, sino para hacer presente la verdad

Abstract:

Reflection on the imperative call to act as our conscience dictates —despite the pressure of individual interests or external influence— must always take into account the demands raised "by the otherness of the world" and, above all, those raised by the truth. The Word must serve not to justify any kind of tyranny, but rather to make the truth present

Resumen:

A partir de la ilusión en la creencia de un progreso infinito de la historia se presentan dos proposiciones contemporáneas para no naufragar en la desesperanza. La primera defiende la creación de un nuevo ser humano gracias a la tecnociencia. La segunda pretende deshacerse de toda esperanza para concentrarse en el presente único. Ambas se basan en una antropología del sujeto competente y amo de sí mismo, y rechazan una antropología de la receptividad y dependencia. Para abrigar una esperanza hay que crear la experiencia de situaciones aparentemente sin salida, en las que uno está tentado justamente a desesperar. La esperanza se otorga como un don y, por ende, la esperanza cristiana escatológica es su arquetipo

Abstract:

Two contemporary proposals, originating in the illusion created by the belief in infinite historical progress, are presented as ways not to founder in hopelessness. The first defends the creation of a new human being through technoscience. The second proposes to get rid of all hope in order to focus on the present alone. Both proposals are based on an anthropology of a competent human being who is master of himself, and reject an anthropology of receptivity and dependence. In order to hold on to hope, one must go through situations apparently without a way out, in which one is tempted precisely to despair. Hope is given as a gift and, therefore, eschatological Christian hope is its archetype

Resumen:

Desde la visión personalista, la mutua donación de sí en una comunidad de personas es la base de una adecuada antropología para una satisfactoria consideración de la dignidad y vocación de la mujer. Dicha visión entra en conflicto con otras de tipo feminista, que cuestionan cuál es la naturaleza del amor humano, la posibilidad de una auto-realización plena y el lugar que ocupa la sexualidad. La autora se inclina por la primera visión, analiza algunas posturas feministas y concluye con el concepto de éxtasis como cima del amor entre hombre y mujer

Abstract:

From a personalist view, the mutual gift of self in a community of persons is the foundation of an adequate anthropology for a satisfactory consideration of a woman's dignity and vocation. This view conflicts with certain feminist views, which question the nature of human love, the possibility of complete self-realization, and the place of sexuality. The author leans towards the first view, analyzes some feminist positions and concludes with the concept of ecstasy as the height of love between a man and a woman

Abstract:

In this paper, we develop two models for the valuation of information technology (IT) investment projects using the real options approach. The IT investment projects discussed in this paper are categorized into development and acquisition projects, depending upon the time it takes to start benefiting from the IT asset once the decision to invest has been taken. The models account for uncertainty both in the costs and benefits associated with the investment opportunity. Our stochastic cost function for IT development projects incorporates the technical and input cost uncertainties of Pindyck's model (1993), but also considers the fact that the investment costs of some IT projects might change even if no investment takes place. In contrast to other models in the real options literature in which benefits are summarized in the underlying asset value, our model for IT acquisition projects represents these benefits as a stream of stochastic cash flows

Abstract:

A computer simulation model for the valuation of investments in disruptive technologies is developed. Based on the conceptual framework proposed by Christensen (1997) for explaining the Innovator's Dilemma phenomenon, an investment project is divided into two sequential phases representing the evolution of the disruptive technology from an emerging to a mainstream market. In each of these phases, development costs and net commercialization cash flows are modeled using various stochastic processes that interact with each other. As a result, the initial estimate on the value of the project is continuously updated to reflect the stochastic changes of these variables. An example illustrates the usefulness of the model for understanding the effects of cash flow and cost volatilities in the value of a disruptive technology investment
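
A minimal Monte Carlo sketch of the option-like feature such two-phase models capture: the mainstream phase is undertaken only if the value revealed by the emerging phase makes it worthwhile. All numbers and the lognormal assumption are illustrative, not the paper's calibrated processes:

```python
# Minimal Monte Carlo sketch of a two-phase technology investment:
# phase 1 (emerging market) resolves uncertainty; phase 2 (mainstream)
# is undertaken only if the updated value estimate is positive, which
# is the option-like feature. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, r = 100_000, 0.05
cost1, cost2 = 2.0, 8.0            # phase investments (illustrative)

# Phase 1 reveals a noisy estimate V of the mainstream cash-flow value.
V = 10.0 * np.exp(rng.normal(-0.5 * 0.6**2, 0.6, n))   # lognormal, mean 10

# Flexible strategy: enter phase 2 only when it looks profitable.
flexible = -cost1 + np.exp(-r) * np.maximum(V - cost2, 0.0)
# Rigid strategy: commit to both phases up front.
rigid = -cost1 + np.exp(-r) * (V - cost2)

print(f"value with abandonment option: {flexible.mean():.2f}")
print(f"value when pre-committed:      {rigid.mean():.2f}")
print(f"option value of flexibility:   {flexible.mean() - rigid.mean():.2f}")
```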

Abstract:

Consumer protection in financial markets in the form of information disclosure is high on government agendas, even though there is little evidence of its effectiveness. We implement a randomized control trial in the credit card market for a large population of indebted cardholders and measure the impact of Truth-in-Lending-Act-type disclosures, de-biasing warning messages and social comparison information on default, indebtedness, account closings, and credit scores. We conduct extensive external validity exercises in several banks, with different disclosures, and with actual policy mandates. We find that providing salient interest rate disclosures had no effects, while comparisons and de-biasing messages had only modest effects at best

Abstract:

This paper tackles the problem of searching for fast unscheduled public transport routes in Mexico City, proposing as a solution a Public Transport Navigation System (PTNS) for mobile devices. The intelligent system proposed in this work finds fast public transport routes to a destination by using a search algorithm with a knowledge-based, time-dependent heuristic. This heuristic captures the knowledge of expert public transport users and combines it with data given by transport companies to find the fastest routes available. The algorithm generates an estimated time of arrival (ETA) and finds the best route in less than 10 seconds

Abstract:

In Mexico City there are 12 subway lines and over 350 bus lines that carry out more than 12 million trips daily. All these lines are unscheduled, and information about the location of their stations is limited. Only expert public transport users have enough knowledge to choose a fast route to their destination using this transport system. The city transport system is a big maze for users; this situation makes inexperienced users avoid public transport services, preferring taxis or their own car. This paper tackles the problem of finding the fastest unscheduled public transport route in Mexico City, proposing as a solution a Public Transport Navigation System (PTNS). Although minimum-delay route search problems have been widely studied and many algorithms have been created, Mexico City public transport data is highly variable and uncertain, making it impossible to implement any of them; hence the need for a solution based not on the uncertain public transport data but on the knowledge of its users. The system developed in this work finds fast routes to a destination in public transport by using a search algorithm with a knowledge-based, time-dependent heuristic. The proposed heuristic aims to capture the knowledge of expert public transport users and combine it with data given by transport companies to calculate the fastest routes available. These factors make the heuristic search algorithm produce routes that are faster than the ones considered by any public transport user. The heuristic is a time-dependent function, and therefore the estimated time of arrival (ETA) generated is precise. Tests were made comparing trip times of persons with and without the PTNS installed on their mobile devices. The results showed that persons who followed the route suggested by the PTNS had significantly shorter trip times than the ones who did not use the system
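
A minimal sketch of route search with time-dependent travel times, in the spirit of the described heuristic: a Dijkstra-style search in which each edge's duration depends on the departure time. The toy network, rush-hour rule and factors are illustrative assumptions, not PTNS data:

```python
# Minimal sketch of route search with time-dependent travel times:
# a Dijkstra-style search where each edge's duration depends on the
# departure time (e.g. rush-hour factors estimated from expert users).
import heapq

edges = {   # stop -> [(next_stop, base_minutes, line)]  (toy network)
    "A": [("B", 10, "metro1"), ("C", 25, "bus7")],
    "B": [("C", 8, "bus7")],
    "C": [],
}

def travel_time(base, depart_min):
    """Illustrative rule: rush hour (7:00-9:00 am) runs 60% slower."""
    return base * 1.6 if 420 <= depart_min % 1440 <= 540 else base

def fastest_route(start, goal, depart):
    pq, best = [(depart, start, [])], {}
    while pq:
        t, stop, route = heapq.heappop(pq)
        if stop == goal:
            return t - depart, route
        if best.get(stop, float("inf")) <= t:
            continue
        best[stop] = t
        for nxt, base, line in edges[stop]:
            heapq.heappush(pq, (t + travel_time(base, t), nxt, route + [line]))
    return None

eta, route = fastest_route("A", "C", depart=430)   # leave at 7:10 am
print(f"ETA: {eta:.0f} min via {route}")
```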

Abstract:

This paper reports an experiment evaluating the effect of gift giving on building trust. We have nested our explorations in the standard version of the investment game. Our gift treatment includes a dictator stage in which the trustee decides whether to give a gift to the trustor before both of them proceed to play the investment game. We observe that in this case the majority of trustees offer their endowment to trustors. Consequently, receiving a gift significantly increases the amounts sent by trustors when controlling for the differences in payoffs created by it. Trustees are, however, not better off by giving a gift, as the increase in the amount sent by trustors is not large enough to offset the trustees' loss associated with the cost of giving a gift

Abstract:

Should one use words or money to foster trust of the other party if no means of enforcing trustworthiness are available? This paper reports an experiment studying the effectiveness of two types of mechanisms for promoting trust: a costly gift and a costless message as well as their mutual interaction. We nest our findings in the standard version of the investment game. Our data provide evidence that while both stand-alone mechanisms enhance trust, a gift performs significantly worse than a message. Moreover, when a gift is combined with sending a message, it can be counterproductive

Abstract:

This paper reports on an experiment studying the effectiveness of a deposit mechanism in increasing trust and trustworthiness. The deposit mechanism is modeled as a monetary transfer from the trustee to the trustor prior to the transaction. If the deposit is implemented, it makes the trustor at least as well off as if no transaction ever took place, but does not give him any means of enforcing the contract. Our experiment consists of three treatments, Baseline, Deposit, and Endowment Control, implemented in an across-subjects design. Baseline is the standard investment game by Berg et al. (1995). There are two players, A and B, both endowed with $10 at the beginning of the game. The first mover, player A, decides whether to send a whole dollar amount t ∈ {0,1,2,...,10} to her counterpart, player B. The amount sent is tripled by the experimenter. The second mover, player B, then decides how much of the tripled amount, r ∈ {0,...,3t} in whole dollar amounts, to return to player A. Deposit involves the investment game as described in Baseline and a pre-game stage during which player B has the option to transfer his whole $10 endowment to player A. In the actual game that follows, player A can still send a maximum of $10 even if player B decided to transfer his endowment to player A. The Endowment Control treatment is analogous to Baseline and differs only in the endowments given to the players: player A starts the game with $20 and player B with $0. We observe that the majority of trustees offer their endowment to trustors during the pre-game stage. Such a deposit significantly increases the amounts sent by trustors when controlling for the differences in payoffs created by receiving a deposit. Trustees are, however, not better off by giving a deposit, as the increase in the amount sent by trustors is not large enough to offset the trustees' loss. We also find that trustees do not change the amount returned after they have given a deposit
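
In this design the material payoffs (our notation) are

$$ \pi_A = 10 + d - t + r, \qquad \pi_B = 10 - d + 3t - r, \qquad d \in \{0,10\},\quad t \in \{0,\dots,10\},\quad r \in \{0,\dots,3t\}, $$

where $d$ is the deposit; with $d = 10$ the trustor ends with at least her initial \$10 even if she sends everything and nothing is returned, which is exactly the sense in which the deposit protects her without enforcing the contract.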

Abstract:

Experimental evidence suggests that the size of the foregone outside option of the first mover does not affect the behavior of the second mover in the lost wallet game. In this paper we experimentally compare the behavior of subjects when they face an outside option with unequal payoffs, i.e., the first mover gets 10 and the second mover gets 0, and when they face an outside option with equal payoffs, i.e., both get 5. Consistent with most of the literature, we do not find a significant difference in the behavior of second movers

Abstract:

We conduct an experiment to examine the strategic use of trust in an environment similar to the Berg, Dickhaut, and McCabe (1995) investment game. The environment differs in that the second mover is restricted to the binary choice of returning half of the tripled amount (fair split) or zero (selfish split). We use the theory of guilt aversion to explain behavior in strategic and non-strategic environments, represented by playing the game sequentially and simultaneously, respectively. We find that in the sequential treatment first movers invest significantly more than when the transfer decisions are made simultaneously. Moreover, in line with the theoretical prediction, 91% of subjects who invested the entire endowment received half of the surplus. On the other hand, only 5% of subjects who invested anything less than the entire endowment received half. In the simultaneous treatment the proportions are 11.1% and 32.1%, respectively. These allocations, along with the beliefs collected in a salient manner, are consistent with the predictions of guilt aversion

Resumen:

En el artículo se delinean los alcances artísticos de The sirens of Titan, de Kurt Vonnegut y su capacidad para explorar los dilemas de la cultura posmoderna y de la condición humana tal como fue formulada por Hannah Arendt. Se propone que la versatilidad en el manejo de tramas, personajes, mundos posibles y del narrador le permite a Vonnegut plasmar los rasgos distintivos de la cultura estadounidense de la posguerra, al mismo tiempo que desnuda sus fundamentos filosóficos por medio del uso virtuoso de la parodia

Abstract:

The article delineates the artistic scope of Kurt Vonnegut's The Sirens of Titan and its capacity to explore the dilemmas of postmodern culture and of the human condition as formulated by Hannah Arendt. It suggests that Vonnegut's versatility in the handling of plots, characters, possible worlds, and the narrator allows him to capture the distinctive features of post-war U.S. culture while laying bare its philosophical foundations through the virtuosic use of parody

Resumen:

La enseñanza de la historia en el ITAM tiene como propósito formar en el profesionista una conciencia del significado y la responsabilidad de su función social. El Departamento Académico de Estudios Generales se concibe como un espacio abierto dedicado a la enseñanza de las humanidades, es decir, a toda aquella expresión del hombre que no se limite exclusivamente al campo de alguna ciencia particular. El objetivo es ubicar al hombre y a la humanidad en el centro de nuestra reflexión y alcanzar una visión integradora del saber. La educación debe, además, ayudar a formar una ciudadanía consciente de su identidad, que contribuya con prudencia y justicia a la sociedad; debe ordenar significativamente la experiencia

Abstract:

History is taught at ITAM with the intention of providing future professionals with an awareness of the importance and responsibility of their social role. The General Studies Department was conceived as an open domain devoted to the teaching of the Humanities, that is, all human expression not restricted to any particular scientific field. Our goal is to place Man and Humanity at the very center of our studies and to achieve an integrating vision of knowledge. In addition, education must help citizens become aware of their identity and contribute to society with prudence and justice; it must give meaningful order to experience

Resumen:

En este ensayo se pretende explicar la naturaleza de la crisis que enfrentan las universidades de nuestro tiempo. Se explican algunas transformaciones, se analizan las relaciones conflictivas entre la universidad y la sociedad, y se describen proyectos que rompen con el sentido y los valores fundamentales que la han caracterizado desde su fundación. Se describe el fondo permanente que la particulariza y que ha defendido durante cerca de siete siglos, y que es lo que le ha permitido al mismo tiempo permanecer, cambiar y ser un actor primordial en la construcción del hombre y la sociedad del futuro. Se ubica al ITAM en esta perspectiva

Abstract:

In this article, we will explain the nature of the crisis facing universities today. In particular, we will address some of the changes they have undergone and analyze the troubled relationship between the university and society. Furthermore, we will describe projects that break with the meaning and fundamental values that have characterized the university since its founding. We will also describe the permanent core that distinguishes it, which it has defended for nearly seven centuries and which has allowed it at once to endure, to change, and to be a primary actor in the construction of the man and society of the future. We will look at ITAM from this perspective

Resumen:

Los cursos de Problemas de la civilización contemporánea que el Departamento de Estudios Generales del itam ofrece a los estudiantes intentan configurar académicamente algunas formas de comprender la realidad actual. Pero también son una apuesta por un marco teórico interdisciplinar, un método dialógico y el entendimiento de que la universidad es la conciencia crítica de la sociedad. Así, se invita a los estudiantes a valorar su responsabilidad social

Abstract:

The courses, Problems of Contemporary Civilization, given by itam’s General Studies Department, attempt an academic approach to understanding our current reality. They also constitute a commitment to an interdisciplinary theoretical framework, a dialogic method, and the view of the university as society’s critical conscience. Accordingly, these courses invite students to value their social responsibility

Resumen:

Se ofrece un panorama general de lo que son los Estudios Generales en el ITAM: desde los principios, misión y objetivos que guían a la institución, hasta la filosofía educativa de las siete materias básicas que se imparten en el departamento, así como su estructura, metodología y finalidad

Abstract:

In this article, we will give a general overview of General Studies at ITAM: from the principles, mission, and objectives which guide it, to the educational philosophy of the Department’s seven core courses. Moreover, we will address their structure, methodology, and goals

Abstract:

CSmoothing allows an analyst to use the so-called Controlled Smoothing technique to estimate trends in a time series framework. In this Web-tool (Shiny), the analyst may apply the methodology to at most 3 mortality time series simultaneously, as well as to other kinds of time series individually. Likewise, this smoothing approach allows the analyst to establish one, two or three segments in order to take into account possible changes in variance regimes. For estimating trends it uses different amounts of smoothness, both globally for the total data set and through some partial indices for each selected segment. It is also possible to fix endogenously the points where the segments start and end (the cutoff points), with continuity at the joins. Additionally, intervals of different standard deviations around the respective trends are given. Particular emphasis is placed on a big data set of log mortality rates, log(qx), taken from period life tables of the Human Mortality Database (HMD) (University of California, Berkeley (USA) and Max Planck Institute for Demographic Research (Germany), 2021). In all cases, dynamic graphs and several statistics related to the Controlled Smoothing technique are illustrated

Resumen:

En este artículo se estima la esperanza de vida temporal en torno a la joroba de mortalidad masculina para México a nivel estatal, para los años 2000, 2005, 2010 y 2015. Se optó por el método de suavizamiento controlado por segmentos, con la finalidad de garantizar la comparabilidad y mitigar el efecto que pudieran tener observaciones extrañas, en función de lo esperable en cuanto a la mortalidad subyacente. Se compara la eficacia del método propuesto frente a la de modelos paramétricos de la literatura, y destaca la del presente. Los resultados indican que dicha esperanza de vida temporal es desigual y en algunos casos menor que la del año 2000, además de evidenciar el mejor ajuste que logra la presente propuesta frente a varios modelos paramétricos de mortalidad, tales como el de Heligman y Pollard

Abstract:

Temporary life expectancies are estimated around the male mortality hump for the Mexican case at the state level, for 2000, 2005, 2010 and 2015. Controlled smoothing by segments is used to guarantee comparability and mitigate the effect of outliers, in light of the expected underlying mortality. The results indicate that these life expectancies are unequal and, in some cases, even lower than those of the year 2000. In addition, the present proposal achieves a better fit than several parametric mortality models, such as the Heligman and Pollard model

Resumen:

La pandemia de Covid-19 ha causado un número muy grande de muertes en todo el mundo, de tal manera que si la prevalencia de la infección continúa, indudablemente afectará los niveles del indicador de esperanza de vida. Desde una perspectiva demográfica, la principal característica del Covid-19 es su efecto diferenciado por grupos de edad, teniendo mayor letalidad en personas de edad avanzada. Estimamos la esperanza de vida temporal en tres grupos etarios y por sexo, a nivel nacional y en los 32 estados del país. El método que se usó para la estimación de la esperanza de vida permite controlar el porcentaje adecuado de suavidad de la tendencia de una serie de tiempo, tanto globalmente como por segmentos, y se considera adecuado para identificar los efectos diferenciados que ha tenido la pandemia por grupo etario. Para el grupo de 41 a 85 años, nuestras estimaciones indican una reducción en la esperanza de vida temporal a nivel nacional, entre 2020, año en que alcanzaron su mínimo, y el año en que alcanzaron su máximo, de 3.5 años para los hombres y 1.6 para las mujeres

Abstract:

The Covid-19 pandemic has caused many deaths around the world; if the prevalence of the infection continues, it will undoubtedly affect life expectancy levels. From a demographic perspective, the main feature of Covid-19 is its differentiated effect across age groups, with the highest lethality among the elderly. We estimate temporary life expectancies for each of three age groups by sex, at the national as well as the state level. The method used to estimate life expectancy controls the percentage of smoothness of a time series trend, both globally and by segments, and is considered suitable for identifying the differentiated effects the pandemic has had on each age group. For the 41 to 85 year-old group, we estimate a reduction in temporary life expectancy at the national level, between 2020 (when it reached its minimum) and the year in which it reached its maximum, of 3.5 years for men and 1.6 years for women
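As background for the quantity estimated here and in the preceding abstract, the following is a minimal sketch of how a temporary life expectancy between two exact ages can be computed from one-year death probabilities; the life-table construction (radix, uniform-deaths approximation) is the standard textbook one, not necessarily the authors' exact implementation:

```python
import numpy as np

def temporary_life_expectancy(qx, start, stop, radix=100000.0):
    """Temporary life expectancy between exact ages `start` and `stop`,
    given one-year death probabilities qx[a] for each age a.
    Uses the usual approximation L_a = l_a - 0.5 * d_a."""
    lx = np.empty(len(qx) + 1)
    lx[0] = radix
    for a, q in enumerate(qx):
        lx[a + 1] = lx[a] * (1.0 - q)      # survivors to exact age a + 1
    dx = lx[:-1] - lx[1:]                  # deaths between ages a and a + 1
    Lx = lx[:-1] - 0.5 * dx                # person-years lived in each age interval
    return Lx[start:stop].sum() / lx[start]

# For the 41-85 age group discussed above (exact ages 41 to 86):
# temporary_life_expectancy(qx, 41, 86)
```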

Abstract:

This paper presents a method to estimate mortality trends of two-dimensional mortality tables. Comparability of mortality trends for two or more such tables is enhanced by applying penalized least squares and imposing a desired percentage of smoothness to be attained by the trends. The smoothing procedure is essentially determined by the smoothing parameters, which are related to the percentage of smoothness. To quantify smoothness, we employ an index defined first for the one-dimensional case and then generalized to the two-dimensional one. The proposed method is applied to data from member countries of the OECD. We take as a benchmark the smoothed mortality surface for one of those countries and compare it with other mortality surfaces smoothed with the same percentage of two-dimensional smoothness. Our aim is to be able to see whether convergence exists in the mortality trends of the countries under study, in both the year and age dimensions
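The penalized least-squares step underlying this kind of controlled smoothing is, in its one-dimensional form, the classical Whittaker-Henderson smoother; a minimal sketch follows. The paper's actual contribution, an index mapping a desired smoothness percentage to the smoothing parameter, is not reproduced here:

```python
import numpy as np

def whittaker_smooth(y, lam, d=2):
    """Penalized least-squares (Whittaker-Henderson) smoothing:
    minimize ||y - mu||^2 + lam * ||D_d mu||^2, where D_d is the
    d-th order difference matrix; larger lam yields a smoother trend."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)    # (n - d) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

Choosing lam indirectly, via a common smoothness percentage, is what makes trends from different tables comparable; the two-dimensional case replaces D with difference penalties along both the age and year dimensions.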

Resumen:

Se presenta un método original para controlar suavidad cuando se estiman tasas de mortalidad en un contexto bidimensional (por edades y años) con una perspectiva de P-splines. El analista puede elegir el porcentaje de suavidad deseado, ya sea en la dimensión de edad, de años o de ambas, con el objetivo de obtener tendencias suavizadas de tasas de mortalidad que sean comparables. Para ello se proponen unos índices que relacionan la suavidad deseada con los parámetros que controlan el suavizamiento. También se establecen algunos resultados teóricos que brindan soporte a los índices de suavidad y se tocan algunos aspectos de carácter numérico. Con fines ilustrativos, el método propuesto se aplica a datos de estadísticas vitales para México y a datos del Continuous Mortality Investigation Bureau del Reino Unido

Abstract:

An original method is presented to control smoothness when estimating mortality rates in a two-dimensional context of ages and years with a P-spline perspective. The analyst can choose a desired percentage of smoothness for the dimension of age, the dimension of year or both, thus obtaining smoothed trends of mortality rates that are comparable for different datasets. To that end, some indices that relate the desired smoothness with the smoothness parameters are proposed. Some theoretical results that lend support to the indices as well as some numerical aspects are also mentioned. The proposed method is illustrated with vital statistics data from the Mexican national institute of statistics and from the UK Continuous Mortality Investigation Bureau

Abstract:

This article presents some applications of time-series procedures to solve two typical problems that arise when analyzing demographic information in developing countries: (1) unavailability of annual time series of population growth rates (PGRs) and their corresponding population time series, and (2) inappropriately defined population growth goals in official population programs. These problems are considered as situations that require combining information of population time series. Firstly, we suggest the use of temporal disaggregation techniques to combine census data with vital statistics information in order to estimate annual PGRs. Secondly, we apply multiple restricted forecasting to combine the official targets on future PGRs with the disaggregated series. Then, we propose a mechanism to evaluate the compatibility of the demographic goals with the annual data. We apply the aforementioned procedures to data of the Mexico City Metropolitan Zone divided by concentric rings and conclude that the targets established in the official program are not feasible. Hence, we derive future PGRs that are in line both with the official targets and with the historical demographic behavior. We conclude that population growth programs should be based on this kind of analysis to be supported empirically. Thus, through specialized multivariate time-series techniques, we propose to obtain first an optimal estimate of a disaggregated vector of population time series and then produce restricted forecasts in agreement with some data-based population policies derived here
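The restricted-forecasting step admits a compact closed form; the sketch below implements the textbook minimum-MSE adjustment of an unrestricted forecast to linear restrictions, a simplification of the multivariate machinery used in the article:

```python
import numpy as np

def restricted_forecast(y_hat, Sigma, C, r):
    """Adjust an unrestricted forecast y_hat with covariance Sigma so that
    the linear restrictions C @ y = r hold, in the minimum-MSE sense:
        y* = y_hat + Sigma C' (C Sigma C')^{-1} (r - C y_hat)
    """
    gain = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T)
    return y_hat + gain @ (r - C @ y_hat)
```

The size of the adjustment (r - C @ y_hat), measured against the forecast covariance, is also the natural ingredient of a compatibility test between official targets and the historical data.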

Abstract:

Mobile devices are becoming pervasive in medical informatics. A common practice among physicians is to take notes during ward rounds, which helps them write the formal medical note. MedNote is a mobile application designed to support this practice. We conducted an experimental evaluation, where seven medical interns watched a video projection of three clinical cases, using different devices (PDA, Tablet PC, Paper) to take notes. Participants were required to elaborate a formal medical note with the help of these personal notes. We measured the time required to complete the tasks as well as the participants' perception of comfort with each device. Using the application proved faster for elaborating the formal note when compared to paper, and all users agreed that the structure of fields in MedNote was very useful as a guideline. In contrast, we found that entering data in MedNote was somewhat difficult and slow compared to using paper

Resumen:

La figura del consejero independiente fue acogida por la legislación mexicana en 2005 para provocar una mejoría en el gobierno corporativo de las sociedades que cotizan en bolsa. Se pensaba que con su inserción se avanzaría en una mayor representatividad de los intereses de los accionistas minoritarios y una mayor rendición de cuentas. El artículo cuestiona estas ideas en diez entrevistas realizadas a consejeros independientes. Se encuentra que la ley dista mucho de la realidad: los consejeros carecen de un nivel aceptable de profesionalización, su grado de independencia parece muy limitado debido a los criterios usados en su designación y normalmente carecen de la información requerida para participar adecuadamente en las sesiones del consejo; el texto sugiere la colegiación obligatoria, mecanismo que, junto con un sistema de certificación, redundaría en una mejoría de la forma en que los consejeros desempeñan su función

Abstract:

The legal notion of an independent board member was introduced into Mexican legislation in 2005. The goal was to improve the corporate governance of the corporations listed on the stock exchange. It was thought that including independent board members would help to better represent the interests of minority shareholders, as well as to increase the level of accountability. This article challenges these ideas based on 10 interviews carried out with independent board members. It finds that what the law says is far from what actually occurs: board members lack an adequate level of professionalization, their degree of independence is severely limited by the criteria used to appoint them, and they usually lack the information required to participate effectively in board sessions. The article suggests mandatory membership in an independent board members' association, a mechanism that, along with a certification system, would improve the way in which board members perform their duties

Resumen:

Este artículo estudia la relación entre las características de las quejas de abuso y/o acoso sexual de primarias públicas de la Ciudad de México y su resultado, medido a partir de si éstas se confirman o no. Para ello se analizaron 109 informes de intervención, realizados por especialistas de la Unidad de Atención al Maltrato y Abuso Sexual Infantil. Un análisis de regresión logística que considera la confirmación de la queja como la variable dependiente respalda la existencia de una relación estadísticamente significativa entre la confirmación y las siguientes variables dicótomas: reincidencia del ofensor, evaluación psicológica de la víctima, entrevista al director, realización de taller y cambio de plantel de la víctima. A partir de los resultados se dan varias recomendaciones entre las que destaca la urgente necesidad de que las autoridades educativas implementen medidas rigurosas para evitar que aquellos miembros del personal escolar que ya han estado involucrados en una queja de abuso y/o acoso sexual vuelvan a cometer una ofensa de este tipo

Abstract:

This article studies the relationship between the characteristics of complaints of sexual harassment and/or abuse in Mexico City public elementary schools and their outcome, measured by whether or not the complaints are confirmed. The study analyzed 109 intervention reports produced by specialists from the Unit of Attention to Sexual Abuse and Mistreatment of Children. A logistic regression analysis that takes the confirmation of the complaint as the dependent variable supports the existence of a statistically significant relationship between confirmation and the following dichotomous variables: repeat offender, psychological evaluation of the victim, interview with the school principal, workshop held, and victim moved to another school. Based on the results, several recommendations are made, including the urgent need for educational authorities to implement rigorous measures to prevent school employees previously involved in a complaint of sexual harassment and/or abuse from committing another offense of this type

Resumen:

Este artículo analiza el caso de la Unidad para la Atención al Maltrato y Abuso Sexual Infantil (UAMASI), entidad encargada de atender las quejas de violencia escolar que ocurren en las escuelas del Distrito Federal. Luego de estudiar el funcionamiento institucional de la dependencia, se exponen las tendencias de las denuncias atendidas a partir de cinco variables: ciclo escolar, delegación, turno y nivel educativo del plantel del cual proviene la denuncia, así como el motivo que la generó. Lo anterior, con base en la información de tres mil 242 quejas interpuestas de 2001 a 2007. Al final se hace un diagnóstico general de la experiencia institucional de la UAMASI y se interpretan las principales tendencias de las quejas interpuestas

Abstract:

This article analyzes the case of the Child Abuse Treatment Unit, the entity responsible for dealing with complaints of violence in Mexico City schools. After studying the organization's institutional functioning, the paper describes trends in complaints, based on five variables: school year, city borough, morning or afternoon shift at school, grade, and the reason behind the complaint. The description uses information from 3,242 complaints filed from 2001 to 2007. A general diagnosis is made of the Unit's institutional experience, and the main tendencies involving complaints are interpreted

Resumen:

La programación lineal es un primer acercamiento del estudiante universitario a la optimización numérica, cuyos conceptos suelen requerir un alto nivel de abstracción, por lo que es importante entender la forma en que se construyen los conceptos relacionados con este método en su versión gráfica y aquéllos más abstractos, involucrados en el método simplex. En este artículo se reportan los resultados obtenidos a partir de un estudio en el que se utiliza la teoría APOE y la modelación, en la enseñanza de este método a estudiantes de un primer curso universitario de álgebra lineal. Los resultados obtenidos muestran, en primer término, que este acercamiento favorece la construcción con sentido del modelo geométrico, así como la de su relación con los pasos del algoritmo simplex. Estas construcciones juegan un papel importante en la comprensión de los conceptos involucrados en la formalización de este último. Este estudio contribuye a la literatura, en tanto que el tema de la programación lineal ha recibido muy poca atención de los investigadores a pesar de que forma parte de diversos cursos de álgebra lineal. Además, los resultados ponen de manifiesto la posibilidad de que los estudiantes comprendan los conceptos involucrados en el método simplex

Abstract:

Linear programming constitutes university students’ first approach to numerical optimization. The concepts involved require a high level of abstraction, so it is important to understand how they are constructed. This study presents the results obtained from the use of an APOS-Theory-based didactic model, together with a simple modeling problem, to teach elementary linear programming concepts, starting from the basic problem and finishing with the simplex algorithm, to students in their first Linear Algebra course. Results show that this didactic approach fosters a meaningful construction of the geometric model and of its relation to the steps of the simplex algorithm. These constructions play an important role in the understanding of the concepts involved in the formalization of this algorithm. This study contributes to the literature by studying a topic, linear programming, that has received very little attention from researchers, although it is part of many linear algebra courses at the university level. Moreover, results show that students can understand the concepts involved in the simplex algorithm

Resumen:

Se presenta una interesante aplicación de aprendizaje de máquina en la originación de créditos en microfinanzas. Las microfinanzas se dirigen a las personas que no pueden construir un historial crediticio y, en consecuencia, no pueden acceder a préstamos de bancos u otras instituciones financieras. Usamos datos de una compañía microfinanciera mexicana que opera en varias regiones del país. De igual modo, se pretende guiar a prestamistas intermediarios para escoger a sus clientes y alcanzar un menor riesgo de crédito. Usamos varios modelos estadísticos como análisis de componentes principales, análisis de grupos y árboles de regresión. Obtenemos, como resultado, una serie de recomendaciones basadas en las características de los clientes

Abstract:

This article presents an exciting application of machine learning for loan origination in microfinance. Microfinance targets people who cannot build a credit history and therefore cannot access loans from banks or other financial institutions. We use data from a Mexican microfinance company that operates in several regions throughout the country. The objective is to guide intermediate lenders to choose their clients and achieve a lower credit default risk. We use several statistical models, such as principal component analysis, cluster analysis, and regression trees. We obtain, as a result, a series of recommendations based on the characteristics of the clients
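A minimal scikit-learn sketch of the three-step toolkit named in this abstract (dimension reduction, client segmentation, and an interpretable tree) is given below; the data, column counts, and parameters are placeholders, not the authors' specification:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(500, 12)     # client characteristics (placeholder data)
y = np.random.rand(500)         # e.g. observed repayment outcome (placeholder)

Z = StandardScaler().fit_transform(X)                            # standardize features
scores = PCA(n_components=4).fit_transform(Z)                    # principal components
segments = KMeans(n_clusters=5, n_init=10).fit_predict(scores)   # client segments
tree = DecisionTreeRegressor(max_depth=4).fit(Z, y)              # interpretable risk rules
```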

Abstract:

Is culture a lasting driver of corruption? I study whether normative attitudes toward bribery persist through generational change. To disentangle cultural from institutional causes, I compare individuals who share an institutional environment but whose parents were born abroad. I find evidence of intergenerational persistence: average bribery attitudes in the parental country of ancestry explain variation in bribery attitudes across second-generation immigrants. Consistent with theoretical models, both family-based and community-based mechanisms of attitudinal transmission appear to matter, and cultural persistence appears to be greater in laxer environments. Finally, I find that bribery attitudes are associated with two measures of bribing behavior, highlighting the need to increase attention to cultural factors in corruption scholarship and policy

Abstract:

A profusion of recent research has focused on historical legacies as key to understanding contemporary outcomes. We review this body of research, analyzing both the comparative-historical analysis (CHA) and modern political economy (MPE) research traditions as applied to the study of communism, imperialism, and authoritarianism. We restrict our focus to the sizeable subset of arguments that meets a relatively strict definition of legacies, i.e., arguments that locate the roots of present-day outcomes in causal factors operative during an extinct political order. For all their differences, the CHA and MPE approaches both face the challenges of convincingly identifying the sources of historical persistence and of reckoning with alternative channels of causation. We find that mechanisms of persistence in legacy research generally belong to one of three main categories. While both traditions acknowledge the role of institutions in historical persistence, CHA research tends to emphasize the lasting power of coalitions, whereas work in MPE often argues for the persistence of cognitions. We argue that, at their best, CHA and MPE approaches yield complementary insights. Further progress in legacy research will benefit from greater cross-fertilization across research traditions and deeper recognition of commonalities across communist, imperialist, and authoritarian regimes

Abstract:

Social spending by central governments in Latin America has, in recent decades, become increasingly insulated from political manipulation. Focusing on the 3x1 Program in Mexico in 2002–2007, we show that social spending by local government is, in contrast, highly politicized. The 3x1 Program funds municipal public works, with each level of government—municipal, state, and central—matching collective remittances. Our analysis shows that 3x1 municipal spending is shaped by political criteria. First, municipalities time disbursements according to the electoral cycle. Second, when matching collective remittances, municipalities protect salaries of personnel, instead adjusting budget items that are less visible to the public, such as debt. Third, municipalities spend more on 3x1 projects when their partisanship matches that of the state government. Beyond the 3x1 Program, our findings highlight the considerable influence that increasing political and economic decentralization can have on local government incentives and spending choices, in Mexico and beyond

Abstract:

The monitoring of elections by international groups has become widespread. But can it have unintended negative consequences for governance? We argue that high-quality election monitoring, by preventing certain forms of manipulation such as stuffing ballot boxes, can unwittingly induce incumbents to resort to tactics of election manipulation that are more damaging to domestic institutions, governance, and freedoms. These tactics include rigging courts and administrative bodies and repressing the media. We use an original panel dataset of 144 countries in 1990–2007 to test our argument. We find that, on average, high-quality election monitoring has a measurably negative effect on the rule of law, administrative performance, and media freedom. We employ various strategies to guard against endogeneity, including instrumenting for election monitoring

Abstract:

Does electoral manipulation reduce voter turnout? The question is central to the study of political behavior in many electoral systems and to current debates on electoral reform. Nevertheless, existing evidence suggests contradictory answers. This article clarifies the theoretical relationship between electoral manipulation and turnout by drawing some simple conceptual distinctions and presents new empirical evidence from Mexico. The deep electoral reforms in 1990s Mexico provide a hitherto-unexploited opportunity to estimate the effect of electoral manipulation on turnout. The empirical strategy makes use of variation over time and across the states of Mexico in turnout and in electoral manipulation. The analysis finds that electoral manipulation under the PRI discouraged citizens from voting. Conceptually, the article shows that true and reported turnout need not move in the same direction, nor respond in the same way to electoral manipulation

Abstract:

This study estimates the aggregate import demand function for Greece using annual data for the period 1951–92. There are two methodological novelties in this paper. The authors find that the variables used in the aggregate import demand function are not stationary but are cointegrated. Thus, a long-run equilibrium relationship exists among these variables during the period under study. The price elasticity is found to be close to unity in the long run. The cross-price elasticity is also found to be close to unity. Import demand is found to be highly income elastic in the long run. This implies that with economic growth, ceteris paribus, the trade deficit for Greece is likely to get worse.
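A hedged sketch of the kind of cointegration check described here, using the Engle-Granger test from statsmodels on simulated placeholder series (the paper's own estimation of the long-run elasticities is not reproduced):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
log_income = np.cumsum(rng.normal(size=42))            # I(1) placeholder series
log_imports = 1.0 * log_income + rng.normal(size=42)   # long-run elasticity near one

t_stat, p_value, _ = coint(log_imports, log_income)
print(p_value)   # a small p-value rejects "no cointegration"
```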

Abstract:

In many virtual environment (VE) applications, the VE system must be able to display accurate models of human figures that can perform routine behaviors and adapt to events in the virtual world. In order to achieve such adaptive, task-level interaction with virtual actors, it is necessary to model elementary human motor skills. SkillBuilder is a software system for constructing a set of motor behaviors for a virtual actor by designing motor programs for arbitrarily complicated skills. Motor programs are modeled using finite state machines, and a set of state machine transition and ending conditions for modeling motor skills has been developed. Using inverse kinematics and automatic collision avoidance, SkillBuilder was used to construct a suite of behaviors for simulating visually guided reaching, grasping, and head-eye tracking motions for a kinematically simulated actor consisting of articulated, rigid body parts. All of these actions have been successfully demonstrated in real time by permitting the user to interact with the virtual environment using a whole-hand input device

Abstract:

Using recent developments in econometric techniques, we show that the conventionally accepted view that a higher saving rate causes higher economic growth does not hold for Mexico. In fact, the causality goes in the opposite direction
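The direction-of-causality question can be illustrated with the Granger test available in statsmodels; the series below are simulated placeholders, not the Mexican data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
growth = rng.normal(size=60)
saving = np.roll(growth, 1) + rng.normal(scale=0.5, size=60)  # growth leads saving

# By convention the test asks whether the second column Granger-causes the first:
data = pd.DataFrame({"saving": saving, "growth": growth})
grangercausalitytests(data[["saving", "growth"]], maxlag=2)
```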

Abstract:

Doing business on the Internet offers many opportunities along with many risks. This chapter focuses on a series of risks of legal liability arising from e-mail and Internet activities that are a common part of many e-businesses. Some of the laws governing these electronic activities are new and especially designed for the electronic age, while others are more traditional laws whose application to electronic activities is the novelty. E-business not only exposes companies to new types of liability risk, but also increases the potential number of claims and the complexity of dealing with those claims. The international nature of the Internet, together with a lack of uniformity in the laws governing the same activities in different countries, means that companies need to proceed with caution

Abstract:

This paper studies the short-run and long-run relationships between saving and investment rates for 123 countries using an error correction framework. The conventional wisdom suggests that capital should be more mobile for countries with high per capita income. Our estimates suggest that capital is mobile for 16 countries, most of which have a low per capita income

Abstract:

We investigate the relationship between latitude and the incidence of melanoma among Whites in different age groups using data from different continents. Relevant data were obtained for 59 regions around the world. A statistical analysis was carried out using regional dummy variables to eliminate spurious statistical correlation due to clustering. The simple correlation between latitude and the incidence of melanoma is strongly negative for almost all age groups. However, once the regional dummies were introduced in the analysis, the relationship between latitude and incidence rates disappeared for all age groups, while the explanatory power of the regression equation increased substantially

Abstract:

Even supercomputers are poor competition for the human brain when it comes to forecasting stockmarket movements. But the gap is closing, report Tapen Sinha and Clarence Tan. Research overseas and in Australia shows that artificial neural networks can be trained to predict share prices with some success, and they're becoming smarter

Abstract:

To study the impact of the Superannuation Guarantee Charge (SGC) on small business, we conducted a survey of 344 small businesses. We find that (1) small businesses do not see any benefits from the SGC, (2) the cost of lost jobs will be high, and (3) the government has not adequately communicated future changes in the SGC

Abstract:

In a seminal study, Fiegenbaum (1990) attempted to set down the parameters of the relationship between risk and return for firms and related it to the ‘two-piece von Neumann-Morgenstern utility function’ first explored empirically by Fishburn and Kochenberger (1979). I re-examine the estimated relationship. Specifically, I perform a meta-analysis of the above-median and below-median returns to show that the relationship between risk and return is weaker above the median than below it. I also show that the relationship between below-median returns and above-median returns is very small, exhibiting a compartmentalization of decision making by firms similar to that of individual decision makers

Abstract:

A model is developed where risks of various types are lumped together by annuity contract issuing companies due to asymmetry of information between the companies and their customers. If firms behave nonstrategically, an equilibrium exists with the firms charging a uniform price to all customers. In such an equilibrium, I study how changes in (a) the fractions of various groups of individuals, (b) the survival probabilities of various types and (c) the attitudes towards risk of customers affect the equilibrium price of the annuity contracts

Abstract:

An explicit upper bound is calculated for the wealth distribution in a model of overlapping generations with uncertain lifetimes and an imperfect annuities market

Abstract:

In this note, I show how economically meaningful propositions can be derived regarding the effects of human wealth, nonhuman wealth and the probability of dying on the optimal term life insurance coverage of an individual with state-dependent preferences, in the framework of Babbel & Economides (1985)

Abstract:

Increased survival probability and expected lifespan are shown to increase capital intensity and welfare in an overlapping generations model with uncertain lifetimes in the presence of an annuities market and production

Resumen:

En este trabajo se evalúa la sustentabilidad de la política fiscal en México de acuerdo con el comportamiento de la restricción presupuestaria del gobierno y con el saldo de la deuda acumulada en los pasados dos decenios. Se utiliza el valor de la deuda y el análisis empírico se basa en Wilcox (1989), requiriéndose para la sustentabilidad que el proceso de la deuda sea estacionario y que su media no condicional sea igual a cero. Nuestros resultados sugieren que la política fiscal para el periodo 1980-1997 no es sustentable. Estos resultados no son concluyentes, ya que se presenta inestabilidad en los parámetros del proceso univariado que define a la serie de deuda. Cuando el análisis se realiza para el subperiodo 1988.III-1997.IV, los resultados sugieren que para ese lapso la política fiscal sí sería sustentable, mientras que para la mayor parte del decenio de los ochenta no lo fue

Abstract:

This paper studies the sustainability of fiscal policy in Mexico, according to the behavior of the government budget constraint and the balance of the debt accumulated over the last two decades. We use the value of the debt, and the empirical analysis is based on Wilcox (1989), where sustainability requires that the debt process be stationary and that its unconditional mean equal zero. Our results suggest that fiscal policy for the period 1980-1997 is not sustainable. These results are not conclusive, due to instability in the parameters of the univariate process that defines the debt series. When the analysis is carried out for the subperiod 1988.III-1997.IV, the results suggest that over this span fiscal policy would be sustainable, while for most of the eighties it was not
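The stationarity half of the Wilcox (1989) condition can be illustrated with an augmented Dickey-Fuller test; the series below is a simulated placeholder, and the zero-unconditional-mean requirement is not tested in this sketch:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
debt = np.cumsum(rng.normal(size=72))   # random walk: a non-stationary placeholder

adf_stat, p_value, *_ = adfuller(debt)
print(p_value)   # a large p-value cannot reject a unit root (unsustainable debt)
```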

Resumen:

Una síntesis de la historia de los cursos de Historia Socio-Política de México y Problemas de la Realidad Mexicana Contemporánea: sus orígenes y gestación; los cambios que experimentaron; la motivación subyacente de los mismos y su finalidad: el compromiso que el estudiante tiene con la sociedad mexicana

Abstract:

We will give a summary of the history of the following courses: Mexico’s Socio-Political History and Contemporary Mexican Reality. Furthermore, we will delve into their origins and development, the changes they have gone through, their underlying motivation, and finally their goal: the student’s commitment to Mexican society

Resumen:

Este estudio pretende explicar la participación de Manuel de Mier y Terán en la guerra de independencia, principalmente cuando ejerció la jefatura de las fuerzas patriotas en el partido de Tehuacán, Puebla, entre el 16 de agosto de 1815 y el 21 de enero de 1817. El ensayo articula tres procesos de la guerra de independencia en esta región: la carrera militar de Mier y Terán previa a 1815, las características del espacio de Tehuacán y de las zonas colindantes, y, finalmente, la estrategia político-militar del personaje para darle vida a la insurgencia en condiciones adversas

Abstract:

This paper seeks to explain Manuel de Mier y Terán's role in the Independence struggle, particularly as head of the patriotic forces of the partido of Tehuacán, Puebla, between August 16, 1815 and January 21, 1817. The essay articulates three processes of the Independence struggle in this region: Mier y Terán's military career before 1815, the spatial characteristics of Tehuacán and its surrounding areas and, finally, Mier y Terán's political and military strategy to invigorate the insurgency under adverse conditions

Abstract:

Rapid developments in power distribution systems and renewable energy have widened the applications of dc-dc buck-boost converters in dc voltage regulation. Applications include vehicular power systems, renewable energy sources that generate power at a low voltage, and dc microgrids. It is noted that the cascade connection of converters in these applications may cause instability, because converters acting as loads exhibit constant power load (CPL) behavior. In this brief, the output voltage regulation problem of a buck-boost converter feeding a CPL is addressed. The construction of the feedback controller is based on the interconnection and damping assignment control technique. In addition, an immersion and invariance parameter estimator is proposed to compute online the extracted load power, which is difficult to measure in practical applications. It is ensured through the design that the desired operating point is (locally) asymptotically stable with a guaranteed domain of attraction. The approach is validated via computer simulations and experimental prototyping
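For context, the plant under study is the averaged buck-boost model with a constant power load; a minimal open-loop simulation is sketched below with illustrative parameters and a fixed duty ratio. The paper's controller (interconnection and damping assignment plus the immersion-and-invariance load estimator) is not reproduced:

```python
import numpy as np
from scipy.integrate import solve_ivp

E, L, C, P = 12.0, 1e-3, 470e-6, 5.0   # source voltage, inductance, capacitance, CPL power
u = 0.5                                # fixed duty ratio (open loop, for illustration)

def averaged_model(t, x):
    i, v = x                                        # inductor current, output voltage magnitude
    di = (u * E - (1.0 - u) * v) / L
    dv = ((1.0 - u) * i - P / max(v, 1e-3)) / C     # the CPL draws i_load = P / v
    return [di, dv]

sol = solve_ivp(averaged_model, (0.0, 0.05), [0.0, 6.0], max_step=1e-5)
print(sol.y[:, -1])   # state at the end of the horizon
```

The destabilizing effect of the CPL is visible in the Jacobian: the load contributes a positive term P/(C v^2) to the voltage dynamics, the incremental negative resistance that motivates the damping-assignment design.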

Abstract:

Let V be any shift-invariant subspace of square summable functions. We prove that if V is A-refinable for some expansive dilation A, then the completeness property is equivalent to several conditions on the local behaviour at the origin of the spectral function of V, among them that the origin is a point of A*-approximate continuity of the spectral function, if we assume this value to be one. We present our results also in the more general setting of A-reducing spaces. We also prove that the origin is a point of A*-approximate continuity of the Fourier transform of any semiorthogonal tight frame wavelet, if we assume this value to be zero

Abstract:

Bike sharing systems allow people to take a bicycle at one of many stations scattered around the city and return it later. Users naturally imbalance the system by creating demand in a highly asymmetric pattern. For bike sharing systems to meet the fluctuating demand for bicycles and parking docks, inventory optimization among stations is crucial. In this paper, we model a subset of stations of the bike sharing system in Mexico City (ECOBICI) using discrete-event simulation and evaluate different heuristics for the initial allocation of bicycles. In addition, we compare those heuristics with a simulation-optimization method to assess the optimal number of bicycles that should be placed at each station during the rush-hour period to enable the system to comply with ECOBICI’s operational requirements. The results show that the simulation-optimization method outperforms the other heuristics
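A toy discrete-event model of a single station, in the spirit of the simulation described here, can be written with SimPy; the arrival rate, capacity, and initial stock below are invented placeholders rather than ECOBICI data:

```python
import random
import simpy

def riders(env, station, rate, failures):
    while True:
        yield env.timeout(random.expovariate(rate))  # next rider arrival
        if station.level > 0:
            yield station.get(1)                     # take a bike
        else:
            failures.append(env.now)                 # unmet demand

env = simpy.Environment()
station = simpy.Container(env, capacity=30, init=15)  # docks and initial bikes
failures = []
env.process(riders(env, station, rate=1.0, failures=failures))
env.run(until=120)
print(len(failures), "riders found the station empty")
```

Wrapping a model like this in an optimization loop over the initial stock at each station is the simulation-optimization idea the abstract refers to.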

Abstract:

As 2002 came to an end, the Argentine economy seemed to have touched bottom after contracting more than 10 percent. But a robust recovery during 2003 will be blocked by many of the same forces that helped create Argentina's economic catastrophe: the hard-line stance of the IMF, the muddle that is Argentine politics, and a dispirited and politically disaffected populace

Abstract:

Since the arrival of Vicente Fox to the presidency, Mexico has been stuck in neutral. The executive has been characterized by confusion, indecision, and repeated policy mistakes. Mexican political parties have shown a striking inability to adjust their behavior to the new democratic political environment. And Mexicans of all stripes remain steeped in an authoritarian culture that has prevented them from embracing the political opportunities offered by Mexico’s new democratic setting

Resumen:

Este trabajo tiene como objetivo dilucidar la noción kantiana de “concepto de un objeto en general”. En un pasaje de la Crítica de la razón pura Kant ofrece una pista al respecto al indicar que las categorías son los conceptos que definen al objeto en general. Este trabajo pretende esclarecer la noción de “concepto de un objeto en general” al investigar cómo debe entenderse la relación entre las categorías y el objeto. Para ello se explica en primer lugar la doctrina kantiana de la inclusión conceptual y del género sumo, y se pone en relación con la noción que nos ocupa. En segundo lugar se investiga el modo en que ha de entenderse la relación entre el concepto de un objeto en general y las categorías a partir del pasaje mencionado de la Primera Crítica. Finalmente se muestra el papel que juega la referencialidad para entender dicho concepto y su relación con las categorías

Abstract:

This paper aims to elucidate the Kantian notion of the “concept of an object in general”. In a passage from the Critique of Pure Reason, Kant offers a clue to this by indicating that the categories are the concepts that define the object in general. This paper seeks to clarify the notion of “concept of an object in general” by analyzing how the relationship between categories and the object is to be understood. For this, it first explains the Kantian doctrine of conceptual inclusion and of the highest genus, and relates it to the notion at hand. Secondly, it investigates the way in which the relationship between the concept of an object in general and the categories is to be understood, based on the aforementioned passage of the First Critique. Finally, it shows the role that referentiality plays in the way that this concept and its relation with the categories should be understood

Resumen:

En las pocas ocasiones en que Kant aborda el tema de la verdad, lo suele hacer en función de los problemas implicados en la definición nominal de verdad y en la búsqueda de un criterio de verdad. El objetivo de este trabajo es ofrecer una visión sinóptica del modo en que Kant plantea estas dos cuestiones. En la primera sección del trabajo se aborda el tema de la definición de la verdad. Primero explico qué es una definición y qué implica esto para el caso de la verdad. Tras esto, expongo las consecuencias y problemas que Kant extrae al respecto. En la segunda sección del trabajo se desarrolla el tema del criterio de verdad. Para comenzar expongo las expectativas de Kant respecto de dicho criterio y los límites con que se encuentra. Tras esto ofrezco una clasificación y explicación de los criterios de verdad que menciona a lo largo de su obra

Abstract:

On the few occasions that Kant addresses the subject of truth, he usually does so in relation to the problems involved in the nominal definition of truth and in the search for a criterion of truth. The aim of this paper is to provide a synoptic view of the way in which Kant poses these two issues. In the first section of the paper I address the topic of the definition of truth. I begin by explaining what a definition is and what this entails for the case of truth. I then present the consequences and problems that Kant draws from this. In the second section of the paper I develop the topic of the criterion of truth. First, I set out Kant's expectations for such a criterion and show the limits he encounters. Then I provide a classification and an explanation of the truth criteria that he mentions throughout his work

Resumen:

En el paso de su filosofía precrítica a su filosofía crítica, Kant transita de la noción de verdad entendida como identidad entre sujeto y predicado a la de verdad como concordancia entre conocimiento y objeto. En este trabajo se muestra el papel que la Dissertatio juega en este tránsito. Se analiza el modo en que Kant caracteriza la verdad en dicha obra y los problemas relacionados con este tema. En concreto, se relaciona la noción de verdad expuesta en esta obra con dos de los aportes más relevantes de la Dissertatio: la distinción entre mundo sensible y mundo inteligible y la afirmación de la limitación y finitud del conocimiento humano

Abstract:

In the passage from his pre-critical philosophy to his critical philosophy, Kant moves from a notion of truth as identity between subject and predicate to that of truth as concordance between knowledge and object. This article shows the role that the Dissertatio plays in this transition. It analyzes Kant's characterization of truth in that work and the problems related to this topic. Specifically, I connect the notion of truth expounded there with two of the most relevant contributions of the Dissertatio: the distinction between the sensible world and the intelligible world, and the assertion of the limitation and finitude of human knowledge

Resumen:

La discusión en torno a la teoría kantiana de la verdad suele girar alrededor de las preguntas —íntimamente relacionadas entre sí— por la adscripción de Kant a una versión coherentista o correspondentista de la verdad y por los correspondientes criterios de verdad. Estas discusiones suelen ponderar qué teoría de la verdad resulta más adecuada dados ya los principios críticos. En contraste con esto, este trabajo pretende mostrar, a través de la evolución de la noción de verdad del Kant precrítico y del proyecto de una lógica trascendental, la vinculación intrínseca de la noción de verdad con los principios mismos de la filosofía crítica, y replantear las preguntas por la definición y el criterio de verdad en el marco de la pregunta por la posibilidad de la verdad

Abstract:

The discussion about Kant’s theory of truth usually revolves around his ascription to some version of the coherence or correspondence theory of truth, and around the corresponding criteria of truth. These discussions often weigh which theory of truth is most appropriate given the critical principles. Instead, this paper aims to exhibit, through the evolution of Kant’s notion of truth in his precritical years and through the project of a transcendental logic, the intrinsic relation between the notion of truth and the very principles of critical philosophy, and to raise again the questions about the definition and the criteria of truth, but within the framework of the question of the possibility of truth

Resumen:

En la tercera sección de la "Introducción" a la lógica trascendental, Kant dedica un par de párrafos al tema de la verdad (KrV B82-83). Basándose en este pasaje, los comentaristas de Kant han justificado diversas y a veces contradictorias interpretaciones de la noción kantiana de verdad. Sin embargo, pocos han analizado el pasaje en su propio contexto, es decir, como parte de la estrategia para introducir la idea de una lógica trascendental. En este trabajo se pretende tomar postura a este respecto. Se intentará mostrar que este pasaje no abona a la distinción entre lógica general y lógica trascendental, sino entre analítica y dialéctica

Abstract:

In the third section of the "Introduction" to transcendental logic, Kant dedicates a couple of paragraphs to the subject of truth (KrV B82-83). Based on this passage, Kant's commentators have justified various and sometimes contradictory interpretations of the Kantian notion of truth. However, few have analyzed the passage in its own context, that is, as part of the strategy to introduce the idea of a transcendental logic. In this work, I intend to take a position in this regard. I will try to show that this passage does not support the distinction between general and transcendental logic, but rather that between analytic and dialectic

Resumen:

El objetivo de este trabajo es dilucidar la noción de “verdad trascendental” y mostrar su lugar en el sistema kantiano. Se defenderá que la verdad trascendental consiste, en línea con la definición tradicional de verdad, en un sentido de correspondencia entre conocimiento y objeto, que la lógica trascendental establece criterios de verdad trascendental, y que es esta noción de verdad la que permite establecer la verdad del conocimiento a priori y delimitar el territorio de la verdad empírica

Abstract:

The aim of this work is to elucidate the notion of “transcendental truth” and to show its role in the Kantian system. I will argue that this notion is in line with the traditional definition of truth, i.e., that it consists in the correspondence between knowledge and object. I will also argue that criteria of transcendental truth are provided by transcendental logic, and that it is this notion of truth that makes it possible to establish the truth of a priori knowledge and to delimit the territory of empirical truth

Abstract:

Inflationary cosmology has, in the last few years, received a strong dose of support from observations. The fact that the fluctuation spectrum can be extracted from the inflationary scenario through an analysis that involves quantum field theory in curved space-time, and that it coincides with the observational data, has led to a certain complacency in the community, which prevents the critical analysis of the obscure spots in the derivation. We argue here briefly, as we have discussed in more detail elsewhere, that there is something important missing in our understanding of the origin of the seeds of cosmic structure, as evidenced by the fact that in the standard accounts the inhomogeneity and anisotropy of our universe seem to emerge from an exactly homogeneous and isotropic initial state through processes that do not break those symmetries. This article gives a very brief recount of the problems faced by the arguments based on established physics. The conclusion is that we need some new physics to be able to fully address the problem. The article then presents one avenue that has been used to address the central issue and elaborates on the degree to which the new approach makes different predictions from the standard analyses. The approach is inspired by Penrose's proposal that Quantum Gravity might lead to a real, dynamical collapse of the wave function, a process that, we argue, has the properties needed to extract us from the theoretical impasse described above

Abstract:

We review the shortcomings of the standard version of the inflationary account of the emergence of the seeds of cosmic structure. Next we explain how, by adding to that scheme the self-induced collapse hypothesis (presumably reflecting a modification of standard quantum mechanics brought about by its merging with gravitational physics), these problems can be overcome. Finally we show how these ideas can be confronted with existing and future phenomenology

Abstract:

When a learning solution is needed for different robots, a model is often trained for each robot geometry, even if the robotic task is the same and the robots are structurally similar. In this paper, we address the problem of transfer learning of swept volume predictors for the motion of articulated robots with similar geometric structure. The swept volume is a scalar value corresponding to the space occupied by an entire motion of the robot. Swept volume has many applications, including being an ideal distance measure for sampling-based motion planners, but it is expensive to compute. We address this learning problem through a multitask network in which a common input is used to learn multiple related tasks. In this work, a single network learns the kinematic-geometric information common among robots. In order to identify the properties of our multitask network favorable for transfer, we evaluate the transfer properties of several shared layers, the number of robots in multitask training, and the feature layers. We demonstrate positive transfer results with a training set that is a fraction of the data size used in the multitask and baseline training. All the robots considered are 7-DOF manipulators with links of a variety of lengths and shapes. We also present a study of the weights and activations of the trained networks, which show high correlation with the transferability patterns we observed
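A hedged PyTorch sketch of the multitask architecture described here: a trunk shared across robots plus one regression head per robot, predicting the scalar swept volume of a motion. Layer sizes and the 14-dimensional input (start and goal configurations of a 7-DOF arm) are illustrative assumptions, not the paper's exact network:

```python
import torch
import torch.nn as nn

class MultitaskSweptVolume(nn.Module):
    def __init__(self, n_robots, in_dim=14, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(            # layers shared across robot geometries
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(            # one regression head per robot
            [nn.Linear(hidden, 1) for _ in range(n_robots)]
        )

    def forward(self, x, robot_id):
        return self.heads[robot_id](self.trunk(x)).squeeze(-1)

model = MultitaskSweptVolume(n_robots=4)
batch = torch.randn(32, 14)                    # start/goal joint angles (placeholder)
print(model(batch, robot_id=0).shape)          # torch.Size([32])
```

Transfer to a new, structurally similar robot then amounts to freezing some or all of the trunk and fitting a fresh head on a fraction of the data.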

Abstract:

This paper studies how exporting and importing firms match based on their capability by investigating the change in such exporter-importer matching during trade liberalization. During the recent liberalization of the Mexico-U.S. textile and apparel trade, exporters and importers often switch their main partners as well as change trade volumes. We develop a many-to-many matching model of exporters and importers where partner switching is the principal margin of adjustment, featuring Beckerian positive assortative matching by capability. Trade liberalization achieves efficient global buyer-supplier matching and improves consumer welfare by inducing systematic partner switching. The data confirm the predicted partner-switching patterns

Abstract:

The escalating dynamism of external pressures and the persistent demand from stakeholders for systems to maintain value amidst continuous change necessitate a re-evaluation of how system value is delivered. This literature review addresses the ambiguously defined concept of changeability, which spans domains, incorporates various 'ilities' and has in part impeded the formulation of effective, comprehensive industry strategies. As a successful approach to coping with change, changeability involves the design of engineering systems that can continue to change, quickly (agile) and easily (flexible). This paper elucidates how changeability is defined, and the elements used to evaluate change in engineering systems. Subsequently, it reviews the methods and strategies employed to quantify, measure, and analyse changeability and change-related 'ilities'. An examination of various cases and applied research sets allowed the researchers to illustrate the roles, features and effects of changeability in the design of complex engineering systems throughout the entire lifecycle, thereby confirming and consolidating how changeability is both perceived and executed. Based on these findings, future research directions related to the quantification of changeability levels and the associated cost implications are proposed, with an emphasis on utilising and integrating systems models (model-based systems engineering) to standardise and simplify implementation across various engineering systems

Abstract:

As complex systems, maritime vessels generate and require the utilization of large amounts of data for maximum efficiency. Designing, developing, and deploying these systems in a digital world requires rethinking how people interact with and utilize technology throughout all areas of the industry. With growing interest in Industry 4.0, there are broad opportunities for the incorporation and development of new digital solutions that will support the improvement and optimization of next generation systems. However, while different technologies have been deployed with various levels of success, the current challenge is not generating data but being able to harmoniously integrate different data streams into the decision-making process. To support the development of next generation vessels, a comprehensive understanding of Maritime 4.0 is necessary. Current conceptions of 4.0 within the industry remain ambiguous and, based on our research, demonstrate a divergence in the levels of technological maturity and digital solutions across industry sectors. This study leverages current state-of-the-art literature and a series of interviews to formulate a descriptive definition of Maritime 4.0 that incorporates technologies that can be integrated to support decision-making. Through a rigorous, empirically grounded, and contextually relevant approach, the contribution of this study is the establishment of an organized set of technologies and characteristics related to 4.0 and of a practical definition

Abstract:

We provide a framework in which we link the valuation and asset allocation policies of defined benefits plans with the lifetime marginal productivity schedule of the worker and the pension plan formula. In turn, we examine the retirement policies that are implied by the primitives of the model and the value of pension obligations. Our model provides an explicit valuation formula for a stylized defined benefits plan. The optimal asset allocation policies consist of the replicating portfolio of the pension liabilities and the growth optimum portfolio independent of the pension liabilities. We show that the worker will retire when the ratio of pension benefits to current wages reaches a critical value which depends on the parameters of the pension plan and the discount rate. Using numerical techniques, we analyze the feedback effect of retirement policies on the valuation of plans and on the asset allocation decisions

Abstract:

A framework for analyzing computer-mediated communication is presented, based on Clark’s theory of common ground. Four technologies are reviewed: Facebook, Wikipedia, Blacksburg Electronic Village, and World of Warcraft, to assess their “social affordances,” that is, how communication is supported and how the technologies provide facilities to promote social relationships, groups, and communities. The technology affordances are related to motivations for use and socio-psychological theories of group behaviour and social relationships. The review provides new insights into the nature of long-lasting conversations in social relationships, as well as how representations of individuals and social networks augment interaction

Abstract:

We investigate the potential for preparer-specific performance dimensions to influence workpaper reviewers' judgments and actions. In a 2 x 2 experimental design, we manipulate two factors: 1) the amount of time, relative to the budgeted hours, expended by the preparer to complete the workpaper, and 2) the preparer's interpersonal behaviors during the course of the audit. The participants in our sample consist of 138 Mexican audit managers and seniors representing all four Big 4 public accounting firms. Although the participants reviewed an identical workpaper, the results of our experiment reveal that reviewers wrote significantly fewer (more) review comments and judged the workpaper to be of higher (lower) quality when the preparer completed it under (over) the budgeted time or when the preparer demonstrated good (poor) interpersonal behaviors

Abstract:

Foam is remarkably effective in the mobility control of gas injection for enhanced oil recovery (EOR) processes and CO2 sequestration. Our goal is to better understand immiscible three-phase foam displacement with oil in porous media. In particular, we investigate (i) the displacement as a function of initial (I) and injection (J) conditions and (ii) the effect of improved foam tolerance to oil on the displacement and propagation of foam and oil banks. We apply three-phase fractional-flow theory combined with the wave-curve method (WCM) to find the analytical solutions for foam-oil displacements. An n-dimensional Riemann problem solver is used to solve analytically for the composition path for any combination of J and I on the ternary phase diagram and for velocities of the saturations along the path. We then translate the saturations and associated velocities along a displacement path to saturation distributions as a function of time and space. Physical insights are derived from the analytical solutions on two key aspects: the dependence of the displacement on combinations of J and I and the effects of improved oil-tolerance of the surfactant formulation on composition paths, foam-bank propagation and oil displacement. The foam-oil displacement paths are determined for four scenarios, with representative combinations of J and I that each sustains or kills foam. Only an injection condition J that provides stable foam in the presence of oil yields a desirable displacement path, featuring low-mobility fluids upstream displacing high-mobility fluids downstream. Enhancing foam tolerance to oil, e.g. by improving surfactant formulations, accelerates foam-bank propagation and oil production, and also increases oil recovery

Abstract:

Understanding the interplay of foam and nonaqueous phases in porous media is key to improving the design of foam for enhanced oil recovery and remediation of aquifers and soils. A widely used implicit-texture foam model predicts phenomena analogous to cusp catastrophe theory: the surface describing foam apparent viscosity as a function of fractional flows folds backwards on itself. Thus, there are multiple steady states fitting the same injection condition J defined by the injected fractional flows. Numerical simulations suggest the stable injection state among multiple possible states but do not explain the reason. We address the issue of multiple steady states from the perspective of wave propagation, using three-phase fractional-flow theory. The wave-curve method is applied to solve the two conservation equations for composition paths and wave speeds in 1-D foam-oil flow. There is a composition path from each possible injection state J to the initial state I satisfying the conservation equations. The stable displacement is the one with wave speeds (characteristic velocities) all positive along the path from J to I. In all cases presented, two of the paths feature negative wave velocity at J; such a solution does not correspond to the physical injection conditions. A stable displacement is achieved by either the upper, strong-foam state or the lower, collapsed-foam state, but never the intermediate, unstable state. Which state controls the displacement depends on the initial state of the reservoir. The dependence of the choice of the displacing state on the initial state is captured by a boundary curve
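
For readers unfamiliar with the setting, the two conservation equations solved by the wave-curve method take the standard dimensionless fractional-flow form (a textbook statement, not a formula quoted from the paper):

    \frac{\partial S_w}{\partial t_D} + \frac{\partial f_w(S_w, S_g)}{\partial x_D} = 0, \qquad
    \frac{\partial S_g}{\partial t_D} + \frac{\partial f_g(S_w, S_g)}{\partial x_D} = 0

where S_w and S_g are the water and gas saturations (the oil saturation follows from S_o = 1 - S_w - S_g), f_w and f_g are the corresponding fractional-flow functions, and x_D, t_D are dimensionless position and time. The Riemann problem is then posed by the injection state J at the inlet and the initial state I in the medium.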

Abstract:

Risk of sovereign debt default has frequently affected both emerging market and developed economies. Such financial crises are often followed by severe declines of employment that are hard to justify using standard economic models. This paper documents that labor market distortions deteriorate substantially around debt default episodes. Two different explanations for such dynamics are evaluated by linking these distortions to changes in labor taxes and costs of financing working capital. When added into a dynamic model of equilibrium default, these features are able to replicate the behavior of the observed labor distortion around a period of financial crisis and can also account for substantial declines of employment. In the model, higher interest rates are propagated into larger costs of hiring labor through the presence of working capital. Then, as an economy is hit with a stream of bad productivity shocks, the incentives to default become stronger, thus increasing the cost of debt. This reduces firm demand for labor and generates a labor wedge. A similar effect is obtained with an endogenously generated counter-cyclical income tax rate policy that also rationalizes why austerity is applied during deep recessions. The model is used to shed light on the recent events of the Euro Area debt crisis, in particular, the Greek sovereign debt default

Abstract:

The paper presents a bio-inspired robotics model for spatial cognition derived from neurophysiological and experimental studies in rats. The model integrates Hippocampus place cells providing long-term spatial localization with Entorhinal Cortex grid cells providing short-term spatial localization in the form of “neural odometry”. Head direction cells provide for orientation in the rat brain. The spatial cognition model is evaluated in simulation and experimentation, showing a reduced number of localization errors during robot navigation when contrasted to previous versions of our model

Abstract:

The efficient resolution of spatial localization is a key challenge in autonomous mobile robots. We describe in this paper our latest work in understanding spatial localization based on rat behavioral and neural studies. We develop a grid cell neural model, based on studies in the Medial Entorhinal Cortex, that integrates with a place cell neural model in the Hippocampus to generate “neural odometry” and spatial localization in the rat. We evaluate the model through simulated and physical robot experiments using a Khepera III autonomous robot in a laboratory environment
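
The “neural odometry” these models perform is, at its core, path integration of self-motion cues. A minimal non-neural sketch of that idea follows; the function name and the update rule are illustrative, not taken from the paper.

    import math

    def integrate_path(speed_heading_samples, dt=0.1):
        """Dead-reckon a 2-D position from (speed, heading) samples."""
        x = y = 0.0
        for speed, heading in speed_heading_samples:
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
        return x, y

    # Pure odometry drifts over time, which is why the model anchors it
    # with place-cell (long-term) localization.
    print(integrate_path([(0.5, 0.0), (0.5, math.pi / 2)]))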

Abstract:

We describe our latest work in understanding spatial localization in open arenas based on rat studies and corresponding modeling with simulated and physical robots. The studies and experiments focus on goal-oriented navigation where both rats and robots exploit distal cues to localize and find a goal in an open environment. The task involves training of both rats and robots to find the shortest path to the goal from multiple starting points in the environment. The spatial cognition model is based on the rat’s brain neurophysiology of the hippocampus extending previous work by analyzing granularity of localization in relation to a varying number and position of landmarks. The robot integrates internal and external information to create a topological map of the environment and to generate shortest routes to the goal through path integration. One of the critical challenges for the robot is to analyze the similarity of positions and distinguish among different locations using visual cues and previous paths followed to reach the current position. We describe the robotics architecture used to develop, simulate and experiment with physical robots

Abstract:

To date, fiscal policy has been studied extensively from the perspective of economics. This paper introduces political and social variables that economic studies have not taken into account. With this new approach, and using a sample covering a significant number of countries, we seek to contribute to the attempts to explain why some countries collect more revenue than others. Likewise, having concluded that political and social variables are fundamental to a more complete explanation, we propose that they be taken into account in order to implement better public policies on government revenue

Abstract:

Privacy is a complex social process that will persist in one form or another as a fundamental feature of the substrate into which ubiquitous computing (ubicomp) is threaded. Hospitals are natural candidates for the deployment of ubicomp technology while at the same time facing significant privacy requirements. To better understand the privacy issues related to the use of ubicomp, we focus on understanding the contextual information relevant to privacy and how its interplay shapes the perception of privacy in a hospital. The results indicate that hospital workers tend to manage privacy by assessing the value of the services provided by a ubicomp application against the amount of privacy they are willing to concede. For ubicomp applications to better deal with this issue we introduce the concept of Quality of Privacy (QoP), which allows balancing this trade-off in a way similar to what Quality of Service (QoS) does for networking applications. We propose an architecture that allows designers to identify different levels of QoP based on the user's context. Finally, we identify the main privacy risks of a location-aware application and extend its architecture, exemplifying the use of QoP to manage those risks

Abstract:

This paper addresses the following research questions: When subject to the same mid-term exam, what differences can be found in the academic performance of students from two different countries? Are there plausible explanations for the differences found? What can be learned to improve the teaching and learning process of IE students? What are the lessons for each of the countries involved? This document presents results from implementing the same (mid-term) exam in a large public university in the US and a small private university in Mexico City. The course is a Junior-year course in production planning and systems, and was taught in both cases by the same professor. The exam problems and questions were designed based on categories defined by Bloom’s Taxonomy, thus allowing the evaluation of both technical (“hard”) knowledge acquired in class and higher-order (“soft”) skills in the cognitive domain

Abstract:

This paper shows results of a five-year study aimed at examining the correlation between student profiles in the Program of Industrial Engineering at a private university located in Mexico City and different academic performance metrics. The metrics considered were: overall GPA, General Studies courses GPA, attrition rate and change of major. The profiles were defined in terms of students' personality and temperament types and learning preferences. The instruments used to determine the profiles are accessible on-line: for personality and temperament types, a questionnaire called the Jung Typology Test; for learning preferences, the Felder-Soloman Index of Learning Styles (ILS). The study results have been used to identify characteristics of students having similar patterns of academic performance, including the identification of attrition-prone student types. Results are also useful to identify typologies of prospective students on whom “promotional” efforts could be focused

Abstract:

This paper deals with component commonality in start-up manufacturing firms. We present a two-product Markov decision model that examines the implications of the inventory and production strategies for the survival probability of the firm. The advantage of using component commonality is studied for varying costs, demand correlations and order replenishment lead times. Optimal policies are derived, and minimum stock levels for survival are obtained. Moreover, we state the conditions under which simplified production decisions can be made. It is shown that commonality is not only useful as a way of dealing with demand uncertainty, but that its increased use is preferred for strongly substitutable products, and shorter replenishment lead times

Abstract:

With the success of randomized sampling-based motion planners such as Probabilistic Roadmap Methods, much work has been done to design new sampling techniques and distributions. To date, there is no sampling technique that outperforms all other techniques for all motion planning problems. Instead, each proposed technique has different strengths and weaknesses. However, little work has been done to combine these techniques to create new distributions. In this paper, we propose to bias one sampling distribution with another such that the resulting distribution outperforms either of its parent distributions. We present a general framework for biasing samplers that is easily extendable to new distributions and can handle an arbitrary number of parent distributions by chaining them together. Our experimental results show that by combining distributions, we can outperform existing planners. Our results also indicate that no single distribution combination performs the best in all problems, and we identify which perform better for the specific application domains studied
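
One simple way to realize the biasing idea is rejection-style filtering, where one parent distribution proposes samples and another decides which to keep; chaining repeats the pattern. This sketch is illustrative only; the names and the obstacle-based acceptance rule are assumptions, not the paper's framework.

    import math
    import random

    def uniform_sampler():
        # Parent distribution: uniform over a 10 x 10 workspace.
        return (random.uniform(0, 10), random.uniform(0, 10))

    def near_obstacle(q, obstacles, radius=1.0):
        # Biasing distribution: accept samples near obstacles, a bias
        # often useful for narrow passages (obstacles are points here).
        return any(math.dist(q, o) < radius for o in obstacles)

    def biased_sampler(parent, accept, n_tries=100):
        # Propose from `parent`, keep the first sample `accept` likes;
        # chaining = passing another biased_sampler in as the parent.
        for _ in range(n_tries):
            q = parent()
            if accept(q):
                return q
        return parent()  # fall back to the unbiased parent

    q = biased_sampler(uniform_sampler, lambda q: near_obstacle(q, [(5, 5)]))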

Abstract:

In this paper we propose a new nonparametric approach to network inference that may be viewed as a fusion of block sampling procedures for temporally and spatially dependent processes with the classical network methodology. We develop estimation and uncertainty quantification procedures for network mean degree using a "patchwork" sample and nonparametric bootstrap, under the assumption of unknown degree distribution. We provide a heuristic justification of asymptotic properties of the proposed "patchwork" sampling and present cross-validation methodology for selecting an optimal "patch" size. We validate the new "patchwork" bootstrap on simulated networks with short- and long-tailed mean degree distributions, and revisit the Erdös collaboration data to illustrate the proposed methodology
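
As a rough illustration of the “patchwork” idea, the sketch below samples ego-network patches around random seed vertices and bootstrap-resamples the patches to interval-estimate mean degree. It is a simplification under stated assumptions; the paper's procedure, including cross-validated patch-size selection, is more involved.

    import random

    def patch_degrees(adj, seed):
        """Degrees inside a 1-hop patch around `seed` (adjacency dict)."""
        patch = {seed} | set(adj[seed])
        return [len(adj[v]) for v in patch]

    def bootstrap_mean_degree(adj, n_patches=30, n_boot=1000):
        nodes = list(adj)
        patches = [patch_degrees(adj, random.choice(nodes))
                   for _ in range(n_patches)]
        estimates = []
        for _ in range(n_boot):
            resample = [random.choice(patches) for _ in range(n_patches)]
            degrees = [d for patch in resample for d in patch]
            estimates.append(sum(degrees) / len(degrees))
        estimates.sort()
        lo, hi = int(0.025 * n_boot), int(0.975 * n_boot)
        # Point estimate plus a 95% bootstrap percentile interval.
        return sum(estimates) / n_boot, (estimates[lo], estimates[hi])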

Résumé:

Les auteurs proposent une approche non paramétrique pour l'inférence dans les réseaux qui peut être décrite comme une fusion entre la méthodologie classique pour les réseaux et des procédures d'échantillonnage par bloc pour des processus présentant une dépendance spatiale et temporelle. Ils développent des procédures permettant d'estimer le degré moyen du réseau et d'en mesurer l'incertitude grâce à l'échantillonnage en mosaïque et au bootstrap non paramétrique, sous l'hypothèse que la distribution du degré est inconnue. Les auteurs présentent une justification heuristique des propriétés asymptotiques de l'échantillonnage en mosaïque proposé et décrivent une méthodologie de validation croisée afin de choisir la taille optimale des pièces de la mosaïque. Ils valident le bootstrap pour la mosaïque en simulant des réseaux dont la distribution du degré moyen présente des queues lourdes ou légères. Ils illustrent également leur méthodologie à l'aide des données de collaboration d'Erdös

Abstract:

Democracy is frequently framed as a distributional game. Much of the evidence supporting this possibility rests on the World Bank's 1996 'high-quality' inequality dataset. Using the updated and revised 'high-quality' dataset of 2007, this article revisits those results. Using the same country sample, more years and similar specifications to previous studies, as well as a larger country sample with more appropriate statistical models, we find no relationship between democracy/civil liberties and aggregate measures of economic inequality. Whether, and how, democracy decreases economic inequality remains an open question

Abstract:

This paper disaggregates government accounts to examine whether and how representation affects the level and distribution of taxation. Using panel data for over 100 countries from 1970 to 1999 and cross-sectional data for approximately 75 democracies from 1990 to 1998, we find that both democratization and voter turnout induced a modest but highly systematic increase in revenue from regressive taxes on consumption. While one-third of the increase due to democratization reflects a shift from more inefficient and similarly regressive taxes on trade, most of it was new revenue. Less convincingly, democratization and voter turnout also increased total tax revenue. By contrast, neither democracy, nor voter turnout systematically increased revenue from progressive taxes on income and capital. With reasonable assumptions about tax incidence and participation patterns, these findings shed light on competing conceptions of taxation and representation

Abstract:

Using data from approximately ninety countries, the author shows that the more a state taxes the rich as a percentage of GDP, the more it protects property rights; and the more it taxes the poor, the more it provides basic public services. There is no evidence that states gouge the rich to benefit the poor or vice versa, contrary to state-capture theories. Nor is there any evidence that taxes and spending are unrelated, contrary to state-autonomy models. Instead, states operate much like fiscal contracts, with groups getting what they pay for

Abstract:

In the recent past several developing countries have failed to achieve significant real capital investment despite episodes of large capital inflows. Although there are real projects with seemingly high returns, investors prefer to wait for the correct time to invest. In this paper we address this issue by considering a two-sector economy where investment in real capital is irreversible and debt-financed. Furthermore, the interest rate, which is determined in the financial sector, is random as a result of volatile expectations. In this economy the expected return on real capital is above the expected interest rate, because the option to wait for lower interest rates has a positive value. In the presence of rumors, taxes on international financial transactions (Tobin taxes) reduce the variance of the domestic interest rate while leaving its mean unchanged. As a result, they induce more investment in irreversible real capital. The model borrows from the irreversible investment literature. A difference with other models is that the source of noise is a mean-reverting process rather than a geometric Brownian motion. We solve for the optimal decision rule using the Girsanov theorem
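
For reference, the canonical mean-reverting specification is the Ornstein-Uhlenbeck process; the abstract does not give the authors' exact process, so the following is the standard form rather than the paper's equation:

    dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t

Here \theta is the long-run mean (left unchanged by the Tobin tax in the argument above), \kappa the speed of reversion, and \sigma the volatility that the tax reduces.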

Resumen:

Alboroto y motín de México del 8 de junio de 1692 es una muestra de las complicaciones que puede causar la clasificación genérica de un texto. Escrita como una suerte de propaganda política, la obra de Carlos de Sigüenza y Góngora es para algunos un texto histórico y, para otros, un texto literario. En todo caso, constituye un gran ejemplo de la función sincrónico-social que tenía la literatura en ese momento político

Abstract:

Alboroto y Motín de México del 8 de junio de 1692 is a demonstration of the complications that the generic classification of a text can generate. Written as a kind of political propaganda, this work by Carlos de Sigüenza y Góngora is a historical text for some and, for others, a literary artifact. In any case, it is a good example of the synchronic-social function that literature had at that political moment

Resumen:

En este trabajo, analizo algunos cuentos de El androide y las quimeras desde la óptica de lo "siniestro", en especial, los cambios naturales que sufre una niña al convertirse en mujer. Así, los personajes femeninos son una suerte de metáfora de la llegada de la conciencia y la culpa que, a su vez, transforma el universo cotidiano de las protagonistas. De esta forma, los cuentos son una estrategia que pone en tela de juicio la verosimilitud de la realidad y la capacidad del lenguaje para explicar lo que sucede en su entorno

Abstract:

In this work I analyze some stories of El androide y las quimeras from the perspective of the "sinister", especially the natural changes that a young girl goes through when she becomes a woman. Thus, the female characters are a kind of metaphor for the arrival of conscience and guilt that, in turn, transforms the everyday universe of the protagonists. In this way, the stories are a strategy that questions the verisimilitude of reality and the ability of language to explain what happens around them

Resumen:

En este trabajo analizo los distintos géneros literarios que aparecen en la novela El zorro de arriba y el zorro de abajo de José María Arguedas, los cuales se incluyen dentro de las llamadas "literaturas del yo". Así, los distintos niveles narrativos, ficcionales y genéricos funcionan como forma de catarsis en la que el autor configura una realidad compleja y escindida

Abstract:

In this paper I analyze the literary genres that appear in José María Arguedas' novel El zorro de arriba y el zorro de abajo, which fall within the so-called "literaturas del yo". Thus, the different narrative, fictional and generic levels work as a form of catharsis in which the author presents a complex and divided reality

Resumen:

En este artículo analizo la función de la palabra en el proceso comunicativo del mundo onettiano en el cuento "Los niños en el bosque", es decir, los límites del lenguaje ante el asombro y la desesperanza del exterior. Debido a la imposibilidad de la expresión verbal explícita, uno de los aspectos más relevantes es la gestualidad, configurada como un sistema alternativo de expresión que, poco a poco, sustituye la verbalización de las ideas y los sentimientos. Así, el cuento muestra una amplia gama de sensaciones que materializan los deseos y rompen la barrera de la realidad

Abstract:

This article analyzes the construction of the communicative process in Onetti's world in the story "Los niños en el bosque", that is, the limits of language before the astonishment and hopelessness coming from the outside. Due to the impossibility of explicit verbal expression, one of the most relevant aspects is gestural expression, configured as an alternative system of expression that gradually replaces the verbalization of ideas and feelings. Thus, the story shows a wide range of sensations that embody desires and break the barrier of reality

Abstract:

We consider the preference aggregation problem in infinite societies. In our model, there are arbitrarily many agents and alternatives, and admissible coalitions may be restricted to lie in an algebra. In this framework (which includes the standard one), we characterize, in terms of Strict Neutrality, the Ultrafilter Property of preference aggregation rules. Based on this property, we define the concept of Limiting Dictatorial rules, which are characterized by the existence of arbitrarily small decisive coalitions. We show that, in infinite societies which can be well approximated by finite ones, any Arrovian rule is limiting

Abstract:

Although trade openness is not a "zero-sum game", since all parties that broaden their scope of trade gain, the reallocation of resources necessarily implies that some sectors contract while others expand. Regulatory barriers and political inertia within economies, however, have prevented the reallocation of resources from turning those gains into greater inclusion. No country in the world can seal itself off from the goods, services, capital, innovations or even migrations of the rest of the world; what is required is higher levels of human capital, namely education and technological skills. No country can aspire to prosperity by trying to achieve it on its own in autarky

Abstract:

This paper addresses the Mexican experience in opening network infrastructure to competition, mainly in the telecommunications sector. In spite of past deregulation and privatization policies, there are threats from regulatory failures which impair the process of maximizing market competition. Distributive goals, overlapping regulatory agencies and protectionist devices that shield even new providers from further competitors are common policy failures. The approach of 'managed competition' leads to a deficient market environment due to inefficient regulatory design

Abstract:

Given an undirected connected graph G = (V,E), a subset of vertices S is a maximum 2-packing set if the number of edges in the shortest path between any pair of vertices in S is at least 3 and S has the maximum cardinality. In this paper, we present a genetic algorithm for the maximum 2-packing set problem on arbitrary graphs, which is an NP-hard problem. To the best of our knowledge, this work is a pioneering effort to tackle this problem for arbitrary graphs. For comparison, we extended and outperformed a well-known genetic algorithm originally designed for the maximum independent set problem. We also compared our genetic algorithm with a polynomial-time one for the maximum 2-packing set problem on cactus graphs. Empirical results show that our genetic algorithm is capable of finding 2-packing sets with a cardinality relatively close (or equal) to that of the maximum 2-packing sets. Moreover, the cardinality of the 2-packing sets found by our genetic algorithm increases linearly with the number of vertices and with a larger population and a larger number of generations. Furthermore, we provide a theoretical proof demonstrating that our genetic algorithm increases the fitness for each candidate solution when certain conditions are met
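
The feasibility test underlying such a genetic algorithm's fitness is easy to state: every pair of vertices in S must be at shortest-path distance at least 3, i.e., no two members of S may be adjacent or share a neighbor. A minimal checker follows; it is illustrative, not the authors' code.

    from collections import deque

    def is_2_packing(adj, S):
        """True iff all vertices in S are pairwise at distance >= 3."""
        S = set(S)
        for s in S:
            # BFS from s, stopping at depth 2.
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if dist[u] == 2:
                    continue
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            # Any other member of S within distance 2 violates the property.
            if any(v in dist for v in S - {s}):
                return False
        return True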

Abstract:

Nowadays, clusters containing multiple GPU nodes are widely used to execute high-performance computing applications. Diverse disciplines use these clusters to improve the performance of several services that consume high computational resources. The challenge of executing high-performance computing applications becomes harder when the applications are executed concurrently and each one of them may demand multiple GPU nodes for different periods of time. To tackle this challenge, we propose a multi-agent architecture for scheduling multiple services in a heterogeneous GPU cluster. We provide simulation results of our agent-based system utilizing three commonly used scheduling heuristics for several configuration settings
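
As a flavor of the scheduling heuristics such agents might apply, here is a greedy earliest-available-node policy in miniature. The abstract does not name the three heuristics used, so this sketch and all its names are assumptions for illustration.

    import heapq

    def schedule(jobs, n_nodes):
        """jobs: list of (job_id, duration); each job occupies one GPU node.
        Greedily assign each job to the node that frees up first."""
        free_at = [(0.0, node) for node in range(n_nodes)]
        heapq.heapify(free_at)
        plan = []
        for job_id, duration in jobs:
            t, node = heapq.heappop(free_at)
            plan.append((job_id, node, t))  # job starts at time t
            heapq.heappush(free_at, (t + duration, node))
        return plan

    print(schedule([("a", 3.0), ("b", 1.0), ("c", 2.0)], n_nodes=2))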

Abstract:

Since 1996, an independent bureaucracy in Mexico has carried out a redistricting process purportedly founded on machine optimization of plans based on open and objective criteria. However, the process of "fine-tuning" the plans that are initially produced by formula is conducted behind closed doors, where parties and experts are allowed to offer counter-proposals. This raises questions about the conditions required for a bureaucracy to operate in a transparent, consistent, and accountable manner. Our research examines this question through the analysis of private records that trace the bargaining process between parties and bureaucrats. The analysis uncovers substantial gaps in transparency and consistency; without these, accountability in the Mexican redistricting process remains wanting

Resumen:

Los diversos reclamos y protestas de la ciudadanía generados por el desgaste de la clase política en la última década han expuesto, entre otras cosas, la urgencia de estrechar el vínculo entre la ciudadanía y sus representantes. En este rubro, la delimitación de la cartografía electoral es un mecanismo fundamental para transitar hacia una mejor representación política. Por tratarse de una labor inmersa en tecnicismos de diversa índole -geográficos, estadísticos, informáticos, entre los más reconocibles-, es fácil caer en la tentación de relegar la distritación al ámbito de los especialistas y perder de vista su importancia para la vida democrática. Desde nuestra perspectiva, el uso de las nuevas tecnologías, así como la generación y el uso de datos abiertos, ofrecen una oportunidad para fortalecer la representación política. En esta nota de investigación discutimos el contexto de redistritación en México, los desafíos en materia de transparencia y cómo ciertas tecnologías -como el software de código abierto y el mapeo en línea- tienen un enorme potencial para incrementar los niveles de transparencia, participación y rendición de cuentas en torno a los procesos de delimitación electoral

Abstract:

The many complaints and protests by citizens generated by the deterioration of the political elite in recent decades are clear evidence, among other things, of the urgent need to strengthen the connections between citizens and their representatives. To this end, the delimitation of electoral boundaries -also known as redistricting- is key to improving political representation. Given the many technicalities involved in this process -geographic, statistical, digital, among the most obvious- it is easy to succumb to the temptation of relegating it to specialists and losing sight of its importance for democracy. From our perspective, the use of new technologies, as well as the generation and use of open data, offer an opportunity to strengthen political representation. In this article we discuss Mexico’s redistricting experience, the challenges in terms of transparency, and how certain tools -such as open source software and online mapping tools- have a tremendous potential for increasing the levels of transparency, participation, and accountability surrounding boundary delimitation

Abstract:

Digital technologies shape the processes of teaching and learning in the classroom. They create spaces for action, while at the same time they pose restrictions and can generate unexpected situations, both for teachers and students. In this paper, we show examples in which teachers respond to contingencies emerging from the use of interactive programs in the classrooms. By describing different ways in which teachers react to those contingencies, we show how the technology plays an important role, at times by creating unexpected situations, and at others in support of teachers' explanations. In both cases it can promote modifications in students' actions and shape teachers' actions and responses in relation to a particular mathematical situation or problem. Understanding how teachers react when they are faced with an unexpected situation is important in order to gain knowledge about those particular responses that result in effective behaviors that are related to mathematics learning, and also in terms of the construction of rich learning environments that promote those kinds of behaviors. Results show that the kind of interaction that technology has the potential for promoting plays an important role in making students and teachers more aware of students' doubts and misunderstandings, but that this potential needs to be accompanied by effective teachers' strategies through which they use contingencies as opportunities to promote both their own and their students' learning. This study contributes to deepening knowledge about teachers’ effective strategies in primary school classrooms as well as providing examples to promote reflection regarding teachers’ training programs

Resumen:

Este trabajo discute el papel del diseño de tareas en la Teoría APOE. Se discute el papel que juega la descomposición genética en la teoría y en el diseño de tareas. Se muestra un ejemplo de descomposición genética para los conceptos de transformación matricial inversa y matriz inversa. Se proporcionan ejemplos que se diseñaron con dicha descomposición genética junto con una descripción de su relación con la misma a fin de dar una idea de cada una de las tareas y la construcción detallada y específica que tiene como objetivo. Se discute el papel de las tareas en el aula dado que la combinación del trabajo colaborativo de los estudiantes en secuencias de tareas y en la discusión en grupo constituyen la base de la Teoría APOE en la que se fundamenta su potencial para promover las construcciones necesarias para el aprendizaje profundo de los conceptos matemáticos

Abstract:

This paper discusses the role of task design in APOS Theory. The role played by the genetic decomposition in the theory and in task design is discussed. An example of a genetic decomposition for the concepts of inverse matrix transformation and inverse matrix is given. Tasks designed using this tool as a guide are exemplified, together with a description of their relationship to the genetic decomposition. In this way we provide insights about each task and the specific, detailed construction it has as its aim. The role of the tasks in the classroom is discussed, since the combination of students' collaborative work on sequences of tasks and group discussions is the foundation of APOS Theory's potential to promote the essential constructions needed for a deep learning of mathematical concepts

Résumé:

Cet article discute le rôle de la conception de tâches dans la Théorie APOS. Le rôle de la décomposition génétique dans cette théorie et dans la conception des tâches est discuté. Un exemple de décomposition génétique est présenté pour les concepts de transformation inverse et matrice inverse. Des exemples conçus avec telle décomposition génétique sont fournis avec une description de leur relation avec celle-ci afin de donner une idée de chacune des tâches et de la construction détaillée et spécifique qu’elle a comme objectif. Le rôle des tâches en classe est discuté étant donné que la combinaison du travail collaboratif des élèves dans les séquences de tâches et dans les discussions en groupe constituent la base de la Théorie APOS, qui soutient son potentiel pour promouvoir les constructions nécessaires pour un apprentissage approfondie des concepts mathématiques

Resumo:

Este artigo discute o papel do projeto de tarefas na teoria APOE. O papel da decomposição genética na teoria e no design das tarefas é discutido. Um exemplo de decomposição genética é mostrado para os conceitos de transformação inversa e matriz inversa. Exemplos que foram projetados com a referida decomposição genética são fornecidos juntamente com uma descrição de sua relação com ele, a fim de dar uma idéia de cada uma das tarefas e da construção detalhada e específica que ela tem como objetivo. O papel das tarefas em sala de aula é discutido, uma vez que a combinação do trabalho colaborativo dos estudantes em sequências de tarefas e em discussões em grupo constitui a base da Teoria APOE, que é isso que sustenta seu potencial de promover as construções necessárias para o aprendizagem profunda de conceitos matemáticos

Abstract:

Learning is a complex phenomenon. Analyzing its development can provide knowledge about regularities and differences in students' progress, which is needed to better understand learning. The aim of this study is to examine the development of students' Linear Algebra Schema through an introductory one-semester course. For that purpose, Action Process Object Schema (APOS) theory's notions of Schema and Schema development were used to analyze students' constructions at three different times throughout the course. Results show how students' Schemas are progressively constructed. They illustrate differences and commonalities in different students' Schema development. Some concepts and factors were found to play an important role in promoting Schema development

Abstract:

In this chapter, an innovative approach, including challenging modeling situations and task sequences to introduce linear algebra concepts, is presented. The teaching approach is based on Action, Process, Object, Schema (APOS) Theory. The experience includes the use of several modeling situations designed to introduce some of the main linear algebra concepts. Results obtained in several experiences involving different concepts are presented, focusing on crucial moments where students develop new strategies, and on success in terms of students' understanding of linear algebra concepts. Conclusions related to the success of the approach in promoting students' understanding are discussed

Abstract:

Action-Process-Object-Schema (APOS) theory and tools resulting from dialogue with the Anthropological Theory of the Didactic (ATD) were used to analyse data from semi-structured interviews and teaching materials to study students’ understanding of the relationship between tangent planes and the total differential. Results of the study show students’ difficulties relating these ideas and suggest a refinement of the initial genetic decomposition. They also underline aspects of the teaching materials that need to be considered to promote those constructions and development of a complete praxeology for the total differential. This study exemplifies how the dialogue between a cognitive theory and one that focuses on institutional aspects of mathematics education, can provide tools to deeply analyse the teaching and learning of a mathematical topic

Abstract:

Using Action-Process-Object-Schema (APOS) Theory, students' strategies while solving a linear transformations modelling problem in a Linear Algebra course are studied. Modelling cycles were complemented by conceptual activities designed with a previously developed genetic decomposition for this concept. The work of students during the modelling process in the classroom is described in terms of the questions and knowledge emerging from their own strategies, and in terms of the difficulties they faced. Results show some affordances of the modelling situation and the use of activities, as well as the difficulties faced by students

Abstract:

Action-Process-Object-Schema (APOS) is used to study students’ understanding of the relationship between tangent planes and the differential. An initial conjecture, called a genetic decomposition, of mental constructions students may use in constructing their knowledge of planes, tangent planes, and the differential is proposed. It is tested with semi-structured interviews with 26 students. Results of the study suggest that students tend not to relate these ideas on their own and suggest ways to refine the initial genetic decomposition in order to help students to better understand these concepts

Resumen:

Con base en la teoría APOE (Acciones, Procesos, Objetos y Esquemas) como marco teórico y metodológico, investigamos las construcciones y mecanismos mentales necesarios para construir la Matriz Asociada a una Transformación Lineal (MATL). Diseñamos una descomposición genética (DG) del teorema de la MATL para analizar la forma en que estudiantes universitarios lo aprenden. Reportamos tres casos de estudio que muestran cómo los estudiantes construyen el concepto de coordenadas de un vector como objeto, pero tienen dificultades para utilizarlo en la construcción de un proceso de la matriz de coordenadas de imágenes de vectores mediante la transformación. Esta dificultad parece vinculada a que no han coordinado los procesos involucrados en términos del cuantificador. Específicamente, se muestran las dificultades en la construcción de la MATL como objeto y el papel determinante que juega la consideración de la transformación lineal como una función en la comprensión profunda del TMATL

Abstract:

Based on APOS theory (Actions, Processes, Objects and Schemas) as a theoretical and methodological framework, we investigate the mental constructions and mechanisms required to construct the Matrix Associated to a Linear Transformation (MATL). We design a genetic decomposition (GD) of the MATL theorem to analyze how college students learn it. We report three case studies. The results show how students construct the concept of coordinates of a vector as an object, but have difficulty using it in the construction of the coordinate matrix of the images of vectors under the transformation. Results also show students' difficulties in constructing the MATL as an object, and the crucial role that considering the linear transformation as a function plays in attaining a deep understanding of the MATL theorem

Resumen:

Este artículo considera una aplicación de las aportaciones del diálogo entre las teorías APOS y TAD consideradas como praxeologías de investigación. Se utiliza para ello un problema de investigación relacionado con el aprendizaje de las funciones de dos variables. Después de la descripción de la investigación que condujo al diseño de una descomposición genética, se reformula el problema de investigación para hacer posible el diálogo. Se usan las aportaciones del diálogo desde las componentes teórica y técnico-tecnológica como herramienta de análisis de un conjunto de actividades, mostrando su pertinencia y viabilidad institucional

Abstract:

This paper presents an application of the contributions from the dialogue between APOS theory and ATD considered as research praxeologies. A problem related to the learning of functions of two variables is used to show how tools resulting from the dialogue starting from the theoretical and technological components can be applied. After the description of the research work that led to the design of a genetic decomposition, the research problem is reformulated to make the dialogue possible. Results show the pertinence and institutional viability of the activities in the classroom

Abstract:

This paper reports the changes that occurred in the didactic approaches of three professors who participated in a project intended to develop new ways of teaching mathematics to second-year university students. An enactivist perspective is used to address the process of change that emerged as a result of interactions during project meetings. We describe changes in the participants' actions by looking at data obtained from the meetings and the classrooms. Teachers were able to ‘see more’ and modify their teaching practices, incorporating a more open and flexible approach in accordance with their structural state, which depended on their previous history. Therefore, the results varied. It was possible to observe, however, similar changes in all members of the group, which included the use of vocabulary from learning theories and the inclusion of in-depth reflections on teaching and learning

Abstract:

We present an approach for teaching linear algebra using models. In particular, we are interested in analyzing the modeling process under an APOS perspective. We present a short illustration of the analysis of an economics problem related to production in a set of industries. This problem elicits the use of the concepts of linear combination and linear independence, among other linear algebra concepts related to vector space. We describe cycles of students’ work on the problem, present an analysis of the learning trajectory with emphasis on the constructions they develop, and discuss the advantages of this approach in terms of students’ learning
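
The abstract does not specify the economics model; a classic production problem over a set of industries that elicits exactly these concepts is the Leontief input-output model, sketched here with made-up data as an illustration.

    import numpy as np

    # A[i, j] = units of industry i's output consumed per unit of
    # industry j's output; d = external (consumer) demand.
    A = np.array([[0.2, 0.3],
                  [0.4, 0.1]])
    d = np.array([50.0, 30.0])

    # Total production x must satisfy x = A @ x + d, i.e. (I - A) x = d,
    # a linear system whose solvability hinges on linear independence.
    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)  # production levels meeting internal and external demand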

Abstract:

This is a study about how graphs of functions of two variables are taught. We are interested in particular in the techniques introduced to draw and analyze these graphs. This continues previous work dedicated to students’ understanding of topics of two-variable functions in multivariable calculus courses. The model of the “moments of study” from the Anthropological Theory of the Didactic (ATD) is used to analyze the didactical organization of the topic of interest in a popular calculus textbook, and in a typical classroom presentation. In so doing, we obtain information about the institutional dependence of findings in previous studies

Abstract:

This study is part of a project concerned with the analysis of how students work with two-variable functions. This is of fundamental importance given the role of multivariable functions in mathematics and its applications. The portion of the project we report here concentrates on investigating the relationship between students' notion of subsets of Cartesian three-dimensional space and the understanding of graphs of two-variable functions. APOS theory and Duval's theory of semiotic representations are used as theoretical framework. Nine students, who had taken a multivariable calculus course, were interviewed. Results show that students' understanding can be related to the structure of their schema for R3 and to their flexibility in the use of different representations

Abstract:

We report on the effectiveness of the use of models as a teaching strategy to develop university students' ideas about linear independence. We show some examples of the evolution of students' schemas throughout the teaching process, obtained from their work, observation guides and interviews

Abstract:

We report on the effectiveness of the use of models as a teaching strategy to develop university students' ideas about systems of linear equations and their solution. Students' productions throughout the modelling process were analysed, together with observation guides and tests. In particular, students' use of variable was investigated in relation to their problem solving strategies

Resumen:

Diversos estudios muestran que los estudiantes tienen dificultades para entender conceptos específicos del cálculo diferencial. Algunos señalan los obstáculos que les representa la integración de los diferentes conceptos en la solución de problemas específicos, incluidos los de graficación de funciones. En este estudio se analizan las respuestas de un grupo de estudiantes cuando resuelven estos problemas utilizando como herramienta de análisis la estadística implicativa y cohesitiva. Los resultados muestran la importancia de la comprensión de la segunda derivada y de los intervalos en los que el dominio se subdivide en virtud de las propiedades de la función para la solución exitosa de los problemas. Se hace evidente que el uso de esta herramienta en este tipo de estudios resulta no sólo pertinente sino de gran utilidad

Abstract:

Various studies show that students experience difficulties in understanding specific concepts of differential calculus. Some studies point to the obstacles represented by having to integrate different concepts into solving specific problems, including the graphing of functions. The current study analyzes the responses of a group of students as they solve such problems, using implicative and cohesive statistics as an analytical tool. The results show that understanding the second derivative, and the intervals into which the domain is subdivided according to the function's properties, is essential for successfully solving these problems. It is evident that the use of this tool in such studies is not only pertinent but also highly useful

Abstract:

Students at university level need to be able to work with complex algebraic equations, and to apply definitions and properties when needed. Some authors claim that the solution of complex problems requires what they define as structure sense. We consider that a flexible use of variable is essential for solving complex algebraic equations. Research on the use of variable has focused on the solution of elementary algebra problems. In this paper the solution given by 36 university students to 3 complex algebraic equations is analyzed using the 3UV model. Results show that a flexible use of variable is required for success in the solution of these problems, but that it has to be accompanied by structure sense. Some characteristics that can be considered as part of structure sense are added to those previously defined

Abstract:

Since 1997, an ongoing national programme called Teaching Mathematics with Technology (EMAT) has been introduced in Mexican lower secondary mathematics classrooms (children 12-15 years old). The programme introduces a pedagogical model aimed at fostering exploratory and collaborative learning through the use of computational tools. In this article, we present some results from an evaluation study that looked at teachers' practice with these tools, their assimilation of the pedagogical model and the impact of the programme on students' learning. The results show that teacher training remains insufficient and that teachers have great difficulties in changing their practice as conceived by the programme. The use of the technological tools in schools is also very inconsistent. For students, the benefit of the technology-based activities remains difficult to assess, and there is no visible impact on learning as measured by items used in national evaluations (although there seems to be a correlation between the tools used and the learning of different mathematical topics)

Resumen:

La perspectiva de la modelación brinda elementos para diseñar una metodología que promueve la reflexión de los estudiantes acerca de los conceptos importantes de la física y sobre su relación con las matemáticas. En este trabajo se reporta una experiencia enmarcada en un contexto específico de modelación. Se estudia el movimiento oscilatorio, en particular, el del péndulo. Los resultados obtenidos ponen de manifiesto las concepciones de los alumnos relativas a este tipo de movimiento, a las fuerzas, a la descomposición de vectores en componentes así como a la variación. También se reportan algunos ejemplos de la evolución conceptual de los participantes en el proyecto

Abstract:

A modeling perspective offers elements for designing a methodology to promote students’ reflection on the important concepts of physics and their relation to mathematics. This study reports on an experience within a specific context of modeling. A study is made of oscillatory movement, and in particular, of pendulum movement. The results reveal students’ conceptions of this type of movement, the forces, the decomposition of vectors in components, and variation. Some examples of the conceptual evolution of the project’s participants are also provided

Resumen:

Se ha mostrado que aun después de cursar álgebra durante varios años, los estudiantes universitarios tienen dificultades serias para comprender los usos elementales de la variable. En este trabajo se presentan y analizan los resultados de un estudio en el que se investigó la comprensión del concepto de variable en los diferentes grados de la enseñanza media. El estudio se realizó con 98 estudiantes de entre 12 y 19 años. Los resultados muestran que las concepciones de la variable que tienen los estudiantes en los diferentes cursos, no reflejan una diferencia sustancial en la comprensión de este concepto. Consideramos que las dificultades que manifiestan los estudiantes están fuertemente influidas por las prácticas docentes y el contenido de los cursos de álgebra

Abstract:

It has been shown that even after taking algebra courses for several years, starting college students still have serious difficulties in understanding the elementary uses of variable. In this paper we present and analyse the results of a study that investigated the understanding of the concept of variable across different school levels. The study involved 98 students aged 12-19 years. The results show that students' conceptions of variable do not differ substantially across the tested school levels. We consider that students' difficulties are strongly influenced by current teaching strategies and the content of algebra courses

Abstract:

University courses on differential equations are being reconsidered and reformed, building on the previous reform efforts in calculus. In this paper we present a detailed analysis of semi-structured interviews where 18 students faced problems related to the solution of systems of ordinary differential equations, presented in different settings. We classify students' strategies and show many instances where students' understandings of parametric functions and of variation conflict and become an obstacle to their understanding of the meaning of the phase space representation and of the notions of solution and equilibrium. We also highlight some cognitive and curricular issues that may be taken into account when dealing with these types of problems
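
To make the mathematical setting concrete: for a linear system x' = Ax, the eigenvalues of A govern the phase-space behavior of solutions near the equilibrium at the origin, which is the kind of reasoning the interview problems probe. The matrix below is illustrative, not one of the interview items.

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    eigvals = np.linalg.eigvals(A)  # here: -1 and -2

    if all(ev.real < 0 for ev in eigvals):
        kind = "stable: phase-space trajectories decay to the origin"
    elif all(ev.real > 0 for ev in eigvals):
        kind = "unstable: trajectories leave the origin"
    else:
        kind = "saddle or center, depending on the eigenvalues"
    print(eigvals, "->", kind)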

Abstract:

In a first order language we interpret the action of the monoid M of embeddings of (Q,<) on the set Q inside (M,∘). A similar result is proved for the monoid E of all endomorphisms of (Q,≤)

Abstract:

We extend results from an earlier paper giving reconstruction results for the endomorphism monoid of the rational numbers under the strict and reflexive relations to the first order reducts of the rationals and the corresponding polymorphism clones. We also give some similar results about the coloured rationals

Abstract:

This paper develops a method to estimate the demand for network goods, using minimal network data but leveraging within-consumer variation. I estimate demand for video games as a function of individuals' social networks, prices, and qualities, using data from Steam, the largest video game digital distributor in the world. I separately identify price elasticities on individuals with and without friends with the same game, conditional on individual fixed effects and games' characteristics. I then use the discrepancies between estimated price elasticities to identify the impact of social networks. I compare my method to "traditional-IV" strategies in the literature, which require detailed network data, and find similar results. A 1% increase in friends' demands increases demand by .13%. In counterfactual simulations, I find demand increases by about 5% from a promotional giveaway to "influencers," those users in the top 1% of popularity in the network
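
To make the identification strategy concrete, here is a minimal, hypothetical sketch of the within-consumer idea: individual fixed effects are absorbed by demeaning, and the price elasticity is allowed to differ between consumers with and without owning friends. All data, column names, and magnitudes below are invented for illustration; this is not the paper's estimator.

```python
import numpy as np
import pandas as pd

# Hypothetical panel: one row per (consumer, game) with log quantity,
# log price, and an indicator for owning friends with the same game.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "consumer": np.repeat(np.arange(200), 25),
    "log_q": rng.normal(size=n),
    "log_p": rng.normal(size=n),
    "friend_owns": rng.binomial(1, 0.3, n),
})

# Interact log price with the friend indicator, then demean every variable
# within consumer: demeaning absorbs the individual fixed effects.
df["p_no_friends"] = df["log_p"] * (1 - df["friend_owns"])
df["p_friends"] = df["log_p"] * df["friend_owns"]
for col in ["log_q", "p_no_friends", "p_friends"]:
    df[col] -= df.groupby("consumer")[col].transform("mean")

# OLS on the demeaned data gives one elasticity per group.
X = df[["p_no_friends", "p_friends"]].to_numpy()
beta, *_ = np.linalg.lstsq(X, df["log_q"].to_numpy(), rcond=None)
print("elasticity without friends:", beta[0])
print("elasticity with friends:   ", beta[1])
```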

Abstract:

This article studies the efficiency of prioritizing certain content over others on Amazon's Twitch.tv, a live streaming service, taking into account the trade-off between entry and congestion. I specify and estimate supply and demand models for live video, and a congestion model. Using technological shocks, I identify congestion costs for content providers and their consumers. Using shocks in prioritization, I identify its benefits. With estimated preferences and technological parameters, I construct counterfactuals. Without congestion, demand doubles. A supply-side Pigouvian tax on traffic is preferred to a demand-side one. Without prioritization, consumer welfare drops by 10%

Abstract:

This paper presents a model of the evolution of the hedonic utility function in which perception is imperfect. Netzer (Am Econ Rev 99(3):937-955, 2009) considers a model with perfect perception and finds that the optimal utility function allocates marginal utility where decisions are made frequently. This paper shows that it is also beneficial to allocate marginal utility away from more perceptible events. The introduction of perceptual errors can lead to qualitatively different utility functions, such as discontinuous functions with flat regions rather than continuous and strictly increasing functions

Resumen:

La presente contribución describe brevemente el contexto político-jurídico, así como las preparaciones y negociaciones que precedieron y resultaron en la adopción de la Resolución 1904 del Consejo de Seguridad de Naciones Unidas y en el establecimiento del Ombudsperson para el régimen de sanciones sobre Al-Qaeda y el Talibán. Si bien se reconoce que aún falta mucho por hacer para garantizar plenamente el respeto al debido proceso legal de los individuos y las entidades enlistadas, los autores argumentan que dicho cambio institucional es un logro destacado en el marco del emergente Estado de derecho global.

Abstract:

This contribution gives a brief account of the politico-juridical context, as well as of the preparations and negotiations that preceded and resulted in the adoption of United Nations Security Council resolution 1904 (2009) and the establishment of the Ombudsperson of the Al-Qaida and Taliban sanctions regime. While acknowledging that much is still needed in order to fully respect the due process rights of listed individuals and entities, it is argued that this institutional change is a significant achievement in the frame of the emerging global rule of law

Résumé:

Cette contribution décrit brièvement le contexte politique et juridique, ainsi que les préparatifs et les négociations qui ont précédé et conduit à l’adoption de la résolution 1904 du Conseil de sécurité des Nations Unies et à la mise en place de l’Ombudsperson pour le régime de sanctions contre Al-Qaeda et les Taliban. Bien que l’on reconnaisse qu’il reste beaucoup à faire pour assurer le plein respect des garanties d’une procédure régulière pour les individus et les entités inscrits sur les listes, les auteurs soutiennent que ce changement institutionnel est une réussite importante dans le cadre de l’État de droit mondial émergent

Abstract:

This paper examines the effects of the partition structure of schools on students' welfare and on the incentives students face under the iterative student optimal stable mechanism (I-SOSM), introduced by Manjunath and Turhan (2016), in divided school enrollment systems. I find that when the school partition gets coarser, students' welfare weakly increases under the I-SOSM for any number of iterations. I also show that under coarser school partitions the I-SOSM becomes weakly less manipulable for students (when iterated sufficiently many times to reach a stable assignment) according to the "as strongly manipulable as" criterion defined by Pathak and Sönmez (2013). These results suggest that, when full integration is not possible, keeping the school partition as coarse as possible benefits students in terms of both the welfare and the incentives they face, if stability is a concern for policymakers

Abstract:

Despite our increased experience, unconventional gas plays remain risky. In the face of this risk, operators must balance the need to conserve capital and protect the environment by avoiding over drilling with the desire to maximize profitability by achieving the optimal well spacing as quickly as possible. Previous unconventional gas developments such as the Carthage Field (Cotton Valley) have implemented multiple infill drilling programs over several decades to optimize well spacing, with significant reduction in value (McKinney et al. 2002). However, in emerging plays such as the non-core Barnett Shale and the Fayetteville Shale, historical infill programs are not available to evaluate optimal spacing, and we do not have the luxury of developing these fields over the next 30-40 years. Existing approaches for optimizing development, such as integrated reservoir simulation studies or statistical moving-window methods, are either prohibitively time-consuming and expensive or fail to consider the uncertainty inherent in the assessment. The objective of our work was to develop technology and tools to help operators determine optimal well spacing in highly uncertain and risky unconventional gas reservoirs as quickly as possible. To achieve the research objectives, we developed an integrated reservoir and decision modeling system that incorporates uncertainty. We used Monte Carlo simulation with a fast, approximate reservoir simulation model to match and predict production performance in unconventional gas reservoirs. Simulation results are then integrated with a Bayesian decision model that accounts for the risk facing operators. We applied these integrated tools to a hypothetical case based on data from Deep Basin (Gething) tight gas sands in Alberta, Canada, to determine optimal development strategies.
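
A toy sketch of the decision logic described above: Monte Carlo draws of uncertain per-well recovery feed an expected-NPV comparison across candidate spacings. Every number and functional form below (costs, price, the interference adjustment) is a hypothetical placeholder, not the authors' integrated reservoir and decision modeling system.

```python
import numpy as np

rng = np.random.default_rng(0)
SECTION_ACRES = 640          # developed area (hypothetical)
WELL_COST = 2.5e6            # $ per well (hypothetical)
GAS_PRICE = 4.0              # $/Mcf (hypothetical)

def section_npv(spacing_acres, n_draws=10_000):
    """Monte Carlo section-level NPV for one candidate spacing."""
    wells = SECTION_ACRES // spacing_acres
    # Uncertain per-well recovery (Bcf); tighter spacing means wells drain
    # overlapping rock, so per-well recovery shrinks (hypothetical form).
    eur_bcf = rng.lognormal(0.0, 0.6, n_draws) * (spacing_acres / 160) ** 0.5
    revenue = eur_bcf * 1e6 * GAS_PRICE      # 1 Bcf = 1e6 Mcf
    return wells * (revenue - WELL_COST)

for spacing in (40, 80, 160):
    npv = section_npv(spacing)
    print(f"{spacing:>3} ac/well: E[NPV] = {npv.mean()/1e6:6.1f} $M, "
          f"P(loss) = {(npv < 0).mean():.1%}")
```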

Abstract:

We study ex post implementation in an interdependent value framework and with single dimensional types, using a class of mechanisms identified by monotonicity of outcomes and an integral representation of payments. We give examples to illustrate this class and its relation to the previous literature. The various extensions of the Vickrey-Clarke-Groves mechanism to interdependent value models are examples of this class. The extraction mechanisms of Crémer and McLean (1985) also form a special case for finite type spaces. The class is particularly useful in set allocation problems where the monotonicity condition is easier to work with

Abstract:

Within a simple parametric example, I show that if the buyer's values fail the monotone differences condition, nonmonotone mechanisms are not just feasible but may even be desirable for a seller. The failure of monotone differences is caused by intertemporal consumption externalities in the form of habits. I show that for an interval of habit parameters the revenue maximizing mechanism is nonmonotone: the seller screens out low and high types

Abstract:

We consider an optimal mechanism design problem with several heterogeneous objects and interdependent values. We characterize ex post incentives using an appropriate monotonicity condition and reformulate the problem in such a way that the choice of an allocation rule can be separated from the choice of the payment rule. Central to the analysis is the formulation of a regularity condition, which gives a recipe for the optimal mechanism. If the problem is regular, then an optimal mechanism can be obtained by solving a combinatorial allocation problem in which objects are allocated in a way to maximize the sum of virtual valuations. We identify conditions that imply regularity using the techniques of supermodular optimization
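
For intuition, the "maximize the sum of virtual valuations" recipe can be illustrated in the simplest textbook case of independent private values uniform on [0, 1] (Myerson's virtual value), which is far simpler than the interdependent-values setting the paper actually treats:

```python
import numpy as np

# For values drawn i.i.d. uniform on [0, 1]: F(v) = v, f(v) = 1, so the
# virtual valuation is phi(v) = v - (1 - F(v)) / f(v) = 2v - 1.
def virtual_value(v):
    return 2.0 * v - 1.0

# One object, three bidders: allocate to the highest virtual value,
# and withhold the object if every virtual value is negative.
values = np.array([0.35, 0.62, 0.48])
phi = virtual_value(values)
winner = int(np.argmax(phi)) if phi.max() > 0 else None
print("virtual values:", phi, "-> winner:", winner)
```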

Resumen:

Se inicia el artículo con una discusión sobre el porqué de la dificultad de aplicar la teoría de perturbaciones a la Relatividad General. Se presenta una revisión de los diversos enfoques sobre la teoría de perturbaciones en Relatividad General, a saber: el formalismo Invariante de Norma (Gauge Invariant), la teoría 1+3 Covariante-Invariante de Norma (1+3 Covariant Gauge Invariant) y el enfoque estándar de fijar una norma (Gauge Fixing). Se desarrolla a detalle el enfoque Invariante de Norma debido a que, a diferencia de los otros dos enfoques, cuenta con varias ventajas, entre las cuales se pueden mencionar que permite hacer desarrollos a órdenes perturbativos mayores al primero de una manera algorítmica, que aplica a teorías a las cuales se les exija que cumplan con el principio de covariancia general, y que puede aplicarse a más de un parámetro perturbativo. Aunque este método es muy general, a manera de ejemplo se aplica a Cosmología

Abstract:

This work presents a review of the different approaches to perturbation theory in General Relativity: the Gauge Invariant formalism, the 1+3 Covariant Gauge Invariant theory and the traditional gauge fixing method. In particular, this review focuses on the Gauge Invariant formalism because it has broader applicability than the other two formalisms (it applies not only to General Relativity but to any theory that must fulfill the principle of general covariance) and because it provides an algorithmic method for calculating the invariant variables to perturbation orders beyond the linear one. The article also includes a brief discussion of the root of the problem of gauge invariance in perturbation theory in General Relativity. To help the reader, the formalism is applied, as an example, to the cosmological scenario

Abstract:

We consider the possibility that the Ultra High Energy Cosmic Rays arriving at Earth might be neutrons instead of protons. We stress that in such a case the argument for the GZK cutoff is weaker and that it is conceivable that neutrons would not be affected by it. This scenario would require the neutron to start with an energy larger than the observed one, in order to be able to travel the distances involved within its proper lifetime. It must then lose most of the extra energy through interaction with the galactic dark matter or some other matter in the intergalactic medium

Abstract:

This document deals with the auto parts supply from Mexico to some of the assembly plants of an American auto company located in the U.S. and Canada. The ratio between the time required to cross the Mexico-US border (Nuevo Laredo/Laredo, TX) and the total transit time to final destination ranges between 15% and 50%. Quite frequently, trailers containing export auto parts spend more time waiting for border clearance than they do on the road. Two new proposals are developed to redesign the current process, and if applied, savings for the auto company could reach up to US$2,000,000 per year

Resumen:

Algunas investigaciones sugieren que los estudiantes han alcanzado un pensamiento algebraico maduro cuando son capaces de usar la variable de manera flexible, esto es, cuando logran integrar sus diferentes usos y diferenciarlos. Sin embargo, se ha demostrado que el concepto de variable es difícil para los estudiantes de distintas edades, y que en los diferentes niveles educativos, los estudiantes tienen dificultades para comprender los varios usos y aspectos que caracterizan a la variable. Esta investigación se propone analizar si los estudiantes alcanzan esta comprensión conforme progresan en el estudio de las matemáticas universitarias. Para ello, se analiza cuáles aspectos de la variable utilizan estudiantes de diferentes niveles educativos (3º de secundaria, estudiantes de recién ingreso a la universidad, estudiantes de 5º semestre de las carreras de Economía e Ingeniería) al resolver problemas algebraicos. Los resultados muestran que, si bien los estudiantes usan con mayor flexibilidad estos aspectos conforme progresan en los cursos de matemáticas avanzadas, su pensamiento algebraico no se desarrolla como se esperaría

Abstract:

A flexible use of variable, where all its facets can be integrated and differentiated as needed, is an important requirement for students to show mature algebraic thinking. However, variable has proven to be a difficult concept for students of different ages. Research results show that students at different school levels have difficulties in understanding the different facets that characterize variables. In order to analyze whether students develop this ability while studying advanced mathematical courses, that is, whether advanced students have a much better understanding of variable, this research focuses on the comparison of students’ capabilities to solve algebraic problems at different school levels (9th grade, starting university students, and 5th-semester students of Economics and Engineering). Results show that even though students progress while they complete advanced mathematics courses, their algebraic thinking does not develop as would be expected

Abstract:

All around the world, venomous scorpions are still considered a public health issue. A scorpion’s species can be determined by its physical characteristics. Different methods have been applied to differentiate among insects, such as bugs, bees and moths; however, none has been applied to distinguish between scorpion species. This paper presents a procedure to distinguish between two species of scorpions (Centruroides limpidus and Centruroides noxius) using image processing techniques and three different machine learning methods. First, the live scorpion is separated from the photograph background using a dynamic separation threshold, obtaining its area and contour. A shape vector is then built from the area and contour by calculating the following features: aspect ratio, rectangularity, compactness, roundness, solidity and eccentricity. Finally, artificial neural network, classification and regression tree, and random forest classifiers are used to differentiate between the two species. All three classifiers were evaluated by accuracy, sensitivity and specificity. Experimental results are reported and discussed. The best performance was obtained with the random forest algorithm, with an accuracy of 82.5%
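
A sketch of such a shape-feature pipeline, assuming OpenCV and scikit-learn are available; the feature formulas below are standard definitions, and Otsu's method stands in as one possible "dynamic separation threshold", not necessarily the authors' exact choices.

```python
import cv2
import numpy as np

def shape_vector(gray):
    """Six shape features of the largest contour in a grayscale image."""
    # Otsu's method acts as a per-image (dynamic) separation threshold.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    x, y, w, h = cv2.boundingRect(c)
    hull_area = cv2.contourArea(cv2.convexHull(c))
    (_, _), (ax1, ax2), _ = cv2.fitEllipse(c)
    minor, major = sorted((ax1, ax2))
    return [
        w / h,                              # aspect ratio
        area / (w * h),                     # rectangularity
        perim ** 2 / area,                  # compactness
        4 * np.pi * area / perim ** 2,      # roundness
        area / hull_area,                   # solidity
        np.sqrt(1 - (minor / major) ** 2),  # eccentricity
    ]

# Feature vectors from labeled images would then train a classifier, e.g.
# sklearn.ensemble.RandomForestClassifier(n_estimators=200).fit(X, y).
```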

Abstract:

We present a hierarchical skeleton-guided motion planning algorithm to guide mobile robots. A good skeleton maps the connectivity of the subspace of C-space containing significant degrees of freedom and is able to guide the planner to find the desired solutions fast. However, sometimes the skeleton does not closely represent the free C-space, which often misleads current skeleton-guided planners. The hierarchical skeleton-guided planning strategy gradually relaxes its reliance on the workspace skeleton as C-space is sampled, thereby incrementally returning a sub-optimal path, a feature that is not guaranteed in the standard skeleton-guided algorithm. Experimental comparisons to the standard skeleton-guided planners and other lazy planning strategies show significant improvement in roadmap construction run time while maintaining path quality for multi-query problems in cluttered environments

Resumen:

Objetivo: Determinar si el uso de material sexual en línea influye en la conducta sexual de riesgo para VIH/SIDA en los jóvenes universitarios. Se utilizaron conceptos de la Teoría Cognitiva Social. Método: Diseño descriptivo correlacional, participaron 200 jóvenes universitarios, seleccionados por muestreo aleatorio sistemático (k = 11). Resultados: Los jóvenes que usaron material sexual en línea en medios ricos para masturbarse (rs = .34), excitarse (rs = .29), estimularse (rs = .29), buscar una aventura (rs = .30), conocer gente (rs = .27), imágenes (rs = .14) y cibersexo (rs = .25) mostraron mayor conducta sexual de riesgo para VIH/SIDA (p < .01). El uso de material sexual en línea para masturbarse (R² = 6.4%, F(1,189) = 12.80, p < .001), buscar una aventura (R² = 4.8%, F(1,189) = 9.56, p < .01), conocer gente (R² = 5.9%, F(1,189) = 11.88, p < .01) y tener cibersexo (R² = 4.1%, F(1,189) = 8.07, p < .01) presentó un efecto positivo y significativo en la conducta sexual de riesgo para VIH/SIDA. Conclusiones: El uso de material sexual en línea influye en la conducta sexual de riesgo para VIH/SIDA

Abstract:

Objective: To determine whether the use of online sexual material influences sexual risk behavior for HIV/AIDS in young university students. Concepts of Social Cognitive Theory were used. Methods: A descriptive correlational design involving 200 university students selected by systematic random sampling (k = 11). Results: Young people who used online sexual material in rich media for masturbation (rs = .34), arousal (rs = .29), stimulation (rs = .29), seeking an adventure (rs = .30), meeting people (rs = .27), images (rs = .17) and cybersex (rs = .25) showed greater sexual risk behavior for HIV/AIDS (p < .01). The use of online sexual material for masturbation (R² = 6.4%, F(1,189) = 12.80, p < .001), seeking adventures (R² = 4.8%, F(1,189) = 9.56, p < .01), meeting people (R² = 5.9%, F(1,189) = 11.88, p < .01) and having cybersex (R² = 4.1%, F(1,189) = 8.07, p < .01) had a significant positive effect on sexual risk behavior for HIV/AIDS. Conclusions: The use of online sexual material influences sexual risk behavior for HIV/AIDS

Abstract:

Categorical attributes are present in datasets used in machine learning (ML) tasks. Since most ML algorithms only accept numeric inputs, categorical instances must be converted to numbers. There are different encoding techniques to accomplish this task. During this conversion, it is important to preserve the underlying pattern in the dataset; otherwise, there may be a loss of information that can negatively affect the performance of supervised learning algorithms. In this paper, we present an encoding technique, CESAMMO, based on finding those numbers or codes that preserve the relationship between the categorical attribute and the other variables of the dataset. We solved six supervised classification problems using the proposed technique with five different ML algorithms. Additionally, we compare the performance of the proposed technique with ten other encoding techniques. We found that the proposed technique outperforms the most commonly used encoding techniques for certain trained ML algorithms. On average, CESAMMO remained within the top 5 of the 12 encoders tested in terms of performance
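
As a rough illustration of the general idea of codes that track the attribute-target relationship (not the CESAMMO algorithm itself), target-mean encoding assigns each category the average of the variable it should help predict:

```python
import pandas as pd

# Toy dataset: one categorical attribute and a numeric target.
df = pd.DataFrame({
    "color":  ["red", "blue", "red", "green", "blue", "green", "red"],
    "target": [10.0,   3.0,  12.0,    6.0,    4.0,    7.0,   11.0],
})

# Each category receives the mean target observed for it, so the numeric
# codes track the category-target relationship; arbitrary ordinal codes
# would discard that pattern.
codes = df.groupby("color")["target"].mean()
df["color_encoded"] = df["color"].map(codes)
print(df)
```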

Abstract:

Most of the datasets used in Machine Learning (ML) tasks contain categorical attributes. In practice, these attributes must be numerically encoded for use in supervised learning algorithms. Although there are several encoding techniques, the most commonly used ones do not necessarily preserve possible patterns embedded in the data when applied inappropriately. This potential loss of information affects the performance of ML algorithms in automated learning tasks. In this paper, a comparative study is presented to measure how different encoding techniques affect the performance of machine learning models. We test 10 encoding methods, using 5 ML algorithms on real and synthetic data. Furthermore, we propose a novel approach that uses synthetically created datasets, which allows us to know a priori the relationship between the independent and the dependent variables and thus to measure the encoding techniques' impact more precisely. We show that some ML models are affected negatively or positively depending on the encoding technique used. We also show that the proposed approach is more easily controlled and faster when performing experiments on categorical encoders

Abstract:

Wireless sensor networks constitute an important part of the Internet of Things and, like other wireless technologies, seek competitiveness in terms of energy savings and information availability. These devices (sensors) are typically battery operated and distributed throughout a scenario of particular interest. However, they are prone to the interference attacks known as jamming. Detecting such anomalous behavior in the network is an active subject of study, since detection tends to increase the power consumption of the routing protocol and the nodes, which is detrimental to the network’s performance. In this work, a simple jamming detection algorithm is proposed, based on an exhaustive study of performance metrics that are related to the routing protocol and have a significant impact on node energy. With this approach, the proposed algorithm detects areas of affected nodes with minimal energy expenditure. Detection is evaluated for four known cluster-based protocols: PEGASIS, TEEN, LEACH, and HPAR. The experiments analyze the protocols’ performance through the metrics chosen for the jamming detection algorithm. Finally, we conducted real experimentation with the best-performing wireless protocols currently in use, such as Zigbee and LoRa
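
A minimal sketch of metric-based detection of the kind described: a node is flagged when its delivery ratio collapses while the channel looks unusually busy. The metric names and thresholds here are hypothetical illustrations, not the paper's algorithm.

```python
# Flag a node when its packet-delivery ratio collapses while the channel
# appears busy. Names and thresholds are illustrative placeholders.
def is_jammed(pdr, busy_ratio, pdr_floor=0.5, busy_ceiling=0.8):
    return pdr < pdr_floor and busy_ratio > busy_ceiling

node_metrics = {
    "n01": {"pdr": 0.93, "busy_ratio": 0.22},
    "n07": {"pdr": 0.31, "busy_ratio": 0.91},  # likely inside the jammed area
    "n12": {"pdr": 0.88, "busy_ratio": 0.35},
}
affected = [node for node, m in node_metrics.items() if is_jammed(**m)]
print("suspected jammed zone:", affected)
```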

Abstract:

In this study, a Wireless Sensor Network (WSN) energy model is proposed by defining the energy consumption at each node. The model calculates the energy at each node by estimating the energy of the main functions performed when sensing and transmitting data while running the routing protocol. These functions are related to wireless communications and are measured and compared in terms of their impact on energy and on performance metrics. The energy model is validated using a Texas Instruments CC2530 system-on-chip (SoC) as a proof of concept. The proposed energy model is then used to calculate the energy consumption of a Multi-Parent Hierarchical (MPH) routing protocol and five widely known sensor network routing protocols: Ad-hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), ZigBee Tree Routing (ZTR), Low Energy Adaptive Clustering Hierarchy (LEACH), and Power Efficient Gathering in Sensor Information Systems (PEGASIS). Experimental test-bed simulations were performed on a random layout topology with two collector nodes, with each node running under different wireless technologies: Zigbee, Bluetooth Low Energy, and LoRa by WiFi. The objective of this work is to analyze the performance of the proposed energy model in routing protocols of diverse nature: reactive, proactive, hybrid and energy-aware. Experimental results show that the MPH routing protocol consumes 16%, 13%, and 5% less energy when compared to AODV, DSR, and ZTR, respectively, and presents only 2% and 3% greater energy consumption with respect to the energy-aware PEGASIS and LEACH protocols, respectively. The proposed model achieves 97% accuracy compared to the actual performance of a network. Tests are performed to analyze the consumption of the main tasks of a node in a network
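
For context, per-node energy models of this kind are often built on the widely used first-order radio model; the sketch below shows that textbook model with illustrative constants, not necessarily the authors' exact formulation.

```python
# First-order radio model often used to analyze LEACH/PEGASIS-style
# protocols; the constants are textbook placeholders.
E_ELEC = 50e-9       # J/bit spent by the transceiver electronics
EPS_AMP = 100e-12    # J/bit/m^2 spent by the transmit amplifier

def tx_energy(bits, distance_m):
    # Transmission pays electronics plus distance-dependent amplification.
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits):
    # Reception pays electronics only.
    return E_ELEC * bits

# Energy to relay one 1024-bit packet over a 50 m hop (transmit + receive):
packet_bits = 1024
print(tx_energy(packet_bits, 50.0) + rx_energy(packet_bits), "J")
```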

Abstract:

In this work, two new self-tuning collaborative mechanisms for jamming detection are proposed, named (i) the Connected Mechanism and (ii) the Extended Mechanism. The first detects jamming by comparing performance parameters with directly connected neighbors, interchanging packets with performance metric information; in the second, jamming detection relies on comparing defined zones of nodes related to a collector node, using the collector's information to detect a possibly affected zone. The effectiveness of these techniques was tested in a simulated environment on a 7 x 7 quadrangular grid, with each node delivering 10 packets/sec and the node in the lower left corner of the grid defined as the collector. The jammer node sends packets under reactive jamming. The mechanism was implemented and tested in AODV (Ad hoc On Demand Distance Vector), DSR (Dynamic Source Routing), and MPH (Multi-Parent Hierarchical), named AODV-M, DSR-M and MPH-M, respectively. Results reveal that the proposed techniques increase the accuracy of the detected zone, reducing the size of the reported affected zone by up to 15% for AODV-M and DSR-M and by up to 4% using the MPH-M protocol

Abstract:

A sensor network is composed of nodes which collaborate in a common task. These nodes have certain sensory capabilities and wireless communication that allow them to form ad-hoc networks, i.e., no pre-established physical structure or central administration is necessary. One of the main problems with ad-hoc systems is that there is no existing infrastructure, so routes change dynamically due to fading, interference, disconnection of nodes, obstacles, node movements, and so on. We present an analysis of the Multi-Parent Hierarchical (MPH) routing protocol for wireless sensor networks, which has low overhead, reduced latency and low energy consumption. Network performance simulations of the MPH routing protocol are carried out and compared with two popular protocols, Ad-hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR), and with the well-known ZigBee Tree Routing (ZTR) algorithm. The combination of a hierarchical topology with the self-configuration and maintenance mechanisms of the MPH protocol makes nodes optimize network processes, reduce delays, take short routes to the destination and decrease network overhead. All this is reflected in the successful delivery of information

Abstract:

This research reveals how domestic gender violence suffered by female teachers affects teacher–student school violence in the classroom. Based on a representative survey of 1,542 female teachers in 95 public schools in the Callao metropolitan region of Peru, analysed with variance-based structural equation modelling, a strong positive relation is found between both types of violence (β = 0.34), accompanied by mediating effects of morbidity and diminished workplace performance. These results demonstrate that in order to reduce the incidence of school violence we must address not only violence between educators and students, but also the violence suffered by teachers at the hands of their domestic partners

Abstract:

Existing methods for analyzing unreplicated factorials that do not contemplate the possibility of outliers in the experimental data perform poorly at detecting the active effects when that possibility becomes a reality. We propose an interactive procedure based on robust regression which performs well in both the presence and the absence of contaminated data

Abstract:

In this paper a restricted version of the Galois connection between polymorphisms and invariants, called Pol−CInv, is studied, where the invariant relations are restricted to so-called clausal relations. The lattice of all clones arising from this Galois connection, the so-called C-clones, is investigated up to equality of their unary parts, denominated C-monoids. All atoms and co-atoms in the lattice of all C-monoids are characterized

Abstract:

We introduce a special set of relations called clausal relations. We study a Galois connection Pol−CInv between the set of all finitary operations on a finite set D and the set of clausal relations, which is a restricted version of the Galois connection Pol−Inv. We define C-clones as the Galois closed sets of operations with respect to Pol−CInv and describe the lattice of all C-clones for the Boolean case D = {0,1}. Finally we prove certain results about C-clones over a larger set

Resumen:

Breve síntesis crítica de la teoría de Bourdieu con el fin de establecer la relación entre el nivel epistemológico y el nivel teórico en el conocimiento sociológico. Particularmente, se sostiene que la epistemología moderna que subyace a la obra de Bourdieu constituye la principal causa del discreto avance que su teoría presenta en el intento por trascender el pensamiento antinómico y también, paradójicamente, de que termine, en algunos casos, por reproducirlo. Como contraejemplo, se presenta una sucinta exposición de algunos de los principios epistemológicos y teóricos centrales de Simmel y de Goffman. El objetivo es mostrar una epistemología distinta que no reproduce el pensamiento dicotómico, entre otras razones, por su inextricable vínculo con el arte

Abstract:

This article presents a brief critical synthesis of Bourdieu's theory in order to establish the relationship between the epistemological and theoretical levels in sociological thought. Namely, it argues that the modern epistemology underlying Bourdieu's work is the principal reason why his theory has made only modest progress in its attempt to transcend dichotomous thought and why, paradoxically, in many cases it ends up reproducing it. Serving as a counterexample, this article includes a concise presentation of some of Simmel's and Goffman's main epistemological and theoretical principles. Our objective is to present a different epistemology which does not replicate dichotomous thought owing to, among other reasons, its inextricable link with art

Abstract:

The slope of the implied volatility term structure is positively related to future option returns. I rank firms based on the slope of the volatility term structure and analyze the returns for straddle portfolios. Straddle portfolios with high slopes of the volatility term structure outperform straddle portfolios with low slopes by an economically and statistically significant amount. The results are robust to different empirical setups and are not explained by traditional factors, higher-order option factors, or jump risk
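
The portfolio construction can be sketched in a few lines: sort the cross-section on the term-structure slope and compare the extreme portfolios. All data and numbers below are invented placeholders, not the paper's sample.

```python
import pandas as pd

# Toy cross-section: slope of the implied-volatility term structure
# (long-dated minus short-dated ATM IV) and next-period straddle return.
df = pd.DataFrame({
    "firm":         list("ABCDEFGHIJ"),
    "iv_slope":     [-.04, -.02, -.01, .00, .01, .02, .03, .05, .06, .08],
    "straddle_ret": [-.06, -.05, -.02, .01, .00, .02, .03, .04, .06, .07],
})

# Sort into quintiles on the slope; compare the extreme portfolios.
df["quintile"] = pd.qcut(df["iv_slope"], 5, labels=False)
mean_ret = df.groupby("quintile")["straddle_ret"].mean()
print(mean_ret)
print("high-minus-low spread:", mean_ret.iloc[-1] - mean_ret.iloc[0])
```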

Abstract:

We propose and analyze a new Markov Chain Monte Carlo algorithm that generates a uniform sample over full and non-full-dimensional polytopes. This algorithm, termed "Matrix Hit and Run" (MHAR), is a modification of the Hit and Run framework. For a polytope in R^n defined by m linear constraints, the regime n^{1+1/3} <
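
For reference, the basic Hit and Run walk that MHAR modifies can be sketched as follows for a full-dimensional polytope {x : Ax <= b}; this is the classic scheme, not the matrix-batched MHAR algorithm itself.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, seed=0):
    """Uniform sampling from the polytope {x : Ax <= b}.

    x0 must be strictly interior. At each step: draw a random direction,
    intersect the line through x with the polytope, and jump to a uniform
    point on that chord."""
    rng = np.random.default_rng(seed)
    x, out = x0.astype(float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        ad, slack = A @ d, b - A @ x          # slack > 0 in the interior
        t_hi = np.min(slack[ad > 0] / ad[ad > 0])
        t_lo = np.max(slack[ad < 0] / ad[ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d
        out.append(x.copy())
    return np.array(out)

# Unit cube in R^3: x_i <= 1 and -x_i <= 0.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
samples = hit_and_run(A, b, x0=np.full(3, 0.5), n_samples=2000)
print(samples.mean(axis=0))   # approximately (0.5, 0.5, 0.5)
```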

Abstract:

In this paper we present a didactical activity based on modelling an engineering problem known as Blind Source Separation (BSS), together with the results of its implementation in a linear algebra course at a Mexican university. The problem had previously been analysed from an institutional point of view, bringing out the notion of the matrix map T(x)=Ax. In the frame of APOS theory we propose a genetic decomposition for this concept in order to analyze students' constructions related to it, while using the BSS context as a reference to connect mathematical constructions with a real-life situation
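
The engineering problem behind the activity can be demonstrated with a standard FastICA example: signals mixed through the matrix map T(x)=Ax are recovered from the mixtures alone. This is a generic BSS illustration, not the course material itself.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two source signals mixed through the matrix map T(x) = Ax.
t = np.linspace(0, 1, 2000)
S = np.column_stack([
    np.sin(2 * np.pi * 5 * t),              # source 1: sine
    np.sign(np.sin(2 * np.pi * 3 * t)),     # source 2: square wave
])
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                   # hypothetical mixing matrix
X = S @ A.T                                  # observed mixtures

# Blind source separation: recover the sources from X alone
# (up to permutation and scaling of the components).
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
```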

Resumen:

El objetivo de este artículo es presentar un análisis praxeológico enmarcado en la Teoría Antropológica de lo Didáctico (TAD) de un método proveniente de la ingeniería conocido como Separación Ciega de Fuentes (BSS). En el método están presentes praxeologías que pueden trasponerse a los cursos iniciales de matemáticas dentro de una formación de ingenieros, concretamente dentro del curso de Álgebra Lineal. El análisis muestra que la BSS tiene potencial para generar actividades de modelación que conecten la teoría matemática con la práctica ingenieril. Se presenta, además, una propuesta inicial para una actividad de estudio e investigación basada en la BSS

Abstract:

The aim of this paper is to present a praxeological analysis, in the frame of the anthropological theory of the didactic (ATD), of an engineering method known as Blind Source Separation (BSS). In the method we find praxeologies that can be transposed to first-year mathematics courses in engineering education, particularly the Linear Algebra course. The analysis shows that BSS has the potential to generate modelling activities that reduce the gap between mathematical theory and engineering practice. We also present an initial proposal for a study and research path based on BSS

Abstract:

In this paper, we address the problem of observer design for permanent magnet synchronous motors (PMSM). The only measured signals are the stator current and the control voltage, and all parameters of the PMSM model except the flux from the permanent magnets are known. The load torque is unknown and assumed to be constant or slowly time-varying. The approach is based on a new way of applying the Dynamic Regressor Extension and Mixing (DREM) method to regression models with non-stationary parameters, recently proposed and called the DREM-based adaptive observer (DREMBAO). In this work, we extend that result and design a full state observer with finite-time convergence. In the first step, the flux and position observer is constructed. In the second step, with these estimates, the rotor speed and load torque are reconstructed

Abstract:

This article introduces frequency domain minimum distance procedures for performing inference in general, possibly noncausal and/or noninvertible, autoregressive moving average (ARMA) models. We use information from higher-order moments to achieve identification of the location of the roots of the AR and MA polynomials for non-Gaussian time series. We propose a minimum distance estimator that optimally combines the information contained in the second, third, and fourth moments. Contrary to existing estimators, the proposed one is consistent under general assumptions, and may improve on the efficiency of estimators based only on second-order moments. Our procedures are also applicable to processes for which either the third- or the fourth-order spectral density is the zero function

Abstract:

Providers of customised goods and services do not directly discriminate against a customer when their refusal to fulfil an order is based on their objection to the message requested by the latter and not on any protected characteristics of the person. This is the conclusion reached by the Supreme Court of the United Kingdom when faced with a claim of direct discrimination on grounds of sexual orientation and religious beliefs or political opinions contrary to two Northern Ireland Statutory Rules against a bakery which objected to incorporating the message 'Support Gay Marriage' into a cake. In this case comment it is argued that the Supreme Court correctly identified the crucial distinction between a message and a person for the purposes of discrimination law. Each of the two grounds of discrimination at issue is examined and an explanation for the inapplicability of a finding of discrimination on either is offered

Resumen:

Se ponen a dialogar dos de las interpretaciones más poéticas que la filosofía del siglo XX nos ha dado sobre el tiempo: la propuesta de Gaston Bachelard que tiene en "el instante" su concepto fundamental, y la idea de "la duración" bergsoniana. Esto, para demostrar que ambas intuiciones, más que contrapuestas, convergen y son complementarias, así como para profundizar dentro de ese marco en lo que Deleuze y Guattari entienden como "experiencia estética"

Abstract:

In this article, we will discuss two of the most poetic interpretations of time that twentieth-century philosophy has provided us: Gaston Bachelard's proposal, which takes "the instant" as its core concept, and the Bergsonian idea of "duration". We will demonstrate that both intuitions, rather than being contradictory, are convergent and complementary, and will elaborate within this framework on what Deleuze and Guattari understand as "aesthetic experience"

Resumen:

En este artículo se intenta desentrañar la frase de Sancho Panza "con la Santa Hermandad no hay usar de caballerías" para, a propósito de un juego de palabras torcidas, intentar descubrir o entrever una "mentalidad" (la antidora), acaso ya perdida

Abstract:

This article aims to unravel the passage in which Sancho Panza says "con la Santa Hermandad no hay usar caballerías" in order to reveal or glimpse, by way of a twisted pun, a kind of "mentality" (the antidora) that may have already been lost

Resumen:

El texto pretende pensar hoy la política más allá de la política, el valor de la sociedad y, sobre todo, su configuración en torno al ser-en-común. A partir de ciertos ejes tales como la amistad, la libertad, la obediencia y la tiranía se busca recuperar, más allá de la amistad, el sentido de la política, pero también indagar en lo silenciado el eco de una amistad que más allá de sus fallas supo conjurar una fuerza inexplicable y fatal que, aniquilándola, la hizo posible

Abstract:

The text intends to think today's politics beyond politics, the value of society and, above all, its configuration around being-in-common. Starting from certain axes such as friendship, freedom, obedience and tyranny, I seek to recover, beyond friendship, the sense of politics, but also to investigate, in what has been silenced, the echo of a friendship that, beyond its faults, knew how to conjure an inexplicable and fatal force which, while annihilating it, made it possible

Abstract:

This paper analyzes the effect on the level and volatility of the Central American countries' exchange rates of transferring remittances through central banks. Given the importance of remittances for these economies, transferring a percentage of remittances through central banks diminishes the supply of dollars in the local currency market. This reduces the volatility and the need for the central bank to intervene in the currency market, and slightly depreciates the exchange rate

Abstract:

Restricted nonlinear approximation is a generalization of N-term approximation in which a measure on the index set of the approximants controls the type, instead of the number, of elements in the approximation. Thresholding is a well-known type of nonlinear approximation. We relate a generalized upper and lower Temlyakov property with the decay rate of the thresholding approximation. This relation takes the form of a characterization through some general discrete Lorentz spaces. Thus, we not only recover some results in the literature but also find new ones. As an application of these results, we compress and denoise some images with wavelets and shearlets and show, at least empirically, that the L²-norm is not necessarily the best norm to measure the approximation error
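
The thresholding type of nonlinear approximation mentioned above can be illustrated with a one-dimensional wavelet analogue of the image experiments (a sketch assuming PyWavelets; the paper's 2D wavelet and shearlet experiments are of course richer).

```python
import numpy as np
import pywt

# Noisy 1D signal, wavelet decomposition, hard thresholding, reconstruction.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(6 * np.pi * t) + (t > 0.5)
noisy = clean + 0.2 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
tau = 0.5                                    # threshold (illustrative)
coeffs = [pywt.threshold(c, tau, mode="hard") for c in coeffs]
rec = pywt.waverec(coeffs, "db4")[: t.size]

# Two different yardsticks for the same approximant (cf. the abstract's
# point that the L2-norm is not the only meaningful error measure).
print("L2 error: ", np.linalg.norm(rec - clean))
print("max error:", np.abs(rec - clean).max())
```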

Abstract:

Shearlets on the cone provide Parseval frames for L². They also provide near-optimal approximation for the class E of cartoon-like images. Moreover, there are spaces associated to them other than L² and there exist embeddings between these and classical spaces. We prove approximation properties of the cone-adapted shearlet system coefficients in a more general context, namely, when the target shearlet sequence belongs to a class or space different from that obtained from the shearlet sequence of an f ∈ E, and when the error is not necessarily measured in the L²-norm (or, since the shearlet system is a frame, the ℓ²-norm) but in a norm of a much wider family of smoothness spaces of "high" anisotropy. We first prove democracy of shearlet frames in shear anisotropic inhomogeneous Besov and Triebel–Lizorkin sequence spaces. Then, we prove embeddings between approximation spaces and discrete weighted Lorentz spaces in the framework of shearlet coefficients. Simultaneously, we also prove that these embeddings are equivalent to Jackson- and Bernstein-type inequalities. This allows us to find real interpolation between these highly anisotropic sequence spaces. We also describe how some of these results can be extended to other shearlet- and curvelet-generated spaces. Finally, we show some examples of embeddings between wavelet approximation spaces and shearlet approximation spaces and obtain a similar result, stated in L²(R²), for the curvelet smoothness spaces. This also paves the way to the use of thresholding algorithms in compression or noise reduction

Abstract:

Shearlets on the cone are a multi-scale and multi-directional discrete system that have near-optimal representation of the so-called cartoon-like functions. They form Parseval frames, have better geometrical sensitivity than traditional wavelets and an implementable framework. Recently, it has been proved that some smoothness spaces can be associated to discrete systems of shearlets. Moreover, there exist embeddings between the classical isotropic dyadic spaces and the shearlet generated spaces. We prove boundedness of pseudo-differential operators (PDOs) with non-regular symbols on the shear anisotropic inhomogeneous Besov spaces and on the shear anisotropic inhomogeneous Triebel–Lizorkin spaces (which are up to now, as far as we know, the only Triebel–Lizorkin-type spaces generated by either shearlets or curvelets, and more generally by any parabolic molecule). The type of PDOs that we study includes the classical Hörmander definition with x-dependent parameter δ for a range limited by the anisotropy associated to the class. One of the advantages is that the anisotropy of the shearlet spaces is not adapted to that of the PDO

Abstract:

We define distribution spaces in ℝ^d via ℓ^q(L^p) norms of a sequence of convolutions of f ∈ S' with smooth functions, the shearlet system. Then, we define associated sequence spaces and prove a characterization in terms of the shearlet coefficients. We also prove continuous embeddings between some shear anisotropic inhomogeneous spaces and between classical (dyadic isotropic) inhomogeneous spaces and shear anisotropic inhomogeneous spaces
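
Schematically, and suppressing the shear, scale and direction indexing that the actual construction carries into a single index j, an ℓ^q(L^p) norm of this kind takes the familiar form below, with {ψ_j} the smooth shearlet kernels (a schematic rendering, not the paper's precise definition):

```latex
% Schematic norm, with the shear/scale/direction indexing collapsed into j:
\| f \|_{\ell^q(L^p)}
  = \Bigl( \sum_{j} \bigl\| f * \psi_j \bigr\|_{L^p(\mathbb{R}^d)}^{q} \Bigr)^{1/q},
\qquad f \in \mathcal{S}'(\mathbb{R}^d).
```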

Abstract:

The shearlets are a special case of wavelets with composite dilation that, among other things, have a basis-like structure and multi-resolution analysis properties. These relatively new representation systems have found a wide range of applications, generally surpassing the performance of their ancestors due to their directional sensitivity. Both the theory of coorbit spaces and that of decomposition spaces provide a way of associating some kind of smoothness space to shearlets. However, these smoothness spaces are closer to classical Besov-type spaces. Here, instead, we define a kind of highly anisotropic inhomogeneous Triebel–Lizorkin space and prove that it can be characterized with the so-called “shearlets on the cone” coefficients. We first prove the boundedness of the analysis and synthesis operators with the “traditional” shearlet coefficients. Then, with the development of the smooth Parseval frames of shearlets of Guo and Labate, we are able to prove a reproducing identity, which was previously possible only for the L² case. We also find some embeddings of the (classical) dyadic spaces into these highly anisotropic spaces, and vice versa, for certain ranges of parameters. In order to keep the document concise we develop our results in the “weightless” case (w = 1) and give hints on how to develop the weighted case

Abstract:

This paper examines the factors responsible for generating the services led growth witnessed in the Indian economy during 1980–2005. A sectoral growth accounting exercise shows that total factor productivity (TFP) growth was the fastest for services; moreover this TFP increase was significant in accounting for service sector value added growth. A growth model with agriculture, industry and services as three principal sectors is calibrated to Indian data using sectoral TFP growth rates. The baseline model performs well in accounting for the evolution of value added shares and their growth rates, but is unable to capture sectoral employment share trends. The performance of the model with respect to value added shares improves when the post 1991 increase in service sector TFP growth following the inception of market-based liberalization reforms is accounted for. A modified version of the model with public capital can better track trends in sectoral employment shares
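
The growth accounting identity behind the sectoral exercise is g_TFP = g_Y − α·g_K − (1 − α)·g_L; a tiny illustration with placeholder numbers (not the paper's estimates) follows.

```python
# Growth accounting: g_TFP = g_Y - alpha * g_K - (1 - alpha) * g_L.
def tfp_growth(g_y, g_k, g_l, alpha):
    return g_y - alpha * g_k - (1 - alpha) * g_l

# Placeholder sectoral numbers (NOT the paper's estimates): output, capital
# and labor growth rates, plus a capital share of 0.4 for every sector.
sectors = {
    "agriculture": (0.03, 0.02, 0.01),
    "industry":    (0.06, 0.06, 0.03),
    "services":    (0.08, 0.05, 0.03),
}
for name, (g_y, g_k, g_l) in sectors.items():
    print(f"{name:12s} TFP growth = {tfp_growth(g_y, g_k, g_l, 0.4):.3f}")
```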

Abstract:

Since the turn of the millennium, a remarkably large number of incumbent presidents have managed to stay past the end of their constitutionally mandated terms. Russia's Vladimir Putin, Rwanda's Paul Kagame, and Colombia's Alvaro Uribe represent a sizeable collection of presidents who were democratically elected but remained in power long past their original mandates. Such attempts to stay in office are not new, but in recent decades their nature has changed. In this Essay, we present findings from an original and comprehensive survey of all evasion attempts since the year 2000. Tracing the constitutional strategies of 234 incumbents in 106 countries, we document the range of constitutional strategies these incumbents have pursued, along with how they succeeded or failed. This exercise has revealed a number of insights. First, evasion attempts are very common. Globally, no fewer than one-third of the incumbents who reached the end of their prescribed term pursued some strategy to remain in office. If we exclude the world's strongest democracies, we find that about half of the leaders that reached the end of their term attempted to overstay. Second, and perhaps most illuminating, none of these attempts involved ignoring the constitution outright. Instead, incumbents universally displayed nominal respect for the constitution by using constitutional rules and procedures to circumvent term limits, with about two-thirds attempting to amend the constitution. But constitutional amendment is not the only legal strategy at the would-be overstayer's disposal: presidents have tried many methods. Most notably, a number of incumbents have relied on their courts to interpret constitutional term limits out of the constitution

Resumen:

Una deficiente planeación estratégica en las empresas de nueva creación ha generado muchas veces que las decisiones iniciales de los emprendedores no hayan sido las adecuadas y que, a la larga, las consecuencias se vean reflejadas en el fracaso de muchos nuevos negocios. El presente artículo tiene como objetivo proponer un simulador de vuelo ejecutivo que permita identificar y evaluar distintas estrategias de desarrollo de recursos de una nueva empresa manufacturera bajo las cuatro perspectivas del marcador balanceado, sensibilizando al usuario en el impacto que estas tendrían en los principales indicadores de desempeño. El simulador está diseñado utilizando el enfoque de dinámica de sistemas, para ser utilizado didácticamente en programas de maestría en administración, emprendedores o de desarrollo ejecutivo

Abstract:

Deficient strategic planning in newly created companies has often meant that entrepreneurs' initial decisions were not the appropriate ones and that, in the long term, the consequences are reflected in the failure of many new businesses. The objective of this article is to propose an executive flight simulator that helps to identify and evaluate different strategies for developing the resources of a new manufacturing company under the four perspectives of the balanced scorecard, making the user aware of the impact these strategies would have on the main performance indicators. The simulator is designed using the system dynamics approach, for educational use in MBA, entrepreneurship or executive development programs

Resumen:

La falta de planeación en las empresas de nueva creación ha generado, muchas veces, que las decisiones iniciales de los emprendedores no hayan sido las adecuadas y que a la larga, las consecuencias se vean reflejadas en el fracaso de muchos de estos nuevos negocios. El presente artículo tiene como objetivo proponer un simulador ejecutivo de vuelo, que permita identificar y evaluar distintas estrategias de desarrollo de recursos de una nueva empresa manufacturera, bajo las cuatro perspectivas del marcador balanceado, sensibilizando al usuario en el impacto que éstas tendrían en las principales medidas de desempeño. El simulador está diseñado para ser utilizado didácticamente en un programa de emprendedores a nivel universitario o de desarrollo ejecutivo

Abstract:

The lack of planning in newly created companies has often meant that entrepreneurs' initial decisions were not the appropriate ones and that, in the long term, the consequences are reflected in the failure of many of these new businesses. The objective of this article is to propose an executive flight simulator that helps to identify and evaluate different strategies for developing the resources of a new manufacturing company under the four perspectives of the balanced scorecard, making the user aware of the impact these strategies would have on the main performance measures. The simulator is designed to be used for educational purposes in an undergraduate entrepreneurship program or for executive development

Abstract:

Existing theoretical literature on justice, law, and community typically treats them as ideas studying them through an analytical and rational approach. In this article, I propose to investigate these concepts through aesthetic experience as an attempt to both sharpen our imagination of such concepts and demonstrate they are inseparable. I do this by painstakingly examining the movies Shoplifters by Kore-eda and The House That Jack Built by Von Trier. Rather than focusing on thematic analysis, I claim and show that film form is crucial for aesthetic and affective experience. Furthermore, and against the conventional view, I argue that both movies articulate a spatialized vision of justice defined by its materiality. Together these aspects help us to keep (re)imagining law, justice, and community, and grasp better their worldmaking properties and powers

Abstract:

This article argues that to disrupt legal education in a radical sense, students need to become acquainted with the art of worldmaking and the view that law is a "way of worldmaking". First, I show that law is a cultural semiotic practice that requires decoding and, for that reason, demands a creative intervention by those that want to know, understand, and do things with law. Altogether this amounts to recognizing the different modes in which law creates, and is part of, worlds. Second, I propose that due to different features of their aesthetic form, comics are a particularly effective medium to place students before the myriad ways in which law and lawyers make and reproduce worlds. Third, I illustrate the argument by exploring how the Saga comic series, through its formal multimodality and narrative and cultural complexity, can make good on that challenge

Abstract:

Kratochwil criticizes two important teleological global narratives of universal progress - Luhmannian systems theory and jus cogens - and defends the need for a non-ideal and situated approach to law and politics. Despite the cogency of Kratochwil's analysis, why should we place our hope in his pragmatic program given the complexity of actual decision-making? This paper shows that more needs to be said about the role of hope grounding Kratochwil's account. Which hopes are hopeless, and which warranted? Why should we care and 'go on', choosing to be prudential and political rather than focusing on one's inner development or pleasure?

Abstract:

Despite the widespread acceptance of China's "Belt and Road Initiative" (BRI), the latter still faces some challenges. First, China's rise is as rampant as it is opaque, and thus this Chinese foreign policy agenda has been met with suspicion due to its lack of detailed content. Second, for some countries, the BRI is essentially a new example of colonialism through which China is paving its way to access the energy sources and markets necessary to keep fueling its spectacular economic growth. Third, changes in political leadership in some of the countries along the Belt have turned the tables against Chinese investment, accusing the latter of buying out countries. In turn, this jeopardizes the economic sustainability of Chinese investment. In this chapter, I suggest two ways in which China can try to change and correct these critical perceptions of the BRI. I propose that the cultural exchange and people-to-people ties side of the project be given more prominence. However, by means of a comparison with the now forgotten Colombo Plan as well as the Australian New Colombo Plan, I advocate that China should avoid a type of one-way cultural exchange in which it tells other countries about itself without showing deep and sustained cultural interest in the "Other". I also suggest that if China wants to exercise a distinctive style of normative leadership it ought to develop the philosophy and values of its BRI. Because there are important resources in Chinese thinking to ground this claim, I resort to the work of Yan Xuetong, who recovers the idea that a moral leader converts the hearts of others rather than winning them over by different power resources. Altogether, these claims promise to create a more sustainable and appealing cultural and normative vision of the New Silk Road

Abstract:

Through the idea of "One Belt, One Road", China promotes a transformational vision of an international order based on inter-state equality, integrity within international institutions, and trade and development free of conditionality requirements as a driver of economic growth and world peace. Drawing on the concept of the market state, which Patterson and Afilalo developed in their work on the global order of trade, this article seeks to show that the essence of China's participation in global governance appears far less future-oriented when placed in the broader context of global society and of the changes taking place in the very possibilities open to individual countries

Abstract:

Is it feasible and useful to articulate a general theory of transnational legality? In the book Transnational Legality: Stateless Law and International Arbitration, Thomas Schultz replies yes and argues, furthermore, that we need such a theory. In this Critical Notice I suggest otherwise. The overarching theme of my critique is a plea for thinking seriously about why we still insist on building general theories of legality. As I try to show by engaging with Schultz's main claims, those general theories face insurmountable conceptual and normative problems. Here are some questions. Which theory of society do we endorse? Are transnational society and law different in nature from their domestic and regional counterparts? Why should we adopt a concept of complex legal system rather than focusing on the looser "community"? Should a concept of transnational legality be as inclusive as possible or narrowly tailored? In virtue of which normative principles are we to make such a decision? How can we decide which elements from our state tradition we are to preserve and which ones we are to let go? Why devise a concept of legal system that does not see the connections to other legal and normative orders? How do Fuller's legality criteria meet the expectations we attach to law? And whose expectations are we speaking of? Why undergo all these headaches to conclude that, after all, legality is a matter of clarity, while being given no tools on how to proceed empirically? All things considered, why is this sort of enterprise worth it?

Abstract:

The label "transnational law" is deployed to address a pressing problem in international and domestic life: in a number of different arenas, citizens have to abide by standards and rules that they have neither voted for nor contributed to, and that they cannot easily change or dispute. To address the legitimacy gap of transnational legal practices, academics have proposed two main strategies: (i) the creation of global political institutions and principles; and (ii) self-regulation. This article argues that the global constitutionalism/self-regulation set of alternatives is premised on overly strong theoretical assumptions about the nature of world society and functional differentiation. Focusing primarily on a detailed analysis of Teubner's societal constitutionalism and its systems-theory assumptions, the article claims that the functional differentiation thesis at the core of autonomous transnational law is unconvincing and that there are resources at the domestic and regional (e.g. European Union) levels to address some of the challenges of transnational law

Resumo:

O rótulo "direito transnacional" é usado para resolver um problema premente nas vidas internacional e doméstica: num número diferente de arenas, os cidadãos têm de respeitar normas que não tenham nelas votado, contribuído para a respectiva criação, nem podem facilmente mudá-las ou colocá-las em causa. Para colmatar o défice de legitimidade das práticas jurídicas transnacionais, a doutrina propôs duas estratégias principais: (i) criação de instituições políticas e princípios globais; e (ii) auto-regulação. Neste artigo argumenta-se que a alternativa entre constitucionalismo e auto-regulação global tem como premissa pressupostos teóricos demasiado fortes sobre a natureza da sociedade mundial e diferenciação funcional. Focando-se principalmente numa análise detalhada do constitucionalismo societal de Teubner e das premissas da sua teoria dos sistemas, defende-se no artigo que a tese de diferenciação funcional que se encontra no cerne do direito transnacional autónomo não é convincente e que existem recursos nos níveis nacional e regional (por exemplo, União Europeia) para enfrentar alguns dos desafios do direito transnacional

Abstract:

Is legal theory relevant to legal practice? Should legal theory be part of the academic legal curriculum? This article outlines three propositions in relation to these longstanding contentious questions. First, it argues that existing literature has pursued an inadequate argumentative strategy by (1) assuming that there is a single yes or no answer to the questions surrounding the relevance of legal theory; and (2) treating legal theory and legal practice as discrete, unrelated entities. This article distinguishes between different styles of doing legal theory and legal practice, and argues that the role of legal theory needs to factor in changes in the substance of law, legal reasoning, and legal careers. Second, focusing on European civil law countries, this article concludes that most legal theory is irrelevant for conventional legal practice. Concomitantly, it suggests that the constitutionalization, transnationalization, and Europeanization of legal systems are changing the practice of law in a way that is more congenial to theory than hitherto. It also contends that legal roles embodying a legislative standpoint within law are creating a demand for increased theoretical sophistication. Third, this article suggests what a course in legal theory, sketched along the lines of the analysis carried out, might look like

Abstract:

In the Ethics, Badiou writes against ideal, abstract and rule-based conceptions of ethics. As in some pragmatic ethics, this implies rejecting the received moral vocabulary and focusing instead on agency. This explains why Badiou's Ethics is often read as a radical statement in today's normative landscape. This paper evaluates such a claim. In particular, it questions the extent to which an ethical approach that idealizes the situation through the notion of "fidelity to the event" can truly be non-ideal and radical. Three points are problematic. First, one has to discover actual "events" that demand ethical action; but who can tell us what an event is? Second, one has to be "faithful" to those events; but it is unclear whether Badiou allows for "evil" events and why fidelity to the event is better than its denial or occultation. Third, how can a non-ideal and radical ethical approach be premised on the idea of truth, and what consequences follow for its capacity to provide normative guidance? Put simply, this paper argues that while Badiou seems to perform a radical shift in contemporary moral reasoning, his contribution is more ambiguous. He seems to (i) reinstate an ethics based on naturalistic conceptions of good and evil; and (ii) replace the role of reason in devising moral rules with the role of the philosopher who defines what counts as an event. Finally, while the results are modest, Badiou's ethics forces us to adopt a vocabulary that impoverishes the description of moral life, and fails to build an ethics that is sensitive to concrete situations

Abstract:

This paper critically evaluates interdisciplinary research in tax law. The strategy I follow runs at two levels of abstraction. First, I examine a concrete example of interdisciplinary research in taxation. More precisely, I examine Hikaka and Prebble's (2010) recent paper where, applying Luhmannian autopoietic theory to tax law, they make a series of claims about the productivity of their research strategy as well as the consistency and coherence of Luhmann's interdisciplinary framework. Whereas my analytical and conceptual critique of Hikaka and Prebble's paper stands on its own, it should also be read as revealing the obstacles that lurk behind interdisciplinary research that uses a theory as complex and idiosyncratic as Luhmann's autopoietic account of law and society. Accordingly, my analysis shows how autopoietic theory can indeed prove useful for tax and accounting reform, as well as for connecting tax theory with notions of public interest. Second, I extrapolate from the analysis of Hikaka and Prebble's paper some general problems to which current interdisciplinary tax research needs to give further consideration: (i) how to identify productive research questions and uses of interdisciplinary resources; (ii) the dubious added value of interdisciplinary research, given its tendency to adopt complex theoretical apparatuses in a cursory way, with little comparison to existing research achievements; and (iii) the risk of using interdisciplinary research as an exercise in confirmatory investigation and/or mere translation of one discipline's problems into another discipline's language

Abstract:

In this paper I contrast Hayek's and Luhmann's treatment of law as a complex social system. Through a detailed examination of Hayek's account of law, I criticize the explanatory power of his central distinction between spontaneous order and organization. Furthermore, I conclude that its application to law leads to results different from the ones derived by Hayek. The central failure of Hayek's account, however, lies in his identification of complex systems with systems of liberal content maximizing individual freedom. Indeed, in this way he can only account for systems-individuals interactions and not for systems-systems interactions. I then introduce Luhmann's theory of autopoietic systems, which, I submit, can solve all the aforementioned problems and offers a much more promising conceptual architecture for grasping social systems in the context of a complex society

Abstract:

A thorough analysis is performed to find traveling waves in a qualitative reaction-diffusion system inspired by a predator-prey model. We provide rigorous results coming from a standard local stability analysis, numerical bifurcation analysis, and relevant computations of invariant manifolds to exhibit homoclinic and heteroclinic connections, and periodic orbits in the associated traveling wave system with four components. In so doing, we present and describe a wide range of different traveling wave solutions. In addition, homoclinic chaos is manifested via both saddle-focus and focus-focus bifurcations as well as a Belyakov point. An actual computation of global invariant manifolds near a focus-focus homoclinic bifurcation is also presented to unravel a multiplicity of wave solutions in the model
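
For illustration, the reduction that yields a four-component traveling-wave system of this kind is mechanical; in the Python sketch below, the predator-prey kinetics, parameters, and initial data are assumptions chosen for concreteness, not those of the paper:

import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: reducing a two-species reaction-diffusion system
#   u_t = d1*u_xx + f(u, v),  v_t = d2*v_xx + g(u, v)
# to the four-component traveling-wave ODE system in xi = x - c*t:
#   u' = p,  p' = -(c*p + f)/d1,  v' = q,  q' = -(c*q + g)/d2.
# Kinetics, parameters, and initial data below are illustrative only.
d1, d2, c = 1.0, 1.0, 0.5
f = lambda u, v: u * (1.0 - u) - u * v / (1.0 + u)     # prey kinetics (assumed)
g = lambda u, v: 0.5 * u * v / (1.0 + u) - 0.2 * v     # predator kinetics (assumed)

def tw(xi, y):
    u, p, v, q = y
    return [p, -(c * p + f(u, v)) / d1, q, -(c * q + g(u, v)) / d2]

# Orbits of this ODE connecting equilibria (homoclinic or heteroclinic
# connections) correspond to traveling waves of the original PDE.
sol = solve_ivp(tw, (0.0, 40.0), [0.9, 0.0, 0.01, 0.0], max_step=0.05)
print(sol.status, sol.y[:, -1])

The homoclinic and heteroclinic connections mentioned in the abstract live in exactly this four-dimensional phase space, which is why invariant-manifold computations are the natural tool for locating them.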

Abstract:

This paper investigates the impact of the European Union landing obligation on the Galician (North-West Spain) multispecies small-scale gillnet fishery. By combining results from semi-structured interviews with small-scale fishers and a bioeconomic model, we found that the percentage of discards for small-scale fisheries is usually low, consistent with general empirical observations globally, but can be high when quotas are exhausted. Our results also confirm that the landing obligation would generate negative impacts on fishers' activities, by requiring more time on board to handle previously discarded fish and by putting the safety of fishers at sea at risk due to full use of allowable on-board storage coupled with often adverse sea conditions in Galician bays. The application of the landing obligation policy to small-scale fisheries would result in short- and long-term losses of fishing days and yields, with high negative impacts on sustainable fisheries such as the Galician multispecies small-scale gillnet fishery. The expected number of fishing days under the landing obligation is estimated to be reduced by 50% during the five years following the implementation of the policy. The future yield (catches) under the landing obligation would be only 50% of the catches expected in the absence of the landing obligation, regardless of the total volume of quotas allocated to the fleet

Abstract:

The landing obligation recently adopted by the European Union's (EU) Common Fisheries Policy aims to eradicate discards in EU fisheries. The objective of this paper is to investigate the potential social and economic impacts of the discard ban in European small-scale fisheries (SSF) and the critical factors for its successful implementation. An exhaustive systematic literature review and a stakeholder consultation were carried out in order to (i) collect detailed information about current knowledge on discards in EU SSF and gauge stakeholder perceptions about potential impacts of the discard ban in European SSF, (ii) examine the capacity of the SSF industry to implement the discard ban, and (iii) explore the limits and feasibility of implementing such a measure. The results of this study show that little attention has been given by the scientific community to discards in EU SSF. Indeed, the systematic literature review shows that this problem is relatively unexplored in the EU. In addition, the effectiveness of a discard ban in industrial fisheries is still unclear, mainly because discard data are not systematically collected by fisheries authorities. Stakeholders mostly perceive that the new landing obligation was developed with industrial fisheries in mind and that compliance with the landing obligation in EU SSF will be difficult to achieve without high economic costs, such as those related to the handling and storage of unwanted fish on board

Abstract:

We present a methodology for the creation, manipulation and transmission of 3D anatomical models starting from stacks of medical images. The anatomical information for the models is first segmented from the images and then used by a surface reconstruction algorithm to create 3D meshes that accurately represent the surface of the organs being modeled. The meshes are then exported in an open format for manipulation in a graphics platform. Using high-end graphics algorithms we render and texture-map our meshes. We can also deform the meshes and assign physical and material properties to them. Our models can be used for virtual reality applications and can also be transmitted over the Internet for remote interactive visualization and retrieval
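
A minimal sketch of this pipeline, assuming the segmentation step has already produced a binary volume and using scikit-image's marching cubes for the surface reconstruction; the synthetic sphere and the Wavefront OBJ export below stand in for the actual patient data and the open format used by the authors:

import numpy as np
from skimage import measure

# Hedged sketch of the pipeline described above: take a segmented binary
# volume (a stack of medical images), reconstruct a surface mesh with
# marching cubes, and export it in an open format (Wavefront OBJ).
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 25**2).astype(np.float32)  # toy "organ"

# Extract the iso-surface at the organ/background boundary.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Write an OBJ file; OBJ indexes vertices from 1.
with open("organ.obj", "w") as fh:
    for vx, vy, vz in verts:
        fh.write(f"v {vx} {vy} {vz}\n")
    for a, b, c in faces:
        fh.write(f"f {a + 1} {b + 1} {c + 1}\n")

The resulting OBJ file can then be loaded into any graphics platform for rendering, texture mapping, or deformation, which is the manipulation stage the abstract describes.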

Resumen:

La llamada "gripe española" apareció repentinamente en Norteamérica en 1918, se diseminó por el mundo y causó alrededor de 30 millones de muertos. En México, su brote provocó complicaciones en un panorama de por sí difícil, pues el país se encontraba en la última etapa del movimiento revolucionario. Las medidas adoptadas por el gobierno y la forma en que la prensa dio a conocer las noticias de los contagios y las muertes causadas por la enfermedad, así como el impacto causado en la sociedad dejan ver el desarrollo de la ciencia médica en México, los cambios en las conductas sociales frente a una enfermedad muy contagiosa y la forma en que un fenómeno de esta naturaleza es tratado por la prensa nacional

Abstract:

The so-called "Spanish flu" appeared suddenly in North America in 1918, spread throughout the world and caused around 30 million deaths. In Mexico, its outbreak caused various complications amid an already difficult panorama, since the country was in the last stage of the revolutionary movement. The measures adopted by the government and the way in which the press reported the news of the contagions and deaths caused by the disease, as well as the impact caused in society, show the progress of medical science in Mexico, the changes in social behavior in the face of a highly contagious disease, and the way in which a phenomenon of this nature is treated by the national press

Abstract:

This article aims to analyze how freedom of the press was understood shortly before the mid-nineteenth century in Mexico, as well as the elements that shaped its exercise. In the autumn of 1840, a pamphlet was published in the capital proposing that monarchy be considered as a form of government in order to overcome the country's problems. The rejection was widespread, and a judicial process was initiated against the author and the printer of the text on the grounds that its content was subversive. The uproar it caused gave rise to a discussion of the importance of printing houses in the life of the country, and to accusations that the government had attacked freedom of the press by imprisoning the pamphlet's printer. Drawing mainly on periodical sources and legal decrees, the reactions to and consequences of the publication are analyzed in order to show the participation of the press in the political arena, the strategies used by the government to regulate it, the way the relevant laws were invoked, interpreted and applied, and the reciprocal influence between the activities of the printing houses and the political panorama. The article contributes elements that strengthen the interpretation of the press (publications, printers, writers) as one more actor, at times a protagonist, in nineteenth-century Mexican politics, and of freedom of the press as a right that was to be closely regulated

Resumen:

Señalar que el Segundo Imperio mexicano respondió a un proyecto meramente conservador es un lugar común de la visión de la historiografía tradicional sobre la política decimonónica en México: una lucha encarnizada entre el liberalismo y el conservadurismo. Con el fin de contribuir al renovado debate historiográfico, en este artículo se da cuenta del camino recorrido por el monarquismo mexicano durante el siglo XIX, las actividades de quienes trabajaron a su favor y sus resultados

Abstract:

A commonplace of traditional historiography's view of nineteenth-century Mexican politics, understood as a fierce fight between liberalism and conservatism, is the claim that the Second Mexican Empire was a strictly conservative endeavor. In order to contribute to the renewed historiographical debate, this article traces the path followed by Mexican monarchism during the nineteenth century, the activities of those who worked in its favor, and their results

Abstract:

Brains are composed of connected neurons that compute by transmitting signals. The neurons are generally fixed in space, but the communication patterns that enable information processing change rapidly. By contrast, other biological systems, such as ant colonies, bacterial colonies, slime moulds and immune systems, process information using agents that communicate locally while moving through physical space. We refer to systems in which agents are strongly connected and immobile as solid, and to systems in which agents are not hardwired to each other and can move freely as liquid. We ask how collective computation depends on agent movement. A liquid cellular automaton (LCA) demonstrates the effect of movement and communication locality on consensus problems. A simple mathematical model predicts how these properties of the LCA affect how quickly information propagates through the system. While solid brains allow complex network structures to move information over long distances, mobility provides an alternative way for agents to transport information when long-range connectivity is expensive or infeasible. Our results show how simple mobile agents solve global information processing tasks more effectively than similar systems that are stationary
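
A minimal sketch of a "liquid" consensus system in this spirit, with mobile agents adopting the local majority opinion; the random-walk movement, communication radius, and update rule are assumptions of this sketch, not the paper's LCA:

import numpy as np

# Hedged sketch of mobile-agent consensus: agents on a torus adopt the
# majority opinion among neighbors within a communication radius, then move.
rng = np.random.default_rng(1)
n, L, r, steps = 200, 50.0, 3.0, 400
pos = rng.uniform(0, L, size=(n, 2))
opinion = (rng.random(n) < 0.55).astype(int)   # slight initial majority

for step in range(steps):
    pos = (pos + rng.normal(0, 1.0, size=(n, 2))) % L      # random movement
    d = pos[:, None, :] - pos[None, :, :]
    d = np.minimum(np.abs(d), L - np.abs(d))               # torus distances
    near = (d ** 2).sum(-1) < r ** 2                       # includes self
    votes = near @ opinion                                 # local 1-votes
    opinion = (votes * 2 > near.sum(1)).astype(int)        # local majority
    if opinion.min() == opinion.max():
        break
print("opinions after", step + 1, "steps:", np.bincount(opinion, minlength=2))

Raising the movement scale or the radius speeds information transport, which is the qualitative effect the abstract's mathematical model quantifies.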

Resumen:

En este ensayo se analiza el papel de la reminiscencia en la obra de Marcel Proust, especialmente en En busca del tiempo perdido y su relación con la construcción de la identidad de los protagonistas de sus novelas. La memoria permite acceder al pasado individual y los recuerdos se vuelven una forma peculiar de autoconocimiento. Proust muestra cómo es posible reconstruir la historia personal por medio de la memoria y de la búsqueda del tiempo perdido

Abstract:

In this article, we analyze the role of reminiscence in Marcel Proust's work, especially In Search of Lost Time, and its relationship with the construction of his protagonists' identities. Memory provides access to the personal past, and memories thus become a peculiar form of self-knowledge. Proust demonstrates how it is possible to reconstruct one's personal history through memory and the search for lost time

Resumen:

El autor examina las características de la Generación del 98 por medio de una selección de textos de dos de sus representantes más significativos: Miguel de Unamuno y Pío Baroja. A partir de un estudio comparativo, analiza los temas principales de sus novelas y las preocupaciones que los guían, poniendo especial énfasis en sus semejanzas y diferencias. El hilo conductor de este análisis crítico consiste en rastrear la influencia del existencialismo en sus novelas. Éste se refleja en la obra de ambos escritores mediante una estética trágica y sombría, y el monólogo y la introspección como recursos estilísticos; el protagonista se presenta como forjador de su destino, además de mostrarse en una lucha constante con el resto de los personajes. La trama de sus novelas suele guiarse por una serie de implicaciones éticas, además de hacer reflexionar al lector sobre el lugar de la trascendencia y el sentido de la muerte

Abstract:

In this article, the characteristics of the Generation of '98 are analyzed through selected works of two of its most significant writers, Miguel de Unamuno and Pío Baroja. A comparative study of the main ideas and concerns in those works is carried out, with special emphasis on their similarities and differences. The common thread of this critical analysis is to trace the influence of existentialism in their novels. Examples of this influence are seen in their use of a dark and tragic aesthetic; monologue and introspection as stylistic tools; and the protagonist as forger of his own destiny, in continuous struggle with the other characters. The plots of these novels address ethical issues and lead the reader to contemplate the role of transcendence and the meaning of death

Abstract:

A characteristic-functional approach is introduced to study the space and time stochastic fluctuations of the electron population in a simple one-dimensional model of ionization growth. Two electron sources are considered: (a) ionization by direct collisions and (b) photoemission at the cathode due to de-excitation of atoms. The motion of the ions is neglected and the electrons are assumed to move with constant drift velocity. An equation is obtained for the characteristic functional $G[\theta(x),t]=\big\langle \exp\big[i\int_{-L}^{0}\theta(x)\,n(x,t)\,dx\big]\big\rangle$, where $n(x,t)$ is the electron density and $\theta(x)$ is a conjugate function; from this, equations for the moments, e.g., the average density and the density-density correlation function, can easily be derived. As with the method of compounding moments, this technique avoids the use of a probability in function space; it has the added benefit that the motion in configuration space may be incorporated self-consistently, whether the motion is free streaming or diffusion. Numerical examples are used to illustrate the time behavior of the mean total population in the gap, which shows good agreement with previous results; in addition, we analyze the time evolution of the associated electron mean density, the density-density correlation function, and the fluctuations around the mean population
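
For reference, moments follow from a characteristic functional of this form by functional differentiation at $\theta = 0$; this is a standard identity rather than a result specific to the paper:

\[
\langle n(x,t)\rangle = \frac{1}{i}\,\frac{\delta G[\theta,t]}{\delta\theta(x)}\bigg|_{\theta=0},
\qquad
\langle n(x,t)\,n(x',t)\rangle = \frac{1}{i^{2}}\,\frac{\delta^{2} G[\theta,t]}{\delta\theta(x)\,\delta\theta(x')}\bigg|_{\theta=0},
\]

so the average density and the density-density correlation function quoted above are the first two members of this hierarchy.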

Abstract:

Computational fluid dynamics was used to analyze the mixing operation within a stirred batch reactor whose purpose is to distribute, rapidly and homogeneously, the reagents used in the refining of liquid lead. The flow pattern and the distribution of the reagents inside the reactor were analyzed through tracer response curves obtained by numerical simulation. The predominant mechanism of momentum and mass transfer for macro-mixing is convection, for both the mean and eddy flows. Based on the assumption that the tracer is distributed in the vessel by convection and diffusion, the dynamic distribution of the tracer concentration inside the stirred batch reactor was calculated by solving the Reynolds-averaged conservation equations with the Realizable K-ε turbulence model. The mean and tracer flow was considered incompressible, isothermal and single-phase under turbulent conditions. To optimize the injection point of the reagents in the stirred batch reactor, several simulated tracer concentration curves were obtained from monitoring points located at different radial and axial positions.
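
For orientation, the Reynolds-averaged transport equation for a passive tracer concentration solved in this kind of simulation has the generic form below; the gradient-diffusion closure with a turbulent Schmidt number $Sc_t$ is a standard modeling assumption of this sketch, not a detail quoted from the paper:

\[
\frac{\partial \bar{C}}{\partial t} + \nabla\cdot\big(\bar{\mathbf{u}}\,\bar{C}\big)
= \nabla\cdot\!\left[\left(D + \frac{\nu_t}{Sc_t}\right)\nabla\bar{C}\right],
\]

where $D$ is the molecular diffusivity of the tracer and $\nu_t$ is the eddy viscosity supplied by the turbulence model (here the Realizable K-ε model).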

Abstract:

This paper presents a modified state observer for a nonlinear system. A recently presented parameter estimation-based observer for such systems uses the Dynamic Regressor Extension and Mixing (DREM) procedure and requires a non-square-integrability property of the regressor to guarantee that the observer error converges to zero. However, this property is not satisfied in some operating modes, for example when the object is stabilized at a constant position. As an application, a model of a one-degree-of-freedom magnetic levitation system is considered. Only the electromagnet coil current and voltage are measured, and the flux, speed and position values are reconstructed with observers. The proposed finite-time modification of the flux observer requires the weaker interval excitation condition, which is fulfilled during the transient process. Simulation results demonstrate the efficiency of the algorithm
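
A minimal sketch of the DREM step itself for a two-parameter linear regression, assuming a delay-based regressor extension and a plain gradient update; the signals, gains, and delays are illustrative, not the observer construction of the paper:

import numpy as np

# Hedged sketch of DREM for y(k) = phi(k)^T theta, theta in R^2:
# extend the regression with a delayed copy, then mix with the adjugate so
# each parameter obeys its own scalar regression Yi = Delta * theta_i.
theta = np.array([1.5, -0.7])            # unknown parameters (ground truth)
dly, steps, gamma, dt = 25, 4000, 20.0, 1e-2

phi = lambda k: np.array([np.sin(0.05 * k), 1.0])
theta_hat = np.zeros(2)
for k in range(dly, steps):
    Phi = np.vstack([phi(k), phi(k - dly)])     # regressor extension
    Y = Phi @ theta                             # measured outputs y(k), y(k-dly)
    adj = np.array([[Phi[1, 1], -Phi[0, 1]],
                    [-Phi[1, 0], Phi[0, 0]]])   # adjugate of the 2x2 extension
    Delta = np.linalg.det(Phi)                  # mixing determinant
    Yi = adj @ Y                                # scalar regressions Yi = Delta*theta_i
    theta_hat += dt * gamma * Delta * (Yi - Delta * theta_hat)  # decoupled update
print(theta_hat)                                # approaches [1.5, -0.7]

The point of the mixing step is that each parameter obeys its own scalar regression, so convergence depends only on the excitation of the scalar signal Delta, which is what the interval excitation condition in the abstract relaxes.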

Abstract:

Public spending, because it is influenced by elected officials, can be swayed by political considerations as well as socio-economic factors. Previous studies have confirmed the importance of political measures in the allocation of general public spending as well as in spending for infrastructure and highway projects at the state level. This study examines the determinants of the allocation of state highway funds to counties within one state, North Carolina, during the period 1990-2005, and assesses the relative importance of socio-economic and political factors in these decisions. Determinants were derived from three conceptual approaches to public spending: the median voter model, the special interest model, and the political model. Persistence was found in both highway construction spending and highway maintenance spending. In addition, employment market conditions were a strong determinant of highway construction spending, as was one political factor, the county's relative vote in the most recent election for the state's dominant political party. Highway maintenance spending was found to be dominated by median voter factors, with no finding of political influence

Abstract:

An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations and a trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization are always tried first, but if they are deemed ineffective, a trust region iteration that guarantees progress toward stationarity is invoked. To demonstrate its effectiveness, the algorithm is implemented in the KNITRO [6, 28] software package and is extensively tested on a wide selection of test problems
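
For orientation, the primal-dual equations that such a method factorizes arise from the logarithmic barrier subproblem; in generic notation (assumed here, not quoted from the paper),

\[
\min_{x,\;s>0}\; f(x) - \mu\sum_i \ln s_i
\quad\text{s.t.}\quad c_E(x) = 0,\;\; c_I(x) - s = 0,
\]

with perturbed KKT conditions

\[
\nabla f(x) - A_E(x)^{\top}y - A_I(x)^{\top}z = 0,\qquad
Sz = \mu e,\qquad c_E(x) = 0,\qquad c_I(x) = s.
\]

The line search mode applies Newton steps to this system via direct factorization, while the trust region mode makes progress with a conjugate gradient iteration when those steps prove ineffective.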

Abstract:

In this paper we propose a new parameter estimator that ensures global exponential convergence of linear regression models requiring only the necessary assumption of identifiability of the regression equation, which we show is equivalent to interval excitation of the regressor vector. An extension to separable and monotonic nonlinear parameterisations is also given. The estimators are shown to be robust to additive measurement noise and to (not necessarily slow) parameter variations. Moreover, a version of the estimator that is robust with respect to sinusoidal disturbances with unknown internal model is given. Simulation results illustrating the performance of the estimator, compared with other algorithms, are given

Abstract:

In this paper we are interested in the problem of state observation of state-affine nonlinear systems. It is well known that observability of the system is a necessary assumption for state reconstruction. Our main contribution is to prove that it is also sufficient to design a globally exponentially stable observer. This should be contrasted with existing results that require the strictly stronger assumption of uniform complete observability of the system. To the best of the authors' knowledge, this is the first time such a result has been reported in the literature
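
For context, the state-affine class and the generic observer structure it admits can be written as follows; the gain $K(t)$ is left unspecified, and this display fixes background notation rather than reproducing the paper's construction:

\[
\dot{x} = A(u,y)\,x + b(u,y),\qquad y = Cx,
\qquad\qquad
\dot{\hat{x}} = A(u,y)\,\hat{x} + b(u,y) + K(t)\big(y - C\hat{x}\big).
\]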

Abstract:

This work characterizes the dispersion of some popular random probability measures, including the bootstrap, the Bayesian bootstrap, and the Pólya tree prior. This dispersion is measured in terms of the variation of the Kullback–Leibler divergence of a random draw from the process to that of its baseline centring measure. By providing a quantitative expression of this dispersion around the baseline distribution, our work provides insight for comparing different parameterizations of the models and for the setting of prior parameters in applied Bayesian settings. This highlights some limitations of the existing canonical choice of parameter settings in the Pólya tree process
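
A minimal sketch of the kind of dispersion being characterized, for the Bayesian bootstrap case: a random draw places Dirichlet(1, ..., 1) weights w on the n observed atoms, and its Kullback-Leibler divergence to the uniform empirical baseline has the closed form sum_i w_i log(n w_i). The sample size and number of draws below are arbitrary choices:

import numpy as np

# Hedged sketch: spread of KL divergences from Bayesian-bootstrap draws to
# their empirical baseline, both supported on the same n atoms.
rng = np.random.default_rng(0)
n, draws = 100, 5000
w = rng.dirichlet(np.ones(n), size=draws)                    # bootstrap weights
kl = (w * np.log(np.clip(n * w, 1e-300, None))).sum(axis=1)  # clip guards underflow
print(f"mean KL = {kl.mean():.4f}, sd = {kl.std():.4f}")

Quantities like this mean and spread are exactly what allow different parameterizations (bootstrap, Bayesian bootstrap, Pólya tree) to be put on a common scale.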

Abstract:

We describe a spatial cognition model based on the rat’s brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress

Abstract:

Traditionally, modeling of neurobiological systems has involved development of computer-based simulations. As opposed to physical experimentation, simulations tend to over-simplify environmental conditions. Yet, in many cases such environmental conditions are critical to experiment outcome. In the case of animal behavior, simulation-only arenas can serve as a preliminary platform for model experimentation. Realistic physical environments are required for final evaluation of model correctness. In this paper we present our work with physical robots as testbed for animal behavior experimentation under realistic environmental conditions

Abstract:

The objective of this paper and our current research is to develop a human-robot interaction architecture that will let human coaches train robots to play soccer via spoken language. This work exploits recent developments in cognitive science, particularly notions of grammatical constructions as form-meaning mappings in language, and notions of shared intentions as distributed plans for interaction and collaboration between humans and robots, linking perceptions to action responses. We define two sets of voice-driven commands for human-robot interaction. The first set involves action commands requiring robots to perform certain behaviors, while the second set involves interrogation commands requiring a response from the robot. We then define two training levels to teach robots new forms of soccer-related behaviors. The first level involves teaching new basic behaviors based on action and interrogation commands. The second level involves training new complex behaviors based on previously learnt behaviors. We explore the two coaching approaches using Sony AIBO robots in the context of the RoboCup soccer standard platform league, previously known as the four-legged league. We describe the coaching process, experiments, and results. We also discuss the state of advancement of this work

Abstract:

The paper presents a biologically inspired multi-level neural-schema architecture for prey catching and predator avoidance in single and multiple autonomous robotic systems. The architecture is inspired by anuran (frog and toad) neuroethological studies and by wolf-pack group behaviors. The single-robot architecture exploits visuomotor coordination models developed to explain anuran behavior in the presence of prey and predators. The multiple-robot architecture extends the individual prey catching and predator avoidance model to experiment with group behavior. The robotic modeling architecture distinguishes between higher-level schemas representing behavior and lower-level neural structures representing brain regions. We present results from single and multiple robot experiments developed using the NSL/ASL/MIRO system and Sony AIBO ERS-210 robots

Abstract:

Biology has been an important source of inspiration in building adaptive autonomous robotic systems. Due to the inherent complexity of these models, most biologically-inspired robotic systems tend to be ethological without linkage to underlying neural circuitry. Yet, neural mechanisms are crucial in modelling adaptation and learning. The work presented in this paper describes a schema and neural network multi-level modelling approach to biologically inspired autonomous robotic systems. A prey acquisition model with detour behaviour in frogs is presented to exemplify the modelling approach. The model is tested with simulated and physical robots using the ASL/NSL and MIRO robotic system

Abstract:

The objective of the current research is to develop a generalized approach for human-robot interaction via spoken language that exploits recent developments in cognitive science, particularly notions of grammatical constructions as form-meaning mappings in language, and notions of shared intentions as distributed plans for interaction and collaboration. We demonstrate this approach by distinguishing among three levels of human-robot interaction. The first level is that of commanding or directing the behavior of the robot. The second level is that of interrogating or requesting an explanation from the robot. The third and most advanced level is that of teaching the robot a new form of behavior. Within this context, we exploit social interaction by structuring communication around shared intentions that guide the interactions between human and robot. We explore these aspects of communication on distinct robotic platforms, the Event Perceiver and the Sony AIBO robot, in the context of the four-legged RoboCup soccer league. We provide a discussion of the state of advancement of this work

Abstract:

An alternative to traditional simulation of biological behavior is to investigate problems in neuroethology, the study of the neural basis of behavior, by developing embedded physical robot models. While a number of neuroethological robot models have been developed, these tend to be quite expensive in terms of computational needs. Two different approaches have been taken to neuroethological robot design: (1) fully local computation in the robotic system, and (2) distributing processing between the robot and a remote computer system. While the first approach simplifies the overall robotic architecture, it usually involves specialized and expensive hardware. The second approach relies on smaller and less expensive components, with robots used as remote sensorimotor devices. In this paper we present the MIRO (Mobile Internet Robots) architecture, which distributes processing between the robot and the NSL/ASL (Neural Simulation Language / Abstract Schema Language) neural simulation system. The distributed architecture enables a single computational system to run both simulated and real-time robot experiments, with the ability to monitor robot performance directly from the Internet. We discuss some of the issues that have arisen in using the MIRO architecture while experimenting with a toad's prey acquisition and predator avoidance neuroethological model. We conclude with future work in neuroethological modeling and the MIRO architecture

Abstract:

Nature has always been a source of inspiration in the development of robotic systems. As such, the study of animal behavior (ethology) and the study of the underlying neural structures responsible for behavior (neuroethology) have inspired many robotic designs. In general, neuroethologically based systems tend to be more complex than ethological ones and are thus more expensive to compute, a problem common to both simulation and robotic experimentation. To overcome this problem, it is necessary either to incorporate very powerful hardware or, particularly in the case of mobile robots, to embed the robot via wireless communication in a remote distributed computational system where expensive computation can take place. While the first approach simplifies the overall robotic architecture, it results in bulky and expensive robots. The second approach results in smaller and less expensive robots, although it involves more complex architectures. The work presented in this paper pursues the second approach, that of embedding mobile robots in distributed computational systems. We describe our current work in conducting neuroethological robotic experimentation using the MIRO (Mobile Internet Robotics) system linked to the NSL/ASL neural simulation system. To optimize overall system performance, communication between the robot and the computing system is managed by an Adaptive Robotic Middleware (ARM)

Abstract:

Through experimentation and simulation scientists are able to get an understanding of the underlying biological mechanisms involved in living organisms. These mechanisms, both structural and behavioral, serve as inspiration in the modeling of neural based architectures as well as in the implementation of robotic systems. Among these, we are particularly motivated in studying animals such as toads, frogs, salamanders and praying mantis that rely on visuomotor coordination. In order to deal with the underlying complexity of these systems, we have developed the NSL/ASL simulation system to enable modeling and simulation at different levels of granularity

Abstract:

Neural-based systems are quite common in technological applications. In autonomous robot agents, for example, designs range from those eliciting simple behaviors to those intended to imitate nature as closely as possible, both in behavior and in neural structure. As the level of complexity increases, computational models become more sophisticated, involving multi-level approaches that enable top-down and bottom-up designs. The ASL/NSL architecture was designed with this purpose in mind. At the highest level, animal-like behaviors such as prey acquisition and predator avoidance are decomposed into simpler lower-level behaviors such as moving forward or orienting. At the structural level, depending on available data, behavior is mapped onto specialized neural modules or, when such data are unavailable, implemented through other AI techniques. In this paper we describe the basic ASL/NSL computational model supporting the integration of neural network implementations as part of more complex AI systems.
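
A minimal sketch of the top-down decomposition described above, with a high-level behavior blending two lower-level motor schemas; the class names and weighting rule are illustrative, not the ASL/NSL API:

import math

# Hedged sketch: a high-level schema (prey acquisition) decomposes into
# lower-level motor schemas whose outputs are blended by perceptual state.
class Orient:
    def act(self, prey_angle):
        return {"turn": prey_angle, "speed": 0.0}   # rotate toward the prey

class MoveForward:
    def act(self, prey_angle):
        return {"turn": 0.0, "speed": 1.0}          # advance in a straight line

class PreyAcquisition:
    def __init__(self):
        self.orient, self.forward = Orient(), MoveForward()
    def act(self, prey_angle):
        # weight orienting vs. advancing by how far off-axis the prey is
        w = min(1.0, abs(prey_angle) / math.pi)
        o, f = self.orient.act(prey_angle), self.forward.act(prey_angle)
        return {k: w * o[k] + (1 - w) * f[k] for k in o}

print(PreyAcquisition().act(prey_angle=0.6))        # mostly forward, slight turn

In the full architecture a schema like Orient would be backed by a neural module when the biological data support one, which is the mapping the structural level provides.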

Abstract:

The study of biological systems has inspired the development of a large number of neural network architectures and robotic implementations. Through both experimentation and simulation, biological systems provide a means to understand the underlying mechanisms in living organisms while inspiring the development of robotic applications. Experimentation, in the form of data gathering (ethological, physiological and anatomical), provides the underlying data for simulation, generating predictions to be validated by theoretical models. These models provide an understanding of the underlying neural dynamics and serve as the basis for simulation and robotic experimentation. Due to the inherent complexity of these systems, a multi-level analysis approach is required, in which biological, theoretical and robotic systems are studied at different levels of granularity. The work presented here overviews our existing modeling approach and describes current simulation results

Abstract:

As neural systems become large and complex, sophisticated tools are needed to support effective model development and efficient simulation processing. Initially, during model development, rich graphical interfaces linked to powerful programming languages and component libraries are the primary requirement. Later, during model simulation, processing efficiency is the primary concern. Workstations and personal computers are quite effective during model development, while parallel and distributed computation become necessary during simulation processing. We give first an overview of modeling and simulation in NSL together with a depth perception model example. We then discuss current and future work with the NSL/ASL system in the development and simulation of modular neural systems executed in a single computer or distributed computer network

Abstract:

NSL, Neural Simulation Language, is a general purpose simulation system providing a high-level language with many constructs and libraries developed to ease the specification of large neural networks. NSL integrates object-oriented programming methodologies in its design and implementation, providing a simulation environment for users with little programming background, as well as those with more extensive programming expertise, who can use C++ as an extension to NSL's modeling language. NSL is widely used in research and teaching, having led to many different neural network models, in both the artificial and biological domains. NSL enables the simulation of models with different levels of neural detail, with special support for the leaky integrator

Abstract:

Rough sleeping is a chronic experience faced by some of the most disadvantaged people in modern society. This paper describes work carried out in partnership with Homeless Link (HL), a UK-based charity, in developing a data-driven approach to better connect people sleeping rough on the streets with outreach service providers. HL's platform has grown exponentially in recent years, leading to thousands of alerts per day during extreme weather events; this overwhelms the volunteer-based system they currently rely upon to process alerts. To solve this problem, we propose a human-centered machine learning system to augment the volunteers' efforts by prioritizing alerts based on the likelihood of making a successful connection with a rough sleeper. This addresses capacity and resource limitations while allowing HL to quickly, effectively, and equitably process all of the alerts they receive. Initial evaluation using historical data shows that our approach increases the rate at which rough sleepers are found following a referral by at least 15% based on labeled data, implying a greater overall increase when alerts with unknown outcomes are considered, and suggesting the benefit of a trial taking place over a longer period to assess the models in practice. The discussion and modeling process are conducted with careful consideration of ethics, transparency and explainability, given the sensitive nature of the data involved and the vulnerability of the people affected
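
A minimal sketch of the prioritization step described above, assuming a plain logistic regression over historical alerts with known outcomes; the features, data, and model choice are illustrative placeholders, not HL's system:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: train on historical alerts with known outcomes, then rank
# incoming alerts by predicted probability of a successful connection so
# limited outreach capacity goes to the most promising alerts first.
rng = np.random.default_rng(0)
n, d = 5000, 8
X = rng.normal(size=(n, d))                      # alert features (synthetic)
y = (X @ rng.normal(size=d) + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
scores = model.predict_proba(X[4000:])[:, 1]     # P(successful connection)
priority = np.argsort(-scores)                   # process highest-scoring first
print("top-5 alert indices:", priority[:5])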

Abstract:

Bike sharing systems have become an efficient transportation mode in many cities around the world. Bike riding has multiple benefits for users and the city; however, the availability of resources (bikes and parking docks) is critically important to foster its use. In this paper, we analyze the demand of a large-scale bike sharing system and study the initial inventory of bikes through a discrete-event simulation model. In particular, we analyze the impact of the fill levels of the stations at the beginning of the day on the service level received by users of the system during the morning rush hours. We define the service level in three different ways: the average waiting time for a resource, the average fraction of users who can find a resource immediately upon arrival at a station, and the average fraction of time that the stations of the system are full or empty. The results show that policies based on demand patterns may work efficiently while reducing the number of bikes

Resumen:

Los sistemas de bicicletas compartidas se han convertido en un modo de transporte muy popular en grandes urbes, lo cual permite que los usuarios incrementen su actividad física y se reduzcan los efectos negativos del tránsito de vehículos automotor. Sin embargo, para incentivar su uso se requiere otorgar un alto nivel de servicio a los usuarios. En este artículo se modela un sistema de bicicletas compartidas con simulación de eventos discretos para estudiar el impacto del inventario inicial en el nivel de servicio recibido por los usuarios de la hora pico matutina. El nivel de servicio se mide de tres formas: tiempo promedio de espera de los usuarios para tomar/dejar una bicicleta, fracción promedio de usuarios cuya demanda se satisface inmediatamente después de llegar a una estación y fracción del tiempo que las estaciones del sistema se encuentran completamente llenas/vacías. Los resultados demuestran el conflicto existente entre colocar pocas bicicletas y asignar muchas bicicletas a las estaciones. Sin embargo, clasificar a las estaciones en perfiles de acuerdo con su demanda puede servir para definir una política de inventario estandarizada que optimice las tres medidas de desempeño simultáneamente

Abstract:

Bike sharing systems have become an efficient transportation mode in large cities around the world. These systems not only allow users to increase their physical activity, but also contribute to reducing the negative effects caused by motor vehicle traffic. However, fostering bicycle usage requires delivering a high service level to users. In this paper, we model a bike sharing system through discrete-event simulation to study the impact of initial inventory policies on the service level during the morning rush hours. The service level is measured in three different ways: the average time to take/return a bicycle, the average fraction of users whose demand is satisfied immediately upon arrival at a station, and the average fraction of time that stations of the system are completely full or empty. The results show the trade-off between allocating few bicycles and allocating many bicycles to the stations. However, classifying the stations into profiles according to their demand may help define a standardized inventory policy that optimizes the three performance measures simultaneously
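
A minimal sketch of the simulation logic for a single station, assuming Poisson renter and returner arrivals with illustrative rates, capacity, and horizon; the model in the paper covers an entire system of stations, so this is only the core bookkeeping:

import heapq, random

# Hedged sketch: one station with finite docks; track the fraction of users
# served immediately and the fraction of time spent empty or full.
random.seed(0)
CAP, bikes = 20, 10                     # docks and initial fill level (assumed)
RENT_RATE, RETURN_RATE = 1.0, 0.9       # arrivals per minute (assumed)
HORIZON = 180.0                         # minutes of morning rush (assumed)

events = [(random.expovariate(RENT_RATE), "rent"),
          (random.expovariate(RETURN_RATE), "ret")]
heapq.heapify(events)
served = total = 0
empty_time = full_time = last_t = 0.0

while True:
    t, kind = heapq.heappop(events)
    t_cap = min(t, HORIZON)
    if bikes == 0:
        empty_time += t_cap - last_t    # station empty since last event
    if bikes == CAP:
        full_time += t_cap - last_t     # station full since last event
    last_t = t_cap
    if t > HORIZON:
        break
    total += 1
    if kind == "rent":
        if bikes > 0:
            bikes -= 1; served += 1     # renter finds a bike immediately
        heapq.heappush(events, (t + random.expovariate(RENT_RATE), "rent"))
    else:
        if bikes < CAP:
            bikes += 1; served += 1     # returner finds a free dock
        heapq.heappush(events, (t + random.expovariate(RETURN_RATE), "ret"))

print(f"served immediately: {served / total:.2%}, "
      f"empty: {empty_time / HORIZON:.2%}, full: {full_time / HORIZON:.2%}")

Sweeping the initial fill level and re-running yields the kind of trade-off curve between empty-station and full-station events that motivates demand-profile-based inventory policies.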

Abstract:

This paper considers challenges resulting from the use of advanced artificial judicial intelligence (AAJI). We argue that these challenges should be considered through the lens of value alignment. Instead of discussing why specific goals and values, such as fairness and nondiscrimination, ought to be implemented, we consider the question of how AAJI can be aligned with goals and values more generally, in order to be reliably integrated into legal and judicial systems. This value alignment framing draws on AI safety and alignment literature to introduce two otherwise neglected considerations for AAJI safety: specification and assurance. We outline diverse research directions and suggest the adoption of assurance and specification mechanisms as the use of AI in the judiciary progresses. While we focus on specification and assurance to illustrate the value of the AI safety and alignment literature, we encourage researchers in law and philosophy to consider what other lessons may be drawn

Abstract:

The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks to liberal democratic values posed by implementing AI in judicial decision-making. This article sets out to fill this void by identifying and engaging with challenges arising from artificial judicial decision-making, focusing on three pillars of liberal democracy, namely equal treatment of citizens, transparency, and judicial independence. Methodologically, the work takes a comparative perspective between human and artificial decision-making, using the former as a normative benchmark to evaluate the latter. The article first argues that AI that would improve on equal treatment of citizens has already been developed, but not yet adopted. Second, while the lack of transparency in AI decision-making poses severe risks which ought to be addressed, AI can also increase the transparency of options and trade-offs that policy makers face when considering the consequences of artificial judicial decision-making

Abstract:

This article analyzes the value of behavioral economics for EU judicial decision-making. The first part introduces the foundations of behavioral economics by focusing on cognitive illusions, prospect theory, and the underlying distinction between different processes of thought. The second part examines the influence of selected biases and heuristics, namely the anchoring effect, availability bias, zero-risk bias, and hindsight bias, on diverse legal issues in EU law including, among others, the scope of the fundamental freedoms, the proportionality test, and the roles of the Advocate General and Reporting Judge. The article outlines how behavioral economic findings can be taken into account to improve judicial decision-making. Accordingly, the adaptation of judicial training concerning cognitive illusions, the establishment of a de minimis rule regarding the scope of the fundamental freedoms, and the use of economic models when determining the impact of certain measures on fundamental freedoms are suggested. Finally, an "unbiased jury" concentrating exclusively on specific factual issues such as causal connections within the proportionality test is necessary if hindsight bias is to be avoided. While it is of great importance to take behavioral economic findings into account, judicial decision-making is unlikely to become flawless based on natural intelligence. Despite bearing fundamental risks, artificial intelligence may provide means to achieve greater fairness, consistency, and legal certainty in the future

Abstract:

Purpose- The goal of the present study was to explore the potential impact of within-team value diversity with respect to both team processes and task performance. Design/Methodology/Approach- We explored value diversity within a comprehensive framework such that all components of basic human values were examined. A sample of 306 participants, randomly assigned to 60 teams, performed a complex hands-on task demanding high interdependence among team members, and completed different measures of values and team processes. Findings- Results indicated that value diversity among team members had no significant impact on task performance. However, diversity with respect to several value dimensions had a significant unique effect on team process criteria. Results were consistent with respect to the nature of the impact of value diversity on team process outcomes. Specifically, the impact of team value diversity was such that less diversity was positively related to process outcomes (i.e., more similarity resulted in more team cohesion and efficacy and less conflict). Implications- The results indicate that disparity among teammates in many of these values may have important implications for subsequent team-level phenomena. We suggest team leaders and facilitators of team-building efforts consider adding to their agendas a session with team members to analyze and discuss the combined value profiles of their team. Originality/Value- This is the first study to highlight the unique impact of many unexamined, specific components of team diversity with respect to values on team effectiveness criteria

Abstract:

The authors examined the measurement equivalence of the Multidimensional Work Ethic Profile (MWEP) across the diverse cultures of Korea, Mexico, and the United States. Korean- and Spanish-language versions of the MWEP were developed and evaluated relative to the original English version of the measure. Confirmatory factor analytic results indicated measurement invariance across samples drawn from each country. Further analyses indicated potential substantive differences for some of the seven subscales of the MWEP across samples. The implications of these findings and directions for future research are presented

Abstract:

Probabilistic roadmap methods (PRMs) have been highly successful in solving many high degree of freedom motion planning problems arising in diverse application domains such as traditional robotics, computer-aided design, and computational biology and chemistry. One important practical issue with PRMs is that they do not provide an automated mechanism to determine how large a roadmap is needed for a given problem. Instead, users typically determine this by trial and error and as a consequence often construct larger roadmaps than are needed. In this paper, we propose a new PRM-based framework called Incremental Map Generation (IMG) to address this problem. Our strategy is to break the map generation into several processes, each of which generates samples and connections, and to continue adding the next increment of samples and connections to the evolving roadmap until it stops improving. In particular, the process continues until a set of evaluation criteria determine that the planning strategy is no longer effective at improving the roadmap. We propose some general evaluation criteria and show how to apply them to construct different types of roadmaps, e.g., roadmaps that coarsely or more finely map the space. In addition, we show how IMG can be integrated with previously proposed adaptive strategies for selecting sampling methods. We provide results illustrating the power of IMG
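
A minimal sketch of the incremental loop, using the drop in the number of roadmap connected components as an illustrative evaluation criterion; the obstacle-free unit square, connection radius, and batch size are assumptions of this sketch, not details of IMG:

import numpy as np

# Hedged sketch: grow a 2-D roadmap in batches; stop when the evaluation
# criterion (component-count improvement) says it has stopped improving.
rng = np.random.default_rng(2)
R, BATCH, TOL = 0.15, 50, 1          # connection radius, increment size, threshold

parent = {}
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a
def union(a, b):
    parent[find(a)] = find(b)

nodes = np.empty((0, 2))
prev_components = None
while True:
    batch = rng.uniform(0, 1, size=(BATCH, 2))      # new milestones
    start = len(nodes)
    nodes = np.vstack([nodes, batch])
    for i in range(start, len(nodes)):
        parent[i] = i
        # connect the new milestone to all existing nodes within radius R
        d2 = ((nodes[:i] - nodes[i]) ** 2).sum(axis=1)
        for j in np.nonzero(d2 < R * R)[0]:
            union(i, int(j))
    components = len({find(i) for i in range(len(nodes))})
    if prev_components is not None and prev_components - components < TOL:
        break                                        # roadmap stopped improving
    prev_components = components
print(len(nodes), "nodes,", components, "components")

A real planner would also check edge feasibility against obstacles and could swap in other criteria (coverage, query success rate) at the same point in the loop.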

Resumen:

Este artículo expone los hallazgos de la investigación etnográfica realizada en torno a la conciliación en el derecho laboral individual, la cual, por medio de la reforma constitucional laboral del 2017, y su legislación secundaria en el 2019, fue puesta en el centro de la justicia laboral. Se analiza, con base en datos empíricos, cómo opera esta figura y se cuestiona por las posibles consecuencias que su fortalecimiento tendrá en la justicia laboral mexicana. La metodología utilizada se basa en la observación participante realizada en las juntas de conciliación y arbitraje, en un despacho jurídico, y en la Procuraduría de la Defensa del Trabajo en la Ciudad de México. La propuesta teórica plantea que la conciliación es una figura que otorga flexibilidad a los procesos laborales individuales. Es a través de esta práctica que lo que se establece y sucede en la Junta de Conciliación y Arbitraje y lo que acontece paralelo a este proceso, fuera de la vista de la ley, se reconcilian. Esto sugiere que la conciliación favorece la flexibilización del proceso laboral, pero también opera en detrimento de una justicia efectiva para el trabajador y contribuye a la precarización de las relaciones laborales mexicanas

Abstract:

This article discusses my ethnographic findings on conciliation processes in individual labor cases in Mexico City. The constitutional reform (2017) and its implementing legislation (2019) place the conciliation process at the center of labor justice. This text focuses on analyzing empirical data on how this figure operates and asks what the possible consequences of strengthening it may be. The principal research methodology used was participant observation carried out at the labor jurisdictional authority (Junta de Conciliación y Arbitraje), a legal office, and the public defender's office for labor law in Mexico City. The theoretical proposition of this article is that conciliation is a procedure that gives flexibility to labor cases in their judicial process. It is through its practice that everything that happens in the case before the labor jurisdictional authority (Junta de Conciliación y Arbitraje) and what happens outside of this process, far away from the eyes of the law, merge. This article concludes that conciliation favors a flexibilization of the labor judicial process, but it also works against effective justice for the working class, as it contributes to the precariousness of labor relations in Mexico

Résumé:

Cet article expose les découvertes de l’investigation ethnographique réalisée autour de la conciliation du droit du travail individuel, lequel, à travers la réforme constitutionnelle du travail du 2017, et sa législation secondaire en 2019, a été mise au centre de la justice du travail. Analysé, sur les bases des concepts empiriques, comment cette figure opère et se questionne sur les possibles conséquences de son renforcement qu’elle aura dans la justice du travail mexicaine. La méthodologie utilisée se base dans l’observation participante réalisé dans les conseils de conciliation et d’arbitrage, dans un cabinet juridique, et dans la Direction Générale de la Défense du Travail dans la Ville de Mexico. La proposition théorique propose que la conciliation est une figure qui donne de la flexibilité aux procès du travail individuels. C’est à travers cette pratique, dans la quelle ce qui s’établit et advient dans le conseil de conciliation et arbitrage et ce qui se passe parallèlement dans ce procès, hors de la vue de la loi, se réconcilient. Ceci suggère que la conciliation est favorable à la flexibilité du procès du travail, mais aussi elle opère en détriment d’une justice effective pour le travailleur et contribue à la précarisation des relations du travail mexicaines

Abstract:

We propose a class of Bayesian cure rate models by incorporating a baseline density function as well as multiplicative and additive covariate structures. Our model naturally accommodates zero and non-zero cure rates, which provides an objective way to examine the existence of a survival fraction in the failure time data. An inherent parameter constraint needs to be incorporated into the model formulation due to the additive covariates. Within the Bayesian paradigm, we take a Markov gamma process prior to model the baseline hazard rate, and mixture prior distributions for the parameters in the additive component of the model. We implement a Markov chain Monte Carlo computational scheme to sample from the full conditional distributions of the posterior. We conduct simulation studies to assess the estimation and inference properties of the proposed model, and illustrate it with data from a bone marrow transplant study
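
For orientation, the simplest way a cure fraction enters a survival model is the classical mixture form below; the paper's formulation, with a baseline density and multiplicative and additive covariate structures, is more general, so this display illustrates the idea rather than the model itself:

\[
S_{\mathrm{pop}}(t \mid x) = \pi(x) + \big(1-\pi(x)\big)\,S(t \mid x),
\]

so that $S_{\mathrm{pop}}(t \mid x)$ tends to $\pi(x)$ as $t$ grows: a strictly positive limit signals a surviving (cured) fraction, while $\pi(x)=0$ recovers an ordinary proper survival model.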

Abstract:

We study the stability of the optimal stopping problem for a discrete-time Markov process on a general state space X. Revenue and cost functions are allowed to be unbounded. The stability (robustness) is understood in the sense that an unknown transition probability p(·|x), x ∈ X, is approximated by a known one Ṕ(·|x), x ∈ X, and the stopping rule Ť*, optimal for the process governed by Ṕ, is applied to the original process governed by p. The optimization criterion for the stopping rule is the total expected return. We give an upper bound for the decrease in return due to the replacement of the unknown optimal stopping rule T* by its approximation Ť*. The bound is expressed in terms of the weighted total variation distance between the transition probabilities p and Ṕ
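As an illustration of the stability question, here is a minimal finite-state sketch; the three-state chain, rewards, discounting, and the perturbed transition matrix standing in for the approximation are all assumptions made for the example, a toy stand-in for the general state space setting.

```python
# A toy sketch: the stopping rule optimal for an approximate model is applied
# to the true chain, and the resulting loss is compared with the (unweighted)
# total variation distance between the transition matrices.
import numpy as np

def optimal_stopping(P, g, c, beta=0.95, iters=2000):
    """Value iteration for V = max(g, c + beta * P V); returns V and stop set."""
    V = np.zeros(len(g))
    for _ in range(iters):
        V = np.maximum(g, c + beta * P @ V)
    return V, g >= c + beta * P @ V              # stop where stopping is optimal

def return_of_rule(P, g, c, stop, beta=0.95, iters=2000):
    """Expected return of a fixed stopping rule under transition matrix P."""
    V = np.zeros(len(g))
    for _ in range(iters):
        V = np.where(stop, g, c + beta * P @ V)
    return V

P_true = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]])
P_approx = np.array([[0.55, 0.35, 0.1], [0.25, 0.45, 0.3], [0.1, 0.25, 0.65]])
g = np.array([1.0, 0.0, 2.0])                    # stopping reward
c = np.array([0.1, 0.1, 0.1])                    # running reward

V_opt, _ = optimal_stopping(P_true, g, c)
_, stop_hat = optimal_stopping(P_approx, g, c)   # rule optimal for the model
loss = V_opt - return_of_rule(P_true, g, c, stop_hat)
tv = 0.5 * np.abs(P_true - P_approx).sum(axis=1).max()
print("return loss per state:", loss, "; TV distance:", tv)
```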

Abstract:

This article discusses the fundamental changes that have occurred during the past decade in institutions central to Mexico's constitutional order. The demise of a single-party democracy not only created a new political order; it generated, as well, fundamental changes in Mexican constitutionalism, with formal constitutional and legal reforms playing an important but secondary role in revising Mexico's constitutional structure. The authoritarian presidencialismo that dominated Mexico's political culture throughout much of the twentieth century has been replaced by a disempowered presidency and a divided Congress, with a revamped Mexican Supreme Court—long a minor factor in Mexican constitutional politics—assuming a key role in the development of the law. In discussing these changes, the article focuses on three primary areas of the new constitutionalism: separation of powers and the new role of the Mexican Congress; the new role of the Mexican Supreme Court as arbiter between Congress and the presidency; and changes in Mexican federalism. The political instability of multiparty politics in Mexico will place further strains on Mexican constitutionalism in the future and will require careful responses from those institutions—especially the Supreme Court—that oversee the development of law in Mexico

Abstract:

This article presents a Monte Carlo-based mechanism for systematically curbing the risk borne by lenders when extending loans for the purchase of mortgage-backed securities (MBS) or collateralized debt obligations (CDOs) on leverage as investment vehicles. On the basis of the contention that the credit crisis of 2008 was chiefly derived from perverse incentives created by the enormous borrowing appetite of hedge funds and other market participants and the fallacious appearance of credit default swaps, the authors propose a more rigorous implementation of well-known mechanisms for systematically quantifying risky debt interest rates, along with a government policy to require capital reserves and regulate maximum leverage, which would have quelled the real estate bubble that burst in 2008, still unfolding today. A thrust towards the potential systematic implementation of the suggested debt-risk-curbing mechanisms as a precondition to originating loans by the banking community, thus constituting potentially enforceable policy matter for urgent prudential regulation of the financial markets in the future, hints at the new light under which banks should consider dynamically managing their capital reserve structures. This could contribute to the safe continued growth of the CDO and MBS derivatives markets, while shifting the focus of value creation for bank stockholders to asset growth without undue systematic risk. The authors also recommend that academia consider emphasizing the teaching of risk management and the pricing of risky debt in MBA curricula, given the pivotal role MBA graduates play in providing intellectually intensive labor to banks and other financial institutions
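A minimal sketch of the kind of Monte Carlo computation involved, as my illustration rather than the authors' mechanism: the lognormal collateral model, the break-even calculation under the real-world measure (a heuristic, not risk-neutral pricing), and all parameter values are assumptions.

```python
# Toy Monte Carlo: find the rate a lender must charge on a loan backed by
# leveraged, volatile collateral so that the expected payoff matches lending
# the same amount risk-free. Entirely illustrative parameter choices.
import numpy as np

rng = np.random.default_rng(0)

def risky_loan_rate(V0, debt_face, mu, sigma, T, r_free, n_paths=200_000):
    z = rng.standard_normal(n_paths)
    VT = V0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

    def expected_payoff(rate):
        # Lender receives min(collateral, promised repayment) at maturity.
        promised = debt_face * np.exp(rate * T)
        return np.mean(np.minimum(VT, promised))

    target = debt_face * np.exp(r_free * T)      # break-even vs. risk-free
    lo, hi = r_free, r_free + 0.50               # bisection on the loan rate
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if expected_payoff(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

# 10x leverage: $100 of collateral financed with $90 of debt.
print("risky rate: %.2f%%" % (100 * risky_loan_rate(100, 90, 0.05, 0.25, 1.0, 0.03)))
```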

Abstract:

This paper examines the influence and direction of social and economic determinants of the HIV/AIDS global epidemic across nations and assesses each country's efficiency in battling the pandemic. The initial dataset consisted of 151 countries with five dependent variables and 90 explanatory variables (reduced to 50 after extensive exploratory data analysis of missing-value patterns, undesirable multicollinearities and multivariate outliers). Five measures were analyzed, namely HIV/AIDS cases per 100,000 population; number and percentage of adults aged 15-49 living with the HIV/AIDS virus; estimated number of AIDS-related deaths for adults and children; and percentage of male sexually transmitted disease patients diagnosed with HIV/AIDS (identified by canonical correlation as not amenable to significant predictions). Reasonably good-fit regression models (adjusted R² of 70-90%) were developed, with all coefficients significant (p < 0.10), that could explain each of the four specific AIDS measures and assess the validity of nine literature-based hypotheses. The major conclusion of this research is that countries with lower population density that manage to provide better health system performance, per capita support (doctors, nurses and hospital beds) and better media information (radio, phone and TV access), and not necessarily higher GNP, are more likely to exhibit lower HIV/AIDS indicators. Interestingly, the "spoilers" of the widely anticipated negative relationship between HIV/AIDS prevalence and wealth of countries are the healthier of the wealthiest and the wealthier of the sickest

Abstract:

Consider a model with two logarithmic-utility-maximizer agents that observe aggregate consumption but disagree about its expected rate of change. Markets are incomplete. We first show that the volatility of the interest rate in an economy with additional information is higher. In a complete-markets economy (with no additional information) there are two differences in volatility relative to the previous one. The first, due to the increase in the volatility of the individuals' wealth share, is always positive. The second difference is the change in the covariance between the components of the interest rate volatility and can be positive or negative

Abstract:

Prior evidence suggests that board independence may enhance financial performance, but this relationship has been tested almost exclusively for Anglo-American countries. To explore the boundary conditions of this prominent governance mechanism, we examine the impact of the formal and informal institutions of 18 national business systems on the board independence-financial performance relationship. Our results show that while the direct effect of independence is weak, national-level institutions significantly moderate the independence-performance relationship. Our findings suggest that the efficacy of board structures is likely to be contingent on the specific national context, but the type of legal system is insignificant

Abstract:

In the present paper, we establish geometric properties, such as starlikeness and convexity of order α (0 < α < 1) and close-to-convexity in the open unit disk U := {z ∈ ℂ : |z| < 1}, for a combination of a normalized form of the generalized Struve function of order p, w_{p,b,c}(z), defined by D_{p,b,c}(z) = 2^p √π Γ(p + b/2 + 1) z^{(−p+1)/2} d_{p,b,c}(√z), where d_{p,b,c}(z) := −p w_{p,b,c}(z) + z w′_{p,b,c}(z), with p, b, c ∈ ℂ and κ := p + b/2 + 1 ∉ {0, −1, −2, …}. We determine conditions on the parameters c and κ under which f ∈ R(β) = {f ∈ A(U) : Re f′(z) > β, z ∈ U}, 0 < β < 1, implies that the convolution product D_{p,b,c} ∗ f belongs to the spaces H(U) and R(γ), with γ depending on α and β, where A(U) denotes the class of all normalized analytic functions in U and H(U) is the space of all bounded analytic functions in A(U). We also obtain sufficient conditions, in terms of the expansion coefficients, for f ∈ A(U) to belong to some subclasses of the class of univalent functions. The motivation comes from the vital role of special functions in geometric function theory

Abstract:

In this paper we obtain subordination, superordination, and sandwich-type results related to a certain family of integral operators defined on the space of meromorphic functions in the open unit disk. An application of the subordination and superordination theorems to the Gauss hypergeometric function is also considered, and the main new results generalize some previously known sandwich-type theorems

Abstract:

Using basic results of third-order differential subordination, we introduce certain classes of admissible functions and investigate some applications of third-order differential subordination for p-valent functions associated with a generalized fractional differintegral operator

Abstract:

This paper shows a hypercomplex function theory emerging in the representation of paravector-valued monogenic functions over the (m + 1)-dimensional Euclidean space through a basic set (or basis) of hypercomplex monogenic polynomials. We derive the properties of the arising hypercomplex Cannon function and present an extension of the well-known Whittaker-Cannon theorem to special monogenic functions defined in an open hyperball in ℝ^(m+1). More precisely, we determine what conditions should be imposed on a basic set of special monogenic polynomials to attain the effectiveness property in an open hyperball, employing Hadamard's three-hyperballs theorem. We also provide a necessary and sufficient condition for a special monogenic Cannon series to represent every function near the origin that is special monogenic there. Additionally, we investigate the effectiveness of a non-Cannon basis and show that the underlying hypercomplex Cannon function maintains similar properties in both cases, the Cannon basis and the non-Cannon basis

Abstract:

In many virtual environment (VE) applications, the VE system must be able to display accurate models of human figures that can perform routine behaviors and adapt to events in the virtual world. In order to achieve such adaptive, task-level interaction with virtual actors, it is necessary to model elementary human motor skills. SkillBuilder is a software system for constructing a set of motor behaviors for a virtual actor by designing motor programs for arbitrarily complicated skills. Motor programs are modeled using finite state machines, and a set of state machine transition and ending conditions for modeling motor skills has been developed. Using inverse kinematics and automatic collision avoidance, SkillBuilder was used to construct a suite of behaviors for simulating visually guided reaching, grasping, and head-eye tracking motions for a kinematically simulated actor consisting of articulated, rigid body parts. All of these actions have been successfully demonstrated in real time by permitting the user to interact with the virtual environment using a whole-hand input device
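A minimal sketch of the finite-state-machine idea described above; the states, the transition and ending conditions, and the toy "reach" skill are illustrative assumptions, not SkillBuilder's actual interface.

```python
# A motor program as a finite state machine with transition conditions and
# an ending condition, in the spirit of the description above (illustrative).
class MotorProgram:
    def __init__(self, start, transitions, ending):
        self.state = start
        self.transitions = transitions   # {state: [(condition, next_state)]}
        self.ending = ending             # condition that terminates the skill

    def step(self, world):
        """Advance one simulation tick; return False once the skill ends."""
        if self.ending(world):
            return False
        for condition, nxt in self.transitions.get(self.state, []):
            if condition(world):
                self.state = nxt
                break
        return True

# A toy "reach" skill: move the hand toward a target, then grasp it.
reach = MotorProgram(
    start="move",
    transitions={"move": [(lambda w: w["dist"] < 0.05, "grasp")]},
    ending=lambda w: w["grasped"],
)
world = {"dist": 0.3, "grasped": False}
while reach.step(world):
    if reach.state == "move":
        world["dist"] = max(0.0, world["dist"] - 0.1)   # hand approaches target
    elif reach.state == "grasp":
        world["grasped"] = True                         # close the hand
print("final state:", reach.state)                      # -> grasp
```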

Resumen:

World War I transformed every aspect of human life. The economic model of the nineteenth century was disrupted; a profound social change propelled the full participation of women in society; the political map of Europe was altered; a new cultural stage began in the West; and technological advances upended everyday life forever. To rethink and understand these upheavals is to grasp the importance of an event that remade the world

Abstract:

World War I transformed all aspects of humankind. It disturbed the nineteenth-century economic model; a profound social change fostered women's participation and engagement; the political map of Europe changed definitively; a new cultural movement began; and technological developments upended everydayness. To rethink and comprehend these ups and downs is to fully understand the importance of an event that reconstructed the world

Resumen:

The Political Constitution of the United Mexican States, promulgated in February 1917, is grounded in the various episodes of Mexican history throughout the nineteenth century. Constitutional Article 27 is no exception. Its wording and structure were the response to interventionist experiences, at every scale, since the beginnings of independent Mexico. That article defined the way Mexico inserted itself into the international arena according to diplomatic principles defined as "national sovereignty" and "non-intervention"

Abstract:

Mexico's Constitution, enacted in February 1917, is profoundly based on the country's historical experience throughout the nineteenth century. Article 27 is no exception. Its structure and drafting were the answer to a series of foreign interventions in diverse realms since the Independence of Mexico. This article constituted the way in which the country would introduce itself into the international arena under very clear and defined principles, such as "non-intervention" and "national sovereignty"

Abstract:

The automatic monitoring/tracking of environmental boundaries by multi-agent systems is a fundamental problem with many practical applications. In this paper, we address this problem with formation control techniques based on parametric curves that represent the boundary's shape, obtained from feedback. For that, we approximate the curve with a truncated Fourier series, whose finitely many coefficients are utilized to characterize the curve's shape and to automatically distribute the agents along it. These feedback Fourier coefficients are exploited to design a new type of formation controller that drives the agents to form desired curves. A detailed stability analysis is provided for the proposed control methodology, considering both fixed and switching multi-agent topologies. The reported numerical simulation and experimental studies demonstrate the performance and feasibility of our new method to track closed boundaries of different shapes
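A minimal sketch of the Fourier-description step, under assumed details and not the paper's controller: sample a closed boundary as a complex curve, keep a truncated set of Fourier coefficients, and evaluate the truncated series at evenly spaced parameters to obtain desired agent positions.

```python
# Truncated Fourier description of a closed curve z(s) = x(s) + i*y(s) and
# agent placement along it (illustrative assumptions throughout).
import numpy as np

def fourier_coefficients(points, n_harmonics):
    """Keep only the lowest positive and negative harmonics of the curve."""
    coeffs = np.fft.fft(points) / len(points)
    keep = np.zeros_like(coeffs)
    keep[:n_harmonics + 1] = coeffs[:n_harmonics + 1]   # positive harmonics
    keep[-n_harmonics:] = coeffs[-n_harmonics:]         # negative harmonics
    return keep

def place_agents(coeffs, n_agents, n_samples):
    """Evaluate the truncated series at n_agents evenly spaced parameters."""
    k = np.fft.fftfreq(n_samples, d=1.0 / n_samples)    # harmonic indices
    s = np.linspace(0.0, 1.0, n_agents, endpoint=False)
    return np.array([np.sum(coeffs * np.exp(2j * np.pi * k * si)) for si in s])

# A slightly deformed circle as the environmental boundary to track.
t = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
boundary = (1.0 + 0.2 * np.cos(3 * t)) * np.exp(1j * t)
coeffs = fourier_coefficients(boundary, n_harmonics=8)
agents = place_agents(coeffs, n_agents=12, n_samples=256)
print(np.round(agents[:4], 3))   # desired agent positions along the curve
```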

Abstract:

Visibility integrity (VI) is a measurement of similarity between the visibilities of regions. It can be used to approximate the visibility of coherently moving targets, called group visibility. It has been shown that computing visibility integrity using agglomerative clustering takes O(n⁴ log n) for n samples. Here, we present a method that speeds up the computation of visibility integrity and reduces the time complexity from O(n⁴ log n) to O(n²). Based on the idea of visibility integrity, we show for the first time that the visibility-integrity roadmap (VIR), a data structure that partitions a space into zones, can be calculated efficiently in 3-D. More specifically, we offer two different offline approaches, a naive one and a kernel-based one, to compute a VIR. In addition, we demonstrate how to apply a VIR to solve group visibility and group following problems in 3-D. We propose a planning algorithm for the camera to maintain visibility of group targets by querying the VIR. We evaluate our approach in different 3-D simulation environments and compare it to other planning methods
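A sketch of the flavor of an O(n²)-pair computation, with loudly assumed details: each sample's visibility is summarized as a boolean vector over a fixed set of landmarks, and Jaccard similarity stands in for the paper's visibility integrity measure; the greedy zone grouping is likewise only illustrative.

```python
# Pairwise visibility similarity over n samples, then greedy zone grouping.
# The Jaccard stand-in and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_landmarks = 200, 64
vis = rng.random((n_samples, n_landmarks)) < 0.3   # vis[i, j]: sample i sees landmark j

inter = vis.astype(int) @ vis.T.astype(int)        # pairwise intersection sizes
sizes = vis.sum(axis=1)
union = sizes[:, None] + sizes[None, :] - inter
similarity = inter / np.maximum(union, 1)          # Jaccard over all O(n^2) pairs

# Zones: greedily group samples whose similarity to a zone seed is high.
threshold, zones, unassigned = 0.5, [], set(range(n_samples))
while unassigned:
    seed = unassigned.pop()
    zone = {seed} | {j for j in unassigned if similarity[seed, j] >= threshold}
    unassigned -= zone
    zones.append(zone)
print(f"{len(zones)} zones from {n_samples} samples")
```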

Resumen:

This text addresses the topic of currency as an instrument of power and domination from the point of view of the social doctrine of the Catholic Church. Although we cannot find in it an explicit teaching on currency as an instrument of domination, it is possible to deduce some ethical guidelines on the use of currency

Abstract:

In this text I address the issue of the currency as an instrument of power and domination from the point of view of the Catholic social teaching. Although we cannot find in it an explicit teaching on the currency as an instrument of domination, it is possible to deduce from the Catholic social teaching some ethical guidelines on the use of currency

Abstract:

A Markov description of the electron population in a model of electrical discharge between parallel plates in the presence of an external illumination source is developed. Electron production is assumed to be due to ionizing collisions in the gas, as well as photoelectric emission at the cathode; the electrons are assumed to move with a constant drift velocity towards the anode, where they are lost. The Markov description is based on a discretized distribution of electrons within the gap, from where macroscopic equations for the mean electron density and the density-density correlation function are obtained in the limit to the continuum. The results show the existence of stochastically stable solutions only when stationary discharges are obtained by means of a nonvanishing external illumination. In addition, the variance-to-mean ratio in the steady state shows a discontinuity when the conditions of the discharge are those for which the breakdown Townsend criterion is satisfied. Numerical examples are used to illustrate the results
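A toy stochastic sketch inspired by this description; the simplified birth/injection/loss structure and all rates are my assumptions, a crude analogue rather than the paper's model. Each electron ionizes at rate alpha, external illumination injects photoelectrons at rate s, and electrons are lost (drift to the anode) at rate gamma; alpha approaching gamma plays the role of the breakdown condition.

```python
# Gillespie simulation of a birth/immigration/loss electron count process;
# the variance-to-mean ratio grows as alpha approaches gamma (breakdown).
import numpy as np

rng = np.random.default_rng(2)

def simulate(alpha, gamma, s, t_end=200.0):
    """Return electron counts sampled after each event, transient discarded."""
    n, t, counts = 0, 0.0, []
    while t < t_end:
        rates = np.array([alpha * n, gamma * n, s])   # birth, loss, injection
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        n += (1, -1, 1)[event]
        counts.append(n)
    return np.array(counts[len(counts) // 2:])

for alpha in (0.2, 0.8, 0.95):
    c = simulate(alpha=alpha, gamma=1.0, s=5.0)
    print(f"alpha={alpha}: mean={c.mean():.1f}, variance/mean={c.var()/c.mean():.2f}")
```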

Abstract:

We present a full review of PID passivity-based controllers (PBC) applied to power electronic converters, discussing limitations, unprecedented merits and potential improvements in terms of large-signal stability, robustness and performance. We provide four main contributions. First, the nominal case is considered, and it is shown, under the assumption of perfect knowledge of the system parameters, that the PID-PBC is able to guarantee global exponential stability of a desired operating point for any positive gains. Second, we analyze robustness of the controller to parameter uncertainty for a specific class of power converters, by establishing precise stability margins. Third, we propose a modification of the controller by introducing a leakage, in order to overcome some of the intrinsic performance and robustness limitations. Interestingly, such a controller can be interpreted at steady state as a droop between the input and the passive output, similar to traditional primary controllers. Fourth, we robustify the design against saturation of the control input via an appropriate monotone transformation of the controller. The obtained results are thoroughly discussed and validated by simulations on two relevant power applications: a DC/DC boost converter and an HVDC grid-connected voltage source converter

Abstract:

In this paper we show that the PI passivity-based control (PBC) for power converters developed in [1] can be properly modified in order to guarantee global asymptotic stability of the closed-loop system even in case of inaccurate knowledge of the equilibrium to be stabilized. The proposed modification consists of the inclusion of a leakage in the integral action that, viewed in suitable incremental coordinates, acts as a damping term that helps to compensate the error induced by the imprecise knowledge of the equilibrium. Interestingly, with an appropriate choice of the parameters, the controller can be interpreted as a modulation index/power droop controller. The theoretical results are validated by simulations on a DC/DC boost converter
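A minimal numerical sketch of the leakage idea common to the two abstracts above: a first-order toy plant stands in for the converter dynamics, and the gains, the plant, and the wrongly assumed equilibrium input are illustrative assumptions, not the PI-PBC design of the papers.

```python
# Leaky PI control of a stable first-order plant x' = -a*x + b*u. The leaky
# integrator xi' = y - leak*xi acts as a damping term that keeps the integral
# state bounded when the assumed equilibrium input u_eq is imprecise.
import numpy as np

a, b = 2.0, 1.0
x_ref = 1.0
u_eq_wrong = 1.5                   # imprecise equilibrium input (true value: 2.0)
kp, ki, leak, dt = 2.0, 4.0, 0.3, 1e-3

x, xi = 0.0, 0.0
for _ in range(int(20.0 / dt)):
    y = x - x_ref                  # regulation error
    u = u_eq_wrong - kp * y - ki * xi
    xi += dt * (y - leak * xi)     # leakage damps the integral action
    x += dt * (-a * x + b * u)
print(f"x settles at {x:.3f} (reference {x_ref})")
# With leak > 0 the error induced by the wrong u_eq is attenuated but not
# fully removed; leak -> 0 recovers the pure integrator and zero offset.
```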

Resumen:

The chapter describes and explains the differences in the motivations, forms, and effects of the deregulating, privatizing, and liberalizing reforms of the electricity sector in industrialized and developing countries. In particular, it analyzes the Latin American experience through the cases of Chile, Argentina, Brazil, and Colombia. From these cases, it proposes a series of conditions or characteristics that seem to be present in most developing countries and that influence the performance of network infrastructures, as in the case of the electricity industry. On this basis, the document argues for the need to develop ad hoc literature for developing countries, especially for the Latin American electricity sector, that addresses, discusses, and proposes solutions to the region's own problems. Poor infrastructure has high costs and major repercussions for the governments and societies of any country. The weak performance of infrastructure in developing countries has entailed serious leakages of resources, a brake on growth, and an obstacle to poverty reduction

Abstract:

This chapter analyzes and describes the differences in motivations, forms and effects that were present in the privatizing, liberalizing and deregulating reforms of the electric sector, both in developed and developing countries. In particular, the Latin American experience is analyzed in the specific cases of Chile, Argentina, Brazil and Colombia. From these cases, a series of characteristics are noted that seem to be present in most developing countries and which have an influence on the performance of the electric sector. Based on the above, this chapter proposes the need to develop ad hoc analytic literature for developing countries, especially for the Latin American electric sector, which discusses and offers solutions to problems specific to the region. An inappropriate infrastructure has high costs and important consequences for the governments and societies of any country. The weak performance of infrastructure in developing countries has caused a drain on resources and has been an obstacle to growth as well as to poverty reduction

Abstract:

Purpose. The paper enriches the understanding of the principal challenges faced in future lawyers' education in Mexico considering global trends, particularly from the perspective of skills creation in diverse areas of legal practice - Design/methodology/approach. The framework used draws on trends identified within an international collaborative research study in which both authors participated, titled "Developing a Blueprint for Global Legal Education". This current paper stems from the premise that these recommendations can be further developed and better utilised if explored within a specific context. The methodology designed for this research consisted of two main components: a thorough analysis of the norms that regulate the education system and the professional practice in Mexico, and an extensive literature review that provided insights into the state of global trends in legal education - Findings. This paper reveals that in Mexico having a well-designed and comprehensive legal framework is the first step to promote the creation of high-quality educational models - Practical implications. The study analyses the current situation in Mexico within four global trends: (1) regulation of legal education and access to the profession; (2) building professional practice skills; (3) internationalisation of education and (4) incorporation of technology and responsible innovation - Originality/value. The reflections are intended to promote better training of law students in the skills required to face the various challenges that the legal profession currently involves. This is under an approach that analyses global challenges and identifies the best practices to connect learning processes with in-demand professional skills

Abstract:

One of the fundamental problems in communications is finding the energy distribution of signals in the time and frequency domains. It should therefore be of great interest to find the quaternionic signal whose time-frequency energy distribution is most concentrated in a given time-frequency domain. The present paper finds a new kind of quaternionic signal whose energy concentration is maximal in both time and frequency under the quaternionic Fourier transform. The new signals are a generalization of the classical prolate spheroidal wave functions to a quaternionic space and are called the quaternionic prolate spheroidal wave functions. The purpose of this paper is to present the definition and fundamental properties of the quaternionic prolate spheroidal wave functions and to show, both theoretically and experimentally, that they attain the extreme case of the energy concentration problem. The proposed results can be widely applied to 4D-valued problems. In particular, these functions are shown to be an effective method for the extrapolation of bandlimited quaternionic signals
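A sketch of the classical, real-valued analogue of the concentration property, not the quaternionic construction itself: SciPy's discrete prolate spheroidal (Slepian) sequences maximize the fraction of energy inside a given bandwidth, which is the concentration problem the paper generalizes; the length and time-bandwidth values are arbitrary choices.

```python
# Discrete prolate spheroidal sequences and their in-band energy ratios.
import numpy as np
from scipy.signal.windows import dpss

M, NW, K = 256, 4.0, 5                       # length, time-bandwidth, count
tapers, ratios = dpss(M, NW, Kmax=K, return_ratios=True)

# ratios[k] is the in-band energy fraction of the k-th sequence: the first
# few are extremely close to 1 (near-perfect concentration), then they fall.
for k, r in enumerate(ratios):
    print(f"sequence {k}: energy concentration = {r:.6f}")
```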

Resumen:

The innovative technology on which the bitcoin cryptocurrency operates, called blockchain, allows the exchange of value between individuals without the need for a trusted central authority. This article gives an explanation of how blockchain works in general

Abstract:

The innovative technology behind the operation of the Bitcoin cryptocurrency, called the blockchain, allows the exchange of value between peers without the need for a trusted central authority. In this paper, we provide an explanation of how the blockchain works
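A minimal sketch of the core mechanism, purely illustrative (real blockchains add consensus, digital signatures, and a peer-to-peer network): blocks chained by cryptographic hashes, so that altering any block invalidates every later one.

```python
# A toy hash-chained blockchain: tampering with history breaks validation.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1, "data": data,
                  "prev_hash": block_hash(prev)})

def valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [{"index": 0, "data": "genesis", "prev_hash": ""}]
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(valid(chain))                        # True
chain[1]["data"] = "Alice pays Bob 500"    # tamper with history
print(valid(chain))                        # False: the chain detects the change
```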

Abstract:

This article describes several modifications performed to the credit assignment mechanism of Goldberg's Simple Classifier System [4] and the results obtained when solving a problem that requires the formation of classifier chains. The first set of these modifications included changes to the formula used to compute the effective bid of a classifier by taking into consideration the reputation of the classifier and the maximum bid of the previous auction in which a classifier was active. Noise was made proportional to the strength of the classifier and specificity was incorporated as an additional term in the formula that is independent from the bid coefficient. A second set of changes was related to the manner in which classifiers belonging to a chain may receive a payoff or a penalty from the environment in addition to the payments obtained from succeeding classifiers. We also tested the effect that bridge classifiers [13] have in the solution of the example problem by allowing the creation of shorter chains. Some experiments in which classifiers were better informed gave better results than those in which only the noise and the specificity were included in the computation of the effective bid
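A sketch of an effective-bid computation with the kinds of terms described above; the exact formula and constants are illustrative assumptions, not Goldberg's original nor the authors' precise modification.

```python
# Effective bid with a reputation factor, a specificity term independent of
# the bid coefficient, an anchor to the previous auction's maximum bid, and
# noise proportional to the classifier's strength (all illustrative).
import random

def effective_bid(strength, specificity, reputation, prev_max_bid,
                  c_bid=0.1, c_spec=0.05, noise_scale=0.02):
    bid = c_bid * strength * reputation           # reputation scales the bid
    bid += c_spec * specificity * strength        # specificity, independent of c_bid
    bid = max(bid, 0.5 * prev_max_bid)            # anchor to the previous auction
    noise = random.gauss(0.0, noise_scale * strength)  # strength-proportional noise
    return bid + noise

random.seed(0)
print(effective_bid(strength=100.0, specificity=0.75,
                    reputation=0.9, prev_max_bid=12.0))
```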

Abstract:

The article describes the design and behavior of PLANRIK, a domain-independent hierarchical nonlinear planner that solves for interacting conjunctive goals in a simple, different and more efficient manner than other planners. PLANRIK is a polynomial algorithm that efficiently solves for conjunctive goals by obtaining totally ordered object plans for each object of the problem domain (i.e. a block in the blocks world, a ring in the Towers of Hanoi, etc.), and by merging these partial plans into a partially ordered global plan that becomes the solution to the problem. Each object plan is generated by ignoring all other objects and by using a network of required object states resulting from previous planning activities within the solution process
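A sketch of the merging step under assumed details: each object plan is taken as a total order of action names, shared actions become synchronization points, and the global plan is the partial order obtained by uniting the per-object orderings; this is my illustration, not PLANRIK's algorithm.

```python
# Merge totally ordered per-object plans into one partially ordered plan;
# any topological order of the merged graph is a valid global plan.
from graphlib import TopologicalSorter

object_plans = {                       # totally ordered plan per object
    "blockA": ["clear A", "pick A", "stack A on B"],
    "blockB": ["clear B", "pick A", "stack A on B"],  # shares two actions
}

graph = {}                             # action -> set of predecessor actions
for plan in object_plans.values():
    for earlier, later in zip(plan, plan[1:]):
        graph.setdefault(later, set()).add(earlier)
        graph.setdefault(earlier, set())

print(list(TopologicalSorter(graph).static_order()))
```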

Abstract:

This paper reviews the capabilities and application of the CONSTRUCTION PLANEX system. This knowledge-based expert system assists in the selection of construction technology, the definition of work tasks, the estimation of required resources and durations, the estimation of costs, and the preparation of a project schedule. The system has been applied in laboratory simulations in three ways: interactively as an intelligent assistant, independently as an automated planner, and as part of an integrated building design environment. CONSTRUCTION PLANEX can plan for the excavation and structural erection of concrete and steel buildings; example applications have ranged from two to twenty stories. Provisions are included in the system for expanding the knowledge base to consider other functional elements or facility types. Descriptions of the system architecture, knowledge representation scheme, control procedures, and applications are included

Abstract:

An experimental knowledge-based expert system to assist in traffic signal setting for isolated intersections is presented. In contrast to existing computer aids, the system can be applied to intersections of highly irregular geometries. Algorithmic processes to evaluate signal settings and decision tables to identify traffic flow conflicts are invoked by the expert system; phase distribution of flows is performed by applying heuristic rules. The system was written in the OPS5 expert system environment. Advantages and disadvantages of the expert system programming approach relative to conventional algorithmic processes in the traffic engineering domain are described
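A sketch of the decision-table idea; the movements and conflict pairs are illustrative assumptions for a simple four-arm intersection, not the system's knowledge base, and a greedy heuristic stands in for its rules.

```python
# A conflict table over traffic movements and a greedy heuristic that packs
# mutually non-conflicting movements into signal phases (illustrative).
movements = ["N-S", "S-N", "N-E", "E-W", "W-E", "E-S"]
conflicts = {                        # pairs that must not run simultaneously
    ("N-S", "E-W"), ("N-S", "W-E"), ("S-N", "E-W"), ("S-N", "W-E"),
    ("N-E", "E-W"), ("N-E", "S-N"), ("E-S", "N-S"), ("E-S", "W-E"),
}

def conflict(a, b):
    return (a, b) in conflicts or (b, a) in conflicts

phases = []                          # greedy phase packing
for m in movements:
    for phase in phases:
        if not any(conflict(m, other) for other in phase):
            phase.append(m)
            break
    else:
        phases.append([m])
print(phases)   # each phase is a set of mutually compatible movements
```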

Abstract:

In this paper I present a dynamic model that provides an explanation for why violent movements arise, why armed conflicts can persist over long periods, why guerrilla movements operate in rich places, and whether or not redistributive policies can eliminate the incentives for guerrilla movements. I analyze these questions using a model of competitive markets, three inputs, and two types of agents. Apart from the standard results, I find that: (1) the existence of guerrilla movements increases wages in the short run but reduces them in the long run; (2) if workers or guerrilla members benefit from learning by doing, the persistence of the conflict leads to an endogenous heterogeneity that increases the difficulty of eliminating the incentives for guerrilla activity; (3) once in place, guerrilla activity flows to the most productive economies together with labor and capital

Onomastic Index

Abdel Musik Asali, Guillermo:
2, 3, 4, 5, 6, 7, 8, 9, 10, 8229, 8242

Abreu Vera, Jogin Elizabeth:
Aburto, Claudia:
Acosta Mejía, César Alfonso Jaime:
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29

Acosta, Francisco:
Acosta, Mónica:
Adamuz Peña, María de las Mercedes:
30, 31, 32, 33, 34, 35, 36, 37, 2738, 3916, 7955

Adlemo, Anders:
38, 39, 40

Aguayo Mazzucato, Andrés:
Aguayo, Enrique:
Agudelo Botero, Marcela:
Aguilar Barroso, Claudia:
Aguilar Beltrán, Pedro:
Aguilar Esteva, Arturo Alberto:
53, 54, 55, 56

Aguilar Gómez, Sandra:
Aguilar Villegas, Juan Carlos:
61, 62, 63, 64, 65, 66, 67, 68, 69, 70

Aguilar, Arturo:
47, 48, 49

Aguilar, Eduardo:
Aguilar, Luis F.:
Aguilar-Reyes, Fernando:
Aguilera, Margarita:
Aguinaco Bravo, Fabián:
Aguirre Cristiani, Gabriela:
Aguirre Torres, Víctor Manuel Armando:
85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 1250, 1251, 3833, 3834, 4110, 4111, 4118, 4966, 7238, 8610, 9559, 9560

Ahumada, Alonso:
Alagón Cano, Javier:
Alagón, Jorge:
Alatorre, Antonio:
Alba Guerra, Enrique de:
Albarrán, Claudia:
Albizuri Romero, Miren Begoña:
Alcalá Canto, María Iztchel:
Alcántara-Cárdenas, J. A.:
Alcayaga, Mónica:
Alcérreca Joaquín, Carlos:
Aldana, Silvia:
Alegría Hernández, Alejandro:
Alfaro Altamirano, Adriana:
Alfaro Pastor, Javier:
Alfonso Sierra, Tatiana:
Algorri Guzmán, María Elena:
Alonso Gómez, Emma:
Alonso Ortiz, Jorge:
Alonso, F.:
Altamirano, Bernardo:
Altamirano-Astorga, Jorge:
Alterio, Ana Micaela:
Alvarado Cabrera, Gabriela:
Álvarez Alcalá, Alil:
Álvarez González de Castilla, Clara Luz:
Álvarez Kuri, Fernando:
Álvarez Macías, Diana Lucía:
Álvarez, Miguel:
Amador Arellano, Antonio:
Andere, Eduardo:
Andonova, Veneta:
Andrade Martínez, Virgilio:
Andrade Rodríguez de San Miguel, Horacio:
Ángel, Gustavo A. del:
Ania Briseño, Ignacio de Jesús:
Antonius González, Andrés:
Antún Callaba, Juan Pablo:
Anzarut, Michelle:
Aquino, José:
Aragón, Santiago:
Arango Panamá, Manuel:
Arbeleche, Santiago:
Arce Sánchez, Sara:
Arciniega Ruíz de Esparza, Luis Martín:
Ardavín, José Antonio:
Argüello, Silvia:
Arias Garrido, Pedro:
Arias Nava, Elías:
Arias, Luis G.:
Arizpe, Paula:
Armenta Bello, Gustavo:
Arranz, Conrado J.:
Arredondo, Mauricio:
Arroyo Moreno, Jesús Ángel:
Aspe Armella, Pedro:
Assad Atala, Rashide:
Astey V., Luis:
Astey, Gabriel:
Athié Rubio, Rosa María:
Atristain Suárez, María Concepción:
Auvinet, Caroline:
Ávila, Arturo:
Ayala, Jorge D.:
Azar Manzur, Cecilia:
Azuela, Antonio:
Bacaria, Jordi:
Bachi, Daniela:
Báez Puente, Víctor Alejandro:
Baez-Revueltas, F. Berenice:
Bahri, Amrita:
Bailey, John:
Baillères, Alberto:
Ballinas Valdés, Cristopher:
Balmaseda Pérez, Beatríz Irene:
Balmes, Talía:
Bandiera, Antonella:
Barba Fernández, Magdalena Sofía:
Barba Martín, José:
Barber González de la Vega, Enrique:
Barjau Delgado, Everest:
Barquet Climent, Farid:
Barragán, Julia:
Barreiro Castellanos, Leticia:
Barrera, Alejandra:
Barrero, José María:
Barrios Zamudio, Ernesto Juvenal:
Barrón Resendiz, Yulma:
Barrón, Luis F.:
Basáñez, Miguel:
Bassols Zaleta, Antonio:
Baz, Verónica:
Belausteguigoitia Rius, Imanol:
Belausteguigoitia Rius, Juan Carlos:
Beltrán y Puga, Alma Luz:
Benavides Hernández, Luis Ángel:
Bendesky, León:
Benedict, Rebecca:
Beneitez, Alfredo:
Bengochea, Abimael:
Benítez Manaut, Raúl:
Benítez Tiburcio, Alberto:
Benítez Tiburcio, Mariana:
Benito Alzaga, José Ramón:
Beristáin, Javier:
Berlanga Zubiaga, Ricardo:
Berlanga, Claudia:
Bermudez, Santiago:
Bernal Fontanals, Dolores:
Bernal Verea, Carlos Luis:
Bernal, Claudia:
Bettinger Barrios, Herbert:
Beuchot, Mauricio:
Birkenkötter, Hannah:
Bisson Waters, Philippe:
Blackmore, Hazel:
Blancas, Airam:
Blanco Pantoja, Lucía:
Blanco, Francisco:
Blanco, Víctor:
Bleier, Pedro:
Bonder, Irit:
Bonilla Castañeda, Javier:
Bonilla López, Miguel:
Bonnin, Fernanda:
Bono López, María:
Borda, Sandra:
Borden Eng, Rubén:
Bordier Morteo, Diana Patricia:
Borja Martínez, Francisco:
Bosch Giral, Carlos:
Boué, Michelle:
Brailovsky Signoret, Françoise D.:
Brehm, Luis Fernando:
Breña Medina, Víctor F.:
Brown-Gort, Allert:
Buen Rodríguez, Ricardo de:
Buendía, Jorge:
Bufalá Ferrer-Vidal, Pablo de:
Burgos-García, Jaime:
Buscaglia, Edgardo:
Bustos, Felipe A.:
Caballero Ochoa, José Luis:
Cabo Nodar, Marta:
Cabré, Lorena:
Cáceres Nieto, Enrique:
Callarisa, Felipe M.:
Caloca Ayala, Fernando:
Camacho, Catheryn:
Camacho, Dalia:
Camarena, Daniel:
Camarena, Rodrigo:
Campero, José:
Candela, Eduardo:
Candelas Ramírez, María:
Canizales, Antonio:
Cantú, Luis Fernando:
Cantú-Paz, Erick:
Caracciolo, Ricardo A.:
Carballo Arévalo, Luis Fernando:
Carbonell, Miguel:
Cárdenas Cordón, Alicia:
Cárdenas, Cristina:
Cárdenas, Enrique:
Cardona, Diego:
Carmona Rodríguez, Gisela:
Carrasco Zanini, Felipe:
Carrillo Flores, Bárbara:
Carstens, Catherine Mansell:
Cartas Ayala, Rodolfo:
Cartas-Ayala, Alejandro:
Casillas de Alba, Martín:
Casillas Olvera, Gabriel:
Casillas, Álvaro:
Casillas, Carlos E.:
Castañares, Jorge:
Castañeda Girón, Carlos:
Castañeda Iturbide, Jaime:
Castañeda, Pablo:
Castelazo, José R.:
Castillo Camacho, Francisco Javier:
Castillo Gómez, Rodrigo:
Castillo Múzquiz, Luis del:
Castillo Negrete, Miguel del:
Castillo, Esteban:
Castillo-Rodríguez, David E.:
Cazadero, Manuel:
Cea-Olvera, Fernando:
Cebollada Gay, Marta:
Cedillo, Rodolfo:
Cerda González, Luis Conrado:
Cerro, José A.:
Cervantes Andrade, Raúl:
Cervantes Pérez, Francisco:
Cervantes, Sergio:
Chabaud, Guadalupe:
Chacón, Rodrigo:
Chacón, Samuel:
Chakraborty, Indranil:
Chamarro, Emmanuel Ari:
Chaoul, Ma. Eugenia:
Chapa, Luz:
Charalambous, Nelia:
Charpenel Elorduy, Eduardo Oscar:
Charvel, Roberto:
Chatterji, Shurojit:
Chavarri Villarello, Alain:
Chávez-Ruiz, Javier:
Chiesa Moretti, Giulio:
Chiquito, Arturo:
Cisneros, Rafael:
Clares, Paulina:
Cobo, Fernanda:
Cochran Meléndez, Edgar:
Cocina Martínez, Javier:
Colla de Robertis, Esteban:
Concha, Hugo A.:
Contreras Vaca, Francisco José:
Contreras, Hugo:
Córdoba, José:
Córdova, Santiago:
Coronado, Fernando:
Cortés Campos, Josefina:
Cosío, Carlos:
Cotler, Pablo:
Courtis, Christian:
Coyote Estrada, Hugo César:
Crespo, José Antonio:
Cruz Alvarado, Yaneli:
Cruz Barney, Óscar:
Cruz Morales, Víctor:
Cruz Parcero, Juan Antonio:
Cruz, Luis C.:
Cuervo Escalona, Francisco Javier:
Cueto Legaspi, Roberto del:
Curioca Gálvez, Miriam Shalila:
Dalle Molle, John W.:
Dantés Chong, Melanie Chelsea:
Dávalos Mejía, Luis Carlos Felipe:
Dávila, Enrique:
Dei Vecchi, Diego:
Del Negro, Marco:
Detta Silveira, José Emiliano:
Dev, Pritha:
Deytha, Alan:
Díaz Bonnet, Ana María:
Díaz Calderón, Julio César:
Díaz Cayeros, Alberto:
Díaz de la Serna, Ignacio:
Díaz Domínguez, Alejandro:
Díaz Infante Chapa, Enrique:
Diaz Ochoa, Luis Javier:
Díaz Solís, Emilia del Carmen:
Díaz, Pólux:
Dieterlen, Paulette:
Díez, Antonio:
Dik, Evgeni:
Domenge Muñoz, Rogerio:
Domínguez Esponda, José Pantaleón:
Domínguez Larrea, Diego Alejandro:
Domínguez Skinfield, Guillermo:
Domínguez, Manuel A.:
Dondé Matute, Javier:
Dresser, Denise:
2131, 2132, 2133, 2134, 2135, 2136, 2137, 2138, 2139, 2140, 2141, 2142, 2143, 2144, 2145, 2146, 2147, 2148, 2149, 2150, 2151, 2152, 2153, 2154, 2155, 2156, 2157, 2158, 2159, 2160, 2161, 2162, 2163, 2164, 2165, 2166, 2167, 2168, 2169, 2170, 2171, 2172, 2173, 2174, 2175, 2176, 2177, 2178, 2179, 2180, 2181, 2182, 2183, 2184, 2185, 2186, 2187, 2188, 2189, 2190, 2191, 2192, 2193, 2194, 2195, 2196, 2197, 2198, 2199, 2200, 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2214, 2215, 2216, 2217, 2218, 2219, 2220, 2221, 2222, 2223, 2224, 2225, 2226, 2227, 2228, 2229, 2230, 2231, 2232, 2233, 2234, 2235, 2236, 2237, 2238, 2239, 2240, 2241, 2242, 2243, 2244, 2245, 2246, 2247, 2248, 2249, 2250, 2251, 2252, 2253, 2254, 2255, 2256, 2257, 2258, 2259, 2260, 2261, 2262, 2263, 2264, 2265, 2266, 2267, 2268, 2269, 2270, 2271, 2272, 2273, 2274, 2275, 2276, 2277, 2278, 2279, 2280, 2281, 2282, 2283, 2284, 2285, 2286, 2287, 2288, 2289, 2290, 2291, 2292, 2293, 2294, 2295, 2296, 2297, 2298, 2299, 2300, 2301, 2302, 2303, 2304, 2305, 2306, 2307, 2308, 2309, 2310, 2311, 2312, 2313, 2314, 2315, 2316, 2317, 2318, 2319, 2320, 2321, 2322, 2323, 2324, 2325, 2326, 2327, 2328, 2329, 2330, 2331, 2332, 2333, 2334, 2335, 2336, 2337, 2338, 2339, 2340, 2341, 2342, 2343, 2344, 2345, 2346, 2347, 2348, 2349, 2350, 2351, 2352, 2353, 2354, 2355, 2356, 2357, 2358, 2359, 2360, 2361, 2362, 2363, 2364, 2365, 2366, 2367, 2368, 2369, 2370, 2371, 2372, 2373, 2374, 2375, 2376, 2377, 2378, 2379, 2380, 2381, 2382, 2383, 2384, 2385, 2386, 2387, 2388, 2389, 2390, 2391, 2392, 2393, 2394, 2395, 2396, 2397, 2398, 2399, 2400, 2401, 2402, 2403, 2404, 2405, 2406, 2407, 2408, 2409, 2410, 2411, 2412, 2413, 2414, 2415, 2416, 2417, 2418, 2419, 2420, 2421, 2422, 2423, 2424, 2425, 2426, 2427, 2428, 2429, 2430, 2431, 2432, 2433, 2434, 2435, 2436, 2437, 2438, 2439, 2440, 2441, 2442, 2443, 2444, 2445, 2446, 2447, 2448, 2449, 2450, 2451, 2452, 2453, 2454, 2455, 2456, 2457, 2458, 2459, 2460, 2461, 2462, 2463, 2464, 2465, 2466, 2467, 2468, 2469, 2470, 2471, 2472, 2473, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481, 2482, 2483, 2484, 2485, 2486, 2487, 2488, 2489, 2490, 2491, 2492, 2493, 2494, 2495, 2496, 2497, 2498, 2499, 2500, 2501, 2502, 2503, 2504, 2505, 2506, 2507, 2508, 2509, 2510, 2511, 2512, 2513, 2514, 2515, 2516, 2517, 2518, 2519, 2520, 2521, 2522, 2523, 2524, 2525, 2526, 2527, 2528, 2529, 2530, 2531, 2532, 2533, 2534, 2535, 2536, 2537, 2538, 2539, 2540, 2541, 2542, 2543, 2544, 2545, 2546, 2547, 2548, 2549, 2550, 2551, 2552, 2553, 2554, 2555, 2556, 2557, 2558, 2559, 2560, 2561, 2562, 2563, 2564, 2565, 2566, 2567, 2568, 2569, 2570, 2571, 2572, 2573, 2574, 2575, 2576, 2577, 2578, 2579, 2580, 2581, 2582, 2583, 2584, 2585, 2586, 2587, 2588, 2589, 2590, 2591

Duarte Rodríguez-Granada, Bernardo:
Durand, Federico G.:
Dworak, Fernando F.:
Elbittar, Alexander:
Elguea, Javier A.:
Elizarrarás, Rodrigo:
Elizondo Flores, Alan:
Elmandjra, Kenza:
Endo Martínez, Anabel Mitsuko:
Enríquez Rosas, José David:
Epelbaum, Mario:
Epinette, Olivier:
Erreguerena Albaitero, Juan Carlos:
Eryuruk, Gunce:
Escalante, Miguel A.:
Escárcega Jiménez, Hugo Francisco:
Escobar Fernández de la Vega, Marcos:
Escobar, Alejandra:
Escobar-Unda, Carlos:
Espinasa, José María:
Espíndola Flores, Silvano:
Espino Martín, Javier:
Espinosa Armenta, Ramón:
Espinosa Elguea, Araceli:
Espinosa Félix, Yolanda:
Esponda, Fernando:
Esquivel, Nora:
Esteinou Madrid, María Teresa:
Estevane Chávez, Munir Raúl:
Estévez Estévez, Federico:
Estrada, Ernesto:
Estrada, Luis:
Evdokimov, Piotr:
Fadl Kuri, Sergio:
Faegri, Christina Wagner:
Fajardo, Ángel F.:
Fakos, Alexandros:
Farah Ibáñez, José Luis:
Farfán, Rosa Ma.:
Félix, Luz Aurora:
Fernández Durán, Juan José:
Fernández Martínez, Marco Antonio:
Fernández Pavón, Javier:
Fernández Santillán, José Florencio:
Fernández, Leobardo:
Fernández-Duque, David:
Ferrer Mac-Gregor, Eduardo:
Ferrer, Eulalio:
Ferrer-García de Alba, Leonardo J.:
Fierro Ferráez, Ana Elena:
Figueroa, Ana Paulina:
Figueroa, Laura:
Fimbres, María de la Cruz:
Flores Alcázar, Isabel:
Flores Cuevas, Guillermo:
Flores Díaz, Esteban Ignacio:
Flores Mangas, Fernando:
Flores Mosri, Alejandra:
Flores Ramiro, Alejandra:
Flores Subias, Yessica Pamela:
Fonseca Correa, Juan Pablo:
Franco González Salas, José Fernando:
Frank, Pablo:
Franzoni Velázquez, Ana Lidia:
Frost, Elsa Cecilia:
Fuentes Molinar, Olac:
Fuentes-Berain, Rossana:
Gaffron, Swetlana:
Gala Palacios, José Efraín:
Galán Vélez, Rosa Margarita:
Galaviz, Cecilia:
Galindo Flores, Carlos Emilio:
Galindo Monroy, Jorge Antonio:
Galindo Rodríguez, José:
Galindo, Raúl:
Gallardo Calva, Dora Sofia Emilia:
Gallardo Hurtado, Georgina Yolotl:
Gallegos David, Alberto:
Gallo Reinoso, Gabriel:
Gama Leyva, Raymundo:
Gamboa González, Rafael:
Gamboa Hirales, Rafael Gregorio:
Gamboa, Lucía:
Garbuno Iñigo, Alfredo:
García Alcocer, Guillermo:
García Appendini, Katina:
García Azaola, Jorge:
García de la Rosa, Ricardo:
García de la Sienra, Adolfo:
García García, César Luis:
García Huitrón, Manuel Enrique:
García Jaramillo, Miguel Alejandro:
García Ríos, Lídice:
García Sais, Fernando:
García Sámano, Federico Reynaldo:
García Ugarte, Marta Eugenia:
García, Jesús:
García-Cuéllar Céspedes, María Regina:
García-Naranjo, Luis C.:
García-Salcedo, Javier:
Garciacano Cárdenas, Orlando:
Garciadiego Dantan, Javier:
Garduño, Carlos Alfonso:
Garduño, Elmer:
Garfagnini, Umberto:
Gargallo, Francesca:
Garibaldi, José Alberto:
Garza Ruiz Esparza, Graciela:
Garzón Valdés, Ernesto:
Gastelum Lage, Jesús:
Gavito, Javier:
Geifman, Abraham:
Gelati, Bruno:
Geneyro, Juan Carlos:
Geniez, Romain:
Genina Cervantes, José:
Gigola Paglialunga, María Cristina:
Gil Villegas M., Francisco:
Gil Zuarth, Roberto:
Gil-Díaz, Francisco:
Gil-White, Francisco:
Gilsdorf, Thomas E.:
Giménez Valdés Román, Rafael:
Giri, Rahul:
Goddard, Haynes C.:
Gomberg, Andrei:
Gómez Albert, María Fernanda:
Gómez Alcalá, Alberto:
Gómez Alcalá, Eduardo:
Gómez Arciniega, Luis A.:
Gómez de Silva Garza, Andrés:
Gómez Galvarriato, Aurora:
Gómez Leyja, María del Socorro:
Gómez Wulschner, Claudia:
Gómez, Christian Guillermo:
Gómez, Leopoldo:
Gómez-Tagle, Benjamín F.:
Góngora Svartzman, Gabriela N.:
Góngora y Moreno, Sergio F.:
González Armendáriz, Ana:
González Bertomeu, Juan F.:
González Brambila, Claudia Noemí:
González Cruz, Consuelo:
González de la Vega, René:
González Díaz, J. Rafael:
González Franco, Francisco Alejandro:
González Galindo, Ana:
González Hernández, Carlos:
González Isás, Francisco:
González Jáuregui, Carlos:
González Martínez, Carlos:
González Peláez, Marcela:
González Pérez, Luis Felipe:
González Ruiz, Edgar:
González Ruiz, Samuel:
González Sánchez, José Frank:
González, José Luis:
González, Juliana:
González, Luis F.:
González-Casanova, Enrique:
González-Martínez, Jorge A.:
González-Salas Campos, Raúl:
González-Sánchez, David:
Goodliffe, Gabriel:
Gordillo, Juan Carlos:
Goyeneche Polo, José Javier:
Goñi, Hugo:
Grabiszewski, Konrad Cezary:
Granados Osegueda, Edgar:
Granja Castro, Dulce María:
Gregorio Domínguez, María Mercedes:
Guadarrama Marrón, Luis:
Guajardo Soto, Guillermo Agustín:
Guardati Buemo, Silvia del Carmen:
Gudiño Antillón, Juliana:
Guerra, Mauricio:
Guerrero Sevilla, María Teresa:
Guevara Sanginés, Alejandro:
Guim, Mauricio:
Guiza Pérez, Julieta Irma:
Gurrola Pérez, Pedro:
Gutiérrez de Velasco, Luzelena:
Gutiérrez Fernández, Emilio:
Gutiérrez García, José O.:
Guzmán Romero, Erika:
Guzmán Ziga, Zory Marystel:
Haro Suinaga, Victoria:
Hartasánchez Garaña, José Miguel:
Heredia Zubieta, Carlos:
Hernández Arellano, Fabián M.:
Hernández Cid, José Rubén:
Hernández Cortés, Janko:
Hernández Delgado, Alejandro:
Hernández Gálvez, Carlos:
Hernández García, Gabriela:
Hernández Gómez, Inés:
Hernández Licona, Gonzalo:
Hernández Magallanes, Irma:
Hernández Poblete, Alfredo:
Hernández Rodríguez, Silvia Coralia:
Hernández Scharrer, Marina:
Hernández Troncoso, Brandon Francisco:
Hernández Valdés, Héctor C.:
Hernández, Daniel:
Hernández, Jorge F.:
Hernández, Marco:
Hernández-Balderrama, María G.:
Hernández-Garduño, Antonio:
Hernández-Martínez, Luz:
Hernández-Romo Valencia, Pablo:
Herrán, Eric:
Herranz, Juan Carlos:
Herrera González, E.:
Herrera Musi, Carlos:
Herrera, Helios:
Herrerías Franco, Renata:
Hess, Martin:
Heyman, Timothy:
Hills, Peter Matthew:
Horenstein, Alex R.:
Hosaka, Yukiko:
Hoshino, Tetsuya:
Hoyos Argüelles, Ricardo:
Hoyos, Rafael E. de:
Huacuja Acevedo, Luis Antonio:
Huerta Espinosa, Esperanza:
Huerta García, Luis:
Huerta Ochoa, Carla:
Huerta, Kael:
Huggett, Mark:
Huipet, Héctor Hugo:
Hulverson, Elizabeth:
Hurtado-Lopez, Carlos:
Huybens, Elisabeth:
Ibargüen, Claudia:
Ibarguren, Nilda:
Ibarra Chaoul, Alejandra:
Ibarra Fernández, Carlos:
Ibarra Herrerías, Ma. de Lourdes:
Ibarra Pardo, Luis Alberto:
Ibarra Zavala, Darío Guadalupe:
Ibáñez, Alfredo:
Illanes, Ana Isabel:
Illescas, María Dolores:
Incera Diéguez, José Alberto Domingo:
Iraola Guzmán, Miguel Ángel:
Iriberri Ajuria, Alicia Mireya:
Isla Veraza, Carlos de la:
Islas-Camargo, Alejandro:
Izquierdo-Tort, Santiago:
Jácome Madariaga, Laura:
Jaubert, Leny:
Jeffs, Jennifer:
Jinich Fainsod, Ilan:
Jirash, Carlos:
Johnson, Philip:
Juárez González, Laura:
Juárez, Rodrigo:
Kaiser Aranda, Max:
Kajihara Cruz, Víctor Kagekiyo:
Kalis Letayf, Virginia:
Kalmanovitz, Pablo:
Kaplan, David Scott:
Kaplan, Marcos:
Kaye, Dionisio J.:
Keister, Todd:
Kessel, Georgina:
Kheifets, Igor:
Kim, Chong-Sup:
Kim, Seon Tae:
Kuhlmann Rodríguez, Federico José:
Kurz, Andreas:
Lacalle, Marina:
Lamko, Koulsy:
Landeros Jinéz, Mónica:
Landerreche Cardillo, Esteban:
Landerreche, Rafael:
Landín, Aurora:
Lapuente Romo, Roberto:
Lara Chagoyán, Roberto:
Lara Marín, Ricardo:
Lara Ponte, Rodolfo:
Lara, Alejandro:
Lara, Ricardo:
Larrañaga, Pablo:
Larreguy Arbesú, Horacio Alejandro:
Lascuráin, Miguel de:
Latorre Navarro, M. Felisa:
Laux, Fritz L.:
Lavaniegos, Manuel:
Laveaga, Gerardo:
Lavore Fanjul, Elisa:
Layton, Michael D.:
Le, Frank M. Q.:
Leal, Norma:
Lebrija Hirschfeld, Alicia Alejandra:
Ledesma Villar, Luis Carlos:
Levy, Santiago:
Leycegui, Beatriz:
Li-Ju Su, Tessie:
Linhard, Tabea Alexa:
Lira González, Andrés:
Lizaola, Julieta:
Lizondo, José Saúl:
Llanas Mejía, Annapaola:
Llerenas Morales, Vidal:
Lloret Carrillo, Antonio Rodolfo:
Lobato García Mijan, Ignacio Norberto:
Lomelí Ortega, Héctor Ernesto:
López Acevedo, Gladys:
López Álvarez, Teresa:
López Escobar, Emilio:
López Fernández, María Teresa:
López Gallegos, Humberto:
López Gamino, Felipe:
López García, Ana Isabel:
López Laiseca, María del Carmen:
López Mestre, Severo:
López Noriega, Mauricio:
López Noriega, Saúl:
López Pérez, Aaron:
López Santibáñez, María Isabel:
López Tapia, Alejandra:
López Vela, Valeria:
López, David:
López, Edgar D.:
López-Araiza, Sergio:
López-de-Silanes, Florencio:
López-Santibáñez-Jácome, Laura:
Lourdes de León, Viridiana:
Lozano Medecigo, Samantha:
Lozano Suárez, María de los Dolores:
Lozano Valencia, Genaro Fausto:
Lucardi, Adrián:
Lujambio Irazábal, José María:
Luna Muñoz, Agapito:
Luque Salcedo, Víctor Hugo:
Luquín Rivera, Ernesto:
Luzanilla, Adrián:
Machado Arias, Juan Pedro:
MacLean, Joan:
Madriz-Mendoza, Maira:
Magallón Gómez, Eduardo:
Magaloni Kerpel, Ana Laura:
Magaloni, Beatriz:
Maldonado Sansores, Mariana:
Maldonado Talamantes, Armando:
Malem Seña, Jorge Francisco:
Mancera, Miguel:
Mancilla, Antonio:
Mandujano Canto, José Ángel:
Manuell Cid, Gerardo:
Manzanilla Galaviz, José María:
Margadant S., Guillermo Floris:
Margheritis, Ana:
Marian, Ivette:
Marichal, Carlos:
Marín González, Juan Carlos:
Mariscal, Judith:
Mármol Yahya, Salvador:
Martell, David:
Martinelli, César:
Martínez Bowness, Jaime:
Martínez Cantú, Diego Eduardo:
Martínez Ham, Gerardo:
Martínez Mayer, Gonzalo:
Martínez Neri, Eduardo David:
Martínez Ojeda, Alfredo Gerardo:
Martínez Ovando, Juan Carlos:
Martínez Santana, Walter:
Martínez, Gabriel:
Martínez, Gustavo C.:
Martínez, Luz Aleida:
Martínez, Víctor R.:
Martínez-Avendaño, Rubén A.:
Martínez-Gómez, Elizabeth:
Marván Urquiza, Santiago:
Marván, Luz María:
Matthys, Felix H. A.:
Matuk Villazón, José:
Maurer, Noel:
Mazzuchino, María Gabriela:
Mañón, Guillermo:
McBride Pitts, John Bradford:
McRae, Shaun David:
McWilliams, Bruce Peter:
Medina Álvarez, Ricardo:
Medina Covarrubias, Ricardo:
Medina Rioja, Grecia:
Medina, Salvador:
Méjan, Luis Manuel C.:
Mejía Garza, Raúl Manuel:
Mejía Morelos, Jorge Humberto:
Melgar-Palacios, Lucía:
Melissas, Nicolas:
Melrose Aguilar, Enrique:
Meltis, Mónica:
Mena K., Hugo:
Mena Labarthe, Carlos:
Mendes Tavares, Marina:
Méndez Barrera, José Alfredo:
Méndez Gallardo, B. Mariana:
Méndez Rodríguez, Laura A.:
Mendizábal, Yuritzi:
Mendoza Ramírez, Manuel:
Mendoza Trejo, Francisco:
Mendoza, Luis Fernando:
Menéndez, Javier:
Meníndez, Rosalía:
Meraz Sepúlveda, Andrea:
Mercado López, Pedro Eduardo:
Meriläinen, Jaakko:
Merino Juárez, Gustavo Adolfo:
Merino Sanz, María Cruz:
Meson, Patrick:
Messmacher, Miguel:
Mex Perera, Jorge Carlos:
Meyer Suárez, Víctor:
Meza Goiz, Felipe:
Meza González, Javier:
Meza, Héctor:
Micozzi, Juan Pablo:
Mier, Milagros:
Mijangos, Claudia:
Milán, Eduardo:
Milia, Matías Federico:
Milo Caraza, Alexis:
Minaburo Villar, Sandra Patricia:
Minor Molina, José Rafael:
Mireles Flores, Luis:
Moctezuma Barragán, Gonzalo:
Mogollón, O.:
Molina Ayala, José:
Molina León, Gabriel de Jesús Guadalupe:
Molinar Horcasitas, Juan:
Molinet Malpica, Jonathan:
Moncayo-Martínez, Luis A.:
Mondragón Liévana, Carlos Rafael:
Monroy Alarcón, Aurora Guadalupe:
Montaño, Jorge:
Monterrubio, Luis:
Montes de Oca León, Mariza:
Montiel, Luis V.:
Morado, Raymundo:
Morales Aguirre, Marco Antonio:
Morales Barba, Marco A.:
Morales Bañuelos, Paula:
Morales Pérez, José Luis:
Morales, María Fernanda:
Morán, Marco:
Moreno A., Héctor A.:
Moreno Guinea, David:
Moreno Toscano, Alejandra:
Moreno Treviño, Jorge Omar:
Moreno, Alejandro:
210, 235, 236, 678, 1177, 1178, 1297, 1298, 1299, 1302, 1393, 2043, 2101, 2104, 3066, 3735, 4852, 4857, 5120, 5124, 5160, 5426, 5548, 5844, 6190, 6191, 6192, 6193, 6194, 6195, 6196, 6197, 6198, 6199, 6200, 6201, 6202, 6203, 6204, 6205, 6206, 6207, 6208, 6209, 6210, 6211, 6212, 6213, 6214, 6215, 6216, 6217, 6218, 6219, 6220, 6221, 6222, 6223, 6224, 6225, 6226, 6227, 6228, 6229, 6230, 6231, 6232, 6233, 6234, 6235, 6236, 6237, 6238, 6239, 6240, 6241, 6242, 6243, 6244, 6245, 6246, 6247, 6248, 6249, 6250, 6251, 6252, 6253, 6254, 6255, 6256, 6257, 6258, 6259, 6260, 6261, 6262, 6263, 6264, 6265, 6266, 6267, 6268, 6269, 6270, 6271, 6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281, 6282, 6283, 6284, 6285, 6286, 6287, 6288, 6289, 6290, 6291, 6292, 6293, 6294, 6295, 6296, 6297, 6298, 6299, 6300, 6301, 6302, 6303, 6304, 6305, 6306, 6307, 6308, 6309, 6310, 6311, 6312, 6313, 6314, 6315, 6316, 6317, 6318, 6319, 6320, 6321, 6322, 6323, 6324, 6325, 6326, 6327, 6328, 6329, 6330, 6331, 6332, 6333, 6334, 6335, 6336, 6337, 6338, 6339, 6340, 6341, 6342, 6343, 6344, 6345, 6346, 6347, 6348, 6349, 6350, 6351, 6352, 6353, 6354, 6355, 6356, 6357, 6358, 6359, 6360, 6361, 6362, 6363, 6364, 6365, 6366, 6367, 6368, 6369, 6370, 6371, 6372, 6373, 6374, 6375, 6376, 6377, 6378, 6379, 6380, 6381, 6382, 6383, 6384, 6385, 6386, 6387, 6388, 6389, 6390, 6391, 6392, 6393, 6394, 6395, 6396, 6397, 6398, 6399, 6400, 6401, 6402, 6403, 6404, 6405, 6406, 6407, 6408, 6409, 6410, 6411, 6412, 6413, 6414, 6415, 6416, 6417, 6418, 6419, 6420, 6421, 6422, 6423, 6424, 6425, 6426, 6427, 6428, 6429, 6430, 6431, 6432, 6433, 6434, 6435, 6436, 6437, 6438, 6439, 6440, 6441, 6442, 6443, 6444, 6445, 6446, 6447, 6448, 6449, 6450, 6451, 6452, 6453, 6454, 6455, 6456, 6457, 6458, 6459, 6460, 6461, 6462, 6463, 6464, 6465, 6466, 6467, 6468, 6469, 6470, 6471, 6472, 6473, 6474, 6475, 6476, 6477, 6478, 6479, 6480, 6481, 6482, 6483, 6484, 6485, 6486, 6487, 6488, 6489, 6490, 6491, 6492, 6493, 6494, 6495, 6496, 6497, 6498, 6499, 6500, 6501, 6502, 6503, 6504, 6505, 6506, 6507, 6508, 6509, 6510, 6511, 6512, 6513, 6514, 6515, 6516, 6517, 9372, 9373, 9374, 9375, 9376

Moreno-Centeno, Erick:
Morfín Maciel, Antonio:
Morganti, Paolo:
Morgenstern, Ilan:
Morones Escobar, Rafael Edmundo:
Mota Mendoza, Carlos:
Mota, Miguel Ángel:
Motta, María de Lourdes A.:
Moy Campos, Valeria:
3740, 3741, 3742, 3743, 3744, 6543, 6544, 6545, 6546, 6547, 6548, 6549, 6550, 6551, 6552, 6553, 6554, 6555, 6556, 6557, 6558, 6559, 6560, 6561, 6562, 6563, 6564, 6565, 6566, 6567, 6568, 6569, 6570, 6571, 6572, 6573, 6574, 6575, 6576, 6577, 6578, 6579, 6580, 6581, 6582, 6583, 6584, 6585, 6586, 6587, 6588, 6589, 6590, 6591, 6592, 6593, 6594, 6595, 6596, 6597, 6598, 6599, 6600, 6601, 6602, 6603, 6604, 6605, 6606, 6607, 6608, 6609, 6610, 6611, 6612, 6613, 6614, 6615, 6616, 6617, 6618, 6619, 6620, 6621, 6622, 6623, 6624, 6625, 6626, 6627, 6628, 6629, 6630, 6631, 6632, 6633, 6634, 6635, 6636, 6637, 6638, 6639, 6640, 6641, 6642, 6643, 6644, 6645, 6646, 6647, 6648, 6649, 6650, 6651, 6652, 6653, 6654, 6655, 6656, 6657, 6658, 6659, 6660, 6661, 6662, 6663, 6664, 6665, 6666, 6667, 6668, 6669, 6670, 6671, 6672, 6673, 6674, 6675, 6676, 6677, 6678, 6679, 6680, 6681, 6682, 6683, 6684, 6685, 6686, 6687, 6688, 6689, 6690, 6691, 6692, 6693, 6694, 6695, 6696, 6697, 6698, 6699, 6700, 6701, 6702, 6703, 6704, 6705, 6706, 6707, 6708, 6709, 6710, 6711, 6712, 6713, 6714, 6715, 6716, 6717, 6718, 6719, 6720, 6721, 6722, 6723, 6724, 6725, 6726, 6727, 6728, 6729, 6730, 6731, 6732, 6733, 6734, 6735, 6736, 6737, 6738, 6739, 6740, 6741, 6742, 6743, 6744, 6745, 6746, 6747, 6748, 6749, 6750, 6751, 6752, 6753, 6754, 6755, 6756, 6757, 6758, 6759, 6760, 6761, 6762, 6763, 6764, 6765, 6766, 6767, 6768, 6769, 6770, 6771, 6772, 6773, 6774, 6775, 6776, 6777, 6778, 6779, 6780, 6781, 6782, 6783, 6784, 6785, 6786, 6787, 6788, 6789, 6790, 6791, 6792, 6793, 6794, 6795, 6796, 6797, 6798, 6799, 6800, 6801, 6802

Muga Naredo, José Antonio:
Mureddu Torres, César:
Murillo-Othón, Enrique:
Murra Antón, Zeky Ahmed:
Muñoz Medina, David Gonzalo:
Muñoz Piña, Carlos:
Nacif, Benito:
Nagata, Mateus Hiro:
Napolitano Niosi, Alberto:
Natera Niño de Rivera, Christian Raúl:
Nava Ramírez, Rodrigo:
Navarrete Jiménez, Mercedes:
Navarro Isla, Jorge:
Nebel, Mathias:
Negretto, Gabriel L.:
Neumann, Pedro:
Nicolai, Alfredo:
Niembro Ortega, Roberto:
Nolla Suárez, Joan:
Nosnik, Abraham:
Núñez Antonio, Gabriel:
Núñez-López, Mayra:
Nyenhuis, Gerald:
O Torres, Ana Lorena de la:
O'Donoghue, Jennifer L.:
Oberarzbacher, Franz Peter:
Obeso-Orendain, A. de:
Obiols-Homs, Francesc:
Obregón-Schael, Dalia:
Ojeda Cárdenas, Lucía:
Olimón Nolasco, Manuel:
Oneto Suberbie, Manuel A.:
Ordieres, Alejandro:
Orduña Ferreira, Fabián:
Orellana Moyao, Alfredo:
Orlandini, Amedeo:
Ormsby Jenkins, Lilyth:
Ornelas Núñez, Lina:
Oros, José Eduardo:
Orozco García, Wistano:
Orozco Henríquez, José de Jesús:
Orozco y Villa, Luz Helena:
Ortega García, Ramón:
Ortega Guerrero, Carlos:
Ortega Ibarra, Arturo:
Ortega Velázquez, Aída:
Ortigueira, Salvador:
Ortiz Ahlf, Cecilia:
Ortiz Flores, Javier Miguel:
Ortiz Gómez, Ernesto:
Ortiz Islas, Ana:
Ortiz Mancera, María Teresa:
Ortiz Ortega, Adriana:
Ortiz, Luisa:
Ortiz, Ma. de los Ángeles:
Otaduy Aranzadi, José:
Padilla Toca, Mariana:
Palacios Brun, Arturo Alberto:
Palma, Jaime A.:
Pancs, Romans:
Pantoja, Leandro:
Parada García, Zeferino:
Parás, Pablo:
Pardo, José Antonio:
Parra Prieto, Irene:
Pasapera Mora, Alfonso:
Pasternac, Marcelo:
Pasternac, Silvia:
Pastor Jiménez, José Guillermo:
Patán, Federico:
Patula, Jan:
Paullada F., Juan José:
Payró, Fernando:
Peguero, Oscar:
Peláez, Arturo:
Peniche Casares, Pedro:
Peniche, Francisco:
Perales Contreras, Jaime:
Peralta, José Milton:
Pereda Trejo, Luis Enrique:
Pereira, Armando:
Perera Salazar, Rafael:
Pereyra, Guillermo:
Pérez Arce Novaro, Francisco:
Pérez Cervantes, Fernando Diego:
Pérez Chávez, Valeria Aurora:
Pérez de Acha, Luis M.:
Pérez de Acha, Tere:
Pérez Floriano, Lorena Raquel:
Pérez Garmendia, José Luis A.:
Pérez Hernández, José Luis:
Pérez Pérez, Carlos Samuel:
Pérez Ponce, Jesús:
Pérez-Chavela, E.:
Pérez-de la Rosa, M. A.:
Pérez-González, Francisco:
Pérezgrovas, Armando:
Pereznieto Castro, Leonel:
Petrides Jiménez, Yanira:
Peña Merino, José Antonio:
Peña Rangel, David:
Philipson, Daniela:
Pi-Suñer Llorens, Antonia:
Piedras Feria, Ernesto:
Plata Pérez, Leobardo:
Platero, Gonzalo:
Poblete, Karen:
Poiré, Alejandro:
Polishuk Szapiro, Dalia:
Politi-Salame, Isaac:
Ponce Sánchez, José Ignacio:
Ponsich, Antonin:
Possani Espinosa, Andre:
Possani Espinosa, Edgar:
Pratap, Sangeeta:
Pratt Romero, Cassandra:
Preciado Rosas, Gustavo:
Pretelin Muñoz de Cote, Fausto Everardo:
Prieto, Carlos:
Prieto, Francisco:
Puente, Jesús:
Puga-Méndez, Diana:
Puppo, Alberto:
Quercioli, Elena:
Quesada Palacios, José Antonio:
Quintana Osuna, Karla Irasema:
Quiroz, Margarita:
Rabadán Gallardo, Marcela:
Rabasa, Javier:
Raja Kali:
Ramírez Álvarez, Aurora Alejandra:
Ramírez Gallardo, Jorge Alejandro:
Ramírez Mireles, Fernando:
Ramírez Nafarrate, Adrián:
Ramírez Ramírez, Lilia Leticia:
Ramírez Verdugo, Arturo:
Ramos Carlos, Víctor Hugo:
Ramos Tercero, Raúl M.:
Ramos, Carlos:
Ramos, Jorge:
Ramos-Alarcón B., Rafael:
Ramos-Alarcón, Rafael:
Rangel Contreras, Mónica:
Rapetti, Pablo Ariel:
Rattinger, Álvaro:
Razo-Zapata, Iván S.:
Reding Blase, Sofia:
Redondo, Cristina:
Reina Antuña, Lucía:
Rendón Elizondo, Jorge:
Rendón, Silvio:
Renero, Juan Manuel:
Reséndiz Núñez, Cuauhtémoc:
Revah Meyohas, Benito:
Revueltas, Andrea:
Reyes Aldasoro, Constantino Carlos:
Reyes Báez, Rodolfo:
Reyes Garza, Jose:
Reyes Heroles C., Ricardo:
Reyes Heroles, Federico:
Reyes, Araceli:
Reyes, Juan José:
Reyes-González, Leonardo:
Reyes-López, Daniel:
Rincón, Germán A. del:
Río Castillo, Jaime del:
Ríos Sánchez, José Ramón:
Ríos-Curiel, Alejandro:
Ritchie-Dunham, James L.:
Rivas Sepúlveda, Carlos:
Rivas, José Luis:
Rivera Soto, Leandro Alberto:
Rivera-Noriega, Jorge:
Roa Jacobo, Juan Carlos:
Robinson Dillord, James Franklin:
Robledo y González Plata, Roxana:
Robles Cartes, Marta:
Robles Valdés, Gloria:
Rocha Álvarez, José María da:
Rocha Guzmán, Samuel:
Rocha Rodrigo, Marianito:
Roche, Hervé:
Rodiles, Alejandro:
Rodríguez Alemán, Luis Ángel:
Rodríguez Caballero, Carlos Vladimir:
Rodríguez Chávez, Ernesto:
Rodríguez Gallardo, Adolfo:
Rodríguez Gómez, Francisco Tonatiuh:
Rodríguez Martínez, Gumer Israel:
Rodríguez Mejía, Gregorio:
Rodríguez, José Pablo:
Rodríguez-Cortés, H.:
Rodríguez-Pueblita, José Carlos:
Roemer, Andrés:
Rojas Nandayapa, Leonardo:
Rojas Vértiz C., Rosa María:
Rojas-Contreras, Frida:
Rojón González, Gonzalo:
Roldán, Pablo:
Roman, Luis M.:
Roman-Rangel, Edgar:
Romero Fraga, Nayeli:
Romero, Magdalena:
Romero, Mauricio:
Romero, Silvia:
Romo Murillo, David:
Rosado, Juan Antonio:
Rosales Sanabria, Andrea:
Rosario-Gabriel, Edwin:
Rosas, A. Lorena:
Rosas, Guillermo:
Rosenzweig, Fernando:
Rubach Lueters, Gisela Ilse Hanna:
Rubiales, Francisco:
Rubio Díaz Leal, Laura Gabriela:
Rubio Torres, Job Aristides:
Rubio-Freidberg, Luis:
Rubli, Adrian:
Rudolf, Thomas Martin:
Ruiz Galindo, Gilberto:
Ruiz García, Benjamín:
Ruíz Massieu Salinas, Daniela:
Ruiz Miguel, Alfonso:
Ruiz Moreno, Iván:
Ruiz Pérez, Andrés:
Ruiz Sandoval, Érika:
Ruíz, Elisa:
Ruiz-Funes Macedo, Mariano:
Rumbos Pellicer, Beatriz:
Saade Hazin, Lilian:
Sabau, H. C.:
Sachse, Matthías:
Sacristán Fanjul, Mónica:
Sacristán, Antonio:
Sadeghkhani, Abdolnasser:
Sadka, Joyce C.:
Sagols, Lizbeth:
Sainz López, María Esperanza:
Salaiz Gabriel, Gustavo:
Salas Mayme, Erika Estefanía:
Salazar Slack, Ana María:
Salazar, Pedro:
Salcedo González, Bruno:
Salcedo, Ante:
Saldívar, Juan:
Sales Heredia, Renato:
Sales-Sarrapy, Carlos:
Salgado Meade, María:
Salgado, Hilda:
Salmerón, Fernando:
Samaniego, Ricardo:
San Martín, Pedro:
San Miguel Aguirre, Alfonso:
Sánchez Andrés, Agustín:
Sánchez Arias, Marco:
Sánchez Castañeda, María de los Dolores:
Sanchez Díaz, Julio Ernesto:
Sánchez Garrido, Jesús Zukhov:
Sánchez Gómez, Elia:
Sánchez González, Manuel:
Sánchez Santana, Priscila:
Sánchez Vargas, Natalia:
Sánchez-Ugarte, Fernando J.:
Sanín Restrepo, Ricardo:
Santaella, Julio A.:
Santamaría Ortuño, Ulises:
Santamaría, Gema:
Santiago-Castillejos, Ita-Andehui:
Santos Santos, Manuel:
Sarmiento Donate, Rosario:
Sarralde, Julieta:
Sastre Castillo, Miguel Ángel:
Savage Carmona, Jesús:
Schiavon, Jorge A.:
Schütte Ricaud, Javier:
Segovia Camelo, Francisco:
Segovia Martínez, María Luisa:
Seira, Enrique:
Sekiguchi Miyamoto, María Teresa:
Sendra Salcedo, José:
Sepúlveda Carmona, M. Magdalena:
Sepúlveda Ortiz, Patricio:
Serrano Gómez, Enrique:
Serrano, Jorge A.:
Servitje, Anna:
Sevilla, Javier I.:
Sharma, Tridib:
Sheinbaum, Diego:
Shubich Green, Yoanna:
Sierra Moncayo, María Julia:
Sigala Alanís, Gamaliel Rodrigo:
Silva Castañeda, Sergio:
Silva Gutiérrez, Gustavo de:
Silva Nava, Carlos de:
Silva Ortíz, Luz María:
Silva-Herzog Márquez, Jesús J.:
Silva-Méndez, Jorge Luis:
Siman, Yael:
Simpser, Alberto:
Smith, Heidi Jane M.:
Sobrevilla, David:
Solares, Blanca:
Solís González, Alejandra:
Solís Soberón, Fernando:
Sordo Cedeño, Reynaldo:
Soriano Cienfuegos, Carlos:
Soriano, Juan Pablo:
Soto Tamayo, Luis Fernando:
Soto, Miguel:
Soto-Bajo, Moisés:
Sottil de Aguinaga, Fernanda:
Souza Peñalosa, Patricia:
Stanchi, Rossana:
Starr, Pamela K.:
Stevenson, Brian J. R.:
Straulino Torre, Stéfano:
Studer-Noguez, Isabel:
Suárez Belmont, Gonzalo Tomás:
Suárez Prado, Gonzalo J.:
Subirats, Eduardo:
Sucar, Germán:
Suh Ahn, Ik Seon:
Székely, Miguel:
Tame Jacobo, Adrián:
Tapia Gutiérrez, Ingrid:
Tarragona, Margarita:
Tarrasó, Jorge:
Tavares, Tiago:
Tellaeche Merino, Aura:
Téllez García, Francisco R.:
Tello Carrasco, José Antonio:
Tello Peón, Jorge Enrique:
Tenorio Antiga, Xiuh Guillermo:
Tenorio-Trillo, Mauricio:
Terán Castellanos, Alejandro:
Terrones, María Eugenia:
Teshima, Kensuke:
Timmons, Jeffrey Frank:
Toledo, Manuel:
Toledo, Roberto:
Tornell, Aaron:
Torrado, María Fernanda:
Torre, Rodolfo de la:
Torres Campos, Ana Patricia:
Torres Chimal, Ma. Elena:
Torres de la Rosa, Danaé:
Torres Serrano, Carlos Alberto:
Torres Vázquez, Francisco:
Torres, Marisol:
Torres, Ricard:
Tovar Landa, Ramiro:
Trabulse, Elías:
Trejo, Javier:
Trejo, Sofía:
Trigueros Legarreta, Ignacio:
Tron Petit, Jean Claude:
Trueba, Carmen:
Tudón M., José F.:
Tukiainen, Janne:
Tur, Carlos M.:
Turhan, Bertan:
Turrent Díaz, Eduardo:
Ugalde, Luis Carlos:
Ülkü, Levent:
Unánue T., Adolfo de:
Urbina Galicia, Adrián:
Urías Horcasitas, Beatriz:
Uribe Castañeda, Manuel:
Uribe Coughlan, Alexandra:
Uribe, Milena:
Urrutia, Carlos:
Ursúa, José Francisco:
Urteaga-Reyesvera, J. Carlos:
Urzúa Valverde, María José:
Vadovic, Radovan:
Valenzuela Valenzuela, Edgar:
Vallín, Roberto:
Vargas Chanes, Delfino:
Vargas García, Edith:
Vargas Gordillo, Isaac:
Vargas, Ramón:
Vásquez Gómez, Jesús María:
Vásquez, Aurelio:
Vásquez-Cortés, Mateo:
Vazquez Corte, Mario:
Vázquez M., Gabriel:
Vega Góngora, Jorge Francisco de la:
Vela, Halina:
Velasco Ibarra, Eugenio:
Velasco Márquez, Jesús:
Velasco, Gustavo R.:
Velázquez Delgado, Bruno:
Velázquez Roa, Jorge:
Velázquez-Trejo, Diego Arturo:
Vélez Bertomeu, Fabio:
Vélez Fernández Varela, Félix:
Vélez Salas, Alejandro:
Venier, Martha Elena:
Ventosa-Santaulària, Daniel:
Ventura González, Cristina Odalis:
Vera Ferrer, Oscar Humberto:
Vera Rea, Sergio Daniel:
Vera, R.:
Vera-Cruz, Matías:
Verma, Rubina:
Vidal Flores, Daniela:
Vidales Calderón, Pablo Antonio:
Videgaray Caso, Luis:
Vila Freyer, Ana:
Vilá Ventayol, Blanca:
Vilar Compte, Mireya:
Vilaça, Guilherme Vasconcelos:
Vilchis, Miguel Alonso:
Villa López, Francisco:
Villa Trapala, Tania:
Villafranca Quinto, Alfredo:
Villafuerte, Diana:
Villanueva Domínguez, Mauricio:
Villanueva Pliego, Francisco:
Villar Alrich, Rafael Noel del:
Villarreal Ruvalcaba, Marta:
Villarreal Schutz, Jorge Luis:
Villaseñor A., Marco:
Villavicencio Navarro, Víctor A.:
Villegas, Hugo:
Villicaña Estrada, Abel:
Villoro, Luis:
Vite, Edgar:
Vives Segl, Horacio:
Wachtel, Andreas:
Weihmann Illades, Gerardo:
Weitzenfeld Ridel, Alfredo José:
Weldon Uitti, Jeffrey A.:
Whitehouse Charpenel, Matthew:
Williams, G. Steve:
Winnberg, Ulrika:
Winter, Christoph K.:
Witker, Jorge:
Wolf, Sonja:
Wuthenau Mayer, Sebastián von:
Xirau, Ramón:
Yarrington Morales, María Antonieta:
Yeves, Jesús:
Zabludovsky Kuper, Jaime:
Zaccour, Georges:
Zaitseva, Elena:
Zamora Lara, Carlos Alberto:
Zapatero, Fernando:
Zárate de Lara, Guillermo Pedro:
Zavala Rubach, Dirk:
Zayed, Hanaa M.:
Zebadúa, Emilio:
Zecchetto, Franco:
Zelaya, Eduardo:
Zepeda Tello, Rodrigo:
Zepeda Trejo, Valeria:
Zocco, Roberto:
Zogaib Achcar, Lorena:
Zorrilla Mateos, Francisco Marcos:
Zorrilla Noriega, Ana María:
Zorrilla Strother, Ramón:
Zubiaurre, María Teresa:
Zubiría Maqueo, José María: