Faculty publications
Abstract:
In recent years, much attention has been paid to the role of classical special functions of a real or complex variable in mathematical physics, especially in boundary value problems (BVPs). In the present paper, we propose a higher-dimensional analogue of the generalized Bessel polynomials within Clifford analysis via a special set of monogenic polynomials. We give the definition and derive a number of important properties of the generalized monogenic Bessel polynomials (GMBPs), which are defined by a generating exponential function and are shown to satisfy an analogue of Rodrigues' formula. As a consequence, we establish an expansion of particular monogenic functions in terms of GMBPs and show that the underlying basic series is everywhere effective. We further prove that these polynomials satisfy a second-order homogeneous differential equation.
Abstract:
Momentum profits depend mainly on the short leg and therefore on barriers to short sales. Our research indicates that the decline in momentum profitability over the past two decades is driven partly by a contemporaneous growth in stock options trading. Stock options offer an alternative to short selling, augmenting the stock lending market and thereby contributing to improved pricing efficiency. The resulting reduction in barriers to short sales contributes to lower returns to momentum trading from the short leg. Our results persist after matching stocks with and without options based on different firm-level characteristics.
Abstract:
We consider twisted graphs, that is, topological graphs that are weakly isomorphic to subgraphs of the complete twisted graph T_n. We determine the exact minimum number of crossings of edges among the set of twisted graphs with n vertices and m edges, state a version of the crossing lemma for twisted graphs, and conclude that the mid-range crossing constant for twisted graphs is 1/6. Let e_k(n) be the maximum number of edges over all twisted graphs with n vertices and local crossing number at most k. We give lower and upper bounds for e_k(n) and settle its exact value for k ∈ {0, 1, 2, 3, 6, 10}. We conjecture that for every t ≥ 1, e_{t^2}(n) = (t+1)n − (t+2 choose 2) for n ≥ t + 1.
Abstract:
This study reviews the literature on diversity published over the last 50 years. Research findings from this period reveal that it is impossible to assume a pure and simple relationship between diversity and performance without considering a series of variables that affect this relationship. In this study, emphasis is placed on the analysis of results arrived at through empirical investigation of the relation between the most studied dimensions of diversity and performance. The results presented are part of a more extensive research project.
Abstract:
In order to analyze the influence of substituent groups, both electron-donating and electron-attracting, and of the number of π-electrons on the corrosion-inhibiting properties of organic molecules, a theoretical quantum chemical study in vacuo and in the presence of water, using the Polarizable Continuum Model (PCM), was carried out for four molecules bearing a similar chemical framework: 2-mercaptoimidazole (2MI), 2-mercaptobenzimidazole (2MBI), 2-mercapto-5-methylbenzimidazole (2M5MBI), and 2-mercapto-5-nitrobenzimidazole (2M5NBI). From an electrochemical study conducted previously in our group (R. Álvarez-Bustamante, G. Negrón-Silva, M. Abreu-Quijano, H. Herrera-Hernández, M. Romero-Romo, A. Cuán, M. Palomar-Pardavé, Electrochim. Acta, 54 (2009) 539), it was found that the corrosion inhibition efficiency, IE, of the molecules tested followed the order 2MI > 2MBI > 2M5MBI > 2M5NBI; thus 2MI turned out to be the best inhibitor. This fact strongly suggests that, contrary to a hitherto generally accepted notion, an efficient corrosion-inhibiting molecule neither needs to be large nor to possess an extensive number of π-electrons. In this work, a theoretical correlation was found between E_HOMO, hardness (η), electron charge transfer (ΔN), electrophilicity (W), back-donation (ΔE_back-donation) and the inhibition efficiency, IE. The values of E_HOMO and the standard Gibbs free energies estimated for all the molecules (based on the calculated equilibrium constants) were negative, indicating that the complete chemical processes in which the inhibitors are involved occur spontaneously.
Abstract:
The run sum chart is an effective two-sided chart that can be used to monitor for process changes. It is known to be more sensitive than the Shewhart chart with runs rules, and its performance improves as the number of regions increases. However, as the number of regions increases, the resulting chart has more parameters to be defined and its design becomes more involved. In this article, we introduce a one-parameter run sum chart. This chart accumulates scores equal to the subgroup means and signals when the cumulative sum exceeds a limit value. A fast initial response feature is proposed and its run length distribution function is found by a set of recursive relations. We compare this chart with other charts suggested in the literature and find it competitive with the CUSUM, the FIR CUSUM, and the combined Shewhart FIR CUSUM schemes.
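The accumulation rule just described can be sketched in a few lines. This is a hedged illustration only: the reset-on-sign-change behavior and all names below are assumptions for exposition, not the article's exact design.

```python
def run_sum_chart(means, target, h):
    """Signal when the accumulated deviation of subgroup means exceeds +-h.

    Hypothetical sketch: the running score restarts whenever a subgroup
    mean falls on the opposite side of the target, as in classical run
    sum charts; the article's one-parameter chart may differ in detail.
    """
    s = 0.0
    for i, m in enumerate(means):
        d = m - target
        if s * d < 0:      # mean crossed to the other side: restart the run
            s = d
        else:
            s += d
        if abs(s) > h:
            return i       # out-of-control signal at subgroup i
    return None            # no signal
```

For example, three consecutive means of 3.0 against a target of 0 and limit h = 5 signal at the second subgroup, while small alternating deviations never signal.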
Abstract:
Processes with a low fraction of nonconforming units are known as high-yield processes. These processes produce a small number of nonconforming parts per million. Traditional methods for monitoring the fraction of nonconforming units, such as the binomial and geometric control charts with probability limits, are not effective. In order to properly monitor these processes, we propose new two-sided geometric-based control charts. In this article we show how to design, analyze, and evaluate their performance. We conclude that these new charts outperform other geometric charts suggested in the literature.
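For a geometric chart, two-sided probability limits follow directly from the geometric distribution. A minimal sketch, assuming the monitored statistic is Y, the number of items inspected until the first nonconforming one; the function name and default false-alarm rate are illustrative, not the article's design:

```python
import math

def geometric_limits(p0, alpha=0.0027):
    """Two-sided probability limits for Y ~ Geometric(p0) on support 1, 2, ...

    Small Y suggests p has increased (deterioration); large Y suggests p
    has decreased (improvement). Limits are the geometric quantiles at
    alpha/2 and 1 - alpha/2, using the CDF F(y) = 1 - (1 - p0)**y.
    """
    lcl = math.ceil(math.log(1 - alpha / 2) / math.log(1 - p0))
    ucl = math.ceil(math.log(alpha / 2) / math.log(1 - p0))
    return max(lcl, 1), ucl
```

For a high-yield process with p0 = 0.001, the lower limit is only a couple of items while the upper limit is several thousand, illustrating how heavily skewed geometric run lengths are.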
Abstract:
To improve the performance of control charts, the conditional decision procedure (CDP) incorporates a number of previous observations into the chart's decision rule. Charts with this runs rule are expected to be more sensitive to shifts in the process parameter. To signal an out-of-control condition more quickly, some charts use a headstart feature; these are referred to as charts with fast initial response (FIR). The CDP chart can also be used with FIR. In this article we analyze and compare the performance of geometric CDP charts with and without FIR. To do so, we model the CDP charts with a Markov chain and find closed-form ARL expressions. We find the conditional decision procedure useful when the fraction p of nonconforming units deteriorates. However, the CDP chart is not very effective for signaling decreases in p.
Abstract:
A fast initial response (FIR) feature for the run sum R chart is proposed and its ARL performance estimated by a Markov chain representation. It is shown that this chart is more sensitive than several R charts with runs rules proposed by different authors. We conclude that the run sum R chart is simple to use and a very effective tool for monitoring increases and decreases in process dispersion
Abstract:
The standard S chart signals an out-of-control condition when one point exceeds a control limit. It can be augmented with runs rules to improve its performance in detecting assignable causes. A commonly used rule signals when k consecutive points exceed a control limit. This rule can be used alone or to supplement the standard chart. In this article we derive ARL expressions for charts with the k-of-k runs rule. We show how to design S charts with this runs rule, compare their ARL performance, and make a control chart recommendation for when it is important to monitor for both increases and decreases in process dispersion.
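For a k-of-k rule, the ARL follows from the standard waiting time for k consecutive successes. This is a general probability identity offered for illustration, not the article's S-chart constants:

```python
def arl_k_of_k(p, k):
    """Average run length of a rule signaling on k consecutive points
    beyond a limit, each point beyond it independently with probability p.

    Expected number of points until k consecutive exceedances:
        ARL = (1 - p**k) / ((1 - p) * p**k)
    For k = 1 this reduces to the familiar 1/p.
    """
    return (1 - p**k) / ((1 - p) * p**k)
```

For instance, with p = 0.5 and k = 2 the formula gives an ARL of 6, the classical expected number of coin flips until two consecutive heads.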
Abstract:
We analyze the performance of traditional R charts and introduce modifications for monitoring both increases and decreases in the process dispersion. We show that the use of equal tail probability limits and of some runs rules does not represent a significant improvement for monitoring increases and decreases in the variability of the process. We propose to use the R chart with a central line equal to the median of the distribution of the range. We also suggest supplementing this chart with a runs rule that signals when nine consecutive points lie on the same side of the median line. We find that such modifications lead to R charts with improved performance for monitoring the process dispersion.
Abstract:
To increase the performance for detecting small shifts, control charts are used with runs rules. The Western Electric Handbook (1956) suggests runs rules to be used with the Shewhart X chart. In this article, we review the performance of two sets of runs rules. The rules of one set signal if k successive points fall beyond a limit. The rules of the other set signal if k out of k+1 consecutive points fall beyond a different limit. We suggest runs rules from these sets. They are intended to be used with a modified Shewhart X chart, a chart with 3.3σ limits. We find that for small shifts all suggested charts have better performance than the Shewhart X chart. For large shifts they have comparable performance.
Abstract:
Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. In such cases the use of traditional control charts may not be effective. In this article, we estimate the performance of control charts derived under the assumption of normality (normal charts) but used with a much broader range of distributions. We consider monitoring the dispersion of processes that follow the exponential power family of distributions (a family of distributions which includes the normal as a special case). We have found that if a normal CUSUM chart is used with several of these processes, the rate of false alarms might be quite different from the rate that results when a normal process is monitored. A normal chart might also not be sensitive enough to detect changes in such processes. CUSUM charts suitable for monitoring this family of processes are derived to show how much sensitivity is recovered when the correct chart is used.
Abstract:
In this paper we analyze several control charts suitable for monitoring process dispersion when subgrouping is not possible or not desirable. We compare the performances of a moving range chart, a cumulative sum (CUSUM) chart based on moving ranges, a CUSUM chart based on an approximate normalizing transformation, a self-starting CUSUM chart, a change-point CUSUM chart, and an exponentially weighted moving average chart based on the subgroup variance. The average run length performances of these charts are also estimated and compared.
Abstract:
We consider several control charts for monitoring normal processes for changes in dispersion. We present comparisons of the average run length performances of these charts. We demonstrate that a CUSUM chart based on the likelihood ratio test for the change point problem for normal variances has an ARL performance that is superior to other procedures. Graphs are given to aid in designing this control chart
Abstract:
It is a common practice to monitor the fraction p of non-conforming units to detect whether the quality of a process improves or deteriorates. Users commonly assume that the number of non-conforming units in a subgroup is approximately normal, since large subgroup sizes are considered. If p is small this approximation might fail even for large subgroup sizes. If in addition, both upper and lower limits are used, the performance of the chart in terms of fast detection may be poor. This means that the chart might not quickly detect the presence of special causes. In this paper the performance of several charts for monitoring increases and decreases in p is analyzed based on their Run Length (RL) distribution. It is shown that replacing the lower control limit by a simple runs rule can result in an increase in the overall chart performance. The concept of RL unbiased performance is introduced. It is found that many commonly used p charts and other charts proposed in the literature have RL biased performance. For this reason new control limits that yield an exact (or nearly) RL unbiased chart are proposed
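The normal-approximation failure for small p is easy to exhibit. A minimal sketch (the function name and numbers are illustrative, not from the paper):

```python
import math

def p_chart_limits(p0, n):
    """Conventional 3-sigma limits for the fraction nonconforming.

    For small p0 the lower limit goes negative, so the chart has no
    operative LCL and cannot signal improvements in p.
    """
    sigma = math.sqrt(p0 * (1 - p0) / n)
    return p0 - 3 * sigma, p0 + 3 * sigma

lcl, ucl = p_chart_limits(0.001, 500)   # lcl is negative for these values
```

Replacing the useless lower limit with a simple runs rule, as the paper proposes, restores the ability to detect decreases in p.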
Abstract:
When monitoring a process it is important to quickly detect increases and decreases in its variability. In addition to preventing any increase in the variability of the process and any deterioration in the quality of the output, it is also important to search for special causes that may result in a smaller process dispersion. Considering this, users should always try to monitor for both increases and decreases in the variability. The process variability is commonly monitored by means of a Shewhart range chart. For small subgroup sizes this control chart has a lower control limit equal to zero. To help monitor for both increases and decreases in variability, Shewhart charts with probability limits or runs rules can be used. CUSUM and EWMA charts based on the range or a function of the subgroup variance can also be used. In this paper a CUSUM chart based on the subgroup range is proposed. Its performance is compared with that of other charts proposed in the literature. It is found that for small subgroup sizes, it has an excellent performance and it thus represents a powerful alternative to currently utilized strategies.
Abstract:
Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. The use of control charts in such cases may not be effective if the rate of false alarms is high or if the control chart is not sensitive in detecting changes in the process. In this paper, the ARL performance of a CUSUM chart for dispersion is analyzed under a variety of non-normal process models. We will consider processes that follow the exponential power family of distributions, a symmetric class of distributions which includes the normal distribution as a special case
Resumen:
En este trabajo estudiamos la elección de financiamiento a largo plazo entre créditos sindicados y la emisión de bonos de las empresas mexicanas. Restringimos nuestra atención a aquellas empresas que emiten títulos financieros en los mercados de capitales. Nuestros hallazgos sugieren que el tamaño de la empresa, la calidad de sus garantías, y su calidad crediticia, juegan un papel importante en el resultado de la elección. También encontramos que los efectos marginales de estas tres características no son consistentes a lo largo de su rango de valores. En particular, encontramos evidencia de que la calidad crediticia de las empresas tiene un efecto no monótono en la elección.
Abstract:
We study Mexican firms' long-term financing choice between syndicated loans and public debt issuance. We restrict our attention to those firms that issue securities in capital markets. Our findings suggest that the firm's size, the quality of its collateral, and its credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not consistent along their range of values. In particular, we find evidence that firms' credit quality has a non-monotonic effect on the choice.
Resumen:
Propósito. En este artículo se examinan los factores que afectan la probabilidad de que una empresa salga a Bolsa, utilizando una base de datos integral de empresas privadas y públicas en México, de todos los sectores, durante 2006-2014
Abstract:
Purpose. The purpose of this paper is to examine the factors that affect the likelihood of a firm going public, using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014.
Resumen:
Este artículo analiza la demanda de vivienda en México a través del gasto en servicios de vivienda y el costo de uso del capital residencial de cada hogar representativo por percentil de ingreso. La hipótesis de ingreso permanente se considera como función de las características sociodemográficas y el grado de educación del jefe del hogar. Asimismo, se obtienen las elasticidades de ingreso, riqueza, edad del jefe de familia, tamaño del hogar y número de empleados, así como la semielasticidad del costo de uso de capital residencial. El gasto en vivienda es inelástico, aunque es más sensible al ingreso corriente que al permanente. También demostramos que existe una estructura regresiva en este mercado y se realiza un análisis de sensibilidad con el fin de medir el impacto en el gasto de vivienda ante ciertas variaciones del costo de uso residencial de largo plazo
Abstract:
This article analyzes the demand for housing in Mexico through spending on housing services and the user cost of owner-occupied housing for each representative household by income percentile. The permanent income hypothesis is included in the model as a function of the socio-demographic characteristics and the educational attainment of the household head. We obtain the elasticities of income, wealth, age of the household head, household size, and number of employed members, as well as the semi-elasticity of the user cost of residential capital. Expenditure on housing is inelastic, although it is more sensitive to current income than to permanent income. We also show that this market structure is regressive, and a sensitivity analysis is performed in order to measure the impact on housing expenditure of certain variations in the long-run user cost of owner-occupied housing.
Abstract:
This paper proposes a theoretical model that offers a rationale for the formation of lender syndicates. We argue that the ex-ante process of information acquisition may affect the strategies used to create syndicates. For large loans, the restrictions on lending impose a natural reason for syndication. We study medium-sized loans instead, where there is some room for competition since each financial institution has the ability to take the loan in full by itself. In this case, syndication would be the optimal choice only if their screening costs are similar. Otherwise, lenders would be compelled to compete, since a lower screening cost can create a comparative advantage in interest rates
Abstract:
We consider a model of bargaining by concessions where agents can terminate negotiations by accepting the settlement of an arbitrator. The impact of pragmatic arbitrators—that enforce concessions that precede their appointment—is compared with that of arbitrators that act on principle—ignoring prior concessions. We show that while the impact of arbitration always depends on how costly that intervention is relative to direct negotiation, the range of scenarios for which it has an impact, and the precise effect of such impact, does change depending on the behavior—pragmatic or on principle—of the arbitrator. Moreover, the requirement of mutual consent to appoint the arbitrator matters only when he is pragmatic. Efficiency and equilibrium are not aligned, since agents sometimes reach negotiated agreements when an arbitrated settlement is more efficient and vice versa. Which system of arbitration has the best performance depends on the arbitration and negotiation costs, and each can be optimal for plausible environments.
Abstract:
This paper analyzes a War of Attrition where players enjoy private information about their outside opportunities. The main message is that uncertainty about the possibility that the opponent opts out increases the equilibrium probability of concession
Abstract:
In many modern production systems the human operator is faced with problems when it comes to interacting with the production system using the control system. One reason for this is that control systems are mainly designed with respect to production, without taking into account how an operator is supposed to interact with them. This article presents a control system where the goal is to increase flexibility and reusability of production equipment and program modules. Apart from this, the control system is also suitable for human operator interaction. To make it easier for an operator to interact with the control system, the operator activities vis-à-vis the control system have been divided into so-called control levels. One of the six predefined control levels is described in more detail to illustrate how production can be manipulated with the help of a control system at this level. The communication with the control system is accomplished with the help of a graphical tool that interacts with a relational database. The tool has been implemented in Java to make it platform-independent. Some examples of the tool and how it can be used are provided.
Abstract:
Modern control systems often exhibit problems in switches between automatic and manual system control. One reason for this is the structure of the control system, which is usually not designed for this type of action. This article presents a method for splitting the control system into different control levels. By switching between these control levels, the operator can increase or decrease the number of manual control activities he wishes to perform while still enjoying the support of the control system. The structural advantages of the control levels are demonstrated for two types of operator activity: (1) control flow tracing and (2) control flow alteration. These two types of operator activity can be used in such situations as when locating an error, introducing a new machine, changing the ordering of products or optimizing the production flow.
Abstract:
Partly due to the introduction of computers and intelligent machines in modern manufacturing, the role of the operator has changed over time. More and more work tasks have been automated, reducing the need for human interaction. One reason for this is the decrease in the relative cost of computers and machinery compared to the cost of having operators. Even though this statement may be true in industrialized countries, it is not evident that it is valid in developing countries. However, a goal that is valid for both industrialized and developing countries is to obtain balanced automation systems. A balanced automation system is characterized by "the correct mix of automated activities and the human activities". The way of reaching this goal, however, might differ depending on the place of the manufacturing installation. Aspects such as time, money, safety, flexibility and quality govern the steps to take in order to reach a balanced automation system. In this paper, six steps of automation are defined that identify areas of work activities in a modern manufacturing system which might be performed by either an automatic system or a human. By combining these steps of automation into what are called levels of automation, a mix of automatic and manual activities is obtained. Through the analysis of these levels of automation, with respect to machine costs and product quality, it is demonstrated what the lowest possible automation level should be when striving for balanced automation systems in developing countries. The bottom line of the discussion is that product supervision should not be left to human operators solely, but rather be performed automatically by the system.
Abstract:
In the matching with contracts literature, three well-known conditions (from stronger to weaker)–substitutes, unilateral substitutes (US), and bilateral substitutes (BS)–have proven to be critical. This paper aims to deepen our understanding of them by separately axiomatizing the gap between BS and the other two. We first introduce a new “doctor separability” condition (DS) and show that BS, DS, and irrelevance of rejected contracts (IRC) are equivalent to US and IRC. Due to Hatfield and Kojima (2010) and Aygün and Sönmez (2012), we know that US, “Pareto separability” (PS), and IRC are the same as substitutes and IRC. This, along with our result, implies that BS, DS, PS, and IRC are equivalent to substitutes and IRC. All of these results are given without IRC whenever hospitals have preferences
Abstract:
The paper analyzes the role of labor market segmentation and relative wage rigidity in the transmission process of disinflation policies in an open economy facing imperfect capital markets. Wages are flexible in the nontradables sector, and based on efficiency factors in the tradables sector. With perfect labor mobility, a permanent reduction in the devaluation rate leads in the long run to a real appreciation, a lower ratio of output of tradables to nontradables, an increase in real wages measured in terms of tradables, and a fall in the product wage in the nontradables sector. Under imperfect labor mobility, unemployment temporarily rises.
Abstract:
In this paper we study the adjacency matrix of some infinite graphs, which we call the shift operator on the Lp space of the graph. In particular, we establish norm estimates, we find the norm for some cases, we decide the triviality of the kernel of some infinite trees, and we find the eigenvalues of certain infinite graphs obtained by attaching an infinite tail to some finite graphs
Resumen:
México es un país que se encuentra inmerso en un proceso acelerado de envejecimiento poblacional. Las personas adultas mayores están expuestas a múltiples riesgos, entre ellos la violencia familiar, no solo la generada por sus parejas, sino también por parte de otros miembros de la familia. El objetivo propuesto en este artículo fue analizar las características y principales factores asociados con la violencia familiar de las mujeres adultas mayores (sin incluir la violencia de pareja), según grupos de edad (60-69 o 70 años o más) y subtipos de violencia (emocional, económica, física y sexual). Se hizo un análisis secundario de la Encuesta Nacional sobre la Dinámica de las Relaciones en los Hogares (ENDIREH) 2016. La muestra final estuvo conformada por 18,416 mujeres de 60 años o más, lo cual representó un total de 7,043,622 mujeres de dicho grupo de edad. Se hizo un análisis descriptivo y se estimaron modelos de regresión binaria para determinar los principales factores asociados con la violencia y sus subtipos. La violencia emocional fue la más frecuente, seguida de la económica. Las mujeres de edad más avanzada tuvieron una mayor prevalencia de violencia familiar
Abstract:
Mexico is a country immersed in a process of accelerated population aging. Older adults are exposed to multiple risks, including family violence, not only from their partners but also from other family members. The objective of this article is to analyze the characteristics and main factors associated with family violence against older adult women in 2016, excluding intimate partner violence, by age group (60-69 and 70 years old or more) and subtype of violence (emotional, economic, physical and sexual). A secondary analysis of the National Survey on the Dynamics of Household Relationships (ENDIREH in Spanish) 2016 was carried out. The final sample consisted of 18,416 women aged 60 or more, which represented a total of 7,043,622 women in that age group. A descriptive analysis was carried out and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence. Women of more advanced age had a higher prevalence of family violence.
Abstract:
The fact that shocks in early life can have long-term consequences is well established in the literature. This paper examines the effects of extreme precipitations on cognitive and health outcomes and shows that impacts can be detected as early as 2 years of age. Our analyses indicate that negative conditions (i.e., extreme precipitations) experienced during the early stages of life affect children’s physical, cognitive and behavioral development measured between 2 and 6 years of age. Affected children exhibit lower cognitive development (measured through language, working and long-term memory and visual-spatial thinking) in the magnitude of 0.15 to 0.19 SDs. Lower height and weight impacts are also identified. Changes in food consumption and diet composition appear to be key drivers behind these impacts. Partial evidence of mitigation from the delivery of government programs is found, suggesting that if not addressed promptly and with targeted policies, cognitive functioning delays may not be easily recovered
Abstract:
A number of studies document gender differentials in agricultural productivity. However, they are limited to region and crop-specific estimates of the mean gender gap. This article improves on previous work in three ways. First, data representative at the national level and for a wide variety of crops is exploited. Second, decomposition methods—traditionally used in the analysis of wage gender gaps—are employed. Third, heterogeneous effects by women's marital status and along the productivity distribution are analyzed. Drawing on data from the 2011–2012 Ethiopian Rural Socioeconomic Survey, we find an overall 23.4 percentage point productivity differential in favor of men, of which 13.5 percentage points (57%) remain unexplained after accounting for gender differences in land manager characteristics, land attributes, and access to resources. The magnitude of the unexplained fraction is large relative to prior estimates in the literature. A more detailed analysis suggests that differences in the returns to extension services, land certification, land extension, and product diversification may contribute to the unexplained fraction. Moreover, the productivity gap is mostly driven by non-married female managers, particularly divorced women; married female managers do not display a disadvantage. Finally, overall and unexplained gender differentials are more pronounced at mid-levels of productivity.
Abstract:
Public transportation is a basic everyday activity. Costs imposed by violence might have far-reaching consequences. We conduct a survey and exploit the discontinuity in the hours of operation of a program that reserves subway cars exclusively for women in Mexico City. The program seems to be successful at reducing sexual harassment toward women by 2.9 percentage points. However, it produces unintended consequences by increasing nonsexual aggression incidents (e.g., insults, shoving) among men by 15.3 percentage points. Both sexual and nonsexual violence seem to be costly; however, our results do not imply that costs of the program outweigh its benefits
Abstract:
We measure the effect of a large nationwide tax reform on sugar-added drinks and caloric-dense food introduced in Mexico in 2014. Using scanner data containing weekly purchases of 47,973 barcodes by 8,130 households and an RD design, we find that calories purchased from taxed drinks and taxed food decreased respectively by 2.7% and 3%. However, this was compensated by increases from untaxed categories, such that total calories purchased did not change. We find increases in cholesterol (12.6%), sodium (5.8%), saturated fat (3.1%), carbohydrates (2%), and proteins (3.8%).
Abstract:
The frequency and intensity of extreme temperature events are likely to increase with climate change. Using a detailed dataset containing information on the universe of loans extended by commercial banks to private firms in Mexico, we examine the relationship between extreme temperatures and credit performance. We find that unusually hot days increase delinquency rates, primarily affecting the agricultural sector, but also non-agricultural industries that rely heavily on local demand. Our results are consistent with general equilibrium effects originated in agriculture that expand to other sectors in agricultural regions. Additionally, following a temperature shock, affected firms face increased challenges in accessing credit, pay higher interest rates, and provide more collateral, indicating a tightening of credit during financial distress
Abstract:
In this work we construct a numerical scheme based on finite differences to approximate the free boundary of an American call option. Points of the free boundary are calculated by approximating the solution of the Black-Scholes partial differential equation with finite differences on domains that are parallelograms for each time step. Numerical results are reported
Abstract:
We present higher-order quadrature rules with end corrections for general Newton–Cotes quadrature rules. The construction is based on the Euler–Maclaurin formula for the trapezoidal rule. We present examples with six well-known Newton–Cotes quadrature rules. We analyze modified end-corrected quadrature rules, which consist of a simple modification of the Newton–Cotes quadratures with end corrections. Numerical tests and stability estimates show the superiority of the corrected rules based on the trapezoidal and the midpoint rules
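The first Euler–Maclaurin end correction that underlies such constructions can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's higher-order rules; the function names are ours:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def trapezoid_end_corrected(f, df, a, b, n):
    """Trapezoidal rule with the first Euler-Maclaurin end correction:
    subtracting h^2/12 * (f'(b) - f'(a)) raises the order from 2 to 4."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h**2 / 12.0 * (df(b) - df(a))

# Integrate exp on [0, 1]; the corrected rule is far more accurate.
exact = math.e - 1.0
err_plain = abs(trapezoid(math.exp, 0.0, 1.0, 64) - exact)
err_corr = abs(trapezoid_end_corrected(math.exp, math.exp, 0.0, 1.0, 64) - exact)
```

With 64 subintervals the corrected rule reduces the error by several orders of magnitude, reflecting the jump from second- to fourth-order accuracy.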
Abstract:
The constructions of the quadratures are based on the method of central corrections described in [4]. The quadratures consist of the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. Integrals of the above type appear in scattering calculations; we test the performance of the quadrature rules with an example of this kind
Abstract:
In this report we construct corrected trapezoidal quadrature rules up to order 40 to evaluate 2-dimensional integrals of the form ∫_D v(x, y) log((x² + y²)^(1/2)) dx dy, where the domain D is a square containing the point of singularity (0, 0) and v is a C∞ function of compact support contained in D. The procedure we use is a modification of the method constructed in [1]. These quadratures are particularly useful in acoustic scattering calculations with large wave numbers. We describe how to extend the procedure to calculate other 2-dimensional integrals with different singularities
Abstract:
In this paper we construct an algorithm to approximate the solution of the initial value problem y'(t) = f(t, y) with y(t₀) = y₀. The method is implicit and combines the classical Simpson's rule with Simpson's 3/8 rule to yield an unconditionally A-stable method of order 4
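As a hedged illustration, the two quadrature building blocks the method combines are the classical Simpson (1/3) rule and Simpson's 3/8 rule, both exact for cubic polynomials; the paper's actual implicit combination for ODEs is not reproduced here:

```python
def simpson(f, a, b):
    """Classical Simpson's (1/3) rule on [a, b]: exact for cubics."""
    h = (b - a) / 2.0
    return h / 3.0 * (f(a) + 4.0 * f(a + h) + f(b))

def simpson_38(f, a, b):
    """Simpson's 3/8 rule on [a, b]: also exact for cubics."""
    h = (b - a) / 3.0
    return 3.0 * h / 8.0 * (f(a) + 3.0 * f(a + h) + 3.0 * f(a + 2 * h) + f(b))

# Both rules integrate x^3 on [0, 1] exactly (up to rounding): the value is 1/4.
v13 = simpson(lambda x: x**3, 0.0, 1.0)
v38 = simpson_38(lambda x: x**3, 0.0, 1.0)
```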
Abstract:
In this paper, we propose an anisotropic adaptive refinement algorithm based on the finite element methods for the numerical solution of partial differential equations. In 2-D, for a given triangular grid and finite element approximating space V, we obtain information on location and direction of refinement by estimating the reduction of the error if a single degree of freedom is added to V. For our model problem the algorithm fits highly stretched triangles along an interior layer, reducing the number of degrees of freedom that a standard h-type isotropic refinement algorithm would use
Abstract:
In this paper we construct an algorithm that generates a sequence of continuous functions approximating a given real-valued function f of two variables that has jump discontinuities along a closed curve. The algorithm generates a sequence of triangulations of the domain of f. The triangulations include triangles with high aspect ratio along the curve where f has jumps. The sequence of functions generated by the algorithm is obtained by interpolating f on the triangulations using continuous piecewise polynomial functions. The approximation error of this algorithm is O(1/N²) when the triangulation contains N triangles and the error is measured in the L¹ norm. Algorithms that adaptively generate triangulations by local regular refinement produce approximation errors of size O(1/N), even if higher-order polynomial interpolation is used
Abstract:
We construct correction coefficients for high-order trapezoidal quadrature rules to evaluate three-dimensional singular integrals of the form J(v) = ∫_D v(x, y, z)/√(x² + y² + z²) dx dy dz, where the domain D is a cube containing the point of singularity (0, 0, 0) and v is a C∞ function defined on ℝ³. The procedure employed here is a generalization to 3-D of the method of central corrections for logarithmic singularities [1] in one dimension, and [2] in two dimensions. As in one and two dimensions, the correction coefficients for high-order trapezoidal rules for J(v) are independent of the number of sampling points used to discretize the cube D. When v is compactly supported in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules provide an efficient, stable, and accurate way of approximating J(v). We demonstrate the performance of these quadratures of orders up to 17 for highly oscillatory functions v. These types of integrals appear in scattering calculations in 3-D
Abstract:
We present a high-order, fast, iterative solver for the direct scattering calculation for the Helmholtz equation in two dimensions. Our algorithm solves the scattering problem formulated as the Lippmann-Schwinger integral equation for compactly supported, smoothly vanishing scatterers. There are two main components to this algorithm. First, the integral equation is discretized with quadratures based on high-order corrected trapezoidal rules for the logarithmic singularity present in the kernel of the integral equation. Second, on the uniform mesh required for the trapezoidal rule we rewrite the discretized integral operator as a composition of two linear operators: a discrete convolution followed by a diagonal multiplication; therefore, the application of these operators to an arbitrary vector, required by an iterative method for the solution of the discretized linear system, costs O(N² log N) for an N-by-N mesh, with the help of the FFT. We demonstrate the performance of the algorithm for scatterers of complex structure and at large wave numbers. For the numerical implementation, GMRES iterations are used, and corrected trapezoidal rules up to order 20 are tested
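The convolution-via-FFT component behind the per-iteration cost can be illustrated in one dimension with a minimal sketch (a pure-Python radix-2 FFT; the names are ours, and a production code would use an optimized FFT library):

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    """Inverse FFT via conjugation."""
    n = len(a)
    conj = fft([x.conjugate() for x in a])
    return [x.conjugate() / n for x in conj]

def circular_convolve_fft(x, y):
    """Circular convolution in O(N log N): pointwise product in Fourier space."""
    X, Y = fft(x), fft(y)
    return ifft([u * v for u, v in zip(X, Y)])

def circular_convolve_direct(x, y):
    """Direct O(N^2) circular convolution, kept for verification."""
    n = len(x)
    return [sum(x[j] * y[(k - j) % n] for j in range(n)) for k in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
y = [0.5, -1.0, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0]
fast = circular_convolve_fft(x, y)
slow = circular_convolve_direct(x, y)
```

The two results agree to rounding error, while the FFT route scales as N log N instead of N² per application of the operator.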
Abstract:
In this report, we construct correction coefficients to obtain high-order trapezoidal quadrature rules to evaluate two-dimensional integrals with a logarithmic singularity of the form J(v) = ∫_D v(x, y) ln(√(x² + y²)) dx dy, where the domain D is a square containing the point of singularity (0, 0) and v is a C∞ function defined on the whole plane ℝ². The procedure we use is a generalization to 2-D of the method of central corrections for logarithmic singularities described in [1]. As in 1-D, the correction coefficients are independent of the number of sampling points used to discretize the square D. When v has compact support contained in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules give an efficient, stable, and accurate way of approximating J(v). We provide the correction coefficients to obtain corrected trapezoidal quadrature rules up to order 20
Abstract:
This paper addresses the problem of the optimal design of batch plants with imprecise demands in product amounts. The design of such plants necessarily involves how equipment may be utilized, which means that plant scheduling and production must form an integral part of the design problem. This work relies on a previous study, which proposed an alternative treatment of the imprecise demands by introducing fuzzy concepts, embedded in a multi-objective Genetic Algorithm (GA) that simultaneously takes into account maximization of the net present value (NPV) and two other performance criteria, i.e., the production delay/advance and a flexibility criterion. The results showed that an additional interpretation step might be necessary to help managers choose among the non-dominated solutions provided by the GA. The analytic hierarchy process (AHP) is a strategy commonly used in Operations Research for the solution of this kind of multicriteria decision problem, allowing the apprehension of managers' subjective judgments. The major aim of this study is thus to propose software integrating AHP theory for the analysis of the GA Pareto-optimal solutions, as an alternative decision-support tool for solving the batch plant design problem
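The AHP weight-derivation step can be sketched minimally as follows, using the standard row-geometric-mean approximation rather than the principal eigenvector; the weights and matrix are made-up illustrative values, and the paper's software of course does much more:

```python
import math

def ahp_priorities(M):
    """Approximate AHP priority vector by normalized row geometric means
    of the pairwise-comparison matrix M (a standard AHP approximation)."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# A perfectly consistent comparison matrix built from weights (0.5, 0.3, 0.2):
# entry M[i][j] = w[i] / w[j], so the true priorities are recovered exactly.
w = [0.5, 0.3, 0.2]
M = [[wi / wj for wj in w] for wi in w]
priorities = ahp_priorities(M)
```

For an inconsistent matrix elicited from a manager's pairwise judgments, the same computation yields a compromise priority vector over the Pareto-optimal solutions.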
Abstract:
Nanoliposomes, bilayer vesicles at the nanoscale, are becoming popular because of their safety, patient compliance, high entrapment efficiency, and prompt action. Several notable biological activities of natural essential oils (EOs), including fungal inhibition, are of supreme interest. The as-developed multi-compositional nanoliposomes, loaded with various concentrations of clove essential oil (CEO) and tea tree oil (TTO), were thoroughly characterized to gain insight into their nano-size distribution. The present work also aimed to explore the sustainable synthesis conditions to estimate the efficacy of EOs in bulk and of EO-loaded nanoliposomes with multi-functional entities. Following a detailed nano-size characterization of the in-house fabricated EO-loaded nanoliposomes, the antifungal efficacy was tested by executing the mycelial growth inhibition (MGI) test using Trichophyton rubrum fungi as a test model. The dynamic light scattering (DLS) profile of the as-fabricated EO-loaded nanoliposomes revealed mean size, polydispersity index (PdI), and zeta potential values of 37.12 ± 1.23 nm, 0.377 ± 0.007, and −36.94 ± 0.36 mV, respectively. The sphere-shaped morphology of the CEO- and TTO-loaded nanoliposomes was confirmed by scanning electron microscopy (SEM). The existence of characteristic functional bands in all tested counterparts was demonstrated by attenuated total reflection-Fourier transform infrared (ATR-FTIR) spectroscopy. Compared to the TTO-loaded nanoliposomes, the CEO-loaded nanoliposomes exhibited a maximum entrapment efficiency of 91.57 ± 2.5%. The CEO-loaded nanoliposome fraction, prepared at a 1.5 µL/mL concentration, showed the highest MGI of 98.4 ± 0.87% against T. rubrum strains compared to the rest of the formulations
Abstract:
Modern lifestyle demands high-end commodities, for instance, cosmetics, detergents, shampoos, household cleaning and sanitary items, medicines, and so forth. In recent years, consumption of these products has increased considerably, notably of antibiotics and other pharmaceutical and personal care products (PPCPs). Several antibiotics and PPCPs represent a wide range of emerging contaminants with direct ingress into aquatic systems, given their high persistence in seawater, effluent treatment plants, and even drinking water. Under these considerations, developing new and affordable technologies for the treatment and sustainable mitigation of pollutants is essential for a safer and cleaner environment. One possible mitigation solution is the effective deployment of nanotechnological tools as promising matrices that can contribute by addressing issues and improving current strategies to detect, prevent, and mitigate hazardous pollutants in water. Owing to nanoparticles' distinctive physical and chemical properties, such as high surface area, small size, and shape, metallic nanoparticles (MNPs) have been investigated for water remediation. MNPs have gained increasing interest among research groups due to their superior efficiency, stability, and high catalytic activity compared with conventional systems. This review summarizes the occurrence of antibiotics and PPCPs and the application of MNPs as pollutant mitigators in the aquatic environment. The work also focuses on transportation, fate, toxicity, and current regulations for environmental safety
Abstract:
The development of greener nano-constructs with noteworthy biological activity is of supreme interest as a robust choice to minimize the extensive use of synthetic drugs. Essential oils (EOs) and their constituents offer medicinal potential because of their extensive biological activity, including the inhibition of fungi species. However, their application as natural antifungal agents is limited by their volatility, low stability, and restricted administration routes. Nanotechnology is receiving particular attention as a way to overcome drawbacks of EOs such as volatility, degradation, and high sensitivity to environmental/external factors. For these reasons, nanoencapsulation of bioactive compounds, for instance EOs, provides protection and controlled-release attributes. Nanoliposomes are bilayer vesicles at the nanoscale, composed of phospholipids, that can encapsulate hydrophilic and hydrophobic compounds. In this light, we report the in-house fabrication and nano-size characterization of small unilamellar vesicle (SUV) nanoliposomes loaded with bioactive oregano essential oil (Origanum vulgare L.) (OEO) molecules. The study focused on three main points: (1) multi-compositional fabrication of nanoliposomes using a thin-film hydration-sonication method; (2) nano-size characterization using various analytical and imaging techniques; and (3) antifungal efficacy of the as-developed OEO nanoliposomes against Trichophyton rubrum (T. rubrum), assessed by the mycelial growth inhibition (MGI) test
Abstract:
The necessity to develop more efficient, biocompatible, and safer treatments with better patient compliance in biomedical settings is receiving special attention, with nanotechnology as a potential platform for designing new drug delivery systems (DDS). Despite the broad range of nanocarrier systems in drug delivery, lack of biocompatibility, poor penetration, low entrapment efficiency, and toxicity are significant challenges that remain to be addressed. Such practices are even more demanding when bioactive agents are intended to be loaded onto a nanocarrier system, especially for topical treatment purposes. For these reasons, the search for more efficient nano-vesicular systems, such as nanoliposomes, with a high biocompatibility index and controlled release has increased considerably in the past few decades. Owing to the stratum corneum barrier of the skin, conventional drug delivery methods are inefficient, and the effect of the administered therapeutic cues is limited. Current advances at the nanoscale have transformed the drug delivery sector. Nanoliposomes, as robust nanocarriers, are becoming popular for biomedical applications because of their safety, patient compliance, and quick action. Herein, we review state-of-the-art nanoliposomes as a smart and sophisticated drug delivery approach. Following a brief introduction, the drug delivery mechanism of nanoliposomes is discussed with suitable examples for the treatment of numerous diseases, with a brief emphasis on fungal infections. The latter half of the work focuses on the applied perspective and clinical translation of nanoliposomes. Furthermore, a detailed overview of clinical applications and future perspectives is included in this review
Abstract:
A workflow is a set of steps or tasks that model the execution of a process, e.g., protein annotation, invoice generation and composition of astronomical images. Workflow applications commonly require large computational resources. Hence, distributed computing approaches (such as Grid and Cloud computing) emerge as a feasible solution to execute them. Two important factors for executing workflows in distributed computing platforms are (1) workflow scheduling and (2) resource allocation. As a consequence, there is a myriad of workflow scheduling algorithms that map workflow tasks to distributed resources subject to task dependencies, time and budget constraints. In this paper, we present a taxonomy of workflow scheduling algorithms, which categorizes the algorithms into (1) best-effort algorithms (including heuristics, metaheuristics, and approximation algorithms) and (2) quality-of-service algorithms (including budget-constrained, deadline-constrained and algorithms simultaneously constrained by deadline and budget). In addition, a workflow engine simulator was developed to quantitatively compare the performance of scheduling algorithms
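As an illustrative sketch of the best-effort family (not any specific algorithm from the taxonomy), a greedy list scheduler that maps ready tasks to the worker that can finish them earliest might look like this; all names and the example workflow are ours:

```python
def schedule(tasks, deps, runtime, n_workers):
    """Greedy list scheduling: repeatedly assign a ready task to the
    worker that can finish it earliest. Returns {task: (worker, start, end)}.
    deps maps a task to the set of tasks it depends on."""
    worker_free = [0.0] * n_workers
    finish = {}
    placed = {}
    remaining = set(tasks)
    while remaining:
        # A task is ready once all of its predecessors have finished.
        ready = [t for t in remaining if deps.get(t, set()) <= finish.keys()]
        # Prefer the longest ready task first (a common list-scheduling heuristic).
        t = max(ready, key=lambda r: runtime[r])
        earliest = max([finish[p] for p in deps.get(t, set())], default=0.0)
        w = min(range(n_workers), key=lambda i: max(worker_free[i], earliest))
        start = max(worker_free[w], earliest)
        end = start + runtime[t]
        worker_free[w] = end
        finish[t] = end
        placed[t] = (w, start, end)
        remaining.discard(t)
    return placed

# A diamond-shaped workflow: A -> (B, C) -> D, on two workers.
deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
runtime = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 1.0}
plan = schedule(["A", "B", "C", "D"], deps, runtime, n_workers=2)
```

B and C run in parallel after A, and D starts only when both finish, giving a makespan of 5 time units on two workers.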
Abstract:
Nowadays, sustainability-related risks and opportunities arise from an entity's dependence on the resources it needs to operate, and also from the impact it has on those resources, which can lead to several accounting impacts on the determination of financial information. Thus, both the information contained in an entity's financial statements and the sustainability-related disclosures accompanying the financial information are essential data for a user assessing the entity's value. These and other matters are addressed in a simple, approachable way in the book Impactos contables de acuerdo con las NIF para el cierre de estados financieros 2022, which is useful for preparing and presenting financial information for fiscal year 2022. The book is therefore indispensable for independent accountants and those in companies of any size, and it is also aimed at, and of great help to, accounting-firm staff, teachers, and students at both the undergraduate and graduate levels and, of course, preparers of financial information, business owners, and the general public
Abstract:
We study the behavior of a decision maker who prefers alternative x to alternative y in menu A if the utility of x exceeds that of y by at least a threshold associated with y and A. Hence the decision maker's preferences are given by menu-dependent interval orders. In every menu, her choice set comprises the undominated alternatives according to this preference. We axiomatize this broad model when thresholds are monotone, i.e., at least as large in larger menus. We also obtain novel characterizations in two special cases that have appeared in the literature: the maximization of a fixed interval order, where the thresholds depend on the alternative and not on the menu, and the maximization of monotone semiorders, where the thresholds are independent of the alternatives but monotonic in menus
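A minimal sketch of the choice rule (utilities and the menu-monotone threshold below are made-up illustrative values, not from the paper):

```python
def choice_set(menu, u, threshold):
    """Undominated alternatives: x dominates y in menu A iff
    u(x) > u(y) + threshold(y, A)."""
    def dominated(y):
        return any(u[x] > u[y] + threshold(y, menu) for x in menu)
    return {y for y in menu if not dominated(y)}

# Illustrative utilities and a menu-monotone threshold (larger menus get a
# larger threshold), so preferences are menu-dependent interval orders.
u = {"a": 10.0, "b": 9.5, "c": 7.0}

def threshold(y, menu):
    return float(len(menu) - 1)

pair = choice_set({"a", "c"}, u, threshold)  # a beats c by more than 1
both = choice_set({"a", "b"}, u, threshold)  # a does not beat b by more than 1
```

In the menu {a, c} only a survives, while in {a, b} the utility gap is below the threshold and both alternatives are chosen.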
Abstract:
Given a scattered space X = (X, τ) and an ordinal λ, we define a topology τ+λ in such a way that τ+0 = τ and, when X is an ordinal with the initial segment topology, the resulting sequence {τ+λ}λ∈Ord coincides with the family of topologies {Iλ}λ∈Ord used by Icard, Joosten, and the second author to provide semantics for polymodal provability logics. We prove that, given any scattered space X of large-enough rank and any ordinal λ > 0, GL is strongly complete for τ+λ. The special case where X = ω^ω + 1 and λ = 1 yields a strengthening of a theorem of Abashidze and Blass
Abstract:
We introduce verification logic, a variant of Artemov's logic of proofs with new terms of the form ⟦φ⟧ satisfying the axiom schema φ → ⟦φ⟧:φ. The intention is for ⟦φ⟧ to denote a proof of φ in Peano arithmetic, whenever such a proof exists. By a suitable restriction of the domain of ⟦·⟧, we obtain the verification logic VS5, which realizes the axioms of Lewis's system S5. Our main result is that VS5 is sound and complete for its arithmetical interpretation
Abstract:
This note examines evidence of non-fundamentalness in the rate of variation of annual per capita capital stock for OECD countries in the period 1955–2020. Leeper et al. (2013) proposed a theoretical model in which, due to agents performing fiscal foresight, this economic series could exhibit a non-fundamental behavior (in particular, a non-invertible moving average component), which has important implications for modeling and forecasting. Using the methodology proposed in Velasco and Lobato (2018), which delivers consistent estimators of the autoregressive and moving average parameters without imposing fundamentalness assumptions, we empirically examine whether the capital data are better represented with an invertible or a non-invertible moving average model. We find strong evidence in favor of the non-invertible representation since for the countries that present significant innovation asymmetry, the selected model is predominantly non-invertible
Abstract:
Shelf-life experiments yield as an outcome a matrix of zeroes and ones representing acceptance or rejection by customers presented with samples of the product under evaluation, in random order within a designed experiment. This kind of response is called a Bernoulli response due to the dichotomous (0, 1) nature of its values. It is not rare to find inconsistent sequences of responses, that is, when a customer rejects a less-aged sample and does not reject an older sample: we find a zero before a one. This is due to the human factor present in the experiment. In the presence of such inconsistencies, some conventions have been adopted in the literature to estimate the shelf-life distribution using methods and software from the reliability field, which require numerical responses. In this work we propose a method that does not require coding the original responses into numerical values: we use the Bernoulli response directly within a Bayesian approach. The resulting method is based on solid Bayesian theory and proven computer programs. We show by means of an example and simulation studies that the new methodology clearly beats the methodology proposed by Hough. We also provide the R software necessary for the implementation
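The conjugate Bernoulli update at the heart of such a Bayesian treatment can be sketched as follows. This covers a single storage time only, with hypothetical data; the paper's method models the whole shelf-life distribution:

```python
def beta_posterior(accepts, rejects, a=1.0, b=1.0):
    """Conjugate Bayesian update: a Beta(a, b) prior on the acceptance
    probability at one storage time, updated with Bernoulli (0/1) responses."""
    return a + accepts, b + rejects

# Hypothetical data: 7 of 10 consumers accept the product at a given
# aging time; the 0/1 responses enter directly, with no recoding.
a_post, b_post = beta_posterior(accepts=7, rejects=3)
post_mean = a_post / (a_post + b_post)  # posterior mean acceptance probability
```

With a uniform Beta(1, 1) prior, the posterior is Beta(8, 4) and the posterior mean acceptance probability is 8/12 = 2/3.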
Abstract:
Definitive Screening Designs (DSDs) are a class of experimental designs that make it possible to estimate linear, quadratic, and interaction effects with relatively little experimental effort. The linear, or main, effects are completely independent of two-factor interactions and quadratic effects. The two-factor interactions are not completely confounded with other two-factor interactions, and quadratic effects are estimable. The number of experimental runs is twice the number of factors of interest plus one. Several approaches have been proposed to analyze the results of these experimental plans; some take into account the structure of the design, others do not. The first author of this paper proposed a Bayesian sequential procedure that takes into account the structure of the design and considers normal and non-normal responses. The creators of the DSD originally performed forward stepwise regression programmed in JMP and used the minimization of a bias-corrected version of Akaike's information criterion; later they proposed a frequentist procedure that considers the structure of the DSD. Both the frequentist and Bayesian procedures, when the number of experimental runs is twice the number of factors of interest plus one, begin by fitting a model with only main effects and then checking the significance of these effects to proceed. In this paper we present a modification of the Bayesian procedure that incorporates Bayesian factor identification, an approach that computes, for each factor, the posterior probability that it is active, including the possibility that it is present in linear, quadratic, or two-factor interaction effects. This is a more comprehensive approach than just testing the significance of an effect
Abstract:
A statistical package with a graphical user interface for Bayesian sequential analysis of definitive screening designs
Abstract:
Definitive Screening Designs are a class of experimental designs that, under factor sparsity, have the potential to estimate linear, quadratic, and interaction effects with little experimental effort. BAYESDEF is a package that performs a five-step strategy to analyze this kind of experiment, making use of tools from the Bayesian approach. It also includes the least absolute shrinkage and selection operator (lasso) as a check (Aguirre VM. (2016))
Abstract:
With the advent of widespread computing and the availability of open-source programs to perform many different programming tasks, there is nowadays a trend in Statistics to program tailor-made applications for non-statistical customers in various areas. This is an alternative to having a large statistical package with many functions, many of which are never used. In this article, we present CONS, an R package dedicated to Consonance Analysis. Consonance Analysis is a useful numerical and graphical exploratory approach for evaluating the consistency of the measurements and of the panel of people involved in sensory evaluation. It makes use of several univariate and multivariate techniques, either graphical or analytical, particularly Principal Components Analysis. The package is implemented with a graphical user interface to make it user-friendly
Abstract:
Definitive screening designs (DSDs) are a class of experimental designs that allow the estimation of linear, quadratic, and interaction effects with little experimental effort if there is effect sparsity. The number of experimental runs is twice the number of factors of interest plus one. Many industrial experiments involve nonnormal responses. Generalized linear models (GLMs) are a useful alternative for analyzing this kind of data. The analysis of GLMs is based on asymptotic theory, something very debatable with, for example, a DSD of only 13 experimental runs. So far, analysis of DSDs has assumed a normal response. In this work, we show a five-step strategy that makes use of tools from the Bayesian approach to analyze this kind of experiment when the response is nonnormal. We consider the case of binomial, gamma, and Poisson responses without having to resort to asymptotic approximations. We use posterior odds that effects are active and posterior probability intervals for the effects to evaluate their significance. We also combine the results of the Bayesian procedure with the lasso estimation procedure to enhance the scope of the method
Abstract:
It is not uncommon to deal with very small experiments in practice. For example, if the experiment is conducted on the production process, it is likely that only a very few experimental runs will be allowed. If testing involves the destruction of expensive experimental units, we might only have very small fractions as experimental plans. In this paper, we consider the analysis of very small factorial experiments with only four or eight experimental runs. In addition, the methods presented here can easily be applied to larger experiments. A Daniel plot of the effects to judge significance may be useless for this type of situation. Instead, we use different tools based on the Bayesian approach to judge significance. The first tool is the computation of the posterior probability that each effect is significant. The second tool is what Bayesian analysis calls the posterior distribution of each effect. Combining these tools with the Daniel plot gives us more elements to judge the significance of an effect. Because, in practice, the response may not necessarily be normally distributed, we extend our approach to the generalized linear model setup. By simulation, we show that in the case of discrete responses and very small experiments the usual large-sample approach for fitting generalized linear models may produce very biased and variable estimators, whereas the Bayesian approach provides very sensible results
Abstract:
Inference for quantile regression parameters presents two problems. First, it is computationally costly because estimation requires optimizing a non-differentiable objective function, a formidable numerical task, especially with many observations and regressors. Second, it is controversial because standard asymptotic inference requires the choice of smoothing parameters, and different choices may lead to different conclusions. Bootstrap methods solve the latter problem at the price of enlarging the former. We give a theoretical justification for a new inference method consisting of the construction of asymptotic pivots based on a small number of bootstrap replications. The procedure still avoids smoothing and reduces the computational cost of usual bootstrap methods. We show its usefulness for drawing inferences on linear or non-linear functions of the parameters of quantile regression models
Abstract:
Repeatability and reproducibility (R&R) studies can be used to pinpoint the parts of a measurement system that might need improvement. Using simulation, we show that there is practically no difference between using 5 and 10 parts in many R&R studies
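A small simulation in this spirit (a sketch with made-up variance components, not the study's actual simulation code) suggests why the part count matters little when estimating repeatability:

```python
import random
import statistics

def simulate_rr(n_parts, n_ops=3, n_reps=2, sd_part=2.0, sd_op=0.5,
                sd_rep=1.0, seed=0):
    """Simulate a crossed gauge R&R study and estimate repeatability as the
    within-cell standard deviation, pooled over part-operator cells."""
    rng = random.Random(seed)
    parts = [rng.gauss(0, sd_part) for _ in range(n_parts)]
    ops = [rng.gauss(0, sd_op) for _ in range(n_ops)]
    cell_vars = []
    for p in parts:
        for o in ops:
            cell = [p + o + rng.gauss(0, sd_rep) for _ in range(n_reps)]
            cell_vars.append(statistics.variance(cell))
    return statistics.mean(cell_vars) ** 0.5

# Average the repeatability estimate over many simulated studies:
# with 5 or with 10 parts, both recover the true value of 1.0.
est5 = statistics.mean(simulate_rr(5, seed=s) for s in range(200))
est10 = statistics.mean(simulate_rr(10, seed=s) for s in range(200))
```

Repeatability is estimated from within-cell replicates, so adding parts mainly refines the part-to-part component rather than the repeatability estimate itself.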
Abstract:
Existing methods for analyzing unreplicated fractional factorial experiments that do not contemplate the possibility of outliers in the data perform poorly at detecting the active effects when that contingency becomes a reality. There are some methods to detect active effects under this experimental setup that consider outliers. We propose a new procedure, based on robust regression methods, to estimate the effects while allowing for outliers. We perform a simulation study to compare its behavior relative to existing methods and find that the new method has very competitive, or even better, power. The relative power improves as the contamination and size of the outliers increase, when the number of active effects is up to four
Abstract:
The paper presents the asymptotic theory of the efficient method of moments when the model of interest is not correctly specified. The paper assumes a sequence of independent and identically distributed observations and global misspecification. It is found that the limiting distribution of the estimator is still asymptotically normal, but its covariance matrix is strongly affected. A consistent estimator of this covariance matrix is provided. The large-sample distribution of the estimated moment function is also obtained. These results are used to discuss the situation when the moment conditions hold but the model is misspecified. It is also shown that the overidentifying restrictions test has asymptotic power one whenever the limit moment function is different from zero. It is also proved that the bootstrap distributions converge almost surely to the previously mentioned distributions, and hence they can be used as an alternative for drawing inferences under misspecification. Interestingly, it is also shown that the bootstrap can be reliably applied even if the number of bootstrap replications is very small
Abstract:
It is well known that outliers or faulty observations affect the analysis of unreplicated factorial experiments. This work proposes a method that combines the rank transformation of the observations, the Daniel plot, and a formal statistical testing procedure to assess the significance of the effects. It is shown, by means of previous theoretical results cited in the literature, examples, and a Monte Carlo study, that the approach is helpful in the presence of outlying observations. The simulation study includes an ample set of alternative procedures that have been published in the literature to detect significant effects in unreplicated experiments. The Monte Carlo study also gives evidence that using the rank transformation as proposed provides two advantages: it keeps control of the experimentwise error rate and improves the relative power to detect active factors in the presence of outlying observations
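A hedged sketch of the rank-transformation step for an unreplicated 2³ design follows, with illustrative data containing one gross outlier; the paper couples this step with the Daniel plot and a formal test:

```python
def ranks(y):
    """Rank-transform the observations (values assumed distinct here)."""
    order = sorted(range(len(y)), key=lambda i: y[i])
    r = [0] * len(y)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def effects_2k(y, k):
    """Estimated effects of an unreplicated 2^k design in standard order:
    effect_j = (sum over high runs - sum over low runs) / 2^(k-1)."""
    n = 2 ** k
    out = {}
    for j in range(k):
        signs = [1 if (i >> j) & 1 else -1 for i in range(n)]
        out[f"F{j + 1}"] = sum(s * yi for s, yi in zip(signs, y)) / (n / 2)
    return out

# A 2^3 experiment where factor F1 is strongly active; the last run is
# a gross outlier that inflates the raw estimates of the inactive factors.
y = [10.0, 20.0, 11.0, 21.0, 10.5, 20.5, 11.5, 95.0]
raw = effects_2k(y, 3)
ranked = effects_2k(ranks(y), 3)
```

On the raw scale the outlier makes the inactive factors F2 and F3 look nearly as large as F1, while on the rank scale F1 clearly dominates.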
Abstract:
Most inferential results are based on the assumption that the user has a "random" sample, by which it is usually understood that the observations are a realization of a set of independent, identically distributed random variables. However, most of the time this is not true, mainly for two reasons: first, the data are not obtained by means of a probabilistic sampling scheme from the population; the data are just gathered as they become available or, in the best of cases, using some kind of control variables and quota sampling. Second, even if a probabilistic scheme is used, the sample design may be complex in the sense that it is not simple random sampling with replacement, but instead involves some sort of stratification or clustering, or a combination of both. For an excellent discussion of the kind of considerations that should be made in the first situation, see Hahn and Meeker (1993) and a related comment in Aguirre (1994). For the second problem there is a book on the topic by Skinner et al. (1989). In this paper we consider the problem of evaluating the effect of sampling complexity on Pearson's chi-square and other alternative tests for goodness of fit for proportions. Work on this problem can be found in Shuster and Downing (1976), Rao and Scott (1974), Fellegi (1980), Holt et al. (1980), Rao and Scott (1981), and Thomas and Rao (1987). Out of this work come several adjustments to Pearson's test, namely: Wald-type tests, an average eigenvalue correction, and a Satterthwaite-type correction. There is a more recent and general resampling approach given in Sitter (1992), but it was not pursued in this study
Abstract:
Sometimes data analysis using the usual parametric techniques produces misleading results due to violations of the underlying assumptions, such as outliers or non-constant variances. In particular, this can happen in unreplicated factorial or fractional factorial experiments. To help in this situation, alternative analyses have been proposed. For example, Box and Meyer give a Bayesian analysis allowing for possibly faulty observations in unreplicated factorials, and the well-known Box-Cox transformation can be used when there is a change in dispersion. This paper presents an analysis based on the rank transformation that deals with the above problems. The analysis is simple to use and can be implemented with a general-purpose statistical computer package. The procedure is illustrated with examples from the literature. A theoretical justification is outlined at the end of the paper
Abstract:
The article considers the problem of choosing between two (possibly) nonlinear models that have been fitted to the same data using M-estimation methods. An asymptotically normally distributed test statistic is proposed, and its performance is examined by means of a Monte Carlo study. We found that the presence of a competitive model in either the null or the alternative hypothesis affects the distributional properties of the tests, and that when the data contain outlying observations the new procedure has significantly higher power than the rest of the tests
Abstract:
Fuller (1976), Anderson (1971), and Hannan (1970) introduce infinite moving average models as the limit in quadratic mean of a sequence of partial sums, and Fuller (1976) shows that, under the assumption of independence of the addends, the limit also holds almost surely. This note shows that, without the assumption of independence, the limit still holds with probability one. Moreover, the proofs given here are easier to teach
Abstract:
A test for the problem of choosing between several nonnested nonlinear regression models simultaneously is presented. The test does not require an explicit specification of a parametric family of distributions for the error term and has a closed form
Abstract:
The asymptotic distribution of the generalized Cox test for choosing between two multivariate, nonlinear regression models in implicit form is derived. The data are assumed to be generated by a model that need not be either the null or the non-null model. As the data-generating model is not subjected to a Pitman drift, the analysis is global, not local, and provides a fairly complete qualitative description of the power characteristics of the generalized Cox test. Some investigations of these characteristics are included. A new test statistic is introduced that does not require an explicit specification of the error distribution of the null model. The idea is to replace an analytical computation of the expectation of the Cox difference with a bootstrap estimate. The null distribution of this new test is derived
Abstract:
A great deal of research has investigated how various aspects of ethnic identity influence consumer behavior, yet this literature is fragmented. The objective of this article was to present an integrative theoretical model of how individuals are motivated to think and act in a manner consistent with their salient ethnic identities. The model emerges from a review of social science and consumer research about US Hispanics, but researchers could apply it in its general form and/or adapt it to other populations. Our model extends Oyserman's (Journal of Consumer Psychology, 19, 250) identity-based motivation (IBM) model by differentiating between two types of antecedents of ethnic identity salience: longitudinal cultural processes and situational activation by contextual cues, each with different implications for the availability and accessibility of ethnic cultural knowledge. We provide new insights by introducing three ethnic identity motives that are unique to ethnic (nonmajority) cultural groups: belonging, distinctiveness, and defense. These three motives are in constant tension with one another and guide longitudinal processes like acculturation, and ultimately influence consumers' procedural readiness and action readiness. Our integrative framework organizes and offers insights into the current body of Hispanic consumer research, and highlights gaps in the literature that present opportunities for future research
Abstract:
In many Solvency and Basel loss data sets, there are thresholds or deductibles that complicate the analysis. On the other hand, the Birnbaum-Saunders model has received great attention during the last two decades, and it can be used as a loss distribution. In this paper, we propose a solution to the problem of deductibles using a truncated version of the Birnbaum-Saunders distribution. The probability density function, cumulative distribution function, and moments of this distribution are obtained. In addition, properties regularly used in the insurance industry, such as multiplication by a constant (inflation effect) and reciprocal transformation, are discussed. Furthermore, a study of the behavior of the risk rate and of risk measures is carried out. Moreover, estimation aspects are also considered in this work. Finally, an application based on real loss data from a commercial bank is conducted
Abstract:
This paper proposes two new estimators for determining the number of factors (r) in static approximate factor models. We exploit the well-known fact that the r largest eigenvalues of the variance matrix of N response variables grow unboundedly as N increases, while the other eigenvalues remain bounded. The new estimators are obtained simply by maximizing the ratio of two adjacent eigenvalues. Our simulation results provide promising evidence for the two estimators
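The core idea — the r largest eigenvalues of the variance matrix grow unboundedly with N while the rest stay bounded, so r can be read off the largest adjacent-eigenvalue ratio — can be sketched as follows. This is a minimal illustration; the function name, the `kmax` cap, and the eigenvalue scaling are assumptions, not the paper's exact estimators.

```python
import numpy as np

def estimate_num_factors(X, kmax=8):
    """Eigenvalue-ratio sketch: demean the T x N data matrix, compute
    the eigenvalues of the sample second-moment matrix in descending
    order, and return the k (up to kmax) that maximizes the ratio of
    adjacent eigenvalues mu_k / mu_{k+1}."""
    X = np.asarray(X, float)
    X = X - X.mean(axis=0)                 # demean each series
    T, N = X.shape
    eig = np.linalg.eigvalsh(X.T @ X / (T * N))[::-1]  # descending
    ratios = eig[:kmax] / eig[1:kmax + 1]  # mu_k / mu_{k+1}
    return int(np.argmax(ratios)) + 1

# toy check: two strong common factors plus weak idiosyncratic noise
rng = np.random.default_rng(0)
F = rng.normal(size=(200, 2))          # factors
Lam = rng.normal(size=(2, 50))         # loadings
X = F @ Lam + 0.1 * rng.normal(size=(200, 50))
```

The two factor eigenvalues dwarf the noise eigenvalues, so the ratio spikes exactly at k = 2.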
Abstract:
We study a modification of the Luce rule for stochastic choice which admits the possibility of zero probabilities. In any given menu, the decision maker uses the Luce rule on a consideration set, potentially a strict subset of the menu. Without imposing any structure on how the consideration sets are formed, we characterize the resulting behavior using a single axiom. Our result offers insight into special cases where consideration sets are formed under various restrictions
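A minimal sketch of the modified rule described above: Luce (weight-proportional) choice probabilities on the consideration set, and zero probability for alternatives outside it. Function and variable names are illustrative.

```python
def luce_choice_probs(menu, weights, consideration):
    """Modified Luce rule: standard Luce (weight-proportional)
    probabilities over the consideration set, a possibly strict
    subset of the menu; zero probability outside it."""
    total = sum(weights[a] for a in menu if a in consideration)
    return {a: (weights[a] / total if a in consideration else 0.0)
            for a in menu}

w = {"x": 2.0, "y": 1.0, "z": 1.0}
p = luce_choice_probs(["x", "y", "z"], w, consideration={"x", "y"})
# z lies outside the consideration set, so it gets probability zero
```

No structure is imposed here on how the consideration set is formed, mirroring the general case the paper characterizes.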
Abstract:
Purpose– This paper summarizes the findings of a research project aimed at benchmarking the environmental sustainability practices of the top 500 Mexican companies. Design/methodology/approach– The paper surveyed the firms with regard to various aspects of their adoption of environmental sustainability practices, including who or what prompted adoption, future adoption plans, decision-making responsibility, and internal/external challenges. The survey also explored how the adoption of environmental sustainability practices relates to the competitiveness of these firms. Findings– The results suggest that Mexican companies are very active in the various areas of business where environmental sustainability is relevant. Not surprisingly, however, the Mexican companies are seen to be at an early stage of development along the sustainability “learning curve”. Research limitations/implications– The sample consisted of 103 self-selected firms representing the six primary business sectors in the Mexican economy. Because the manufacturing sector is significantly overrepresented in the sample and because of its importance in addressing issues of environmental sustainability, when appropriate, specific results for this sector are reported and contrasted to the overall sample. Practical implications– The vast majority of these firms see adopting environmental sustainability practices as being profitable and think this will be even more important in the future. Originality/value– Improving the environmental performance of business firms through the adoption of sustainability practices is compatible with competitiveness and improved financial performance. In Mexico, one might expect that the same would be true, but only anecdotal evidence was heretofore available
Abstract:
We analyze an environment where the uncertainty in the equity market return and its volatility are both stochastic and may be potentially disconnected. We solve a representative investor's optimal asset allocation and derive the resulting conditional equity premium and risk-free rate in equilibrium. Our empirical analysis shows that the equity premium appears to be earned for facing uncertainty, especially high uncertainty that is disconnected from lower volatility, rather than for facing volatility as traditionally assumed. Incorporating the possibility of a disconnect between volatility and uncertainty significantly improves portfolio performance, over and above the performance obtained by conditioning on volatility only
Abstract:
We study the consumption-portfolio allocation problem in continuous time when asset prices follow Lévy processes and the investor is concerned about potential model misspecification. We derive optimal consumption and portfolio policies that are robust to uncertainty about the hard-to-estimate drift rate, jump intensity and jump size parameters. We also provide a semi-closed form formula for the detection-error probability and compare various portfolio holding strategies, including robust and non-robust policies. Our quantitative analysis shows that ignoring uncertainty leads to significant wealth loss for the investor
Abstract:
We exploit the manifold increase in homicides in 2008–11 in Mexico resulting from its war on organized drug traffickers to estimate the effect of drug-related homicides on housing prices. We use an unusually rich data set that provides national coverage of housing prices and homicides and exploits within-municipality variations. We find that the impact of violence on housing prices is borne entirely by the poor sectors of the population. An increase in homicides equivalent to 1 standard deviation leads to a 3 percent decrease in the price of low-income housing
Abstract:
This paper examines foreign direct investment (FDI) in the Hungarian economy in the period of post-Communist transition since 1989. Hungary took a quite aggressive approach in welcoming foreign investment during this period and as a result had the highest per capita FDI in the region as of 2001. We discuss the impact of FDI in terms of strategic intent, i.e., market serving and resource seeking FDI. The effect of these two kinds of FDI is contrasted by examining the impact of resource seeking FDI in manufacturing sectors and market serving FDI in service industries. In the case of transition economies, we argue that due to the strategic intent, resource seeking FDI can imply a short-term impact on economic development whereas market serving FDI strategically implies a long-term presence with increased benefits for the economic development of a transition economy. Our focus is that of market serving FDI in the Hungarian banking sector, which has brought improved service and products to multinational and Hungarian firms. This has been accompanied by the introduction of innovative financial products to the Hungarian consumer, in particular consumer credit including mortgage financing. However, the latter remains an underserved segment with much growth potential. For public policy in Hungary and other transition economies, we conclude that policymakers should consider the strategic intent of FDI in order to maximize its benefits in their economies
Abstract:
We propose a general framework for extracting rotation-invariant features from images for the tasks of image analysis and classification. Our framework is inspired by the form of the Zernike set of orthogonal functions. It provides a way to use a set of one-dimensional functions to form an orthogonal set over the unit disk by non-linearly scaling their domain and then associating with them an exponential term. When the images are projected into the subspace created with the proposed framework, rotations in the image affect only the exponential term, while the values of the orthogonal functions serve as rotation-invariant features. We exemplify our framework using the Haar wavelet functions to extract features from several thousand images of symbols. We then use the features in an OCR experiment to demonstrate the robustness of the method
Abstract:
In this paper we explore the use of orthogonal functions as generators of representative, compact descriptors of image content. In Image Analysis and Pattern Recognition such descriptors are referred to as image features, and there are some useful properties they should possess, such as rotation invariance and the capacity to identify different instances of one class of images. We exemplify our algorithmic methodology using the family of Daubechies wavelets, since they form an orthogonal function set. We benchmark the quality of the generated image features in a comparative OCR experiment with three different sets of image features. Our algorithm can use a wide variety of orthogonal functions to generate rotation-invariant features, thus providing the flexibility to identify sets of image features that are best suited for the recognition of different classes of images
Abstract:
Work from home (WFH) surged worldwide during the COVID-19 pandemic, then partially receded as the pandemic subsided. Using our Global Survey of Working Arrangements covering dozens of countries, we find that average WFH rates among college-educated employees stabilized after 2022. The average number of WFH days is steady at roughly one day per week globally from 2023 through early 2025. Cross-country variation persists: WFH is about twice as common in advanced English-speaking economies as in much of Asia. These results show how the pandemic-driven shift to remote work has persisted and reached a new equilibrium, with implications for urban economies, workforce flexibility, and future research on labor markets
Abstract:
The COVID-19 pandemic triggered a huge, sudden uptake in work from home, as individuals and organizations responded to contagion fears and government restrictions on commercial and social activities (Adams-Prassl et al. 2020; Bartik et al. 2020; Barrero et al. 2020; De Fraja et al. 2021). Over time, it has become evident that the big shift to work from home will endure after the pandemic ends (Barrero et al. 2021). No other episode in modern history involves such a pronounced and widespread shift in working arrangements in such a compressed time frame. The Industrial Revolution and the later shift away from factory jobs brought greater changes in skill requirements and business operations, but they unfolded over many decades. These facts prompt some questions: What explains the pandemic's role as catalyst for a lasting uptake in work from home (WFH)? When looking across countries and regions, have differences in pandemic severity and the stringency of government lockdowns had lasting effects on WFH levels? What does a large, lasting shift to remote work portend for workers? Finally, how might the big shift to remote work affect the pace of innovation and the fortunes of cities?
Abstract:
The pandemic triggered a large, lasting shift to work from home (WFH). To study this shift, we survey full-time workers who finished primary school in twenty-seven countries as of mid-2021 and early 2022. Our cross-country comparisons control for age, gender, education, and industry and treat the United States mean as the baseline. We find, first, that WFH averages 1.5 days per week in our sample, ranging widely across countries. Second, employers plan an average of 0.7 WFH days per week after the pandemic, but workers want 1.7 days. Third, employees value the option to WFH two to three days per week at 5 percent of pay, on average, with higher valuations for women, people with children, and those with longer commutes. Fourth, most employees were favorably surprised by their WFH productivity during the pandemic. Fifth, looking across individuals, employer plans for WFH levels after the pandemic rise strongly with WFH productivity surprises during the pandemic. Sixth, looking across countries, planned WFH levels rise with the cumulative stringency of government-mandated lockdowns during the pandemic. We draw on these results to explain the big shift to WFH and to consider some implications for workers, organizations, cities, and the pace of innovation
Abstract:
This paper presents evidence of large learning losses and partial recovery in Guanajuato, Mexico, during and after the school closures related to the COVID-19 pandemic. Learning losses were estimated using administrative data from enrollment records and by comparing the results of a census-based standardized test administered to approximately 20,000 5th and 6th graders in: (a) March 2020 (a few weeks before schools closed); (b) November 2021 (2 months after schools reopened); and (c) June 2023 (21 months after schools reopened and over three years after the pandemic started). On average, students performed 0.2 to 0.3 standard deviations lower in Spanish and math after schools reopened, equivalent to 0.66 to 0.87 years of schooling in Spanish and 0.87 to 1.05 years of schooling in math. By June 2023, students had made up for roughly 60% of the learning loss that built up during school closures but still scored 0.08–0.11 standard deviations below their pre-pandemic levels (equivalent to 0.23–0.36 years of schooling)
Abstract:
In this work, we propose a hybrid AI system consisting of a multi-agent system that simulates students taking an exam and a teacher monitoring them, together with an evolutionary algorithm that finds classroom arrangements which minimize cheating incentives. The students answer the exam based on how much knowledge they have about its topic. In our simulation, they then enter a decision phase in which, for the questions they cannot answer, they either cheat or guess. If a student is caught cheating, his or her exam is cancelled. The purpose of this study is to examine how different monitoring behaviors on the part of the teacher affect the cheating behavior of students. The results show that an unbiased teacher, that is, one who monitors every student with the same probability, produces minimal cheating incentives for students
Abstract:
When analyzing catastrophic risk, traditional measures for evaluating risk, such as the probable maximum loss (PML), value at risk (VaR), tail-VaR, and others, can become practically impossible to obtain analytically in certain types of insurance, such as earthquake, and in certain types of reinsurance arrangements, especially non-proportional arrangements with reinstatements. Given the available information, it can be very difficult for an insurer to measure its risk exposure. The transfer of risk in this type of insurance is usually done through reinsurance schemes combining diverse types of contracts that can greatly reduce the extreme tail of the cedant's loss distribution. This effect can be assessed mathematically. The PML is defined in terms of a very extreme quantile. Also, under standard operating conditions, insurers use several "layers" of non-proportional reinsurance that may or may not be combined with some type of proportional reinsurance. The resulting reinsurance structures can be very complicated, and evaluating their mitigation or transfer effects analytically may be infeasible, so it may be necessary to use alternative approaches, such as Monte Carlo simulation methods. This is what we do in this paper in order to measure the effect of a complex reinsurance treaty on the risk profile of an insurance company. We compute the pure risk premium and PML, as well as a host of other results: the impact on the insured portfolio, the risk-transfer effect of reinsurance programs, the proportion of times reinsurance is exhausted, the percentage of years in which it was necessary to use the contractual reinstatements, etc. Since quantile estimators are known to be biased, we explore the alternative of using an extreme-value approach to complement the analysis
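The Monte Carlo idea can be sketched in miniature: simulate gross losses, apply one excess-of-loss layer, and read the pure premium and a PML-style extreme quantile off the simulated retained losses. The loss model (a single lognormal annual loss), all parameters, and the function name below are illustrative assumptions, not the paper's treaty or calibration.

```python
import numpy as np

def simulate_retained(n_years, layer_att, layer_limit, seed=0):
    """Monte Carlo sketch: simulate annual gross losses, cede the part
    falling in one excess-of-loss layer (attachment `layer_att`, width
    `layer_limit`), and summarize the retained losses with the pure
    premium (mean) and a PML-style 99% quantile."""
    rng = np.random.default_rng(seed)
    gross = rng.lognormal(mean=1.0, sigma=1.2, size=n_years)
    ceded = np.clip(gross - layer_att, 0.0, layer_limit)  # layer recovery
    retained = gross - ceded
    return retained.mean(), np.quantile(retained, 0.99)

pp, pml = simulate_retained(100_000, layer_att=10.0, layer_limit=40.0)
```

With a real treaty one would add reinstatement accounting and multiple layers, which is precisely why the analytic route breaks down and simulation is used.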
Abstract:
Estimation of adequate reserves for outstanding claims is one of the main activities of actuaries in property/casualty insurance and a major topic in actuarial science. The need to estimate future claims has led to the development of many loss reserving techniques. There are two important problems that must be dealt with in the process of estimating reserves for outstanding claims: one is to determine an appropriate model for the claims process, and the other is to assess the degree of correlation among claim payments in different calendar and origin years. We approach both problems here. On the one hand we use a gamma distribution to model the claims process and, in addition, we allow the claims to be correlated. We follow a Bayesian approach for making inference with vague prior distributions. The methodology is illustrated with a real data set and compared with other standard methods
Abstract:
Consider a random sample X1, X2, ..., Xn from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded, and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is made from a Bayesian perspective by using a Markov chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refer to the sugar content (°BRIX level) of orange juice produced in different countries
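To illustrate the inferential setup (though not the paper's MCMC algorithm), one simple way to approximate the joint posterior of (mu, sigma) given only n, the sample mean, and the range is ABC rejection sampling: draw candidate parameters, simulate a normal sample of size n, and keep draws whose simulated mean and range are close to the observed ones. The priors, tolerances, and names below are all illustrative assumptions.

```python
import random
import statistics

def abc_posterior(n, mean_obs, range_obs, draws=40000, tol=0.2, seed=1):
    """Approximate the posterior of (mu, sigma) given only the sample
    size, mean and range, via ABC rejection sampling (an illustration,
    NOT the paper's MCMC algorithm): draw (mu, sigma) from flat priors,
    simulate a normal sample of size n, and keep the draw when its
    sample mean and range are close to the observed values."""
    rng = random.Random(seed)
    kept = []
    for _ in range(draws):
        mu = rng.uniform(mean_obs - 2.0, mean_obs + 2.0)
        sigma = rng.uniform(0.01, 3.0 * range_obs)
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        if (abs(statistics.fmean(sample) - mean_obs) < tol
                and abs(max(sample) - min(sample) - range_obs) < tol * range_obs):
            kept.append((mu, sigma))
    return kept

# only n = 10, a mean of 12.0 and a range of 3.0 are "recorded"
post = abc_posterior(n=10, mean_obs=12.0, range_obs=3.0)
mu_hat = statistics.fmean(p[0] for p in post)
sig_hat = statistics.fmean(p[1] for p in post)
```

The accepted draws concentrate around mu near the observed mean and sigma near range/E[range-of-10], which is the information the intractable likelihood carries.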
Abstract:
This paper is concerned with the situation that occurs in claims reserving when there are negative values in the development triangle of incremental claim amounts. Typically these negative values will be the result of salvage recoveries, payments from third parties, total or partial cancellation of outstanding claims due to initial overestimation of the loss or to a possible favorable jury decision in favor of the insurer, rejection by the insurer, or just plain errors. Some of the traditional methods of claims reserving, such as the chain-ladder technique, may produce estimates of the reserves even when there are negative values. However, many methods can break down in the presence of enough (in number and/or size) negative incremental claims if certain constraints are not met. Historically the chain-ladder method has been used as a gold standard (benchmark) because of its generalized use and ease of application. A method that improves on the gold standard is one that can handle situations where there are many negative incremental claims and/or some of these are large. This paper presents a Bayesian model to consider negative incremental values, based on a three-parameter log-normal distribution. The model presented here allows the actuary to provide point estimates and measures of dispersion, as well as the complete distribution for outstanding claims from which the reserves can be derived. It is concluded that the method has a clear advantage over other existing methods. A Markov chain Monte Carlo simulation is applied using the package WinBUGS
Abstract:
The BMOM is particularly useful for obtaining post-data moments and densities for parameters and future observations when the form of the likelihood function is unknown and thus a traditional Bayesian approach cannot be used. Also, even when the form of the likelihood is assumed known, in time series problems it is sometimes difficult to formulate an appropriate prior density. Here, we show how the BMOM approach can be used in two nontraditional problems. The first one is conditional forecasting in regression and time series autoregressive models. Specifically, it is shown that when forecasting disaggregated data (say quarterly data) and given aggregate constraints (say in terms of annual data) it is possible to apply a Bayesian approach to derive conditional forecasts in the multiple regression model. The types of constraints (conditioning) usually considered are that the sum, or the average, of the forecasts equals a given value. This kind of condition can be applied to forecasting quarterly values whose sum must be equal to a given annual value. Analogous results are obtained for AR(p) models. The second problem we analyse is the issue of aggregation and disaggregation of data in relation to predictive precision and modelling. Predictive densities are derived for future aggregate values by means of the BMOM based on a model for disaggregated data. They are then compared with those derived based on aggregated data
Resumen:
El problema de estimar el valor acumulado de una variable positiva y continua para la cual se ha observado una acumulación parcial, y generalmente con sólo un reducido número de observaciones (dos años), se puede llevar a cabo aprovechando la existencia de estacionalidad estable (de un periodo a otro). Por ejemplo, la cantidad por pronosticar puede ser el total de un periodo (año) y el cual debe hacerse en cuanto se obtiene información sólo para algunos subperiodos (meses) dados. Estas condiciones se presentan de manera natural en el pronóstico de las ventas estacionales de algunos productos ‘de temporada’, tales como juguetes; en el comportamiento de los inventarios de bienes cuya demanda varía estacionalmente, como los combustibles; o en algunos tipos de depósitos bancarios, entre otros. En este trabajo se analiza el problema en el contexto de muestreo por conglomerados. Se propone un estimador de razón para el total que se quiere pronosticar, bajo el supuesto de estacionalidad estable. Se presenta un estimador puntual y uno para la varianza del total. El método funciona bien cuando no es factible aplicar metodología estándar debido al reducido número de observaciones. Se incluyen algunos ejemplos reales, así como aplicaciones a datos publicados con anterioridad. Se hacen comparaciones con otros métodos
Abstract:
The problem of estimating the accumulated value of a positive and continuous variable for which some partially accumulated data have been observed, usually with only a small number of observations (two years), can be approached by taking advantage of the existence of stable seasonality (from one period to another). For example, the quantity to be predicted may be the total for a period (year), and the prediction needs to be made as soon as partial information becomes available for given subperiods (months). These conditions appear in a natural way in the prediction of seasonal sales of style goods, such as toys; in the behavior of inventories of goods whose demand varies seasonally, such as fuels; or in banking deposits, among many other examples. In this paper, the problem is addressed within a cluster sampling framework. A ratio estimator is proposed for the total value to be forecast under the assumption of stable seasonality. Estimators are obtained for both the point forecast and its variance. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Some real examples are included, as well as applications to previously published data. Comparisons are made with other procedures
Abstract:
We present a Bayesian solution to forecasting a time series when few observations are available. The quantity to predict is the accumulated value of a positive, continuous variable when partially accumulated data are observed. These conditions appear naturally in predicting sales of style goods and coupon redemption. A simple model describes the relation between partial and total values, assuming stable seasonality. Exact analytic results are obtained for point forecasts and the posterior predictive distribution. Noninformative priors allow automatic implementation. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Examples are provided
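The stable-seasonality idea behind these forecasting papers can be illustrated with a tiny ratio-type computation: if the fraction of the period total accumulated by the current subperiod is roughly constant across periods, the total can be forecast by scaling the observed partial accumulation. This is a sketch of the assumption, not the exact estimator of either paper; names and numbers are illustrative.

```python
def forecast_total(partial_current, history):
    """Ratio-type forecast under stable seasonality: pool past
    (partial, total) pairs to estimate the fraction of the period
    total accrued by the current subperiod, then scale the observed
    partial accumulation up to a full-period total."""
    frac = sum(p for p, t in history) / sum(t for p, t in history)
    return partial_current / frac

# two past years in which ~40% of the total had accrued by this month
hist = [(40.0, 100.0), (44.0, 110.0)]
f = forecast_total(42.0, hist)   # 84/210 = 0.4, so forecast 42/0.4 = 105
```

With only two past periods, as in the papers' setting, standard time series methods are not applicable, which is what motivates this kind of estimator.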
Resumen:
Este artículo evalúa algunos aspectos del Proyecto Piloto de Nutrición, Alimentación y Salud (PNAS). Describe brevemente el Proyecto y presenta las características de la población beneficiaria, luego profundiza en el problema de la pobreza y a partir de un índice se evalúa la selección de las comunidades beneficiadas por el Proyecto. Posteriormente se describe la metodología usada en el análisis costo-efectividad y se da el procedimiento para el cálculo de los cocientes del efecto que tuvo el PNAS específicamente en el gasto en alimentos. Por último, se presentan las conclusiones que, entre otros aspectos, arrojan que el efecto del PNAS en el gasto en alimentos de las familias indujo un incremento del gasto de 7.3% en la zona ixtlera y de 4.3% en la zona otomí-mazahua, con un costo de 29.9 nuevos pesos (de 1991) y de 40.9 para cada una de las zonas, respectivamente
Abstract:
An evaluation is made of some aspects of the Proyecto Piloto de Nutrición, Alimentación y Salud (PNAS), a pilot program for nutrition, food and health of the Mexican government. We give a brief description of the Project and the characteristics of the target population. We then describe and use the FGT index to determine whether the communities included in the Project were correctly chosen. We describe the method of cost-effectiveness analysis used in this article. The procedure for specifying cost-effectiveness ratios is next presented, and its application to measure the impact of PNAS on food expenditures is carried out. Finally, we present empirical results showing, among other things, that PNAS increased the food expenditures of participating households by 7.3% in the Ixtlera zone and by 4.3% in the Otomí-Mazahua zone, at a cost of N$29.9 (1991 new pesos) and N$40.9 for the two zones, respectively
Resumen:
Con frecuencia las instituciones financieras internacionales y los gobiernos locales se ven implicados en la implantación de programas de desarrollo. Existe amplia evidencia de que los mejores resultados se obtienen cuando la comunidad se compromete en la operación de los programas, es decir cuando existe participación comunitaria. La evidencia es principalmente cualitativa, pues no hay métodos para medir cuantitativamente esta participación. En este artículo se propone un procedimiento para generar un índice agregado de participación comunitaria. Está orientado de manera específica a medir el grado de participación comunitaria en la construcción de obras de beneficio colectivo. Para estimar los parámetros del modelo que se propone es necesario hacer algunos supuestos, debido a las limitaciones en la información. Se aplica el método a datos de comunidades que participaron en un proyecto piloto de nutrición-alimentación y salud que se llevó a cabo en México
Abstract:
There is ample evidence that the best results are obtained in development programs when the target community gets involved in their implementation and/or operation. The evidence is mostly qualitative, however, since there are no methods for measuring this participation quantitatively. In this paper we present a procedure for generating an aggregate index of community participation based on productivity. It is specifically aimed at measuring community participation in the construction of works for collective benefit. Because there are limitations on the information available, additional assumptions must be made to estimate parameters. The method is applied to data from communities in Mexico participating in a national nutrition, food and health program
Abstract:
A Bayesian approach is used to derive constrained and unconstrained forecasts in an autoregressive time series model. Both are obtained by formulating an AR(p) model in such a way that it is possible to compute numerically the predictive distribution for any number of forecasts. The types of constraints considered are that a linear combination of the forecasts equals a given value. This kind of restriction is applied to forecasting quarterly values whose sum must be equal to a given annual value. Constrained forecasts are generated by conditioning on the predictive distribution of unconstrained forecasts. The procedures are applied to the Quarterly GNP of Mexico, to a simulated series from an AR(4) process and to the Quarterly Unemployment Rate for the United States
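The conditioning step described above can be illustrated with the standard formula for a multivariate normal conditioned on a linear combination of its coordinates: if the unconstrained forecasts are approximately N(mu, Sigma), constraining their sum to a given annual value amounts to computing E[y | 1'y = total]. This is a sketch of the general idea, not the paper's full Bayesian predictive computation; names are illustrative.

```python
import numpy as np

def condition_on_sum(mu, Sigma, total):
    """Constrain approximately-normal forecasts y ~ N(mu, Sigma) so
    that their sum equals `total`, via the conditional mean
    E[y | 1'y = total] = mu + Sigma 1 (total - 1'mu) / (1' Sigma 1)."""
    mu = np.asarray(mu, float)
    one = np.ones_like(mu)
    s_var = one @ Sigma @ one                      # Var(1'y)
    return mu + (Sigma @ one) * (total - one @ mu) / s_var

# four quarterly forecasts constrained to an annual total of 110
y_c = condition_on_sum([25.0, 25.0, 25.0, 25.0], np.eye(4), 110.0)
```

With an identity covariance the annual discrepancy is spread evenly across quarters; a non-diagonal Sigma would allocate it in proportion to each quarter's covariance with the annual sum.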
Abstract:
The problem of temporal disaggregation of time series is analyzed by means of Bayesian methods. The disaggregated values are obtained through a posterior distribution derived by using a diffuse prior on the parameters. Further analysis is carried out assuming alternative conjugate priors. The means of the different posterior distributions are shown to be equivalent to some sampling-theory results. Bayesian prediction intervals are obtained. Forecasts for future disaggregated values are derived assuming a conjugate prior for the future aggregated value
Abstract:
A formulation of the problem of detecting outliers as an empirical Bayes problem is studied. In so doing we encounter a non-standard empirical Bayes problem for which the notion of average risk asymptotic optimality (a.r.a.o.) of procedures is defined. Some general theorems giving sufficient conditions for a.r.a.o. procedures are developed. These general results are then used in various formulations of the outlier problem for underlying normal distributions to give a.r.a.o. empirical Bayes procedures. Rates of convergence results are also given using the methods of Johns and Van Ryzin
Abstract:
The text examines the characteristics and features of the speech of both women and men; it offers an assessment based on some of their causes and concludes with an invitation to become aware of the way we express ourselves
Abstract:
This article examines the distinctive characteristics and features of how both women and men speak. Based on this analysis, the author makes an assessment and then invites the reader to become aware of their manner of speaking
Abstract:
Inés Arredondo belonged to the so-called Mid-Century Generation, in particular to the group of intellectuals and artists who founded and promoted the cultural activities of the Casa del Lago during the 1960s. The article is a profile that recounts both the most important events that marked Inés Arredondo's life and intellectual trajectory and the particular (aesthetic) traits that define the narrative work of this exceptional writer
Abstract:
Inés Arredondo belonged to the so-called Mid-Century Generation, namely the group of intellectuals and artists that established and promoted Casa del Lago's cultural activities in the sixties. This article gives an account of the most important events and the intellectual journey that shaped the writer's life, as well as the particular aesthetic features that define the narrative work of this exceptional writer
Abstract:
Informality is a structural trait in emerging economies affecting the behavior of labor markets, financial access and economy-wide productivity. This paper develops a simple general equilibrium closed economy model with nominal rigidities, labor and financial frictions to analyze the transmission of shocks and of monetary policy. In the model, the informal sector provides a flexible margin of adjustment to the labor market at the cost of a lower productivity. In addition, only formal sector firms have access to financing, which is instrumental in their production process. In a quantitative version of the model calibrated to Mexican data, we find that informality: (i) dampens the impact of demand and financial shocks, as well as of technology shocks specific to the formal sector, on wages and inflation, but (ii) heightens the inflationary impact of aggregate technology shocks. The presence of an informal sector also increases the sacrifice ratio of monetary policy actions. From a Central Bank perspective, the results imply that informality mitigates inflation volatility for most types of shocks but makes monetary policy less effective
Abstract:
A topological space X is totally Brown if for each n ∈ N \ {1} and all nonempty open subsets U1, U2, …, Un of X we have cl_X(U1) ∩ cl_X(U2) ∩ … ∩ cl_X(Un) ≠ ∅. Totally Brown spaces are connected. In this paper we consider a topology τS on the set N of natural numbers. We then present properties of the topological space (N, τS); some of them involve the closure of a set with respect to this topology, while others describe subsets which are either totally Brown or totally separated. Our theorems generalize results proved by P. Szczuka in 2013, 2014, and 2016, and by P. Szyszkowska and M. Szyszkowski in 2018
Abstract:
In the present work, we study Brown spaces, which are connected and not completely Hausdorff. Using arithmetic progressions, we construct a base BG for a topology TG on N, and we show that (N, TG), called the Golomb space, is a Brown space. We also prove that some elements of BG are Brown, while others are totally separated. We state some consequences of this result; for example, (N, TG) is not connected im kleinen at any of its points. This generalizes a result proved by Kirch in 1969. We also give a simpler proof of a result presented by Szczuka in 2010
Abstract:
In the present paper we study Brown spaces, which are connected and not completely Hausdorff. Using arithmetic progressions, we construct a base BG for a topology TG on N, and show that (N, TG), called the Golomb space, is a Brown space. We also show that some elements of BG are Brown spaces, while others are totally separated. We derive some consequences of this result. For example, the space (N, TG) is not connected "im kleinen" at any of its points. This generalizes a result proved by Kirch in 1969. We also present a simpler proof of a result given by Szczuka in 2010
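As a small illustration of the construction described above, the following sketch (not from the paper) builds truncated basic open sets of the Golomb topology, the arithmetic progressions a + bn with gcd(a, b) = 1, and checks the base property on one example; the bound `limit` is an artificial truncation introduced purely to make the sets finite.

```python
from math import gcd, lcm

def prog(a, b, limit=10**4):
    """Basic open set {a + n*b : n >= 0} of the Golomb topology,
    truncated to values <= limit; requires gcd(a, b) == 1."""
    assert gcd(a, b) == 1
    return set(range(a, limit + 1, b))

# base property: if x lies in two basic sets, the progression
# x + lcm(b, d)*n is again a basic set containing x and contained
# in the intersection (gcd(x, lcm(b, d)) = 1 follows from x's residues)
U, V = prog(3, 4), prog(2, 5)           # gcd(3,4) = gcd(2,5) = 1
x = 7                                   # 7 = 3 + 4 = 2 + 5, so x is in both
W = prog(x, lcm(4, 5))                  # gcd(7, 20) = 1
```

Within the truncation, W is contained in the intersection of U and V, which is the defining property of a base.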
Abstract:
The book synthesizes the experience gained from the Computers and Programming courses taught by the author over three years in the Computer Science degree program (Licenciatura en Informática) of the Facultad de Ciencias at the Universidad Autónoma de Barcelona
Abstract:
In recent years, interest has grown in the development of new materials, in this case composites, since these more advanced materials can perform better than conventional materials (K. Morsi, A. Esawi, 2006). The present work analyzes the effect of the addition of carbon nanotubes, incorporating silver nanoparticles, to increase both electrical and mechanical properties. Aluminum alloys with carbon nanotubes were prepared by low-energy milling at a speed of 140 rpm over a period of 24 hours; starting from 98% aluminum, an alloy was made with 0.35 carbon nanotubes. The milling was carried out to obtain good homogenization, since the distribution affects the behavior of the properties (Amirhossein Javadi, 2013), as well as particle size reduction. Finally, the carbon nanotubes were incorporated, adding silver nanoparticles obtained by reduction with sodium borohydride by means of an ultrasonic tip. The resulting alloys were characterized by scanning electron microscopy (SEM) and X-ray diffraction analysis; hardness tests were performed and, finally, electrical conductivity tests were carried out
Abstract:
In recent years, interest has increased in the development of new materials, in this case composites, since these more advanced materials can perform better than conventional materials (K. Morsi, A. Esawi, 2006). In the present work we analyze the effect of the addition of carbon nanotubes, incorporating silver nanoparticles, to increase both electrical and mechanical properties. Aluminum alloys with carbon nanotubes were prepared by low-energy milling at a speed of 140 rpm over a period of 24 hours; starting from 98% aluminum, an alloy was made with 0.35 carbon nanotubes. The milling was carried out to obtain good homogenization, since the distribution affects the behavior of the properties (Amirhossein Javadi, 2013), as well as particle size reduction. Finally, the carbon nanotubes were incorporated, adding silver nanoparticles obtained by reduction with sodium borohydride by means of an ultrasonic tip. The obtained alloys were characterized by scanning electron microscopy (SEM) and X-ray diffraction analysis; hardness tests were performed and electrical conductivity tests were finally carried out
Abstract:
In this study, high temperature reactions of Fe–Cr alloys at 500 and 600 °C were investigated using an atmosphere of N2–O2 8 vol% with 220 vppm HCl, 360 vppm H2O and 200 vppm SO2; moreover, the following aggressive salts were placed in the inlet: KCl and ZnCl2. The salts were placed in the inlet to promote corrosion and increase the chemical reaction. These salts were applied to the alloys via discontinuous exposures. The corrosion products were characterized using thermo-gravimetric analysis, scanning electron microscopy and X-ray diffraction. The species identified in the corrosion products were: Cr2O3, Cr2O (Fe0.6Cr0.4)2O3, K2CrO4, (Cr, Fe)2O3, Fe–Cr, KCl, ZnCl2, FeOOH, σ-FeCrMo and Fe2O3. The presence of Mo, Al and Si was not significant and there was no evidence of chemical reaction of these elements. The most active elements were the Fe and Cr in the metal base. The presence of Cr was beneficial against corrosion; this element decelerated the corrosion process due to the formation of protective oxide scales over the surfaces exposed at 500 °C, an effect even more notable at 600 °C, as observed in the thermo-gravimetric analysis of mass loss. The steel with the best performance was alloy Fe9Cr3AlSi3Mo, due to the effect of the protective oxides, even in the presence of the aggressive salts
Abstract:
Latin America is the most violent region in the world, with many countries also suffering from high levels of criminality and the presence of powerful criminal organizations. Identifying government responses that improve citizen security is imperative. Existing research argues that improving intergovernmental coordination helps the state combat criminality, but has limited its analysis to political factors that affect coordination. I study the impact of increasing intergovernmental coordination between law enforcement agencies. Using the generalized synthetic control method, original data on the staggered implementation of a police reform that increased intergovernmental police coordination, and detailed data on criminal organizations and criminality in Guanajuato, Mexico, I find that the reform weakened criminal organizations and reduced violent crime, but increased violence
Abstract:
Cognitive appraisal theory predicts that emotions affect participation decisions around risky collective action. However, little existing research has attempted to parse out the mechanisms by which this process occurs. We build a global game of regime change and discuss the effects that fear may have on participation through pessimism about the state of the world, other players' willingness to participate, and risk aversion. We test the behavioral effects of fear in this game by conducting 32 sessions of an experiment in two labs where participants are randomly assigned to an emotion induction procedure. In some rounds of the game, potential mechanisms are shut down to identify their contribution to the overall effect of fear. Our results show that in this context, fear does not affect willingness to participate. This finding highlights the importance of context, including integral versus incidental emotions and the size of the stakes, in shaping the effect of emotions on behavior
Abstract:
Multilayer perceptron networks have been designed to solve supervised learning problems in which there is a set of known labeled training feature vectors. The resulting model allows us to infer adequate labels for unknown input vectors. Traditionally, the optimal model is the one that minimizes the error between the known labels and those inferred labels via such a model. The training process results in those weights that achieve the most adequate labels. Training implies a search process which is usually determined by the descent gradient of the error. In this work, we propose to replace the known labels by a set of such labels induced by a validity index. The validity index represents a measure of the adequateness of the model relative only to intrinsic structures and relationships of the set of feature vectors and not to previously known labels. Since, in general, there is no guarantee of the differentiability of such an index, we resort to heuristic optimization techniques. Our proposal results in an unsupervised learning approach for multilayer perceptron networks that allows us to infer the best model relative to labels derived from such a validity index which uncovers the hidden relationships of an unlabeled dataset
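A minimal sketch of the idea, under stated assumptions: a tiny multilayer perceptron labels unlabeled points, a simplified validity index (negative mean within-cluster scatter, standing in for whatever index the paper actually uses) scores the labeling, and a derivative-free random search replaces gradient descent. The network size, data, and search budget are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# two well-separated unlabeled blobs
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(3, 0.3, (30, 2))])

def mlp_labels(X, W1, b1, W2, b2):
    """Forward pass of a tiny 2-4-2 perceptron; labels by argmax."""
    h = np.tanh(X @ W1 + b1)
    return np.argmax(h @ W2 + b2, axis=1)

def validity(X, y):
    """Simplified validity index: negative mean within-cluster variance."""
    score = 0.0
    for c in np.unique(y):
        pts = X[y == c]
        score -= pts.var(axis=0).sum() * len(pts)
    return score / len(X)

def random_search(X, iters=300):
    """Derivative-free search over weights, maximizing the index."""
    best, best_score = None, -np.inf
    for _ in range(iters):
        params = (rng.normal(0, 1, (2, 4)), rng.normal(0, 1, 4),
                  rng.normal(0, 1, (4, 2)), rng.normal(0, 1, 2))
        y = mlp_labels(X, *params)
        if len(np.unique(y)) < 2:       # reject degenerate labelings
            continue
        s = validity(X, y)
        if s > best_score:
            best, best_score = params, s
    return best, best_score

params, score = random_search(X)
y = mlp_labels(X, *params)
```

Any black-box heuristic (a genetic algorithm, simulated annealing) could replace the random search; the point is that the index, not known labels, drives the weight selection.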
Abstract:
Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method
Abstract:
One of the basic endeavors in Pattern Recognition, and particularly in Data Mining, is the process of determining which unlabeled objects in a set share interesting properties. This implies a singular process of classification usually denoted as "clustering", where the objects are grouped into k subsets (clusters) in accordance with an appropriate measure of likelihood. Clustering can be considered the most important unsupervised learning problem. The more traditional clustering methods are based on the minimization of a similarity criterion based on a metric or distance. This fact imposes important constraints on the geometry of the clusters found. Since each element in a cluster lies within a radial distance relative to a given center, the shape of the covering or hull of a cluster is hyper-spherical (convex), which sometimes does not encompass adequately the elements that belong to it. For this reason we propose to solve the clustering problem through the optimization of Shannon's entropy. The optimization of this criterion represents a hard combinatorial problem which disallows the use of traditional optimization techniques, and thus the use of a very efficient optimization technique is necessary. We consider that genetic algorithms are a good alternative. We show that our method allows us to obtain successful results for problems where the clusters have complex spatial arrangements. Such a method obtains clusters with non-convex hulls that adequately encompass their elements. We statistically show that our method displays the best performance that can be achieved under the assumption of normal distribution of the elements of the clusters. We also show that it is a good alternative when this assumption is not met
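The following toy sketch illustrates the flavor of the approach: a genetic algorithm evolves cluster assignments scored by a Gaussian entropy proxy (0.5·log of the per-dimension variance), with elitism so the best score never decreases. It is a deliberate simplification, not the paper's method: proper Shannon entropy estimation and the actual genetic operators are omitted, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.4, (25, 2)),
               rng.normal(2, 0.4, (25, 2))])
N, K = len(X), 2

def fitness(labels):
    """Negative total cluster 'disorder', using the Gaussian entropy
    proxy 0.5*log(variance) per dimension."""
    total = 0.0
    for c in range(K):
        pts = X[labels == c]
        if len(pts) < 2:
            return -np.inf              # penalize empty/tiny clusters
        total += 0.5 * np.log(pts.var(axis=0) + 1e-9).sum() * len(pts)
    return -total

def evolve(pop_size=40, gens=60, mut=0.05):
    pop = rng.integers(0, K, (pop_size, N))
    history = []
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        history.append(scores.max())
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = elite.copy()
        flip = rng.random(children.shape) < mut   # pointwise mutation
        children[flip] = rng.integers(0, K, int(flip.sum()))
        pop = np.vstack([elite, children])        # elitism: keep the best
    scores = np.array([fitness(ind) for ind in pop])
    history.append(scores.max())
    return pop[scores.argmax()], history

labels, history = evolve()
```

Because the elite half of the population survives unmutated, the best fitness is monotone non-decreasing across generations.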
Abstract:
This paper proposes a novel distributed controller that solves the leader-follower and the leaderless consensus problems in the task space for networks composed of robots that can be kinematically and dynamically different (heterogeneous). In the leader-follower scenario, the controller ensures that all the robots in the network asymptotically reach a given leader pose (position and orientation), provided that at least one follower robot has access to the leader pose. In the leaderless problem, the robots asymptotically reach an agreement pose. The proposed controller is robust to variable time-delays in the communication channel and does not rely on velocity measurements. The controller is dynamic, it cancels out the gravity effects, and it incorporates a term proportional to the error between the robot position and the controller virtual position. The controller dynamics consists of a simple proportional scheme plus damping injection through a second-order (virtual) system. The proposed approach employs the singularity-free unit quaternions to represent the orientation of the end-effectors, and the network is represented by an undirected and connected interconnection graph. The application to the control of bilateral teleoperators is described as a special case of the leaderless consensus solution. The paper presents numerical simulations with a network composed of four 6-Degrees-of-Freedom (DoF) and one 7-DoF robot manipulators. Moreover, we also report some experiments with a 6-DoF industrial robot and two 3-DoF haptic devices
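The consensus mechanism at the heart of such schemes can be illustrated with a drastically simplified model: single-integrator agents with scalar states on an undirected connected graph, with no delays and no robot dynamics. This is not the paper's controller, only the underlying Laplacian consensus idea it builds on.

```python
import numpy as np

# undirected, connected interconnection graph on 4 agents (a path graph)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

x = np.array([0.0, 2.0, 5.0, 9.0])      # initial scalar "poses"
dt = 0.05
for _ in range(2000):                   # Euler integration of x' = -L x
    x = x - dt * (L @ x)
```

For an undirected graph the average state is invariant, so the agents agree on the mean of the initial conditions (4.0 here).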
Abstract:
This document describes the methodology used to group all the municipalities in the country into 777 local labor markets. The approach used to define local labor markets follows best international practices and hinges on the idea that there are groups of municipalities that are highly economically integrated and, therefore, constitute a single local labor market. After defining local labor markets, census population and housing data collected by INEGI are linked to local labor markets, covering the period from 1990 to 2020. With the definitions generated in this document, researchers from the academic community and the public and private sectors can study different aspects of the Mexican labor market using a unit of analysis that considers the interrelationships between the country's municipalities. The databases are presented at the individual level and aggregated at the local labor market level from 1990 to 2020. This document details the structure of the databases, and provides an initial analysis of the trends exhibited by the Mexican labor markets during 1990-2020 based on the aforementioned database
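The grouping idea can be sketched with a toy union-find pass: merge municipalities whose pairwise commuting-flow share exceeds a threshold, and read the resulting components as local labor markets. The flow data and threshold below are invented for illustration; the document's actual methodology is more elaborate.

```python
def find(parent, i):
    """Root of i with path compression."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def local_labor_markets(n, flows, threshold):
    """Group n municipalities: merge any pair whose commuting-flow
    share exceeds `threshold` (toy version of the clustering idea)."""
    parent = list(range(n))
    for (i, j), share in flows.items():
        if share >= threshold:
            parent[find(parent, i)] = find(parent, j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(parent, i), []).append(i)
    return sorted(groups.values())

# hypothetical commuting shares between 5 municipalities
flows = {(0, 1): 0.30, (1, 2): 0.25, (3, 4): 0.40, (2, 3): 0.02}
markets = local_labor_markets(5, flows, threshold=0.10)
```

Integration is transitive under the merge rule: municipalities 0 and 2 end up in the same market through 1 even though they have no direct flow above the threshold.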
Abstract:
This article examines the effects of committee specialization and district characteristics on speech participation by topic and congressional forum. It argues that committee specialization should increase speech participation during legislative debates, while district characteristics should affect the likelihood of speech participation in non-lawmaking forums. To examine these expectations, we analyze over 100,000 speeches delivered in the Chilean Chamber of Deputies between 1990 and 2018. To carry out our topic classification task, we utilize the recently developed state-of-the-art multilingual Transformer model XLM-RoBERTa. Consistent with informational theories, we find that committee specialization is a significant predictor of speech participation in legislative debates. In addition, consistent with theories purporting that legislative speech serves as a vehicle for the electoral connection, we find that district characteristics have a significant effect on speech participation in non-lawmaking forums
Abstract:
According to conventional wisdom, closed-list proportional representation (CLPR) electoral systems create incentives for legislators to favor the party line over their voters' positions. However, electoral incentives may induce party leaders to tolerate shirking by some legislators, even under CLPR. This study argues that in considering whose deviations from the party line should be tolerated, party leaders exploit differences in voters' relative electoral influence resulting from malapportionment. We expect defections in roll call votes to be more likely among legislators elected from overrepresented districts than among those from other districts. We empirically test this claim using data on Argentine legislators' voting records and a unique dataset of estimates of voters' and legislators' placements in a common ideological space. Our findings suggest that even under electoral rules known for promoting unified parties, we should expect strategic defections to please voters, which can be advantageous for the party's electoral fortunes
Abstract:
This article examines speech participation under different parliamentary rules: open forums dedicated to bill debates, and closed forums reserved for non-lawmaking speeches. It discusses how electoral incentives influence speechmaking by promoting divergent party norms within those forums. Our empirical analysis focuses on the Chilean Chamber of Deputies. The findings lend support to the view that, in forums dedicated to non-lawmaking speeches, participation is greater among more institutionally disadvantaged members (backbenchers, women, and members from more distant districts), while in those that are dedicated to lawmaking debates, participation is greater among more senior members and members of the opposition
Abstract:
We present a novel approach to disentangle the effects of ideology, partisanship, and constituency pressures on roll-call voting. First, we place voters and legislators on a common ideological space. Next, we use roll-call data to identify the partisan influence on legislators’ behavior. Finally, we use a structural equation model to account for these separate effects on legislative voting. We rely on public opinion data and a survey of Argentine legislators conducted in 2007–08. Our findings indicate that partisanship is the most important determinant of legislative voting, leaving little room for personal ideological position to affect legislators’ behavior
Abstract:
Legislators in presidential countries use a variety of mechanisms to advance their electoral careers and connect with relevant constituents. The most frequently studied activities are bill initiation, co-sponsoring, and legislative speeches. In this paper, the authors examine legislators' information requests (i.e. parliamentary questions) to the government, which have been studied in some parliamentary countries but remain largely unscrutinised in presidential countries. The authors focus on the case of Chile - where strong and cohesive national parties coexist with electoral incentives that emphasise the personal vote - to examine the links between party responsiveness and legislators' efforts to connect with their electoral constituencies. Making use of a new database of parliamentary questions and a comprehensive sample of geographical references, the authors examine how legislators use this mechanism to forge connections with voters, and find that targeted activities tend to increase as a function of electoral insecurity and progressive ambition
Abstract:
In her response, Alfaro fleshes out two main questions that arise from her book The Belief in Intuition. First, what is the relation between Henri Bergson's and Max Scheler's personalist anthropology, on the one hand, and politics, on the other? What kinds of political order and civic education would sustain a dense moral psychology such as the one she claims follows from their writings? Second, and more specifically, what theory of authority corresponds to the philosophical anthropology that we learn from these authors? How can the "small-scale exemplarity" that Alfaro argues is articulated in their writings be an alternative to how we think today about charisma and emulation in the public sphere? In articulating these responses, Alfaro reflects on phenomena like uncertainty, education, and social media in the contemporary public sphere
Abstract:
Using insights from two of the major proponents of the hermeneutical approach, Paul Ricoeur and Hannah Arendt, who both recognized the ethicopolitical importance of narrative and acknowledged some of the dangers associated with it, I will flesh out the worry that "narrativity" in political theory has been overly attentive to storytelling and not heedful enough of story listening. More specifically, even if, as Ricoeur says, "narrative intelligence" is crucial for self-understanding, that does not mean, as he invites us to, that we should always seek to develop a "narrative identity" or become, as he says, "the narrator of our own life story." I offer that, perhaps inadvertently, such an injunction might turn out to be detrimental to the "art of listening." This, however, must also be cultivated if we want to do justice to our narrative character and expect narrative to have the political role that both Ricoeur and Arendt envisaged. Thus, although there certainly is a "redemptive power" in narrative, when the latter is understood primarily as the act of narration or as the telling of stories, there is a danger to it as well. Such a danger, I think, intensifies at a time like ours, when, as some scholars have noted, "communicative abundance" or the "ceaseless production of redundancy" in traditional and social media has often led to the impoverishment of the public conversation
Abstract:
In this paper, I take George Lakoff and Mark Johnson's thesis that metaphors shape our reality to approach the judicial imagery of the new criminal justice system in Mexico (in effect since 2016). Based on twenty-nine in-depth interviews with judges and other members of the judiciary, I study what I call the "dirty minds" metaphor, showing its presence in everyday judicial practice and analyzing both its cognitive basis as well as its effects on how criminal judges understand their job. I argue that such a metaphor, together with the "fear of contamination" it raises as a result, is misleading and goes to the detriment of the judicial virtues that should populate the new system. The conclusions I offer are relevant beyond the national context, inter alia, because they concern a far-reaching paradigm of judgment
Abstract:
Recent efforts to theorize the role of emotions in political life have stressed the importance of sympathy, and have often recurred to Adam Smith to articulate their claims. In the early twentieth-century, Max Scheler disputed the salutary character of sympathy, dismissing it as an ultimately perverse foundation for human association. Unlike later critics of sympathy as a political principle, Scheler rejected it for being ill equipped to salvage what, in his opinion, should be the proper basis of morality, namely, moral value. Even if Scheler's objections against Smith's project prove to be ultimately mistaken, he had important reasons to call into question its moral purchase in his own time. Where the most dangerous idol is not self-love but illusory self-knowledge, the virtue of self-command will not suffice. Where identification with others threatens the social bond more deeply than faction, “standing alone” in moral matters proves a more urgent task
Abstract:
Images of chemical molecules can be produced, manipulated, simulated and analyzed using sophisticated chemical software. However, in the process of publishing such images into scientific literature, all their chemical significance is lost. Although images of chemical molecules can be easily analyzed by the human expert, they cannot be fed back into chemical software and lose much of their potential use. We have developed a system that can automatically reconstruct the chemical information associated with the images of chemical molecules, thus rendering them computer readable. We have benchmarked our system against a commercially available product and have also tested it using chemical databases of several thousand images, with very encouraging results
Abstract:
We have developed a system for the automatic reconstruction of chemical molecules from images. The system takes as input an electronically produced image of a chemical molecule and produces an SDF file containing the complete chemical description of the molecule. The SDF file can then be read and used by most chemical computer programs. Our system finds extensive application in information extraction problems where the molecule images contained in chemical documents need to be rendered computer readable. We have benchmarked our system against a commercially available product and have also tested it using chemical databases of several thousand images. The system can be parameterized to reconstruct images of different sources and different characteristics
Abstract:
We present an algorithm that automatically segments and classifies the brain structures in a set of magnetic resonance (MR) brain images using expert information contained in a small subset of the image set. The algorithm is intended to do the segmentation and classification tasks mimicking the way a human expert would reason. The algorithm uses a knowledge base taken from a small subset of semiautomatically classified images that is combined with a set of fuzzy indexes that capture the experience and expectation a human expert uses during recognition tasks. The fuzzy indexes are tissue specific and spatial specific, in order to consider the biological variations in the tissues and the acquisition inhomogeneities through the image set. The brain structures are segmented and classified one at a time. For each brain structure the algorithm needs one semiautomatically classified image and makes one pass through the image set. The algorithm uses low-level image processing techniques on a pixel basis for the segmentations, then validates or corrects the segmentations, and makes the final classification decision using higher level criteria measured by the set of fuzzy indexes. We use single-echo MR images because of their high volumetric resolution; but even though we are working with only one image per brain slice, we have multiple sources of information on each pixel: absolute and relative positions in the image, gray level value, statistics of the pixel and its three-dimensional neighborhood and relation to its counterpart pixels in adjacent images. We have validated our algorithm for ease of use and precision both with clinical experts and with measurable error indexes over a Brainweb simulated MR set
Abstract:
We present an attractive methodology for the compression of facial gestures that can be used to drive interaction in real time applications. Using the eigenface method we build compact representation spaces for a variety of facial gestures. These compact spaces are the so called eigenspaces. We do real time tracking and segmentation of facial features from video images and then use the eigenspaces to find compact descriptors of the segmented features. We use the system for an avatar videoconference application where we achieve real time interactivity with very limited bandwidth requirements. The system can also be used as a hands free man-machine interface
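The eigenface-style compression described above reduces to principal component analysis: project centered images onto a small orthonormal basis and keep the coefficients as compact descriptors. A minimal numpy sketch, with random stand-in "images" rather than real gesture data:

```python
import numpy as np

rng = np.random.default_rng(2)
faces = rng.random((20, 64))            # 20 flattened 8x8 stand-in images

mean = faces.mean(axis=0)
centered = faces - mean
# eigenvectors of the covariance via SVD: rows of Vt span the eigenspace
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 8                                   # keep 8 eigen-components
basis = Vt[:k]

codes = centered @ basis.T              # compact descriptors (20 x 8)
recon = codes @ basis + mean            # approximate reconstruction
err = np.linalg.norm(faces - recon) / np.linalg.norm(faces)
```

Keeping all components reconstructs the training set exactly; truncating to k components is what yields the bandwidth savings, at the cost of a controlled reconstruction error.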
Abstract:
We use interactive virtual environments for cognitive behavioral therapy. Working together with children's therapists and psychologists, our computer graphics group developed 5 interactive simulators for the treatment of fears and behavior disorders. The simulators run in real time on P4 PCs with graphic accelerators, but also work online using streaming techniques and Web VR engines. The construction of the simulators starts with ideas and situations proposed by the psychologists; these ideas are then developed by graphic designers and finally implemented in 3D virtual worlds by our group. We present the methodology we follow to turn the psychologists' ideas and then the graphic designers' sketches into fully interactive simulators. Our methodology starts with a graphic modeler to build the geometry of the virtual worlds; the models are then exported to a dedicated OpenGL VR engine that can interface with any VR peripheral. Alternatively, the models can be exported to a Web VR engine. The simulators are cost efficient since they require little more than the PC and the graphics card. We have found that both the therapists and the children that use the simulators find this technology very attractive
Abstract:
We study the motion of the symmetric two- and three-center problems on a negatively curved surface, using the Poincaré upper half-plane model for a surface of constant negative curvature κ, which without loss of generality we take to be κ = -1. Using this model, we first derive the equations of motion for the 2- and 3-center problems. We prove that for the 2-center problem there exists a unique equilibrium point, and we study the dynamics around it. For the motion restricted to the invariant y-axis, we prove that it is a center, but for the general two-center problem it is unstable. For the 3-center problem, we show the nonexistence of equilibrium points. We study two particular integrable cases: first, when the motion of the free particle is restricted to the y-axis, and second, when all particles lie along the same geodesic. We classify the singularities of the problem and introduce a local and a global regularization of all of them. We show some numerical simulations for each situation
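For reference, the model in question is the upper half-plane H = {(x, y) : y > 0} equipped with the metric

  ds² = (dx² + dy²) / y²,

which has constant curvature κ = -1; its geodesics are vertical half-lines and half-circles orthogonal to the x-axis. The equations of motion of the 2- and 3-center problems are derived with respect to this metric.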
Abstract:
We consider the curved 4-body problems on spheres and hyperbolic spheres. After obtaining a criterion for the existence of quadrilateral configurations on the equator of the sphere, we study two restricted 4-body problems: one in which two masses are negligible, and another in which only one mass is negligible. In the former, we prove the existence of square-like relative equilibria, whereas in the latter we discuss the existence of kite-shaped relative equilibria
Abstract:
In this paper, we study the hypercyclic composition operators on weighted Banach spaces of functions defined on discrete metric spaces. We show that the only such composition operators act on the "little" spaces. We characterize the bounded composition operators on the little spaces, as well as provide various necessary conditions for hypercyclicity
Abstract:
We give a complete characterisation of the single and double arrow relations of the standard context K(Ln) of the lattice Ln of partitions of any positive integer n under the dominance order, thereby addressing an open question of Ganter (2020/2022)
Abstract:
Demand response (DR) programs and local markets (LM) are two suitable technologies to mitigate the effects of the high penetration of distributed energy resources (DER), which keeps growing rapidly worldwide even during the current pandemic. The aim is to improve operation by incorporating such mechanisms into the energy resource management problem, while mitigating present issues, using Smart Grid (SG) technologies and optimization techniques. This paper presents an efficient intraday energy resource management starting from the day-ahead time horizon, which considers load uncertainty and implements both DR programs and LM trading to reduce the operating costs of three load aggregators in an SG. A random perturbation was used to generate the intraday scenarios from the day-ahead time horizon. A recent evolutionary algorithm, HyDE-DF, is used to achieve optimization. Results show that the aggregators can manage consumption and generation resources, including DR and power balance compensation, through an implemented LM
Abstract:
Demand response programs, energy storage systems, electric vehicles, and local electricity markets are appropriate solutions to offset the uncertainty associated with the high penetration of distributed energy resources. This work aims to enhance efficiency by adding such technologies to the energy resource management problem, while also addressing current concerns using smart grid technologies and optimization methodologies. This paper presents an efficient intraday energy resource management starting from the day-ahead time horizon, which considers the uncertainty associated with load consumption, renewable generation, electric vehicles, electricity market prices, and the existence of extreme events in a 13-bus distribution network with high integration of renewables and electric vehicles. A risk analysis is implemented through conditional value-at-risk to address these extreme events. In the intraday model, we assume that an extreme event will occur in order to analyze the outcome of the developed solution. We analyze the solution's impact starting from the day-ahead horizon, considering different risk aversion levels. Multiple metaheuristics optimize the day-ahead problem, and the best-performing algorithm is used for the intraday problem. Results show that HyDE gives the best day-ahead solution compared to the other algorithms, achieving a reduction of around 37% in the cost of the worst scenarios. For the intraday model, considering risk aversion also reduces the impact of the extreme scenarios
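The conditional value-at-risk criterion used here to address extreme events can be sketched as follows (the scenario costs and the confidence level below are invented for illustration): CVaR is the expected cost of the worst tail of scenarios beyond the value-at-risk.

```python
import numpy as np

# Illustrative conditional value-at-risk (CVaR) over scenario operating costs:
# the mean cost of the scenarios at or beyond the beta-quantile (VaR).
costs = np.array([100., 102., 98., 110., 150., 95., 130., 105., 99., 101.])
beta = 0.8                          # confidence level (assumed for illustration)

var = np.quantile(costs, beta)      # value-at-risk at level beta
cvar = costs[costs >= var].mean()   # mean of the tail beyond VaR

print(var, cvar)
```

A risk-averse objective then weights expected cost against this tail measure, so solutions that look cheap on average but catastrophic in extreme scenarios are penalized.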
Abstract:
The central configurations given by an equilateral triangle and a regular tetrahedron with equal masses at the vertices and a body at the barycenter have been widely studied in [9] and [14], due to the bifurcation phenomena that occur when the central mass takes a determined value m*. We propose a variation of this problem, setting the central mass at the critical value m* and letting a mass at a vertex be the bifurcation parameter. In both the 2D and 3D cases, we verify the existence of bifurcation; that is, for the same set of masses we determine two new central configurations. The computation of the bifurcations, as well as their pictures, has been performed considering homogeneous force laws with exponent a < −1
Abstract:
The leaderless and the leader-follower consensus are the most basic synchronization behaviors for multiagent systems. For networks of Euler-Lagrange (EL) agents, different controllers have been proposed to achieve consensus, requiring in all cases either the cancellation or the estimation of the gravity forces. While in the first case it is shown that a simple proportional plus damping (P+d) scheme with exact gravity cancellation can achieve consensus, in the latter case it is necessary to estimate not just the gravity forces but the parameters of the whole dynamics. This requires the computation of a complicated regressor matrix, whose complexity grows as the degrees of freedom of the EL agents increase. To simplify the controller implementation, we propose in this paper a simple P+d scheme with only adaptive gravity compensation. In particular, we report two adaptive controllers that solve both consensus problems by estimating only the gravitational term of the agents, and hence without requiring the complete regressor matrix. The first controller is a simple P+d scheme that does not require exchanging velocity information between the agents but requires centralized information. The second controller is a proportional-derivative plus damping (PD+d) scheme that is fully decentralized but requires the exchange of velocity information between the agents. Simulation results demonstrate the performance of the proposed controllers
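In generic notation (ours, not necessarily the paper's), a P+d consensus law for agent i has the form

  τ_i = −k_p Σ_{j ∈ N_i} (q_i − q_j) − k_d q̇_i + ĝ_i(q_i),

where the proportional term couples agent i to its neighbors N_i, the damping term injects dissipation, and the last term cancels, or in the adaptive versions discussed here estimates, the gravity forces; the point of the paper is that only this last term needs to be estimated, not the full regressor-based dynamics.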
Abstract:
Medical image segmentation is one of the most productive research areas in medical image processing. The goal of most new image segmentation algorithms is to achieve higher segmentation accuracy than existing algorithms. But the issue of quantitative, reproducible validation of segmentation results, and the questions "What is segmentation accuracy?" and "What segmentation accuracy can a segmentation algorithm achieve?" remain wide open. The creation of a validation framework is relevant and necessary for consistent and realistic comparisons of existing, new, and future segmentation algorithms. An important component of a reproducible and quantitative validation framework for segmentation algorithms is a composite index that measures segmentation performance at a variety of levels. In this paper we present a prototype composite index that includes the measurement of seven metrics on segmented image sets. We explain how the composite index is a more complete and robust representation of algorithmic performance than currently used indices that rate segmentation results using a single metric. Our proposed index can be read as an averaged global metric or as a series of algorithmic ratings that allow the user to compare how an algorithm performs under many categories
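The composite-index idea can be sketched in a few lines (the three metrics and the tiny masks below are illustrative stand-ins, not the seven metrics of the paper): compute several per-structure ratings, keep them available individually, and average them into one global score.

```python
import numpy as np

# Hypothetical composite index: average several overlap metrics computed on a
# segmentation (seg) against a reference (ref), keeping individual ratings too.
def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def sensitivity(a, b):          # fraction of reference pixels recovered
    return np.logical_and(a, b).sum() / b.sum()

seg = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)   # algorithm output
ref = np.array([[1, 1, 0], [0, 0, 0], [0, 1, 0]], dtype=bool)   # ground truth

ratings = {m.__name__: m(seg, ref) for m in (dice, jaccard, sensitivity)}
composite = sum(ratings.values()) / len(ratings)   # averaged global metric

print(ratings, round(float(composite), 3))
```

Reading the per-metric ratings separately shows *where* an algorithm fails (e.g. good overlap but poor sensitivity), while the averaged value gives the single global score used for ranking.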
Abstract:
How is the size of the informal sector affected when the distribution of social expenditures across formal and informal workers changes? How is it affected when the tax rate changes along with the generosity of these transfers? In our search model, taxes are levied on formal-sector workers as a proportion of their wage. Transfers, in contrast, are lump-sum and are received by both formal and informal workers. This implies that high-wage formal workers subsidize low-wage formal workers as well as informal workers. We calibrate the model to Mexico and perform counterfactuals. We find that the size of the informal sector is quite inelastic to changes in taxes and transfers. This is due to the presence of search frictions and to the cross-subsidy in our model: for low-wage formal jobs, a tax increase is roughly offset by an increase in benefits, leaving the unemployed approximately indifferent. Our results are consistent with the empirical evidence on the recent introduction of the “Seguro Popular” healthcare program
Abstract:
We calibrate the cost of sovereign defaults using a continuous-time model, where government default decisions may trigger a change in the regime of a stochastic TFP process. We calibrate the model to a sample of European countries from 2009 to 2012. By comparing the estimated drift under default with that under no default, we find that TFP falls in the range of 3.70–5.88%. The model is consistent with observed falls in GDP growth rates and subsequent recoveries, and illustrates why fiscal multipliers are small during sovereign debt crises
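In generic notation (illustrative, not the paper's), such a regime-switching TFP process can be written as

  dA_t = μ_{s_t} A_t dt + σ A_t dW_t,  s_t ∈ {n, d},  μ_d < μ_n,

where the regime s_t switches from n (no default) to d when the government defaults; the reported 3.70–5.88% range is the fall implied by comparing the estimated drift under default with the drift under no default.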
Abstract:
Employment to population ratios differ markedly across Organization for Economic Cooperation and Development (OECD) countries, especially for people aged over 55 years. In addition, social security features differ markedly across the OECD, particularly with respect to features such as generosity, entitlement ages, and implicit taxes on social security benefits. This study postulates that differences in social security features explain many differences in employment to population ratios at older ages. This conjecture is assessed quantitatively with a life cycle general equilibrium model of retirement. At ages 60-64 years, the correlation between the simulations of this study's model and observed data is 0.67. Generosity and implicit taxes are key features to explain the cross-country variation, whereas entitlement age is not
Abstract:
The consequences of increases in the scale of tax and transfer programs are assessed in the context of a model with idiosyncratic productivity shocks and incomplete markets. The effects are contrasted with those obtained in a stand-in household model featuring no idiosyncratic shocks and complete markets. The main finding is that the impact on hours remains very large, but the welfare consequences are very different. The analysis also suggests that tax and transfer policies have large effects on average labor productivity via selection effects on employment
Abstract:
We also noticed that 5 months of historical data are enough for accurate training of the forecast models, and that shallow models with around 50,000 parameters have enough predicting power for this task
Abstract:
Atmospheric pollution components have negative effects on people's health and lives. Outdoor pollution has been extensively studied, but people spend a large portion of their time indoors. Our research focuses on indoor pollution forecasting using deep learning techniques coupled with the large processing capabilities of cloud computing. This paper also shares, as open source, the implementation of the code for modeling time series from different data sources. We believe that further research can leverage the outcomes of our work
Abstract:
Sergio Verdugo's provocative Foreword challenges us to think about whether the concepts we inherited from classical constitutionalism are still useful for understanding our current reality. Verdugo refutes any attempt to defend what he calls "the conventional approach to constituent power." The objective of this article is to contradict Verdugo's assertions which, the Foreword claims, are based on an incorrect notion of the people as a unified body, or as a social consensus. The article argues, instead, for the plausibility of defending the popular notion of constituent power by anchoring it in a historical and dynamic concept of democratic legitimacy. It concludes that, although legitimizing deviations from the established channels for political transformation entails risks, we must assume them for the sake of the emancipatory potential of constituent power
Resumen:
El libro La Corte Enrique Santiago Petracchi II que se reseña muestra la impronta de Enrique Santiago Petracchi durante su segundo período a cargo de la presidencia de la Corte Suprema de Justicia de la Nación Argentina, ocurrida entre los años 2003 y 2006. La Corte se estudia tanto desde el punto de vista interno, con sus reformas en busca de transparencia y legitimidad, como desde el punto de vista externo, contextual y de relación con el resto de los poderes del Estado y la sociedad civil. El recuento incluye una mención de los hitos jurisprudenciales de dicho período y explica cómo la figura de Petracchi es central para comprender este momento de la Corte
Abstract:
The book under review, "The Court Enrique Santiago Petracchi II", shows the imprint of Chief Justice Enrique Santiago Petracchi on the Argentine Supreme Court of Justice during his second term as its president, between 2003 and 2006. The Court is studied both from the internal point of view, with its reforms in search of transparency and legitimacy, and from the external, contextual point of view of its relationship with the other branches of the State and civil society. The account includes a mention of the leading cases of that period and explains how the figure of Petracchi is central to understanding this moment of the Court
Resumen:
El artículo plantea algunas inquietudes sobre el análisis que Gargarella hace de la sentencia de la Corte Interamericana de Derechos Humanos en el caso Gelman vs. Uruguay. Aceptando la idea principal de que las decisiones pueden gradarse según su legitimidad democrática, cuestiona que la Ley de Caducidad uruguaya haya sido legítima en un grado significativo y por tanto haya debido respetarse en su validez por la Corte Interamericana. Ello por cuanto la decisión no fue tomada por todas las personas posiblemente afectadas por la misma. Este principio normativo de inclusión es fundamental para la teoría de la democracia de Gargarella, sin embargo, no fue considerado en su análisis del caso. El artículo explora las consecuencias de tomarse en serio el principio de inclusión y los problemas prácticos que este apareja, en especial respecto de la constitución del demos. Finalmente, propone una alternativa de solución mediante la justicia constitucional/convencional, de acuerdo con un entendimiento procedimental de la misma basado en la participación, según lo pensó J. H. Ely
Abstract:
The article raises some questions about Gargarella's analysis of the Inter-American Court of Human Rights' ruling in Gelman v. Uruguay. It accepts the main idea that decisions can be graded according to their democratic legitimacy, but it questions whether the Uruguayan amnesty had a high degree of legitimacy. This is because the decision was not made by all the people possibly affected by it. This normative principle of inclusion is fundamental to Gargarella's theory of democracy; however, it was not considered in his analysis of the case. The article explores the consequences of taking the principle of inclusion seriously and the practical problems it involves, especially regarding the constitution of the demos. Finally, it proposes an alternative solution through a procedural understanding of judicial review based on participation, as proposed by J. H. Ely
Resumen:
Este artículo reflexionará críticamente sobre la jurisprudencia de la Suprema Corte de Justicia de la Nación mexicana en torno al principio constitucional de paridad de género. Para hacerlo, se apoyará en una reconstrucción teórica de tres diferentes fundamentos en los que se puede basar la paridad según se entienda la representación política de las mujeres y el principio de igualdad. Estas posturas se identificarán como “de la igualdad formal”, “de la defensa de las cuotas” y “de la paridad”, resaltando cómo en las sentencias mexicanas se mezclan los argumentos de las dos últimas posturas de forma desordenada e incoherente. Finalmente, el artículo propiciará una interpretación del principio de paridad de género como dinámico y por tanto abierto a aportaciones progresivas del sujeto político feminista, que evite los riesgos del esencialismo como el reforzar el sistema binario de sexos. Un principio que, en este momento histórico, requiere medidas de acción afirmativa correctivas, provisionales y estratégicas para alcanzar la igualdad sustantiva de género en la representación política
Abstract:
This article will critically reflect on the jurisprudence of the Mexican Supreme Court of Justice regarding the constitutional principle of gender parity. To do so, it will rely on a theoretical reconstruction of three different foundations of the principle, based on the understanding of the political representation of women and the principle of equality. I will call these positions: "of formal equality," "of the defense of quotas," and "of parity," highlighting how the arguments of the last two positions are mixed in the Mexican judgments in a disorderly and incoherent way. Finally, the article will promote an interpretation of the principle of gender parity as dynamic and therefore open to progressive contributions from the feminist political subject, avoiding both the risks of essentialism and reinforcing the binary system of sexes. This principle requires, at this historical moment, corrective, provisional, and strategic affirmative action measures to achieve substantive gender equality in political representation
Abstract:
The article analyzes the possibility that we are facing a constitutional transformation beyond the constitutional text. Within the framework of the so-called "Cuarta Transformación" announced for the country after the last elections, it offers a diagnostic reflection on the role played by the Supreme Court of Justice.
Resumen:
El presente trabajo tiene por objeto analizar el funcionamiento de la rigidez constitucional en México como garantía de la supremacía constitucional. Para ello comenzaré con un estudio sobre la idea de rigidez y la distinguiré del concepto de supremacía. Posteriormente utilizaré dichas categorías para analizar el sistema mexicano y cuestionar su eficacia, es decir, la adecuación entre el medio (rigidez) y el fin (supremacía). Por último haré un par de propuestas de modificación del mecanismo de reforma constitucional en vistas a hacerlo más democrático, más deliberativo y con ello más eficaz para garantizar la supremacía constitucional
Abstract:
This paper analyzes how constitutional rigidity works in Mexico and its consequences for constitutional supremacy. It starts with a conceptual distinction between rigidity and supremacy. Subsequently, those categories are used to analyze the Mexican system and to question the capability of the amendment process to guarantee constitutional supremacy. Finally, the paper makes some proposals to change the Mexican constitutional amendment process in order to make it more democratic, more deliberative, and thereby more effective in guaranteeing constitutional supremacy
Resumen:
El constitucionalismo popular como corriente constitucional contemporánea plantea una revisión crítica a la historia del constitucionalismo norteamericano, reivindicando el papel del pueblo en la interpretación constitucional. A la vez, presenta un contenido normativo anti-elitista que trasciende sus orígenes y que pone en cuestión la idea de que los jueces deban tener la última palabra en las controversias sobre derechos fundamentales. Esta faceta tiene su correlato en los diseños institucionales propuestos, propios de un constitucionalismo débil y centrados en la participación popular democrática
Abstract:
Popular constitutionalism is a contemporary constitutional theory that takes a critical view of the U.S. constitutional narrative focused on judicial supremacy. Instead, popular constitutionalism regards the people as the main actor and defends an anti-elitist understanding of constitutional law. From the institutional perspective, it proposes a weak model of constitutionalism and a strong participatory democracy
Resumen:
El trabajo tiene como objetivo analizar la sentencia dictada por la Primera Sala de la Suprema Corte de Justicia mexicana en el amparo en revisión 152/2013 en la que se declaró la inconstitucionalidad de la exclusión de los homosexuales del régimen matrimonial en el Estado de Oaxaca. Esta sentencia refleja un cambio importante en la forma de entender el interés legítimo tratándose de la impugnación de normas autoaplicativas (es decir, de normas que causan perjuicio sin que medie acto de aplicación), dando paso a la justiciabilidad de los mensajes estigmatizantes. En el caso, esta forma más amplia de entender el interés legítimo está basada en la percepción de que el derecho discrimina a través de los mensajes que transmite; situación que la Suprema Corte considera puede combatir a través de sus sentencias de amparo. Asimismo, se plantean algunos retos e inquietudes que suscita la sentencia a la luz del activismo judicial que puede conllevar
Abstract:
This paper focuses on the amparo en revisión 152/2013 issued by the First Chamber of the Supreme Court of Mexico. From now on, the Supreme Court is able to judge the stigmatizing messages of the law. Furthermore, the amparo en revisión 152/2013 develops a broader conception of discrimination and a more activist role for the Supreme Court. Finally, I express some thoughts about the issues that this judgment could pose for the Supreme Court
Abstract:
We elicit subjective probability distributions from business executives about their own-firm outcomes at a one-year look-ahead horizon. In terms of question design, our key innovation is to let survey respondents freely select support points and probabilities in five-point distributions over future sales growth, employment, and investment. In terms of data collection, we develop and field a new monthly panel Survey of Business Uncertainty. The SBU began in 2014 and now covers about 1,750 firms drawn from all 50 states, every major nonfarm industry, and a range of firm sizes. We find three key results. First, firm-level growth expectations are highly predictive of realized growth rates. Second, firm-level subjective uncertainty predicts the magnitudes of future forecast errors and future forecast revisions. Third, subjective uncertainty rises with the firm's absolute growth rate in the previous year and with the magnitude of recent revisions to its expected growth rate. We aggregate over firm-level forecast distributions to construct monthly indices of business expectations (first moment) and uncertainty (second moment) for the U.S. private sector
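The first and second moments mentioned above follow directly from each respondent's five support points and probabilities; a minimal sketch (the numbers below are invented, not survey data):

```python
# A respondent freely chooses five support points (year-ahead sales growth
# rates) and attaches probabilities to them. The probability-weighted mean is
# the firm's expectation; the standard deviation is its subjective uncertainty.
support = [-0.10, -0.02, 0.00, 0.03, 0.08]   # hypothetical growth outcomes
probs   = [0.05, 0.15, 0.30, 0.35, 0.15]     # must sum to 1

assert abs(sum(probs) - 1.0) < 1e-9

mean = sum(p * x for p, x in zip(probs, support))             # first moment
var  = sum(p * (x - mean) ** 2 for p, x in zip(probs, support))
sd   = var ** 0.5                                             # second moment

print(round(mean, 4), round(sd, 4))
```

Aggregating these firm-level means and standard deviations across respondents each month yields the expectations and uncertainty indices described in the abstract.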
Abstract:
We consider several economic uncertainty indicators for the US and UK before and during the COVID-19 pandemic: implied stock market volatility, newspaper-based policy uncertainty, Twitter chatter about economic uncertainty, subjective uncertainty about business growth, forecaster disagreement about future GDP growth, and a model-based measure of macro uncertainty. Four results emerge. First, all indicators show huge uncertainty jumps in reaction to the pandemic and its economic fallout. Indeed, most indicators reach their highest values on record. Second, peak amplitudes differ greatly, ranging from a 35% rise for the model-based measure of US economic uncertainty (relative to January 2020) to a 20-fold rise in forecaster disagreement about UK growth. Third, time paths also differ: implied volatility rose rapidly from late February, peaked in mid-March, and fell back by late March as stock prices began to recover. In contrast, broader measures of uncertainty peaked later and then plateaued, as job losses mounted, highlighting differences between Wall Street and Main Street uncertainty measures. Fourth, in Cholesky-identified VAR models fit to monthly U.S. data, a COVID-size uncertainty shock foreshadows peak drops in industrial production of 12-19%
Resumen:
Uno de los principales problemas en la optimización en general es el modelado, es decir obtener un modelo para posteriormente aplicar alguna técnica de optimización adecuada. Sin embargo, existen situaciones en las que el modelo no se presenta de manera explícita o encontrar el modelo es sumamente complicado. Para estos casos es necesario el uso de la simulación para conocer el desempeño de un sistema. En este trabajo se abordará el caso particular del mantenimiento preventivo a máquinas utilizando técnicas de optimización-simulación
Abstract:
One of the most important problems in optimization is modeling: the process of building a model and then applying a suitable technique to solve it. Unfortunately, for several problems it is not easy to obtain a model, because an explicit form is difficult to build. One tool for such situations is simulation, used as a black-box evaluator with controllable variables and uncontrollable random variables that determine the performance of a system. In this work we address the preventive maintenance of machines using simulation-optimization techniques
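A toy version of this simulation-optimization loop (all costs, lifetimes, and the search grid below are invented for illustration): the simulator is treated purely as a black-box evaluator, and the maintenance interval that minimizes simulated average cost is selected.

```python
import random

# Black-box simulation-optimization sketch: choose the preventive-maintenance
# interval that minimizes simulated average cost over a planning horizon.
random.seed(3)
C_PREV, C_FAIL, HORIZON = 1.0, 10.0, 100.0   # hypothetical costs / horizon

def simulated_cost(interval, runs=200):
    """Average simulated cost for a given preventive-maintenance interval."""
    total = 0.0
    for _ in range(runs):
        t, cost = 0.0, 0.0
        while t < HORIZON:
            # Weibull lifetime with shape 2: an aging machine, so preventive
            # maintenance can actually pay off (unlike memoryless failures).
            failure = random.weibullvariate(8.0, 2.0)
            if failure < interval:
                t += failure
                cost += C_FAIL            # unplanned corrective repair
            else:
                t += interval
                cost += C_PREV            # planned preventive maintenance
        total += cost
    return total / runs

# Crude search over candidate intervals, using only black-box evaluations
best = min(range(1, 16), key=simulated_cost)
print(best)
```

Note that no explicit cost model is ever written down; the optimizer only queries the simulator, which is exactly the situation the abstract describes.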
Abstract:
This paper proposes a tree-based incremental-learning model to estimate house pricing using publicly available information on geography, city characteristics, transportation, and real estate for sale. Previous machine-learning models capture the marginal effects of property characteristics and location on prices using big datasets for training. In contrast, our scenario is constrained to small batches of data that become available on a daily basis; therefore our model learns from daily city data, employing incremental learning to provide accurate price estimations each day. Our results show that property prices are highly influenced by the city characteristics and its connectivity, and that incremental models efficiently adapt to the nature of the house pricing estimation task
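The paper's model is tree-based; as a stand-in, the sketch below shows only the incremental-learning loop itself (with a simple online least-squares learner and synthetic daily batches): each day a small batch of listings arrives and the model is updated without retraining on all past data.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)        # weights for hypothetical features [area, rooms, connectivity]
lr = 0.01              # learning rate for the online updates

def daily_update(w, X, y, lr):
    """One incremental pass over a day's batch (online least squares / LMS)."""
    for xi, yi in zip(X, y):
        w = w + lr * (yi - w @ xi) * xi   # gradient step on the squared error
    return w

true_w = np.array([2.0, 0.5, 1.0])        # ground truth for the synthetic prices
for day in range(30):                     # 30 days of small batches
    X = rng.normal(size=(20, 3))          # 20 new listings arrive this day
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    w = daily_update(w, X, y, lr)

print(np.round(w, 2))
```

The point of the incremental setting is visible in the loop: memory holds only today's 20 listings, yet the estimate converges toward the true coefficients as the days accumulate.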
Abstract:
In this work, we introduce a framework to combine arbitrary image segmentation algorithms from different agents under data privacy constraints to produce an aggregated prediction set satisfying finite-sample risk control guarantees. We leverage distribution-free uncertainty quantification techniques in order to aggregate deep neural networks for image segmentation tasks. Our method can be applied in settings to merge the predictions of multiple agents with arbitrarily dependent prediction sets. Moreover, we perform experiments in medical imaging tasks to illustrate our proposed framework. Our results show that the framework reduced the empirical false positive rate by 50% without compromising the false negative rate, with respect to the false positive rate of any of the constituent models in the aggregated prediction algorithm
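A toy sketch of distribution-free risk control for an aggregated segmenter (illustrative, not the paper's exact procedure): average the agents' soft masks, then pick the smallest threshold whose empirical false positive rate on calibration data meets a target level α.

```python
import numpy as np

rng = np.random.default_rng(2)

# Soft masks from three hypothetical agents on 50 calibration images (flattened)
preds = rng.uniform(size=(3, 50, 100))
truth = rng.uniform(size=(50, 100)) < 0.3    # calibration ground truth masks

avg = preds.mean(axis=0)                     # aggregate the agents' predictions

def fpr(threshold):
    """Empirical false positive rate of the thresholded aggregate mask."""
    pred = avg >= threshold
    fp = np.logical_and(pred, ~truth).sum()
    return fp / (~truth).sum()

alpha = 0.10                                 # target false positive rate
# Scan thresholds from permissive to strict; keep the first meeting the target
lam = next(t for t in np.linspace(0, 1, 101) if fpr(t) <= alpha)

print(float(lam))
```

Real risk-control procedures add a finite-sample correction so the guarantee holds on unseen data, not just on the calibration set; the scan above conveys only the basic mechanism.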
Abstract:
In journalism, innovation can be achieved by integrating various factors of change. This article reports the results of an investigation carried out in 2020; the study sample comprised journalists who participated as social researchers in a longitudinal study focused on citizens' perceptions of the Mexican electoral process in 2018. Journalistic innovation was promoted by the development of a novel methodology that combined existing journalistic resources and the use of qualitative social research methodologies. This combination provided depth to the journalistic coverage, optimized reporting, editing, design, and publication processes, improved access to more complete and diverse information in real time, and enhanced the capabilities of journalists. The latter transformed, through differential behaviors, their way of thinking about and valuing the profession by reconceptualizing and re-evaluating journalistic practices in which they were involved, resulting in an improvement in journalistic quality
Abstract:
Purpose: This research analyzes national identity representations held by Generation Z youth living in the United States-Mexico-Canada Agreement (USMCA) countries. In addition, it aims to identify the information on these issues that they are exposed to through social media. Methods: A qualitative approach carried out through in-depth interviews was selected for the study. The objective is to reconstruct social meaning and the social representation system. The constant comparative method was used for the information analysis, backed by the NVivo program. Findings: National identity perceptions of the adolescents interviewed are positive in terms of their own groups, very favorable regarding Canadians, and unfavorable vis-à-vis Americans. Furthermore, the interviewees agreed that social media have influenced their desire to travel or migrate, and if considering migrating, they have also provided advice as to which country they might go to. On another point, Mexicans are quite familiar with the Treaty; Americans are split between those who know something about it and those who have no information whatsoever; whereas Canadians know nothing about it. This reflects a possible way to improve information generated and spread by social media. Practical implications: The results could improve our understanding of how young people interpret the information circulating in social media and what representations are constructed about national identities. We believe this research can be replicated in other countries. Social implications: We might consider that the representations Generation Z has about the national identities of these three countries and what it means to migrate could have an impact on the democratic life of each nation and, in turn, on the relationship among the three USMCA partners
Resumen:
Este artículo tiene como objetivo describir las etapas de transformación de la identidad social de los habitantes de la región de Los Altos de Jalisco, México, a partir de la Guerra Cristera y hasta la década de los años 90. El proceso se ha desarrollado en cuatro fases: Oposición (1926-1929), Ajuste (1930-1970), Reforzamiento (1970-1990) y Cambio (1990- ). Este análisis se realiza desde la teoría de la mediación social y presenta un avance de la investigación realizada para la tesis de doctorado Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, dirigida por Manuel Martín Serrano
Abstract:
This article aims to describe the stages of transformation of social identity in Los Altos de Jalisco, Mexico, from the Cristero War until the 1990s. The process has developed in four phases: Opposition (1926-1929), Adjustment (1930-1970), Reinforcement (1970-1990), and Change (1990- ). The analysis is carried out from the theory of social mediation and presents an advance of the research conducted for the doctoral thesis Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, supervised by Manuel Martín Serrano
Abstract:
Criollo identities in Mexico have received little scholarly attention. One of these identities took shape in Los Altos de Jalisco over four centuries and remained almost intact, even after the Cristero War that shook the region in 1926. Fundamentally, Alteño identity is grounded in the community's perception of its Hispanic origin, as well as in a Catholicism deeply rooted and reinforced by the Cristeros. The family, a sacred institution for the Alteños, thus guards and preserves what is Hispanic and Catholic. Marriage founds the family and gives support, order, and stability to the social organization. In courtship, the concept of virginity is central to the shaping of this identity: women must preserve their purity so that the Alteño family can maintain its values
Abstract:
How do the individuals from a particular Mexican region see themselves and the rest of the inhabitants of the area? How are the perceptions about the group's beliefs and their actions articulated? This article presents a model aimed at conceptualizing how social identity is structured. Based on a case study, the research focuses on the elements of identity, defined as units of signification within a belief or statement. The paper also details the process of analysis through which it is possible to understand their articulation and organization. By means of the two Worlds and two Dimensions that make up social identity, as well as the three axes that shape it (Space, Time, and Relationship), this document analyzes the case of the inhabitants of Los Altos, Jalisco, Mexico, in order to provide a first tool for understanding who the inhabitants of this region are, and what they are like, according to themselves
Abstract:
We propose an algorithm for creating line graphs from binary images. The algorithm consists of a vectorizer followed by a line detector that can handle a large variety of binary images and is tolerant to noise. The proposed algorithm can accurately extract higher-level geometry from the images lending itself well to automatic image recognition tasks. Our algorithm revisits the technique of image polygonization proposing a very robust variant based on subpixel resolution and the construction of directed paths along the center of the border pixels where each pixel can correspond to multiple nodes along one path. The algorithm has been used in the areas of chemical structure and musical score recognition and is available for testing at www.docnition.com. Extensive testing of the algorithm against commercial and noncommercial methods has been conducted with favorable results
Abstract:
The relationship between the professionalism of journalism and ethical standards is an important link for an activity facing new challenges, stemming both from the reconfiguration of media industries and from the identity crisis of the profession itself. In this context, there is a renewed appreciation of the ability to tell stories with informative content using literary tools and resources. Thus, narrative journalism fosters a committed, critical and transformative journalism that, in transmedia environments, faces new questions: what are the ethical implications of the active participation of audiences, the generation of virtual communities, the use of clear language, the combination of different media and the handling of data and personal stories? This paper presents an analysis of this problem from the perspective of the Theory of Otherness, as the essence of a current and supportive ethics that allows the understanding and acceptance of the human being. As a central line, it describes the main challenges detected in the analysis of the principles of transmedia journalism
Abstract:
Business cycles in emerging economies display very volatile consumption and strongly countercyclical trade balance. We show that aggregate consumption in these economies is not more volatile than output once durables are accounted for. Then, we present and estimate a real business cycles model for a small open economy that accounts for this empirical observation. Our results show that the role of permanent shocks to aggregate productivity in explaining cyclical fluctuations in emerging economies is considerably lower than previously documented. Moreover, we find that financial frictions are crucial to explain some key business cycle properties of these economies
Abstract:
We give several relations among real homogeneous biharmonic polynomials in three real variables of fixed degree together with their partial derivatives. An illustration of the usefulness of such formulas is given in the context of inframonogenic reduced-quaternion valued functions
Abstract:
A function f from a domain in R^3 to the quaternions is said to be inframonogenic if ∂f∂ = 0, where ∂ = ∂/∂x_0 + (∂/∂x_1)e_1 + (∂/∂x_2)e_2. All inframonogenic functions are biharmonic. In the context of functions f = f_0 + f_1e_1 + f_2e_2 taking values in the reduced quaternions, we show that the inframonogenic homogeneous polynomials of degree n form a subspace of dimension 6n + 3. We use the homogeneous polynomials to construct an explicit, computable orthogonal basis for the Hilbert space of square-integrable inframonogenic functions defined in the ball in R^3
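For readers scanning the notation, the operator described in the abstract above can be typeset as follows. This is a restatement only; the symbol for the space of inframonogenic homogeneous polynomials of degree n is ours, not the paper's.

```latex
% Cauchy-Riemann-type operator acting on reduced-quaternion-valued
% functions f = f_0 + f_1 e_1 + f_2 e_2 on a domain in R^3
\[
  \partial \;=\; \frac{\partial}{\partial x_0}
      \;+\; e_1\,\frac{\partial}{\partial x_1}
      \;+\; e_2\,\frac{\partial}{\partial x_2},
  \qquad
  f \ \text{is inframonogenic} \iff \partial f \partial = 0 .
\]
% Dimension count stated in the abstract (I_n is our ad hoc label
% for the inframonogenic homogeneous polynomials of degree n):
\[
  \dim \mathcal{I}_n \;=\; 6n + 3 .
\]
```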
Abstract:
We decompose traditional betas into semibetas based on the signed covariation between the returns of individual stocks in an international market and the returns of three risk factors: local, global, and foreign exchange. Using high-frequency data, we empirically assess stock return co-movements with these three risk factors and find novel relationships between these factors and future returns. Our analysis shows that only semibetas derived from negative risk factor and stock return downturns command significant risk premia. Global downside risk is negatively priced in the international market and local downside risk is positively priced
Abstract:
We use intraday data to compute weekly realized variance, skewness, and kurtosis for equity returns and study the realized moments' time-series and cross-sectional properties. We investigate if this week's realized moments are informative for the cross-section of next week's stock returns. We find a very strong negative relationship between realized skewness and next week's stock returns. A trading strategy that buys stocks in the lowest realized skewness decile and sells stocks in the highest realized skewness decile generates an average weekly return of 19 basis points with a t-statistic of 3.70. Our results on realized skewness are robust across a wide variety of implementations, sample periods, portfolio weightings, and firm characteristics, and are not captured by the Fama-French and Carhart factors. We find some evidence that the relationship between realized kurtosis and next week's stock returns is positive, but the evidence is not always robust and statistically significant. We do not find a strong relationship between realized volatility and next week's stock returns
Abstract:
This article presents humor as enacting an intra-personal communication particularly apt for the philosophic (self-)education that lies at the heart of the practice of philosophy. It explains the epistemological and ethical outcomes of a systematic use of self-referential laughter. It argues for the benefits of a worldview predicated on acknowledging human ridiculousness, Homo risibilis, comparing it with other approaches to the human predicament
Abstract:
This paper examines the effects of noncontributory pension programs at the federal and state levels on Mexican households' saving patterns using micro data from the Mexican Income and Expenditure Survey. We find that the federal program curtails saving among households whose oldest member is either 18-54 or 65-69 years old, possibly through anticipation effects, a decrease in the longevity risk faced by households, and a redistribution of income between households of different generations. Specifically, these households appear to be reallocating income away from saving into human capital investments, like education and health. Generally, state programs have no significant effects on household saving, nor does the combination of federal and state programs. Finally, with a few exceptions, noncontributory pensions have no significant impact on the saving of households with members 70 years of age or older (individuals eligible for those pensions), plausibly because of their dissaving stage in the lifecycle
Abstract:
This paper empirically investigates the determinants of the Internet and cellular phone penetration levels in a cross-country setting. It offers a framework to explain differences in the use of information and communication technologies in terms of differences in the institutional environment and the resulting investment climate. Using three measures of the quality of the investment climate, Internet access is shown to depend strongly on the country's institutional setting because fixed-line Internet investment is characterized by a high risk of state expropriation, given its considerable asset specificity. Mobile phone networks, on the other hand, are built on less site-specific, re-deployable modules, which make this technology less dependent on institutional characteristics. It is speculated that the existence of telecommunications technology that is less sensitive to the parameters of the institutional environment and, in particular, to poor investment protection provides an opportunity for better understanding of the constraints and prospects for economic development
Abstract:
We consider a restricted three-body problem on surfaces of constant curvature. As in the classical Newtonian case, the collision singularities occur when the position of the particle with infinitesimal mass coincides with the position of one of the primaries. We prove that the singularities due to collision can be regularized locally (each one separately) and globally (both at the same time) through the construction of Levi-Civita and Birkhoff type transformations, respectively. As an application, we study some general properties of the Hill's regions and we present some ejection-collision orbits for the symmetrical problem
Abstract:
We consider a symmetric restricted three-body problem on surfaces M_k^2 of constant Gaussian curvature k ≠ 0, which can be reduced to the cases k = ±1. This problem consists of analyzing the dynamics of an infinitesimal mass particle attracted by two primaries of identical masses describing elliptic relative equilibria of the two-body problem on M_k^2, i.e., the primaries move on opposite sides of the same parallel of radius a. The Hamiltonian formulation of this problem is given in intrinsic coordinates. The goal of this paper is to describe analytically important aspects of the global dynamics in both cases k = ±1 and to determine the main differences with the classical Newtonian circular restricted three-body problem. In this sense, we describe the number of equilibria and their linear stability depending on the bifurcation parameter, the radial parameter a. After that, we prove the existence of families of periodic orbits and KAM 2-tori related to these orbits
Abstract:
We classify and analyze the orbits of the Kepler problem on surfaces of constant curvature (both positive and negative, S^2 and H^2, respectively) as functions of the angular momentum and the energy. Hill's regions are characterized and the problem of time of collision is studied. We also regularize the problem in Cartesian and intrinsic coordinates, depending on the constant angular momentum, and we describe the orbits of the regularized vector field. The phase portraits for both S^2 and H^2 are presented
Abstract:
The paper examines the macroeconomic and bank-level effects of raising foreign currency reserve requirements in a partially dollarized economy. Focusing on Peru, we study policy changes implemented by the Central Bank between 2008 and 2017, aimed at containing rapid credit growth fueled by foreign currency deposits. Empirical results show that higher reserve requirements in foreign currency reduced overall credit supply, with heterogeneous effects across banks depending on their reliance on foreign currency funding. Motivated by these findings, we develop a dynamic stochastic general equilibrium model of a small open economy with financial frictions, bank heterogeneity, and financial dollarization. The model replicates the empirical results and provides insights into the mechanism through which this macroprudential tool affects credit and aggregate dynamics, highlighting its effectiveness in managing credit booms in dollarized banking systems
Abstract:
We consider a setup in which a principal must decide whether or not to legalize a socially undesirable activity. The law is enforced by a monitor who may be bribed to conceal evidence of the offense and who may also engage in extortionary practices. The principal may legalize the activity even if it is a very harmful one. The principal may also declare the activity illegal knowing that the monitor will abuse the law to extract bribes out of innocent people. Our model offers a novel rationale for legalizing possession and consumption of drugs while continuing to prosecute drug dealers
Abstract:
In this paper we initiate the study of the forward and backward shifts on the discrete generalized Hardy space of a tree and the discrete generalized little Hardy space of a tree. In particular, we investigate when these shifts are bounded, find the norm of the shifts if they are bounded, characterize the trees in which they are an isometry, compute the spectrum in some concrete examples, and completely determine when they are hypercyclic
Abstract:
We study a channel through which inflation can have effects on the real economy. Using job creation and destruction data from U.S. manufacturing establishments from 1973 to 1988, we show that both jobs created by new establishments and jobs destroyed by dying establishments are negatively correlated with inflation. These results are robust to controls for the real business cycle and monetary policy. Over a longer time frame, data on business failures confirm the results obtained from job creation and destruction data. We discuss how the interaction of inflation with financial markets, nominal-wage rigidities, and imperfect competition could explain the empirical evidence
Abstract:
We study how discount window policy affects the frequency of banking crises, the level of investment, and the scope for indeterminacy of equilibrium. Previous work has shown that providing costless liquidity through a discount window has mixed effects in terms of these criteria: It prevents episodes of high liquidity demand from causing crises but can lead to indeterminacy of stationary equilibrium and to inefficiently low levels of investment. We show how offering discount window loans at an above-market interest rate can be unambiguously beneficial. Such a policy generates a unique stationary equilibrium. Banking crises occur with positive probability in this equilibrium and the level of investment is suboptimal, but a proper combination of discount window and monetary policies can make the welfare effects of these inefficiencies arbitrarily small. The near-optimal policies can be viewed as approximately implementing the Friedman rule
Abstract:
We investigate the dependence of the dynamic behavior of an endogenous growth model on the degree of returns to scale. We focus on a simple (but representative) growth model with publicly funded inventive activity. We show that constant returns to reproducible factors (the leading case in the endogenous growth literature) is a bifurcation point, and that it has the characteristics of a transcritical bifurcation. The bifurcation involves the boundary of the state space, making it difficult to formally verify this classification. For a special case, we provide a transformation that allows formal classification by existing methods. We discuss the new methods that would be needed for formal verification of transcriticality in a broader class of models
Abstract:
We evaluate the desirability of having an elastic currency generated by a lender of last resort that prints money and lends it to banks in distress. When banks cannot borrow, the economy has a unique equilibrium that is not Pareto optimal. The introduction of unlimited borrowing at a zero nominal interest rate generates a steady state equilibrium that is Pareto optimal. However, this policy is destabilizing in the sense that it also introduces a continuum of nonoptimal inflationary equilibria. We explore two alternate policies aimed at eliminating such monetary instability while preserving the steady-state benefits of an elastic currency. If the lender of last resort imposes an upper bound on borrowing that is low enough, no inflationary equilibria can arise. For some (but not all) economies, the unique equilibrium under this policy is Pareto optimal. If the lender of last resort instead charges a zero real interest rate, no inflationary equilibria can arise. The unique equilibrium in this case is always Pareto optimal
Abstract:
We consider the nature of the relationship between the real exchange rate and capital formation. We present a model of a small open economy that produces and consumes two goods, one tradable and one not. Domestic residents can borrow and lend abroad, and costly state verification (CSV) is a source of frictions in domestic credit markets. The real exchange rate matters for capital accumulation because it affects the potential for investors to provide internal finance, which mitigates the CSV problem. We demonstrate that the real exchange rate must monotonically approach its steady state level. However, capital accumulation need not be monotonic and real exchange rate appreciation can be associated with either a rising or a falling capital stock. The relationship between world financial market conditions and the real exchange rate is also investigated
Abstract:
Given a dynamical system (X, f) and a positive integer n, we study the concept of strong transitivity in the induced map f^{×n} : X^n → X^n. The main problem is to figure out when the strong transitivity of f^{×2} implies the strong transitivity of f^{×n}. In certain spaces, such as finite connected graphs, we show that if f^{×2} is strongly transitive then f is strongly mixing, which in turn implies that f^{×n} is strongly transitive for every n. We also give an example of a map f : S^1 → S^1 that is strongly mixing but not exact, showing that our result is optimal in finite graphs
Abstract:
We present a Bayesian Power Ranking model to analyze Mexico's electoral preferences. This model ranks potential presidential candidates for the 2024 elections using a wide array of surveys, each including a different subset of the roster of plausible candidates. The model employs a multinomial approach for observations, corrects for polling house biases, and uses a correlated random walk to dynamically update preferences. We defined the Potential Index, an index of relative preference probabilities, and established the Power Ranking, a comprehensive preference order over time. Throughout 2023, both the Potential Index and the Power Ranking provided insights into public opinion and media influence before the announcement of the official candidate list
Abstract:
In the Mexican elections, the quick count consists of selecting a random sample of polling stations to forecast the election results. Its main challenge is that the estimation is done with incomplete samples, where the missingness is not at random. We present one of the statistical models used in the quick count of the gubernatorial elections of 2021. The model is a negative binomial regression with a hierarchical structure. The prior distributions are thoroughly tested for consistency. We also present a fitting procedure with an adjustment for bias, capable of running in less than 5 minutes. The model yields probability intervals with approximately 95% coverage, even with certain patterns of biased samples observed in previous elections. Furthermore, the robustness of the negative binomial distribution translates into robustness in the model, which fits both large and small candidates well and provides an additional layer of protection when there are database errors
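The paper's hierarchical specification is not reproduced here, but its core building block, a negative binomial regression fitted by maximum likelihood, can be sketched in a few lines. This is a minimal illustration on simulated counts; the variable names, the simulated data, and the flat (non-hierarchical) structure are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Simulate overdispersed counts with mean mu = exp(b0 + b1*x), dispersion r
n, b0, b1, r = 2000, 0.2, 0.5, 5.0
x = rng.normal(size=n)
mu = np.exp(b0 + b1 * x)
y = rng.negative_binomial(r, r / (r + mu))  # NB with mean mu

def nb_negloglik(theta):
    """Negative log-likelihood; theta = (b0, b1, log r)."""
    m = np.exp(theta[0] + theta[1] * x)
    rr = np.exp(theta[2])
    ll = (gammaln(y + rr) - gammaln(rr) - gammaln(y + 1)
          + rr * np.log(rr / (rr + m)) + y * np.log(m / (rr + m)))
    return -ll.sum()

fit = minimize(nb_negloglik, x0=np.zeros(3), method="L-BFGS-B",
               bounds=[(-5.0, 5.0)] * 3)
beta_hat = fit.x[:2]  # should recover roughly (0.2, 0.5)
print("estimated (b0, b1):", beta_hat)
```

The hierarchical model in the paper additionally pools information across polling stations and candidates; the likelihood kernel above is the piece the negative binomial choice contributes.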
Abstract:
Quick counts based on probabilistic samples are powerful methods for monitoring election processes. However, the complete designed samples are rarely available in time to publish the results promptly. Hence, the results are announced using partial samples, which carry biases associated with the arrival pattern of the information. In this paper, we present a Bayesian hierarchical model to produce estimates for the Mexican gubernatorial elections. The model considers the polling stations poststratified by demographic, geographic, and other covariates. As a result, it provides a principled means of controlling for biases associated with such covariates. We compare methods through simulation exercises and apply our proposal to the July 2018 elections for governor in certain states. Our studies find the proposal to be more robust than the classical ratio estimator and other estimators that have been used for this purpose
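The poststratification step mentioned above can be illustrated with a toy calculation: estimate a statewide total by weighting per-stratum sample means by known stratum sizes. All strata, counts, and means below are invented for illustration and are not from the paper.

```python
import numpy as np

# Known number of polling stations in each stratum (e.g., by region): N_h
stations_per_stratum = np.array([400, 300, 300])

# Mean votes per station for a candidate in the partial sample: ybar_h
sample_means = np.array([120.0, 95.0, 150.0])

# Poststratified estimate of the statewide total: sum_h N_h * ybar_h.
# Weighting by N_h corrects for strata being over- or under-represented
# in the sample that has arrived so far.
total_hat = float(np.sum(stations_per_stratum * sample_means))
print("estimated total votes:", total_hat)
```

The paper's hierarchical model goes further by modeling each stratum's mean with shared priors, but the final aggregation to a statewide estimate has this weighted form.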
Abstract:
Despite the rapid change in cellular technologies, Mobile Network Operators (MNOs) keep a high percentage of their deployed infrastructure using Global System for Mobile communications (GSM) technologies. With about 3.5 billion subscribers, GSM remains the de facto standard for cellular communications. However, the security criteria envisioned 30 years ago, when the standard was designed, are no longer sufficient to ensure the security and privacy of the users. Furthermore, even with the newest fourth generation (4G) cellular technologies starting to be deployed, these networks could never achieve strong security guarantees because the MNOs keep backwards compatibility given the huge number of GSM subscribers. In this paper, we present and describe the tools and necessary steps to perform an active attack against a GSM-compatible network, by exploiting the GSM protocol's lack of mutual authentication between the subscribers and the network. The attack consists of a man-in-the-middle attack implementation. Using Software Defined Radio (SDR), open-source libraries and open-source hardware, we set up a fake GSM base station to impersonate the network, eavesdrop on any communications routed through it, and extract information from the victims. Finally, we point out some implications of the protocol vulnerabilities and explain why they cannot be mitigated in the short term, since 4G deployments will take a long time to entirely replace the current GSM infrastructure
Abstract:
This study investigates whether incidental electoral successes of women contribute to sustained gender parity in Finland, a leader in gender equality. Utilizing election lotteries used to resolve ties in vote counts, we estimate the causal effect of female representation. Our findings indicate that women's electoral performance (measured by female seat and vote shares) within parties improves following a lottery win, potentially due to increased exposure. However, these gains are offset by negative spillovers on female candidates from other parties. One reason why this may occur is that voters and parties may perceive that sufficient female representation has been achieved, leading to reduced support for female candidates in other parties. Consequently, marginal increases in female representation do not translate to overall gains in women elected. In high but uneven gender parity contexts, such increases may not be self-reinforcing, highlighting the complexity of achieving sustained gender equality in politics
Abstract:
It is shown in the paper that the problem of speed observation for mechanical systems that are partially linearisable via coordinate changes admits a very simple and robust (exponentially stable) solution with a Luenberger-like observer. This result should be contrasted with the very complicated observers based on immersion and invariance reported in the literature. A second contribution of the paper is to compare, via realistic simulations and highly detailed experiments, the performance of the proposed observer with well-known high-gain and sliding mode observers. In particular, to show that – due to their high sensitivity to noise, that is unavoidable in mechanical systems applications – the performance of the two latter designs is well below par
Abstract:
This paper deals with the problem of control of partially known nonlinear systems that have an open-loop stable equilibrium, but to which we would like to add a PI controller to regulate their behavior around another operating point. Our main contribution is the identification of a class of systems for which a globally stable PI can be designed knowing only the system's input matrix and measuring only the actuated coordinates. The construction of the PI is carried out by invoking passivity theory. The difficulties encountered in the design of adaptive PI controllers with the existing theoretical tools are also discussed. As an illustration of the theory, we consider a class of thermal processes
Abstract:
This paper deals with the problem of control of partially known nonlinear systems that have an open-loop stable equilibrium, but to which we would like to add a PI controller to regulate their behavior around another operating point. Our main contribution is the identification of a class of systems for which a globally stable PI can be designed knowing only the system's input matrix and measuring only the actuated coordinates. The construction of the PI is done by invoking passivity theory. The difficulties encountered in the design of adaptive PI controllers with the existing theoretical tools are also discussed. As an illustration of the theory, we consider port-Hamiltonian systems and a class of thermal processes
Abstract:
We formulate a p-median facility location model with a queuing approximation to determine the optimal locations of a given number of dispensing sites (Points of Dispensing, PODs) from a predetermined set of possible locations, together with the optimal allocation of staff to the selected locations. Specific to an anthrax attack, dispensing operations should be completed within 48 hours to cover all exposed and possibly exposed people. A nonlinear integer programming model is developed to determine the optimal locations of facilities and the appropriate deployment strategies, including the number of servers with different skills to be allocated to each open facility. The objective of the mathematical model is to minimize the average transportation and waiting times of individuals to receive the required service. Waiting-time performance measures at the PODs are approximated with a queuing formula and incorporated into the p-median facility location model. A genetic algorithm is developed to solve this problem. Our computational results show that appropriate locations of these facilities can significantly decrease the average time for individuals to receive services. Consideration of demographics and allocation of the staff decreases waiting times in PODs and increases the throughput of PODs. When the number of PODs to open is high, the right staffing at each facility decreases the average waiting times significantly. The results presented in this paper can help public health decision makers make better planning and resource allocation decisions based on the demographic needs of the affected population
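The coupling of facility location with a queuing approximation can be sketched on a toy instance: open p sites, assign each demand point to its nearest open POD, and score the layout by average travel time plus an M/M/c (Erlang C) waiting time. This is not the paper's model or its genetic algorithm; the instance is brute-forced, and all coordinates, arrival rates, and server counts are invented.

```python
import itertools
import math

# Candidate POD sites and (position, arrival rate) demand points on a line
sites = [0.0, 2.0, 4.0, 6.0, 8.0]
demand = [(1.0, 1.5), (3.0, 1.5), (5.0, 1.5), (7.0, 1.5)]
c, mu, p = 2, 2.0, 2  # servers per POD, service rate per server, PODs to open

def erlang_c_wait(lam, c, mu):
    """Mean queue wait for an M/M/c system; inf if the queue is unstable."""
    if lam >= c * mu:
        return math.inf
    a = lam / mu
    rho = a / c
    term = a**c / math.factorial(c) / (1 - rho)
    p_wait = term / (sum(a**k / math.factorial(k) for k in range(c)) + term)
    return p_wait / (c * mu - lam)

best_obj, best_sites = math.inf, None
for combo in itertools.combinations(sites, p):
    lam = {s: 0.0 for s in combo}
    travel = 0.0
    for pos, rate in demand:  # nearest-POD assignment
        s = min(combo, key=lambda site: abs(site - pos))
        lam[s] += rate
        travel += rate * abs(s - pos)
    wait = sum(lam[s] * erlang_c_wait(lam[s], c, mu) for s in combo)
    obj = (travel + wait) / sum(rate for _, rate in demand)  # avg time/person
    if obj < best_obj:
        best_obj, best_sites = obj, combo

print("open PODs at:", best_sites, "avg time:", round(best_obj, 3))
```

Unstable layouts (a POD receiving more demand than its servers can handle) get an infinite waiting time and are discarded, which is what makes staffing and location decisions interact in this formulation; the paper replaces the brute-force loop with a genetic algorithm to handle realistic instance sizes.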
Abstract:
Robust statistical data modelling under potential model mis-specification often requires leaving the parametric world for the nonparametric. In the latter, parameters are infinite dimensional objects such as functions, probability distributions or infinite vectors. In the Bayesian nonparametric approach, prior distributions are designed for these parameters, which provide a handle to manage the complexity of nonparametric models in practice. However, most modern Bayesian nonparametric models often seem out of reach to practitioners, as inference algorithms need careful design to deal with the infinite number of parameters. The aim of this work is to facilitate the journey by providing computational tools for Bayesian nonparametric inference. The article describes a set of functions available in the R package BNPdensity in order to carry out density estimation with an infinite mixture model, including all types of censored data. The package provides access to a large class of such models based on normalised random measures, which represent a generalisation of the popular Dirichlet process mixture. One striking advantage of this generalisation is that it offers much more robust priors on the number of clusters than the Dirichlet. Another crucial advantage is the complete flexibility in specifying the prior for the scale and location parameters of the clusters, because conjugacy is not required. Inference is performed using a theoretically grounded approximate sampling methodology known as the Ferguson & Klass algorithm. The package also offers several goodness-of-fit diagnostics such as QQ plots, including a cross-validation criterion, the conditional predictive ordinate. The proposed methodology is illustrated on a classical ecological risk assessment method called the species sensitivity distribution problem, showcasing the benefits of the Bayesian nonparametric framework
Abstract:
This paper focuses on model selection, specification and estimation of a global asset return model within an asset allocation and asset and liability management framework. The development departs from a single currency capital market model with four state variables: stock index, short and long term interest rates and currency exchange rates. The model is then extended to the major currency areas, United States, United Kingdom, European Union and Japan, and to include a US economic model containing GDP, inflation, wages and government borrowing requirements affecting the US capital market variables. In addition, we develop variables representing emerging market stock and bond indices. In the largest extension we treat a four currency capital markets model and US, UK, EU and Japan macroeconomic variables. The system models are estimated with seemingly unrelated regression estimation (SURE) and generalised autoregressive conditional heteroscedasticity (GARCH) techniques. Simulation, impulse response and forecasting performance are discussed in order to analyse the dynamics of the models developed
Abstract:
Gender stereotypes, the assumptions concerning appropriate social roles for men and women, permeate the labor market. Analyzing information from over 2.5 million job advertisements on three different employment search websites in Mexico, exploiting approximately 235,000 that are explicitly gender-targeted, we find evidence that advertisements seeking "communal" characteristics, stereotypically associated with women, specify lower salaries than those seeking "agentic" characteristics, stereotypically associated with men. Given the use of gender-targeted advertisements in Mexico, we use a random forest algorithm to predict whether non-targeted ads are in fact directed toward men or women, based on the language they use. We find that the non-targeted ads for which we predict gender show larger salary gaps (8–35 percent) than explicitly gender-targeted ads (0–13 percent). If women are segregated into occupations deemed appropriate for their gender, this pay gap between jobs requiring communal versus agentic characteristics translates into a gender pay gap in the labor market
Abstract:
This paper analyses the contract between an entrepreneur and an investor, using a non-zero sum game in which the entrepreneur is interested in company survival and the investor in maximizing expected net present value. Theoretical results are given and the model's usefulness is exemplified using simulations. We have observed that both the entrepreneur and the investor are better off under a contract which involves repayments and a share of the start-up company. We also have observed that the entrepreneur will choose riskier actions as the repayments become harder to meet up to a level where the company is no longer able to survive
Abstract:
We consider the problem of managing inventory and production capacity in a start-up manufacturing firm with the objective of maximising the probability of the firm surviving as well as the more common objective of maximising profit. Using Markov decision process models, we characterise and compare the form of optimal policies under the two objectives. This analysis shows the importance of coordination in the management of inventory and production capacity. The analysis also reveals that a start-up firm seeking to maximise its chance of survival will often choose to keep production capacity significantly below the profit-maximising level for a considerable time. This insight helps us to explain the seemingly cautious policies adopted by a real start-up manufacturing firm
Abstract:
Start-up companies are considered an important factor in the success of a nation’s economy. We are interested in the decisions for long-term survival of these firms when they have considerable cash restrictions. In this paper we analyse several inventory control models to manage inventory purchasing and return policies. The Markov decision models are formulated for both established companies that look at maximising average profit and start-up companies that look at maximising their long-term survival probability. We contrast both objectives, and present properties of the policies and the survival probabilities. We find that start-up companies may need to be riskier if the return price is very low, but there is a period where a start-up firm becomes more cautious than an established company and there is a point, as it accumulates capital, where it starts behaving as an established firm. We compare the various models and give conditions under which their policies are equivalent
Abstract:
Biopolymer films (biofilms) were evaluated for suitability in meat packaging applications. Polyesters, proteins, and carbohydrates are among the most commonly utilized materials for producing bioplastics due to their mechanical properties, which closely resemble those of conventional plastics. However, these properties are significantly influenced by the film's composition, molecular weight, solvent type, pH, component concentration, and processing temperature. Conventional plastic films, including microplastics, are non-bioactive and contribute to persistent pollution. This underscores the importance of developing novel materials that incorporate bioactive plant compounds to endow plastic films with antimicrobial and antioxidant functionalities. In this study, two gelatin-chitosan films were fabricated: one without any extract and another incorporating guava leaf extract. These films were characterized to determine their optical, mechanical, morphological, and thermal properties. Additionally, a microbiological analysis was conducted to assess the impact of polymeric biofilm packaging on the shelf life of beef. Both films exhibited favorable tensile strength values (24.74–27.12 MPa), high transparency (0.79–1.18), and effective barrier properties against water vapor (8.95–9.29 × 10⁻⁸ g·mm·Pa⁻¹·h⁻¹·m⁻²) and oxygen (9.7–9.10 × 10⁻¹⁸ m³·m⁻²·s⁻¹·Pa⁻¹). During storage, the biofilm containing guava leaf extract demonstrated a reduction in microbial growth, which enhanced the beef's color stability. In conclusion, biopolymeric films incorporating guava leaf extract demonstrated notable antioxidant and antimicrobial properties, effectively inhibiting microbial growth, preserving the physical quality of beef, and significantly extending its shelf life under refrigerated conditions
Abstract:
A recent cross-cultural study suggests employees may be classified, based on their scores on a measure of work ethic, into three profiles labeled as "live to work," "work to live," and "work as a necessary evil." The present study assesses whether these profiles were stable before and after an extended lockdown that forced employees to work from home for 2 years because of the COVID-19 pandemic. To assess our core research question, we conducted a longitudinal study with employees of a company in the financial sector, collecting data in two waves: February 2020 (n = 692) and June 2022 (n = 598). Tests of profile similarity indicated a robust structural and configural equivalence of the profiles before and after the lockdown. As expected, the prolonged pandemic-based lockdown had a significant effect on the proportion of individuals in each profile. Implications for leading and managing in a post-pandemic workforce are presented and discussed
Abstract:
This study examines the impact of a specific training intervention on both individual- and unit-level outcomes. We sought to examine the extent to which a training intervention incorporating key elements of error management training: (1) positively impacted sales specific self-efficacy beliefs of trainees; and (2) positively impacted unit-level sales growth over time. Results of an 11-week longitudinal field experiment across 19 stores in a national bakery chain indicated that the sales self-efficacy of trainees significantly increased between the levels they had 2 weeks before the intervention started and 4 weeks after it was initiated. Results based on a repeated measures ANOVA also indicated significantly higher sales performance in the intervention group compared with a non-intervention control group. We also sought to address the extent to which individual-level effects may be linked to the organizational level. We also provide evidence with respect to the extent to which changes in individual self-efficacy were associated with unit-level sales performance. Results confirmed this multi-level effect as evidenced by a moderate significant correlation between the average self-efficacy of the staff of each store and its sales performance across the weeks the intervention was in effect. The study contributes to the existing literature by providing direct evidence of the impact of an HRD intervention at multiple organizational levels
Abstract:
Despite the acceptance of work ethic as an important individual difference, little research has examined the extent to which work ethic may reflect shared environmental or socio-economic factors. This research addresses this concern by examining the influence of geographic proximity on the work ethic experienced by 254 employees from Mexico, working in 11 different cities in the Northern, Central and Southern regions of the country. Using a sequence of complementary analyses to assess the main source of variance on seven dimensions of work ethic, our results indicate that work ethic is most appropriately considered at the individual level
Abstract:
This paper explores the relationship between individual work values and unethical decision-making and actual behavior at work through two complementary studies. Specifically, we use a robust and comprehensive model of individual work values to predict unethical decision-making in a sample of working professionals and accounting students enrolled in ethics courses, and IT employees working in sales and customer service. Study 1 demonstrates that young professionals who rate power as a relatively important value (i.e., those reporting high levels of the self-enhancement value) are more likely to violate professional conduct guidelines despite receiving training regarding ethical professional principles. Study 2, which examines a group of employees from an IT firm, demonstrates that those rating power as an important value are more likely to engage in non-work-related computing (i.e., cyberloafing) even when they are aware of a monitoring software that tracks their computer usage and an explicit policy prohibiting the use of these computers for personal reasons
Abstract:
This panel study, conducted in a large Venezuelan organization, took advantage of a serendipitous opportunity to examine the organizational commitment profiles of employees before and after a series of dramatic, and unexpected, political events directed specifically at the organization. Two waves of organizational commitment data were collected, 6 months apart, from a sample of 152 employees. No evidence was found that employees' continuance commitment to the organization was altered by the events described here. Interestingly, however, both affective and normative commitment increased significantly during the period of the study. Further, employee's commitment profiles at Wave 2 were more differentiated than they were at Wave 1
Abstract:
This study, based in a manufacturing plant in Venezuela, examines the relationship between perceived task characteristics, psychological empowerment and commitment, using a questionnaire survey of 313 employees. The objective of the study was to assess the effects of an organizational intervention at the plant aimed at increasing productivity by providing performance feedback on key aspects of its daily operations. It was hypothesized that perceived characteristics of the task environment, such as task meaningfulness and task feedback, will enhance psychological empowerment, which in turn will have a positive impact on employee commitment. Test of a structural model revealed that the relationship of task meaningfulness and task feedback with affective commitment was partially mediated by the empowerment dimensions of perceived control and goal internalization. The results highlight the role of goal internalization as a key mediating mechanism between job characteristics and affective commitment. The study also validates a Spanish-language version of the psychological empowerment scale by Menon (2001)
Resumen:
A pesar de la extensa validación transcultural del modelo de compromiso organizacional de Meyer y Allen (1991), han surgido ciertas dudas respecto a la independencia de los componentes afectivo y normativo y, también, sobre la unidimensionalidad de este último. Este estudio analiza la estabilidad de la estructura del modelo y examina el comportamiento de la escala normativa, empleando 100 muestras, de 250 sujetos cada una, extraídas aleatoriamente de una base de datos de 4.689 empleados. Los resultados muestran cierta estabilidad del modelo, y apoyan parcialmente a la corriente que propone el desdoblamiento del componente normativo en dos subdimensiones: el deber moral y el sentimiento de deuda moral
Abstract:
Although there has been extensive cross-cultural validation of Meyer and Allen’s (1991) model of organizational commitment, some doubts have emerged concerning both the independence of the affective and normative components, and the unidimensionality of the latter. This study focuses on analyzing the stability of the model’s structure, and on examining the behaviour of the normative scale. For this purpose, we employed 100 samples of 250 subjects each, extracted randomly from a database of 4,689 employees. The results show certain stability of the model, and partially support research work suggesting the unfolding of the normative component into two subdimensions: one related to a moral duty, and the other to a sense of indebtedness
Abstract:
In recent years there has been an increasing interest among researchers and practitioners to analyze what makes a firm attractive in the eyes of university students, and if individual differences such as personality traits have an impact on this general affect towards a particular organization. The main goal of the present research is to demonstrate that a recently conceptualized narrow trait of personality named dispositional resistance to change (RTC), that is, the inherent tendency of individuals to avoid and oppose changes (Oreg, 2003), can predict organizational attraction of university students to firms that are perceived as innovative or conservative. Three complementary studies were carried out using a total sample of 443 college students from Mexico. In addition to validating the hypotheses, our findings suggest that as the formation of the images of organizations in students’ minds is done through social cognitions, simple stimuli such as physical artifacts, when used in an isolated manner, do not have a significant impact on organizational attraction
Abstract:
The Work Values Scale EVAT (based on its initials in Spanish: Escala de Valores hacia el Trabajo) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals, reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian. Our results suggest that the original Spanish version of the EVAT scale and the new Portuguese and Italian versions are equivalent
Abstract:
The authors examined the validity of the Spanish-language version of the dispositional resistance to change (RTC) scale. First, the structural validity of the new questionnaire was evaluated using a nested sequence of confirmatory factor analyses. Second, the external validity of the questionnaire was assessed, using the four higher-order values of Schwartz's theory and the four dimensions of the RTC scale: routine seeking, emotional reaction, short-term focus and cognitive rigidity. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed both the construct structure and the external validity of the questionnaire
Abstract:
The authors examined the convergent validity of the four dimensions of the Resistance to Change scale (RTC): routine seeking, emotional reaction, short-term focus and cognitive rigidity, and the four higher-order values of Schwartz's theory, using a nested sequence of confirmatory factor analyses. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed the external validity of the questionnaire
Resumen:
Este estudio analiza el impacto de la diversidad de valores entre los integrantes de los equipos sobre un conjunto de variables de proceso, así como sobre dos tareas con diferentes demandas de interacción social. En particular, se analiza el efecto de la diversidad de valores sobre el conflicto en la tarea y en las relaciones, la cohesión y la autoeficacia grupal. Utilizando un simulador de trabajo en equipo y una muestra de 22 equipos de entre cinco y siete individuos, se comprobó que la diversidad en valores en un equipo, influye de forma directa sobre las variables de proceso y sobre la tarea que demanda baja interacción social, la relación entre la diversidad de valores sobre el desempeño, se ve mediada por las variables de proceso. Se proponen algunas acciones que permitirían poner en práctica los resultados de esta investigación en el contexto organizacional
Abstract:
This study investigates the impact of value diversity among team members on team process and performance criteria on two tasks of differing social interaction demands. Specifically, the criteria of interest included task conflict, relationship conflict, cohesion, and team efficacy and task performance on two tasks demanding different levels of social interaction. Utilizing a team work simulator and a sample comprised of 22 learns of five to seven individuals, it was demonstrated that value diversity directly impacts both task performance and process criteria on the task demanding low social interaction. Meanwhile, in the task requiring high social interaction, value diversity related to task performance via the mediating effects of team processes. Some specific actions are proposed in order to apply the results of this research in the daily context of organizations
Abstract:
The Work Values Scale EVAT (based on its initials in Spanish) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals (Arciniega & González, 2006, 2005; González & Arciniega, 2005), reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian, using a new technique of measurement equivalence: confirmatory multidimensional scaling (CMDS). Our results suggest that CMDS is a serviceable technique for assessing measurement equivalence, but requires improvements to provide precise fit indices
Abstract:
We examine the impact of team member value and personality diversity on team processes and performance. The research is divided into two studies. First, we examine the impact of personality and value diversity on team performance, relationship and task conflict, cohesion, and team self-efficacy. Second, we evaluate the effect of team members’ values diversity on team performance in two different types of tasks, one cognitive, and the other complex. In general, our results suggest that higher levels of diversity with respect to values were associated with lower levels of team process variables. Also, as expected we found that the influence of team values diversity is higher on a cognitive task than on a complex one
Abstract:
Considering the propositions of Simon (1990; 1993) and Korsgaard and collaborators (1997) that an individual who assigns priority to values related to altruism tends to pay less attention to evaluating personal costs and benefits when processing social information, as well as the basic premise of job satisfaction that this attitude centers on a cognitive process of evaluating how specific conditions or outcomes in a job fulfill the needs and values of a person, we proposed that individuals who score higher on values associated with altruism will reveal higher scores on all specific facets of job satisfaction than those who score lower. A sample of 3,201 Mexican employees, living in 11 cities and working for 30 different companies belonging to the same holding, was used in this study. The results of the research clearly support the central hypothesis
Abstract:
Some reviews have shown how different attitudinal, demographic and organizational variables generate organizational commitment. Few studies have reported how work values and organizational factors create organizational commitment. This investigation is an attempt to explore the influence that both sets of variables have on organizational commitment. Using the four higher-order values proposed by Schwartz (1992) to operationalize the construct of work values, we evaluated the influence of these work values on the development of organizational commitment, in comparison with four facets of work satisfaction and four organizational factors: empowerment, knowledge of organizational goals, and training and communication practices. A sample of 982 employees from eight companies of Northeastern Mexico was used in this study. Our findings suggest that work values occupy a less important place in the development of organizational commitment when compared to organizational factors, such as the perceived knowledge of the goals of the organization, or some attitudes such as satisfaction with security and opportunities for development
Abstract:
In this paper we propose the use of new iterative methods to solve symmetric linear complementarity problems (SLCP) that arise in the computation of dry frictional contacts in Multi-Rigid-Body Dynamics. Specifically, we explore the two-stage iterative algorithm developed by Morales, Nocedal and Smelyanskiy [1]. The underlying idea of that method is to combine projected Gauss-Seidel iterations with subspace minimization steps. Gauss-Seidel iterations are aimed to obtain a high quality estimation of the active set. Subspace minimization steps focus on the accurate computation of the inactive components of the solution. Overall, the new method is able to compute fast and accurate solutions of severely ill-conditioned LCPs. We compare the performance of a modification of the iterative method of Morales et al. with Lemke's algorithm on robotic object grasping problems.
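The projected Gauss-Seidel stage of such a two-stage method can be sketched as follows; this is a minimal illustration for a small symmetric positive definite LCP (find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0), with the subspace-minimization stage and Lemke's algorithm omitted:

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200, tol=1e-10):
    """Projected Gauss-Seidel sketch for the LCP: z >= 0, w = M z + q >= 0, z.w = 0.
    Assumes M is symmetric positive definite."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        z_old = z.copy()
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]  # residual excluding the i-th term
            z[i] = max(0.0, -r / M[i, i])         # 1-D solve, projected onto z_i >= 0
        if np.linalg.norm(z - z_old) < tol:
            break
    return z

# tiny hypothetical contact-style problem (M SPD)
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = projected_gauss_seidel(M, q)
w = M @ z + q  # complementarity: z >= 0, w >= 0, z . w ~ 0
```

Each sweep solves the one-dimensional complementarity subproblem for a single component and projects onto the nonnegative orthant, which is what makes the iterate a good estimator of the active set.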
Abstract:
This paper investigates corporate social (and environmental) responsibility (CSR) disclosure practices in Mexico. By analysing a sample of Mexican companies in 2010, it utilises a detailed manual content analysis and identifies corporate-governance-related determinants of CSR disclosure. The study shows a general association between the governance variables and both the content and the semantic properties of CSR information published by Mexican companies. Although an increased international influence on CSR disclosure is noted, the study reveals the symbolic role of CSR committees and the negative influence of foreign ownership on community disclosure, suggesting that improvements in business engagement with stakeholders are needed for CSR to be instrumental in business conduct
Abstract:
Evidence that information campaigns help citizens elect better politicians is mixed. We investigate whether comparative performance information and public dissemination can enhance information's effects on electoral accountability, by respectively helping citizens to identify malfeasance by incumbent parties and facilitating coordination around the information provided. We test these mechanisms using a large-scale field experiment that provided citizens with the results of audit reports documenting mayoral malfeasance before the 2015 Mexican municipal elections. Although citizens used incumbent performance indicators to hold governments to account, we find that neither benchmarking incumbent performance against mayors from other parties within the state, nor accompanying leaflet delivery with loudspeakers announcing the leaflets' delivery, significantly moderated the effects of information on citizen beliefs or incumbent party vote share. Comparative performance information's ineffectiveness likely reflected citizens' limited updating from the particular comparison provided, while the loudspeaker generated somewhat greater common knowledge without meaningfully facilitating voter coordination. The results highlight challenges in designing information campaigns to capture the theoretical conditions conducive to electoral accountability
Abstract:
Effective policy-making requires that voters avoid electing malfeasant politicians. However, informing voters of incumbent malfeasance in corrupt contexts may not reduce incumbent support. As our simple learning model shows, electoral sanctioning is limited where voters already believed incumbents to be malfeasant, while information's effect on turnout is non-monotonic in the magnitude of reported malfeasance. We conducted a field experiment in Mexico that informed voters about malfeasant mayoral spending before municipal elections, to test whether these Bayesian predictions apply in a developing context where many voters are poorly informed. Consistent with voter learning, the intervention increased incumbent vote share where voters possessed unfavorable prior beliefs and when audit reports caused voters to favorably update their posterior beliefs about the incumbent's malfeasance. Furthermore, we find that low and, especially, high malfeasance revelations increased turnout, while less surprising information reduced turnout. These results suggest that improved governance requires greater transparency and citizen expectations
Abstract:
A vector autoregressive model of the Mexican economy was employed to empirically identify the transmission channels of price formation. The structural changes affecting the behavior of the inflation rate during 1970-1987 motivated the analysis of the changing influences of the explanatory variables within three different subperiods, namely: 1970-1976, 1978-1982 and 1983-1987. A main finding is that, among the variables considered, public prices were the most important in explaining the variability of inflation, irrespective of the subperiod under study. Another finding is that inflationary inertia played a different role in each subperiod
Abstract:
Value stream mapping (VSM) is a valuable practice employed by industry experts to identify inefficiencies in the value chain due to its visual representation capability and general ease of use. Enabled by a shift towards digitalization and smart connected systems, this project investigates the possibilities of transitioning VSM from a manual to digital process through the utilization of data generated from a Real-Time Location System (RTLS). This study focuses on merging the aspects of RTLS and VSM such that their advantages are combined to form a more robust and effective VSM process. Two simulated experiments and an initial validation test were conducted to demonstrate the capability of the system to function in an industrial environment by replicating an actual production process. The two experiments represent the current state of conditions of the company at two different instants of time. These outputs from the tracking system are then modified and converted into inputs for VSM. A VSM application was modified and utilized to create a digital value stream map with relevant performance parameters. Finally, a stochastic simulation was carried out to compare and extrapolate the results to a 16-hour shift to measure, among other outputs, the utilization of the machines with the two RTLS scenarios
Abstract:
The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, the results regarding the mechanism of degradation are not completely understood yet. PLA is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; in this research, the extrusion and injection molding processes were used to produce PLA samples for accelerated destructive testing. The methodology employed consisted of carrying out material testing under the guidelines of several ASTM standards; this research hypothesized that the effects of UV light, humidity, and temperature exposure make a statistically significant difference in the PLA degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for cases in which the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. A package in the R software that allows the user to perform a non-parametric multivariate analysis when necessary was used. This paper presents a study to determine if there is a significant difference in the degradation rate after 2000 h of accelerated degradation of a biopolymer using the multivariate and non-parametric analyses of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymer
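The analysis above is run with an R package; as a generic, hypothetical stand-in (a plain permutation test on a multivariate one-way layout, not the rank-based statistics of the package the paper uses), the idea of testing group differences without assuming multivariate normality can be sketched as:

```python
import numpy as np

def permutation_manova(groups, n_perm=2000, seed=None):
    """Permutation test for equality of multivariate group means (illustrative).
    Statistic: trace of the between-group sum-of-squares matrix."""
    rng = np.random.default_rng(seed)
    X = np.vstack(groups)
    labels = np.concatenate([np.full(len(g), k) for k, g in enumerate(groups)])

    def stat(labels):
        grand = X.mean(axis=0)
        s = 0.0
        for k in np.unique(labels):
            d = X[labels == k].mean(axis=0) - grand
            s += np.sum(labels == k) * (d @ d)
        return s

    observed = stat(labels)
    exceed = sum(stat(rng.permutation(labels)) >= observed for _ in range(n_perm))
    return observed, (exceed + 1) / (n_perm + 1)  # permutation p-value

rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 1.0, size=(30, 3))   # hypothetical "exposure A" measurements
g2 = rng.normal(1.0, 1.0, size=(30, 3))   # clearly shifted "exposure B" group
_, p = permutation_manova([g1, g2], seed=2)
```

Because the null distribution is built by shuffling group labels, no distributional assumption on the response vectors is needed, which is the appeal of the non-parametric route the abstract describes.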
Abstract:
Dried red chile peppers [Capsicum annuum (L.)] are an important agricultural product grown throughout the Southwestern United States and are extensively used in food and for commercial application. Given the high, broad demand for chile, attention to the methods of harvesting, storage, transport, and packaging is critical for profitability. Currently, chile should be stored no more than 24 to 36 hours at ambient temperatures from the time of harvest due to the potential for natural fermentation to destroy the crop. The rate for calculating and determining the amount of useable/destroyed chile in ambient conditions is determined by several variables that include the harvesting method (hand-picked, mechanized), time of harvest following the optimal harvesting point (season), and weather variations (moisture). In this work, a stochastic simulation-based model is presented to forecast optimal harvesting scenarios, capable of helping farmers and chile processors better plan/manage planting and growth acceleration programs. The tool developed allows the economic feasibility of storage/stabilization systems, advanced mechanical harvesters, and other future advances to be analyzed based on the amount of increase in chile yield. We used the described simulation as an analysis tool to obtain the expected coverage and the estimation of the mean and quantile
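The simulation outputs named at the end, mean and quantile estimates from repeated runs, can be illustrated with a minimal Monte Carlo sketch; the gamma yield distribution and all numbers here are hypothetical, not the paper's model:

```python
import numpy as np

def mc_mean_quantile(samples, q=0.9):
    """Monte Carlo estimates of the mean (with a 95% normal-approximation CI)
    and of a quantile, mirroring the kinds of outputs the abstract describes."""
    x = np.asarray(samples, dtype=float)
    mean = x.mean()
    half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))  # 95% half-width
    return {"mean": mean,
            "mean_ci": (mean - half, mean + half),
            "quantile": float(np.quantile(x, q))}

rng = np.random.default_rng(0)
# hypothetical daily usable-chile yield (tons) produced by a harvest simulation run
yields = rng.gamma(shape=8.0, scale=2.5, size=10_000)
out = mc_mean_quantile(yields, q=0.9)
```

Repeating this over many simulated seasons and checking how often the interval contains the true mean is one way to report the "expected coverage" the abstract mentions.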
Abstract:
While the degradation of polylactic acid (PLA) has been studied for several years, results regarding the mechanism for determining degradation are not completely understood. Through accelerated degradation testing, data can be extrapolated and modeled to test parameters such as temperature, voltage, time, and humidity. Accelerated lifetime testing is used as an alternative to experimentation under normal conditions. The methodology to create this model consisted of fabricating a series of ASTM specimens using extrusion and injection molding. These specimens were tested through accelerated degradation; tensile and flexural testing were conducted at different points in time. Nonparametric inference tests for multivariate data are presented. The results indicate that the effect of the independent variable or treatment effect (time) is highly significant. This research intends to provide a better understanding of biopolymer degradation. The findings indicated that the proposed statistical models can be used as a tool for characterization of the material regarding the durability of the biopolymer as an engineering material. Having multiple models, one for each individual accelerating variable, allows deciding which parameter is critical in the characterization of the material
Abstract:
The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, results regarding the mechanism of degradation are not completely understood yet. It would be advantageous to predict and model PLA degradation rates by means of performance. High strength and thermoplasticity allow PLA to be used to manufacture a great variety of products. This material is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; extrusion and injection molding were used to produce PLA samples for accelerated destructive testing in this research. The methodology employed consists of carrying out material testing under the guidelines of several ASTM standards; this research hypothesizes that UV light, humidity, and temperature exposure make a statistically significant difference in the degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for cases in which the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. Ellis et al. created a package in the R software that allows the user to perform a non-parametric multivariate analysis when necessary. This paper presents a study to determine if there is a significant difference in the degradation process of a biopolymer using multivariate and nonparametric analyses of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymers
Abstract:
Formal models of animal sensorimotor behavior can provide effective methods for generating robotic intelligence. In this article we describe how schema-theoretic models of the praying mantis, derived from behavioral and neuroscientific data, can be implemented on a hexapod robot equipped with a real-time color vision system. This implementation incorporates a wide range of behaviors, including obstacle avoidance, prey acquisition, predator avoidance, mating, and chantlitaxia, that can provide guidance to neuroscientists, ethologists, and roboticists alike. The goals of this study are threefold: to provide an understanding and means by which fielded robotic systems do not compete with other agents that are more effective at their designated task; to permit them to be successful competitors within the ecological system, capable of displacing less efficient agents; and to make them ecologically sensitive, so that agent-environment dynamics are well modeled and as predictable as possible whenever new robotic technology is introduced.
Resumen:
Los estudios sobre el folklore irrumpen con fuerza a escala mundial a principios del siglo XX, y el ámbito literario mexicano se pregunta sobre el espacio que estos ocupan tanto en la creación como en la crítica, habida cuenta de que la literatura popular y tradicional ya constituía una fuente de inspiración y de estudio. Se establece, entonces, una línea de carácter histórico para observar cuáles fueron las principales contribuciones, debates y, en definitiva, ideas que se propusieron en México, especialmente por escritores/as o personas vinculadas al ámbito letrado. Esta línea propuesta desemboca en tres circunstancias, cuya observación detallada se convierte en parte esencial del análisis propuesto: la publicación de los tres volúmenes que constituyen la obra folklórica de Rubén M. Campos, así como la recepción que hacen de ella tanto Alfonso Reyes como Salvador Novo (el primero con una reseña de uno de sus libros y, el segundo, con una crítica feroz en una columna, que tuvo la réplica del escritor guanajuatense). En el análisis discursivo de estas circunstancias encontramos las claves para entender dos extremos de la observación del mismo fenómeno, eso sí, siempre teniendo como telón de fondo la construcción de un nacionalismo que impregnaba a la mayor parte de las manifestaciones culturales del momento y obligaba a los escritores a tomar alguna posición
Abstract:
Folklore studies burst with force worldwide at the beginning of the 20th century and the Mexican literary field wonders about the space they occupy both in creation and criticism, considering that popular and traditional literature was already a source of inspiration and study. A historical line is then established to observe what were the main contributions, debates and, in short, ideas that were proposed in Mexico, especially by writers or people linked to the literary sphere. This proposed line leads to three circumstances, whose detailed observation becomes an essential part of the proposed analysis: the publication of the three volumes that constitute the folkloric work of Rubén M. Campos, as well as the reception given to it by Alfonso Reyes and Salvador Novo (the first with a review of one of his books and the second with a fierce criticism in a column, which had the reply of the writer from Guanajuato). In the discursive analysis of these circumstances, we find the keys to understand two extremes of observation of the same phenomenon, always taking as a backdrop the construction of a nationalism that permeated most of the cultural manifestations of the time and forced writers to take a position
Resumen:
En algunas culturas músico-poéticas de América Latina, suele ser recurrente el uso de los términos "argumento" o "fundamento" para referirse a la determinada forma de versar que es característica de los cantos dialogados, debates poéticos, contradicciones, controversias o topadas. A grandes rasgos, las características de estas poesías son las siguientes: se producen para desarrollar un tema o tópico en particular y, por tanto, para mostrar conocimiento; suponen una competencia de ingenio y juego en su estructura general de pregunta y respuesta; pueden haber sido memorizadas con anterioridad, pero también suele ser común la improvisación; se producen en un contexto oral, en el que el ritual festivo o performance aporta significaciones relevantes; y tienen una significación propia para la comunidad. Entonces, se analiza y compara cómo se conceptualizan, cuándo se enuncian, qué características tienen y, en definitiva, las variaciones de estas poesías de argumento o fundamento en diferentes culturas músico-poéticas de países como México, Panamá, Perú, Ecuador, Colombia o Chile, lo cual constituiría una breve cartografía de este tipo de canto o recitación. Así, al "argumento" -o "fundamento"- poético se le podría considerar como un elemento tan particular y relevante del canto dialogado, que se podría catalogar como una vertiente del mismo
Abstract:
In some musical-poetic cultures of Latin America, the terms "argumento" or "fundamento" recur to refer to a certain way of versifying that is characteristic of dialogue songs, poetic debates, contradicciones, controversias or topadas. In broad strokes, the characteristics of these poems are the following: they are produced to develop a particular theme or topic and, therefore, to show knowledge; they involve a competition of ingenuity and play in their general question-and-answer structure; they may have been memorized beforehand, but improvisation is also common; they occur in an oral context, in which the festive ritual or performance provides relevant meanings; and they have their own significance for the community. We then analyze and compare how these poems of "argumento" or "fundamento" are conceptualized, when they are uttered, what characteristics they have and, ultimately, how they vary across the musical-poetic cultures of countries such as Mexico, Panama, Peru, Ecuador, Colombia or Chile, which constitutes a brief cartography of this type of singing or recitation. Thus, the poetic "argumento" or "fundamento" could be considered such a particular and relevant element of dialogue singing that it could be classified as a strand of the genre
Resumen:
Las denominadas escrituras del yo que se produjeron a raíz del exilio español republicano han constituido una invaluable fuente referencial para la Historia, pero han sido menos estudiadas desde una perspectiva literaria, más aún si la autora era una mujer y no disponía de más obra y, todavía más, si su lectura no se podía hacer en una lengua hegemónica. Este es el caso de Syra Alonso, exiliada gallega en México, cuya huida, junto con sus tres hijos, se produjo tras el asesinato de su esposo, el pintor vanguardista Francisco Miguel. Estas circunstancias impulsaron la escritura de sus Diarios: el Diario de Tordoia, escrito en Galicia antes de salir hacia el exilio, y el Diario de Actopan, escrito en México un año después de su llegada, ambos publicados íntegramente por primera vez en gallego en el año 2000. Tras un trabajo de búsqueda hemerográfica, la reunión en este artículo de hasta siete narraciones desconocidas de la autora, publicadas entre 1943 y 1951 en dos revistas literarias mexicanas, resignifica la obra de Syra Alonso y ofrece respuestas a momentos biográficos esenciales de su exilio, que no se conocían hasta ahora. Por tanto, se propone, por un lado, un análisis de la recepción crítica de sus Diarios que pueda dar respuesta a la invisibilidad de la autora dentro del canon y, por otro, un análisis contextual, literario e incluso etnográfico de estas narraciones en diálogo con su obra diarística y su propia biografía. Todo, ante la urgencia de un presente marcado por el hallazgo del cuerpo de Francisco Miguel en una fosa común de Bértoa hace escasos meses, un presente de memoria histórica, reparación y homenaje
Abstract:
The so-called self-writings produced as a result of the Spanish Republican exile have been an invaluable source of historical reference, but they have been less studied from a literary perspective, even more so when the author was a woman with no other published work and, further still, when her writing could not be read in a hegemonic language. This is the case of Syra Alonso, a Galician exile in Mexico, who fled with her three children after the murder of her husband, the avant-garde painter Francisco Miguel. These circumstances prompted the writing of her diaries: the Diario de Tordoia, written in Galicia before leaving for exile, and the Diario de Actopan, written in Mexico a year after her arrival, both published in full in Galician for the first time in 2000. After hemerographic research, the gathering in this article of up to seven unknown narrations by the author, published between 1943 and 1951 in two Mexican literary magazines, resignifies Syra Alonso's work and offers answers to essential biographical moments of her exile that were unknown until now. We therefore propose, on the one hand, an analysis of the critical reception of her diaries that may explain the author's invisibility within the canon and, on the other, a contextual, literary and even ethnographic analysis of these narratives in dialogue with her diaristic work and her own biography. All this, in view of the urgency of a present marked by the discovery of Francisco Miguel's body in a mass grave in Bértoa only a few months ago: a present of historical memory, reparation and homage
Resumen:
Francisco Villa no es personaje protagónico de la obra literaria de Mauricio Magdaleno, sin embargo, a lo largo de toda su trayectoria este trató de reflexionar sobre la relevancia histórica e identitaria de aquel para México. Entonces, se propone y se analiza un amplio corpus de obras literarias y periodísticas del escritor para conocer su postura ante el villismo y su indiscutible líder. A partir de un enfoque esencialmente historiográfico y literario se trazan las confluencias familiares y estilísticas que atraviesan al autor. Debemos tener en cuenta que Magdaleno vivió su infancia en dos ciudades relevantes para el encumbramiento del militar, Zacatecas y Aguascalientes, e incluso existe un relato que narra el encuentro entre ambos. El análisis evidencia la presencia de Pancho Villa en los géneros literarios tradicionales y de tradición oral, así como las diferentes formas de apropiación que la literatura de la Revolución hizo de su figura para leer la historia actual
Abstract:
Francisco Villa is not a leading character in the literary work of Mauricio Magdaleno; however, throughout his career Magdaleno tried to reflect on Villa's historical and identity-related relevance for Mexico. A wide corpus of the writer's literary and journalistic works is therefore proposed and analyzed in order to ascertain Magdaleno's position on Villismo and its indisputable leader. From an essentially historiographical and literary approach, the family and stylistic confluences that run through the author are traced. We should bear in mind that Magdaleno spent his childhood in two cities relevant to the general's rise, Zacatecas and Aguascalientes, and there is even a story that narrates a meeting between the two. The analysis evidences the presence of Pancho Villa in traditional literary genres and in the oral tradition, as well as the different forms of appropriation that the literature of the Mexican Revolution made of his figure in order to read current history
Abstract:
In Mexico, and in relation to very specific nationalist and identity-related circumstances, an indigenista literature developed, produced by writers drawing on the stories and history of native communities that had no direct access to the cultural circuit of the time, much less to its publishing world; communities that had been the vanquished in every historical process and that were silenced. Alongside this literature pulsed another, indigenous one, mostly oral, preserved in chronicles, codices and ethnographic studies as well as in the individual and collective memory of those communities. Both manifestations seemed to find in the short story a natural mold for bringing the tales of the past to light, finding in them the deepest root of mestizo identity, recovering the possible voices of the land, and thus giving meaning to the present. In the literary field, moreover, indigenous themes and characters became part of the narrative material of a group of writers who, after the polemics over a virile and nationalist literature, sought to expose the most pressing problems of Mexican reality at the time, in opposition to another, more cosmopolitan group. The aim, then, is to survey some of these manifestations, find dialogues and tensions, and analyze the themes, perspectives and resources employed by certain paradigmatic authors, inside and outside the canon
Resumen:
El objetivo principal de este trabajo es entender la contribución del guion a la película Río Escondido en la creación de una obra colectiva, polifónica, de convergencia y alejada de una visión clásica de la autoría. Para ello, proponemos un análisis documental de la historia y otro propiamente del guion como vehículo semiótico que posibilita el camino de un lenguaje escrito a uno visual. Este último análisis se centra en la relevancia del discurso literario del guion que se omite en la película y que puede suponer una discordia entre el guionista y el director. Hallar estos espacios de quiebre posibilita un acercamiento a nuevas formas de ver el cine mexicano, alejadas de la industria y los clichés, y más cercanas al ámbito literario
Abstract:
The main aim of this work is to understand the contribution of the script to the film Río Escondido in the creation of a collective, polyphonic, convergent work, far removed from a classical vision of authorship. To that end, we propose a documentary analysis of the story and an analysis of the script itself as a semiotic vehicle that enables the passage from written to visual language. The latter analysis focuses on the relevance of the script's literary discourse that is omitted from the film, which may point to a disagreement between the screenwriter and the director. Finding these points of rupture opens the way to new ways of seeing Mexican cinema, further from the industry and its clichés and closer to the literary sphere
Resumo:
O objetivo principal deste trabalho é entender a contribuição do roteiro ao filme Río Escondido na criação de uma obra coletiva, polifônica, de convergência e longe de uma visão clássica da autoria. Assim, propomos uma análise documental da história e outra propriamente do roteiro como veículo semiótico que visibiliza o percurso de uma linguagem escrita a uma visual. Esta última análise foca na relevância do discurso literário do roteiro que é omitido do filme e que pode levar a um desentendimento entre o roteirista e o diretor. Encontrar esses espaços de ruptura permite uma abordagem de novas formas de ver o cinema mexicano, longe da indústria e dos clichês, e mais perto do campo literário
Resumen:
El propósito de este artículo es analizar, desde el punto de vista de la historia, la historiografía y el análisis literario, la novela Serpa Pinto. Pueblos en la tormenta (1943), de Giuseppe Garretto. Esta novela, publicada por primera vez en castellano a pesar de la nacionalidad italiana de su autor, pasó desapercibida para la crítica. Revisamos tanto el contexto histórico de las referencias de la obra como de la publicación de la primera edición; situamos la obra como parte de la historiografía literaria del exilio de refugiados europeos en Latinoamérica; analizamos las características literarias de la novela, así como las estrategias de rememoración narrativa que emplea el autor
Abstract:
The purpose of this article is to analyze the novel Serpa Pinto. Pueblos en la tormenta (1943), by Giuseppe Garretto, from the point of view of history, historiography and literary analysis. Published for the first time in Spanish despite its author's Italian nationality, the novel went unnoticed by critics. We study the historical context both of the work's references and of the publication of the first edition, in order to situate the work within the literary historiography of the exile of European refugees in Latin America. Moreover, we analyze the literary characteristics of the novel, as well as the narrative remembrance strategies used by the author
Resumen:
Alfonso Cravioto formó parte de las organizaciones intelectuales y políticas más destacadas de principios del siglo XX. A lo largo de su vida combinó la actividad política y diplomática con la creación literaria, poesía y ensayo, principalmente. En este artículo nos proponemos destacar su contribución a la educación en México, desde una perspectiva que enlaza su experiencia vital con la sensibilidad que lo caracterizó y le ganó el reconocimiento y afecto de sus contemporáneos, al tiempo que anticipó y participó en los primeros años de la Secretaría de Educación Pública
Abstract:
Alfonso Cravioto was a member of the most prominent intellectual and political organizations of the early 20th century. Throughout his life he combined political and diplomatic activity with literary creation, mainly poetry and essays. In this article we intend to highlight his contribution to education in Mexico, from a perspective that links his life experience with the sensitivity that characterized him and earned him the recognition and affection of his contemporaries, as he anticipated and participated in the first years of the Secretaría de Educación Pública
Resumen:
En la actualidad, una edición crítica completa no sólo debe ir acompañada de una crítica textual rigurosa, sino también de una genética que considere todos los testimonios. En este sentido, el presente trabajo analiza algunas de las problemáticas que se podrían producir si elaborásemos una edición crítica de todos los cuentos de Mauricio Magdaleno, dos de las más importantes serían las siguientes: primera, el hecho de que la mayor parte de los cuentos, tal y como los conocemos hoy, tuviera versiones previas publicadas en fuentes periódicas, lo cual nos obligaría a una investigación más exhaustiva y a una constitución del texto a partir de la colación de variantes de testimonios hemerográficos; segunda, la decisión de criterios que permitan superar la frontera genérica entre un relato costumbrista escrito a partir de una anécdota personal y un cuento susceptible de ser incluido en esta propuesta. Así, ambas problemáticas tienen como raíz común que la labor escritural más estable en la que se desempeñó Magdaleno fuera la periodística, ya que con ésta pudo compatibilizar otras actividades que, sobre todo, le servían para un propósito de sustento material. Una edición crítica de los cuentos de Mauricio Magdaleno otorgaría no sólo la seguridad de poder hacer una buena lectura del texto final, sino que también pondría de relieve el proceso creador del autor, lo cual ayudaría a superar la crítica anacrónica a la que se le ha sometido. Por último, y con base en el estado actual de la cuestión que se expone en este trabajo, se presenta una tabla con las versiones de los cuentos encontradas hasta ahora y otra con una propuesta sustentada de un posible índice de los cuentos que integrarían la edición crítica
Abstract:
Today, a complete critical edition must be accompanied not only by rigorous textual criticism but also by a genetic edition that considers all the testimonies. In this sense, this work analyzes some of the problems that would arise in preparing a critical edition of all of Mauricio Magdaleno's short stories. Two of the most important are the following: first, the fact that most of the stories, as we know them today, had earlier versions published in periodical sources, which would oblige us to undertake a more exhaustive investigation and to establish the text by collating variants from hemerographic testimonies; second, the choice of criteria for overcoming the generic border between a costumbrista sketch written from a personal anecdote and a short story suitable for inclusion in the edition. Both problems share a common root: the most stable writing career Magdaleno pursued was journalism, which he combined with other activities that served mainly as material support. A critical edition of Mauricio Magdaleno's short stories would not only guarantee a reliable reading of the final text but would also highlight the author's creative process, helping to overcome the anachronistic criticism to which he has been subjected. Finally, based on the state of the question presented here, we provide one table with the versions of the short stories found so far and another with a substantiated proposal for a possible index of the stories that would make up the critical edition
Resumen:
El objetivo principal de este trabajo es aproximarse a una conceptualización de la poesía de argumento o versos de argumentar que se produce en el son jarocho, especialmente en la región de Los Tuxtlas (México). Además, se analizarán sus características poéticas y la forma en que se desarrolla dentro del ritual festivo. Para ello, el estudio se apoyará en el análisis de algunas de estas poesías contenidas en cuadernos de poetas, así como en testimonios orales de estos, ya recogidos en libros o en entrevistas, es decir, se analizará la poesía en relación con su contexto. En el dialogismo y en la tópica de esta poesía se encuentran dos elementos fundamentales para entender las dinámicas de tradicionalización e innovación que se producen en estos rituales festivos músico-poéticos, a pesar del componente creativo —improvisado a veces— que depende de los verseros
Abstract:
The main objective of this work is to approach a conceptualization of the poetry of argument or verses of arguing that occurs in the son jarocho, especially in the region of Los Tuxtlas (Mexico). In addition, we will analyze its poetic characteristics and the way it develops within the festive ritual. For this, we will analyze some of these poems contained in poets' notebooks, as well as the poets' oral testimonies, whether collected in books or in interviews; that is, we will analyze the poetry in relation to its context. In the dialogism and in the topics of this poetry we find two fundamental elements for understanding the dynamics of traditionalization and innovation that take place in these poetic-musical festive rituals, despite the creative component, sometimes improvised, that depends on the verseros
Resumo:
O objetivo principal deste trabalho é abordar uma conceituação da poesia de argumento ou versos de argumentação que se produz no son jarocho, especialmente na região de Los Tuxtlas. Além disso, serão analisadas as suas características poéticas e a forma como se desenvolve dentro do ritual festivo. Para tanto, será desenvolvida uma análise tanto de alguns desses poemas contidos em cadernos de poetas, quanto dos depoimentos orais destes, sejam eles coletados em livros ou entrevistas; ou seja, se analisará a poesia em relação ao seu contexto. Encontramos no dialogismo e na temática desta poesia dois elementos fundamentais para compreender a dinâmica de tradicionalização e inovação que se realiza nestes rituais poético-musicais festivos, apesar da componente criativa, por vezes improvisada, que depende dos verseros
Resumen:
La tradición del huapango arribeño que se lleva a cabo en la Sierra Gorda de Querétaro y Guanajuato, y en la Zona Media de San Luis Potosí, se revela en la celebración de rituales festivos de carácter músico-poético, tanto civiles como religiosos, en donde la glosa conjuga la copla y la décima. La tradición se rige bajo la influencia de una serie de normas de carácter consuetudinario y oral, entre las cuales destacan el «reglamento» y el «compromiso». El presente artículo indaga en torno a la naturaleza jurídica, lingüística y literaria de dicha normatividad; su interacción con otras normas de carácter externo a la fiesta; su influencia, tanto en la performance o ritual festivo -especialmente en la topada-, como en la creación poética. A partir de fuentes etnográficas (entrevistas y grabaciones de fiestas) y bibliográficas, el objetivo es dilucidar el papel que juega dicha normatividad en la conservación y transformación de la tradición
Abstract:
The tradition of the huapango arribeño performed in the Sierra Gorda of Querétaro and Guanajuato, and in the Zona Media of San Luis Potosí, is expressed in the celebration of festive rituals of a musical-poetic nature, both civil and religious, in which the glosa combines the copla and the décima. The tradition is governed by a series of norms of customary and oral character, among which the «reglamento» (rules) and the «compromiso» (commitment) stand out. This article investigates the legal, linguistic and literary nature of these norms; their interaction with other norms external to the festival; and their influence both on the performance or festive ritual, especially the topada, and on poetic creation. Using ethnographic sources (interviews and recordings of festivals) and bibliographic sources, the aim is to elucidate the role these norms play in the conservation and transformation of the tradition
Resumen:
Desde el punto de vista estético, los escritores Mauricio Magdaleno y Salvador Novo formaron parte de dos corrientes literarias extremas. Las trayectorias de ambos autores gozan de sorprendentes paralelismos tanto biográficos como en relación con el cultivo de diferentes disciplinas literarias. El debate que se suscitó en México en los años veinte y treinta en torno a la identidad y a la nacionalidad provocó un enfrentamiento feroz entre ambos, que, sin embargo, no impidió un progresivo acercamiento propiciado por la participación política. El artículo muestra el difícil equilibrio entre la toma de posicionamientos ideológicos, la responsabilidad generacional ante la construcción de un Estado moderno y la inquietud artística. Un poema inédito de Salvador Novo a Mauricio Magdaleno y una imagen de ambos velando el cuerpo del padre Ángel María Garibay K. demuestran la cercanía que se profesaron al final de sus vidas
Abstract:
From an aesthetic point of view, the writers Mauricio Magdaleno and Salvador Novo belonged to two opposing literary currents. The careers of both authors show striking parallels, both biographical and in their cultivation of different literary disciplines. The debate over identity and nationality that arose in Mexico in the 1920s and 1930s caused a fierce confrontation between them, which, however, did not prevent a progressive rapprochement favored by their political participation. The article illustrates the difficult balance between taking ideological positions, the generational responsibility of building a modern state, and artistic concerns. An unpublished poem from Salvador Novo to Mauricio Magdaleno and an image of the two keeping vigil over the body of Father Ángel María Garibay K. demonstrate the closeness they professed for each other at the end of their lives
Abstract:
This article examines the ability of recently developed statistical learning procedures, such as random forests or support vector machines, to forecast the first two moments of daily stock market returns. These tools have the advantage of flexible nonlinear regression functions, even in the presence of many potential predictors. We consider two cases: one where the agent's information set includes only the past of the return series, and one where it also includes past values of relevant economic series, such as interest rates, commodity prices or exchange rates. Even though these procedures seem to be of little use for predicting returns, some of them, especially support vector machines, show real potential to improve on the out-of-sample forecasting ability of the standard GARCH(1,1) model for squared returns. The researcher has to be cautious about the number of predictors employed and about the specific implementation of the procedures, since using many predictors with the default settings of standard computing packages leads to overfitted models and to larger standard errors
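The GARCH(1,1) benchmark mentioned above is a simple one-step recursion for the conditional variance. A minimal sketch (with illustrative, not estimated, parameters) is:

```python
def garch11_variance_path(returns, omega, alpha, beta):
    """One-step-ahead conditional variances under GARCH(1,1):

        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},

    initialized at the sample variance of the return series.
    Parameters here are illustrative placeholders, not estimates.
    """
    n = len(returns)
    mean = sum(returns) / n
    sigma2 = sum((r - mean) ** 2 for r in returns) / n  # initial variance
    path = []
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
        path.append(sigma2)
    return path
```

A learning procedure such as a support vector machine would instead regress squared returns on lagged predictors; the recursion above is the out-of-sample baseline such forecasts are compared against.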
Abstract:
Cancer is the deadliest multifactorial disease, involving variations at the molecular and histological levels. Cancer treatment has advanced through the use of multiple approaches for effectively attaining favorable outcomes from chemotherapeutic agents. As far as conventional treatment protocols are concerned, chemotherapy and radiotherapy are the pivotal ways to deliver chemotherapeutic drugs such as uracil, paclitaxel, vinca alkaloids and doxorubicin (DOX). DOX is preferred owing to its FDA approval and multi-functionalization, but irrespective of its significance, it is accompanied by several side effects, most notably cardiotoxicity and nephrotoxicity. These side effects have encouraged researchers to develop novel alternative drug delivery systems for transporting DOX with the aid of nanotechnology, as well as the co-delivery of bioactive ligands and other chemotherapeutic drugs alongside DOX. In this review, we discuss the development of novel nanotechnology-based drug delivery systems for DOX that are claimed to improve the therapeutic window (efficacy and safety) against various cancers. Moreover, the co-delivery of anti-cancer drugs is promising in terms of the synergistic therapeutic outcomes of combining more than one anti-cancer drug and of achieving an enhanced permeation and retention (EPR) effect
Abstract:
Participating in regular physical activity (PA) can help people maintain a healthy weight, and it reduces their risks of developing cardiovascular diseases and diabetes. Unfortunately, PA declines during early adolescence, particularly in minority populations. This paper explores design requirements for mobile PA-based games to motivate Hispanic teenagers to exercise. We found that some personality traits are significantly correlated to preference for specific motivational phrases and that personality affects game preference. Our qualitative analysis shows that different body weights affect beliefs about PA and games. Design requirements identified from this study include multi-player capabilities, socializing, appropriate challenge level, and variety
Abstract:
To achieve accurate tracking control of robot manipulators, many schemes have been proposed. Common approaches are based on robust and adaptive control techniques, with velocity observers employed when necessary. Robust techniques have the advantage of requiring little prior information about the robot model parameters/structure or disturbances while still achieving tracking, for instance by using sliding mode control. Adaptive techniques, on the contrary, guarantee trajectory tracking but under the assumption that the robot model structure is perfectly known and linear in the unknown parameters, and that joint velocities are available. In this letter, experiments are carried out to find out whether combining a robust and an adaptive controller can increase the performance of the system, as long as the adaptive term can be treated as a perturbation by the robust controller. The results are compared with an adaptive robust control law, showing that the proposed combined scheme performs better than the separate algorithms working on their own and than the comparison law
Abstract:
To achieve accurate tracking control of robot manipulators, many schemes have been proposed. A common approach is based on adaptive control techniques, which guarantee trajectory tracking under the assumption that the robot model structure is perfectly known and linear in the unknown parameters, and that joint velocities are available. Although tracking errors tend to zero, parameter errors do not unless a persistent excitation condition is fulfilled. Few works deal with velocity observation in conjunction with adaptive laws. In this note, an adaptive control/observer scheme is proposed for tracking the position of robot manipulators. It is shown that tracking and observation errors are ultimately bounded, with the characteristic that when a persistent excitation condition is met, they, as well as the parameter errors, tend to zero. Simulation results are in good agreement with the developed theory
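The interplay of tracking error, parameter error, and persistent excitation described in these two abstracts can be illustrated on a toy scalar plant (this is a stand-in sketch, not the manipulator schemes of the papers; all gains and the plant below are assumptions):

```python
import math

def simulate_adaptive_tracking(a=2.0, k=5.0, gamma=10.0, dt=1e-3, t_end=20.0):
    """Adaptive tracking for the scalar plant xdot = a*x + u with unknown a.

    Control: u = xd_dot - a_hat*x - k*e, with e = x - xd, plus the gradient
    adaptation law a_hat_dot = gamma*e*x. This gives error dynamics
    edot = (a - a_hat)*x - k*e and the decreasing Lyapunov function
    V = e**2/2 + (a - a_hat)**2/(2*gamma). The reference xd = sin(t) is
    persistently exciting, so the parameter estimate converges as well.
    """
    x, a_hat, e = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        xd, xd_dot = math.sin(t), math.cos(t)
        e = x - xd
        u = xd_dot - a_hat * x - k * e
        # Euler integration of the plant and the adaptation law
        x += dt * (a * x + u)
        a_hat += dt * (gamma * e * x)
    return e, a_hat
```

With a constant reference instead of a sinusoid, the tracking error would still vanish but the estimate a_hat need not approach the true a, which is exactly the persistent-excitation caveat the abstracts raise.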
Abstract:
This study demonstrates that morbidity has a mediating effect between intimate partner violence against women and labor productivity in terms of absenteeism and presenteeism. Partial least squares structural equation modeling (PLS-SEM) was used on a nationwide representative sample of 357 female owners of micro-firms in Peru. The resulting data reveal that morbidity is a mediating variable between intimate partner violence against women and absenteeism (β=0.213; p<.001), as well as between intimate partner violence against women and presenteeism (β=0.336; p<.001). This finding allows us to understand how such intimate partner violence against women negatively affects workplace productivity in the context of a micro-enterprise, a key element in many economies across the world
Abstract:
The purpose of this paper is to determine the prevalence of economic violence against women, specifically in formal sector micro-firms managed by women in Peru, a key Latin American emerging market. Additionally, the authors have identified the demographic characteristics of the micro-firms, financing and credit associated with women who suffer economic violence. Design/methodology/approach. In this study, a structured questionnaire was administered to a representative sample nationwide (357 female micro-entrepreneurs). Findings. The authors found that 22.2 percent of female micro-entrepreneurs have been affected by economic violence at some point in their lives, while at the same time 25 percent of respondents have been forced by their partner to obtain credit against their will. Lower education level, living with one’s partner, having children, business location in the home, lower income, not having access to credit, not applying credit to working capital needs, late payments and being forced to obtain credit against one’s will were all factors associated with economic violence. Furthermore, the results showed a significant correlation between suffering economic violence and being a victim of other types of violence (including psychological, physical or sexual); the highest correlation was with serious physical violence (r=0.523, p<0.01)
Abstract:
A standard technique for producing monogenic functions is to apply the adjoint quaternionic Fueter operator to harmonic functions. We will show that this technique does not give a complete system in L2 of a solid torus, where toroidal harmonics appear in a natural way. One reason is that this index-increasing operator fails to produce monogenic functions with zero index. Another reason is that the non-trivial topology of the torus requires taking into account a cohomology coefficient associated with monogenic functions, apparently not previously identified because it vanishes for simply connected domains. In this paper, we build a reverse-Appell basis of harmonic functions on the torus expressed in terms of classical toroidal harmonics. This means that the partial derivative of any element of the basis with respect to the axial variable is a constant multiple of another basis element with subindex increased by one. This special basis is used to construct respective bases in the real L2-Hilbert spaces of reduced quaternion and quaternion-valued monogenic functions on toroidal domains
Abstract:
The Fourier method approach to the Neumann problem for the Laplacian operator in the case of a solid torus contrasts in many respects with the much more straightforward situation of a ball in 3-space. Although the Dirichlet-to-Neumann map can be readily expressed in terms of series expansions with toroidal harmonics, we show that the resulting equations contain undetermined parameters which cannot be calculated algebraically. A method for rapidly computing numerical solutions of the Neumann problem is presented with numerical illustrations. The results for interior and exterior domains combine to provide a solution for the Neumann problem for the case of a shell between two tori
Abstract:
The paper addresses the issues raised by the simultaneity between the supply function and the domestic and foreign demand for exportables, analysing the microeconomic foundations of the simultaneous price and output decisions of a firm which operates in the exportables sector of an open economy facing a domestic and a foreign demand for its output. A specific characteristic of the model is that it allows for the possibility of price discrimination, which is suggested by the observed divergences in the behaviour of domestic and export prices. The framework developed is used to investigate the recent behaviour of prices and output in two industries of the German manufacturing sector
Abstract:
We introduce a new method for building models of CH, together with Π2 statements over H(ω2), by forcing. Unlike other forcing constructions in the literature, our construction adds new reals, although only ℵ1-many of them. Using this approach, we build a model in which a very strong form of the negation of Club Guessing at ω1 known as Measuring holds together with CH, thereby answering a well-known question of Moore. This construction can be described as a finite-support weak forcing iteration with side conditions consisting of suitable graphs of sets of models with markers. The CH-preservation is accomplished through the imposition of copying constraints on the information carried by the condition, as dictated by the edges in the graph
Abstract:
Measuring says that for every sequence (Cδ)δ<ω1 with each Cδ being a closed subset of δ there is a club C ⊆ ω1 such that for every δ ∈ C, a tail of C ∩ δ is either contained in or disjoint from Cδ. We answer a question of Justin Moore by building a forcing extension satisfying Measuring together with 2ℵ0 > ℵ2. The construction works over any model of ZFC + CH and can be described as a finite support forcing iteration with systems of countable structures as side conditions and with symmetry constraints imposed on its initial segments. One interesting feature of this iteration is that at each stage it adds functions f : ω1 −→ ω1 that are dominating modulo countable sets
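In standard notation, the principle stated above can be written as follows (a sketch restating the abstract's own definition, not quoted from the paper):

```latex
% Measuring: every omega_1-sequence of closed subsets is "measured" by a club.
\[
\forall\,\langle C_\delta \mid \delta<\omega_1\rangle\ \bigl(\text{each } C_\delta\subseteq\delta\ \text{closed}\bigr)\ \
\exists\,\text{club } C\subseteq\omega_1\ \ \forall\,\delta\in C\ \ \exists\,\alpha<\delta:
\]
\[
C\cap(\alpha,\delta)\subseteq C_\delta \quad\text{or}\quad C\cap(\alpha,\delta)\cap C_\delta=\emptyset.
\]
```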
Abstract:
We separate various weak forms of Club Guessing at ω1 in the presence of 2ℵ0 large, Martin's Axiom, and related forcing axioms. We also answer a question of Abraham and Cummings concerning the consistency of the failure of a certain polychromatic Ramsey statement together with the continuum large. All these models are generic extensions via finite support iterations with symmetric systems of structures as side conditions, possibly enhanced with ω-sequences of predicates, and in which the iterands are taken from a relatively small class of forcing notions. We also prove that the natural forcing for adding a large symmetric system of structures (the first member in all our iterations) adds ℵ1-many reals but preserves CH
Resumen:
El ensayo explica la teoría del estilo de Ricardo Garibay y la ilustra con ejemplos (acompañados de comentarios críticos) tomados de dos libros del autor: Acapulco y Chicoasén
Abstract:
The essay explains Ricardo Garibay's theory of style and illustrates it with examples (accompanied by critical comments) taken from two of the author's books: Acapulco and Chicoasén
Resumen:
La estructura de À la recherche du temps perdu se explica a partir de la experiencia psíquica del tiempo del protagonista de la novela, según la concepción proustiana de la temporalidad a la luz del pensamiento de San Agustín, Henri Bergson y Edmund Husserl
Abstract:
The structure of À la recherche du temps perdu is explained through the psychic experience of time of the protagonist of the novel, according to the Proustian conception of temporality in light of the thought of Saint Augustine, Henri Bergson and Edmund Husserl
Resumen:
La actividad onírica ocupa un lugar preponderante en la antropología filosófica de María Zambrano. Para la pensadora andaluza, soñar es una facultad cognitiva de la que depende, en último análisis, el desarrollo anímico y la salud emocional de la persona humana. Esta nota muestra la actualidad del pensamiento zambraniano sobre los sueños a la luz de algunas tesis de la neurociencia contemporánea
Abstract:
Oneiric activity occupies a preponderant place in the philosophical anthropology of María Zambrano. For the Andalusian thinker, dreaming is a cognitive faculty on which, in the final analysis, the psychic development and emotional health of the human person depend. This note shows the continuing relevance of Zambrano's thought on dreams in light of some theses of contemporary neuroscience
Resumen:
Utilizando como motivo conductor la idea de que la biblioteca de un académico refleja su ethos y su cosmovisión, este texto epidíctico celebra la trayectoria intelectual de Nora Pasternac, profesora del Departamento Académico de Lenguas del ITAM
Abstract:
Using as a driving motif the idea that an academic’s library reflects his ethos and worldview, this epideictic text celebrates the intellectual trajectory of Nora Pasternac, professor of the Academic Department of Languages at ITAM
Resumen:
En el texto se comentan algunos pasajes de tres novelas y un cuento de Ignacio Padilla a la luz de la Monadología, de G. W. Leibniz, y del Manuscrito encontrado en Zaragoza, de Jan Potocki, con el propósito de mostrar el uso de la construcción en abismo como procedimiento narrativo en la obra de este escritor mexicano
Abstract:
The text discusses some passages from three novels and a short story by Ignacio Padilla in light of the Monadology by G. W. Leibniz and The Manuscript Found in Saragossa by Jan Potocki, with the purpose of showing the use of mise en abyme as a narrative technique in the work of this Mexican writer
Resumen:
Utilizando como principio explicativo la noción de distancia fenomenológica, en el ensayo se exponen las formas de relación entre lengua fuente y lengua meta, y entre texto fuente y texto meta, en la teoría de la traducción de Walter Benjamin
Abstract:
Using the notion of phenomenological distance as an explanatory principle, we will explore in this article the relationship between source language and target language and between source text and target text in Walter Benjamin's translation theory
Resumen:
En la antropología filosófica de María Zambrano, dormir y despertar no son meros actos empíricos del vivir cotidiano, sino operaciones egológicas formales involucradas en el proceso de autoconstitución del sujeto. En este artículo se describen poniendo en diálogo el libro zambraniano Los sueños y el tiempo con la noción de ipseidad, tal como la entienden Paul Ricoeur, Edmund Husserl, Hans Blumenberg y Michel Henry
Abstract:
In María Zambrano’s philosophical anthropology, sleeping and awakening are not mere empirical acts of daily life, but formal egological acts involved in the self-constitution of the subject. In this article, we discuss Zambrano’s book Los sueños y el tiempo in dialogue with the notion of selfhood as understood by Paul Ricoeur, Edmund Husserl, Hans Blumenberg, and Michel Henry
Abstract:
The homotopy classification problem for complete intersections is settled when the complex dimension is larger than the total degree
Abstract:
A rigidity theorem is proved for principal Eschenburg spaces of positive sectional curvature. It is shown that for a very large class of such spaces the homotopy type determines the diffeomorphism type
Abstract:
We address the problem of parallelizability and stable parallelizability of a family of manifolds that are obtained as quotients of circle actions on complex Stiefel manifolds. We settle the question in all cases but one, and obtain in the remaining case a partial result
Abstract:
The question of parallelizability of the complex projective Stiefel manifolds is settled.
Abstract:
The cohomology algebra mod p of the complex projective Stiefel manifolds is determined for all primes p. When p = 2 we also determine the action of the Steenrod algebra and apply this to the problem of existence of trivial subbundles of multiples of the canonical line bundle over a lens space with 2-torsion, obtaining optimal results in many cases
Abstract:
The machinery of M. Kreck and S. Stolz is used to obtain a homeomorphism and diffeomorphism classification of a family of Eschenburg spaces. In contrast with the family of Wallach spaces studied by Kreck and Stolz, we obtain abundant examples of homeomorphic but not diffeomorphic Eschenburg spaces. The problem of stable parallelizability of Eschenburg spaces is discussed in an appendix
Abstract:
In this paper, we introduce the notion of a linked domain and prove that a non-manipulable social choice function defined on such a domain must be dictatorial. This result not only generalizes the Gibbard-Satterthwaite Theorem but also demonstrates that the equivalence between dictatorship and non-manipulability is far more robust than suggested by that theorem. We provide an application of this result in a particular model of voting. We also provide a necessary condition for a domain to be dictatorial and use it to characterize dictatorial domains in the cases where the number of alternatives is three
Abstract:
We study entry and bidding patterns in sealed bid and open auctions. Using data from the U.S. Forest Service timber auctions, we document a set of systematic effects: sealed bid auctions attract more small bidders, shift the allocation toward these bidders, and can also generate higher revenue. A private value auction model with endogenous participation can account for these qualitative effects of auction format. We estimate the model’s parameters and show that it can explain the quantitative effects as well. We then use the model to assess bidder competitiveness, which has important consequences for auction design
Abstract:
The role of domestic courts in the application of international law is one of the most vividly debated issues in contemporary international legal doctrine. However, the methodology of interpretation of international norms used by these courts remains underexplored. In particular, the application of the Vienna rules of treaty interpretation by domestic courts has not been sufficiently assessed so far. Three case studies (from the US Supreme Court, the Mexican Supreme Court, and the European Court of Justice) show the diversity of approaches in this respect. In the light of these case studies, the article explores the inevitable tensions between two opposite, yet equally legitimate, normative expectations: the desirability of a common, predictable methodology versus the need for flexibility in adapting international norms to a plurality of domestic environments
Abstract:
Christensen, Baumann, Ruggles, and Sadtler (2006) proposed that organizations addressing social problems may use catalytic innovation as a strategy to create social change. These innovations aim to create scalable, sustainable, and systems-changing solutions. This empirical study examines: (a) whether catalytic innovation applies to Mexican social entrepreneurship; (b) whether those who adopt Christensen et al.’s (2006) strategy generate more social impact; and (c) whether they demonstrate economic success. We performed a survey of 219 Mexican social entrepreneurs and found that catalytic innovation does occur within social entrepreneurship, and that those social entrepreneurs who use catalytic innovations not only maximize their social impact but also maximize their profits, and that they do so with diminishing returns to scale
Résumé:
Christensen, Baumann, Ruggles et Sadtler (2006) proposent que les organisations qui s'occupent de problèmes sociaux, peuvent utiliser l'innovation catalytique comme une stratégie visant à créer un changement social. Ces innovations cherchent à créer des solutions évolutives, durables, et qui changent le système. Cette étude empirique examine : (a) si l'innovation catalytique peut s'appliquer dans le domaine de l'entrepreneuriat social mexicain; (b) si les entrepreneurs qui adoptent la stratégie de Christensen et al. (2006), donne lieu à un impact social; et (c) si elle démontre engendrer un succès économique. Nous avons effectué un sondage à 219 entrepreneurs sociaux mexicains et avons constaté que l'innovation catalytique se produit au sein des entreprenariats sociaux, et que les entrepreneurs sociaux qui utilisent des innovations catalytiques maximisent non seulement l'impact social, mais aussi leurs profits et qu'ils le font avec des rendements à échelle décroissante
Abstract:
Solid lipid nanoparticles (SLNs) provide excellent biocompatibility and efficient encapsulation as pharmaceutical delivery systems. In this study, shea butter SLNs with excellent stability were synthesized by the hot homogenization technique. The penetration capacity of the nanoparticles was validated by atomic force microscopy, scanning electron microscopy, differential scanning calorimetry, dynamic light scattering with zeta potential measurements, and confocal microscopy. Triborheological tests, such as viscosity shear rate profiling, normal stress profiling and sliding speed sweeping, were conducted to identify and quantify the impact of SLNs in topical formulations. We found that the SLNs had a lower coefficient of friction than the bulk lipids owing to a more stable lipid layer formation of the SLNs. SLNs in topical formulations have potential applications in cosmetics such as anti-aging agents owing to their emollient, occlusion, antioxidant and anti-inflammatory properties
Abstract:
We study pattern formation in a 2D reaction-diffusion (RD) subcellular model characterizing the effect of a spatial gradient of a plant hormone distribution on a family of G-proteins associated with root hair (RH) initiation in the plant cell Arabidopsis thaliana. The activation of these G-proteins, known as the Rho of Plants (ROPs), by the plant hormone auxin is known to promote certain protuberances on RH cells, which are crucial for both anchorage and the uptake of nutrients from the soil. Our mathematical model for the activation of ROPs by the auxin gradient is an extension of the model of Payne and Grierson [PLoS ONE, 4 (2009), e8337] and consists of a two-component Schnakenberg-type RD system with spatially heterogeneous coefficients on a 2D domain. The nonlinear kinetics in this RD system model the nonlinear interactions between the active and inactive forms of ROPs. By using a singular perturbation analysis to study 2D localized spatial patterns of active ROPs, it is shown that the spatial variations in the nonlinear reaction kinetics, due to the auxin gradient, lead to a slow spatial alignment of the localized regions of active ROPs along the longitudinal midline of the plant cell. Numerical bifurcation analysis together with time-dependent numerical simulations of the RD system are used to illustrate both 2D localized patterns in the model and the spatial alignment of localized structures
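For orientation, a generic two-component Schnakenberg-type RD system with spatially heterogeneous coefficients takes the following form (a sketch; the symbols u, v for the active and inactive ROP densities and the specific coefficient functions f(x), g(x) modeling the auxin gradient are illustrative, not taken from the paper):

```latex
\begin{aligned}
u_t &= \varepsilon^{2}\,\Delta u - u + f(x)\,u^{2} v, \\
\tau\, v_t &= D\,\Delta v + g(x) - \varepsilon^{-1} f(x)\,u^{2} v ,
\end{aligned}
```

where the spatial dependence of f(x) and g(x) encodes the hormone gradient, and the small parameter ε gives rise to the localized spots analyzed by singular perturbation methods.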
Abstract:
We aimed to make a theoretical contribution to the happy-productive worker thesis by expanding the study to cases where this thesis does not fit. We hypothesized and corroborated the existence of four relations between job satisfaction and innovative performance: (a) unhappy-unproductive, (b) unhappy-productive, (c) happy-unproductive, and (d) happy-productive. We also aimed to contribute to the happy-productive worker thesis by studying some conditions that influence and differentiate among the four patterns. Hypotheses were tested in a sample of 513 young employees representative of Spain. Cluster analysis and discriminant analysis were performed. We identified the four patterns. Almost 15% of the employees had a pattern largely ignored by previous studies (i.e., unhappy-productive). As hypothesized, to promote well-being and performance among young employees, it is necessary to fulfill the psychological contract, encourage initiative, and promote job self-efficacy. We also confirmed that over-qualification characterizes the unhappy-productive pattern, but we failed to confirm that high job self-efficacy characterizes the happy-productive pattern. The results show the relevance of personal and organizational factors in studying the well-being-performance link in young employees
Abstract:
Conventional wisdom suggests that promising free information to an agent would crowd out costly information acquisition. We theoretically demonstrate that this intuition only holds as a knife-edge case in which priors are symmetric. Indeed, when priors are asymmetric, a promise of free information in the future induces agents to increase information acquisition. In the lab, we test whether such crowding out occurs for both symmetric and asymmetric priors. Our results are qualitatively in line with the predictions: When priors are asymmetric, the promise of future free information induces subjects to acquire more costly information
Abstract:
Region-of-Interest (ROI) tomography aims at reconstructing a region of interest C inside a body using only x-ray projections intersecting C, and it is useful to reduce overall radiation exposure when only a small specific region of a body needs to be examined. We consider x-ray acquisition from sources located on a smooth curve Γ in R3 verifying the classical Tuy condition. In this generic situation, the non-truncated cone-beam transform of smooth density functions f admits an explicit inverse Z, as originally shown by Grangeat. However, Z cannot directly reconstruct f from ROI-truncated projections. To deal with the ROI tomography problem, we introduce a novel reconstruction approach. For densities f in L∞(B), where B is a bounded ball in R3, our method iterates an operator U combining ROI-truncated projections, inversion by the operator Z and appropriate regularization operators. Assuming only knowledge of projections corresponding to a spherical ROI C ⊂ B, given ɛ > 0, we prove that if C is sufficiently large our iterative reconstruction algorithm converges at exponential speed to an ɛ-accurate approximation of f in L∞. The accuracy depends on the regularity of f quantified by its Sobolev norm in W5(B). Our result guarantees the existence of a critical ROI radius ensuring the convergence of our ROI reconstruction algorithm to an ɛ-accurate approximation of f. We have numerically verified these theoretical results using simulated acquisition of ROI-truncated cone-beam projection data for multiple acquisition geometries. Numerical experiments indicate that the critical ROI radius is fairly small with respect to the support region B
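Schematically, the iteration described above is a fixed-point scheme whose error contracts geometrically (a sketch; U denotes the paper's combined truncation/inversion/regularization operator, and the contraction factor ρ is illustrative):

```latex
f_{k+1} = U f_k , \qquad
\bigl\| f_k - f_{\varepsilon} \bigr\|_{L^{\infty}(B)} \le C\,\rho^{\,k},
\qquad 0 < \rho < 1 ,
```

where f_ε is an ɛ-accurate approximation of the true density f, attained in the limit at the stated exponential speed.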
Resumen:
El Tratado de Libre Comercio entre la UE y México (tlcuem) entró en vigor en el año 2000, constituyéndose en uno de los acuerdos más importantes del comercio transatlántico. El objetivo de este trabajo es analizar los resultados del acuerdo en materia de comercio entre los países socios al cabo de una década, e identificar los principales determinantes económicos. Se estima un modelo de gravedad para una muestra de 60 países durante el periodo 1994-2011. Los resultados indican que dicho tratado ha sido relevante en la intensificación de las relaciones comerciales entre ambos socios
Abstract:
The Free Trade Agreement between the European Union and Mexico (eumfta) entered into force in 2000, becoming one of the most important transatlantic trade agreements. The goal of this research is to analyze the agreement's effects on trade between the partner countries a decade on, and to identify their main economic determinants. A gravity model is estimated for a sample of 60 countries over the period 1994-2011. The results indicate that the agreement has given rise to an increase in the bilateral trade flows between these two commercial partners
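For reference, a canonical log-linear gravity specification with a trade-agreement dummy looks as follows (a sketch of the standard setup; the variable names and the exact set of regressors are illustrative, not the paper's specification):

```latex
\ln X_{ijt} = \beta_0 + \beta_1 \ln Y_{it} + \beta_2 \ln Y_{jt}
            + \beta_3 \ln D_{ij} + \beta_4\,\mathrm{FTA}_{ijt} + \varepsilon_{ijt},
```

where X_ijt is bilateral trade between countries i and j in year t, Y denotes GDP, D_ij is bilateral distance, and FTA_ijt is an indicator equal to one when a trade agreement is in force; a positive estimate of β4 indicates trade creation attributable to the agreement.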
Abstract:
We study an at-scale natural experiment in which debit cards were given to cash transfer recipients who already had a bank account. Using administrative account data and household surveys, we find that beneficiaries accumulated a savings stock equal to 2% of annual income after two years with the card. The increase in formal savings represents an increase in overall savings, financed by a reduction in current consumption. There are two mechanisms. First, debit cards reduce transaction costs of accessing money. Second, they reduce monitoring costs, which led beneficiaries to check their account balances frequently and build trust in the bank
Abstract:
Transaction costs are a significant barrier to the take-up and use of formal financial services. Account opening fees and minimum balance requirements prevent the poor from opening bank accounts (Dupas and Robinson 2013), and small subsidies can lead to large increases in take-up (Cole, Sampson, and Zia 2011). Indirect transaction costs, such as travel time, are also a barrier: the distance to the nearest bank or mobile money agent is a key predictor of take-up of savings accounts (Dupas et al. forthcoming) and mobile money (Jack and Suri 2014). In turn, increased access to financial services can reduce poverty and increase welfare (Burgess and Pande 2005; Suri and Jack 2016). Digital financial services, such as ATMs, debit cards, mobile money, and digital credit, have the potential to reduce transaction costs. However, existing studies rarely measure indirect transaction costs. We provide evidence on how a specific technology, a debit card, lowers indirect transaction costs by reducing travel distance and foregone activities. We study a natural experiment in which debit cards tied to existing savings accounts were rolled out geographically over time to beneficiaries of the Mexican cash transfer program Oportunidades. Prior to receiving debit cards, beneficiaries received transfers directly into a savings account every two months. After receiving cards, beneficiaries continue to receive their benefits in the savings account, but can access their transfers and savings at any bank’s ATM. They can also check their balances at any bank’s ATM or use the card to make purchases at point-of-sale terminals. We find that debit cards reduce the median road distance to access the account from 4.8 to 1.3 kilometers (km). As a result, the proportion of beneficiaries who walk to withdraw the transfer payments increases by 59 percent. Furthermore, prior to receiving debit cards, 84 percent of beneficiari
Abstract:
We study a natural experiment in which debit cards are rolled out to beneficiaries of a cash transfer program, who already received transfers directly deposited into a savings account. Using administrative account data and household surveys, we find that before receiving debit cards, few beneficiaries used the accounts to make more than one withdrawal per period, or to save. With cards, beneficiaries increase their number of withdrawals and check their balances frequently; the number of checks decreases over time as their reported trust in the bank and savings increase. Their overall savings rate increases by 3–4 percent of household income
Abstract:
Two career-concerned experts sequentially give advice to a Bayesian decision maker (D). We find that secrecy dominates transparency, yielding superior decisions for D. Secrecy empowers the expert moving late to be pivotal more often. Further, (i) only secrecy enables the second expert to partially communicate her information and its high precision to D and swing the decision away from the first expert's recommendation; (ii) if experts have high average precision, then the second expert is effective only under secrecy. These results are obtained when experts only recommend decisions. If they also report the quality of advice, a fully revealing equilibrium may exist
Abstract:
In recent years, there has been an upsurge in the number of countries that are mainstreaming gender equality concerns in their trade and investment agreements. These recent developments challenge the longstanding assumption that trade, investment, and gender equality are not related. They also show that gender mainstreaming in trade and investment agreements is here to stay. However, very few countries, mostly developed countries, have led this mainstreaming approach and have made efforts to incentivize other countries to negotiate gender-responsive trade and investment agreements. The majority of developing countries are yet to take their first steps in negotiating such policy instruments with a gender lens, and their hesitation can be grounded in various reasons, including fears of protectionism, lack of data, paucity of understanding and expertise, and, more broadly, constraints relating to their negotiation capacity. Moreover, the inclusion of gender-related concerns in the negotiation of such agreements has deepened and widened the negotiation capacity gap between developed and developing countries. In this article, the authors attempt to assess this widening negotiation capacity gap with the help of empirical research, and how this capacity gap can lead to disproportionate and negative repercussions for developing countries more than developed countries
Abstract:
In recent years, more and more countries have included different kinds of gender considerations in their trade agreements. Yet many countries have still not signed their very first agreement with a gender equality-related provision. Though most of the agreements negotiated by countries in the Asia-Pacific region have not explicitly accommodated gender concerns, a limited number of trade agreements signed by countries in the region have presented a distinct approach: the nature of provisions, drafting style, location in the agreements, and topic coverage of such provisions contrast with the gender-mainstreaming approach employed by the Americas or other regions. This chapter provides a comprehensive account and assessment of gender-related provisions included in the existing trade agreements negotiated by countries in the Asia-Pacific, explains the extent to which gender concerns are mainstreamed in these agreements, and summarizes the factors that impede such mainstreaming efforts in the region
Abstract:
The most common provisions we find in almost all multilateral, regional and bilateral trade agreements are the exception clauses that allow countries to protect public morals, protect human, animal or plant life and health, and conserve exhaustible natural resources. If countries can allow trade-restrictive measures that aim to protect these non-economic interests, is it possible to negotiate a specific exception to justify measures that are aimed at protecting women's economic interests as well? Is the removal of barriers that impede women's participation in trade any less important than the conservation of exhaustible natural resources such as sea turtles or dolphins? In that context, this article prepares a case for the inclusion of a specific exception that can allow countries to leverage women's economic empowerment through international trade agreements. This is done after carrying out an objective assessment of whether a respondent could seek protection under the existing public morality exception to justify a measure that is taken to protect women's economic interests
Abstract:
Mexico has by far the world's highest death rate linked to obesity and other chronic diseases. As a response to the growing pandemic of obesity, Mexico has adopted a new compulsory front-of-pack labeling regulation for pre-packaged foods and nonalcoholic beverages. This article provides an assessment of the regulation's consistency with international trade law and the arguments that might be invoked by either side in a hypothetical trade dispute on this matter
Abstract:
In the past few months, we have witnessed the 'worst deal' in the history of the USA become the 'best deal' in the history of the USA. The negotiation leading to the United States-Mexico-Canada Agreement (USMCA) appeared as an 'asymmetrical exchange' scenario that could have led to an unbalanced outcome for Mexico. However, Mexico stood firm on its positions and negotiated a modernized version of North American Free Trade Agreement. Mexico faced various challenges during this renegotiation, not only because it was required to negotiate with two developed countries but also due to the high level of ambition and demands raised by the new US administration. This paper provides an account of these impediments. More importantly, it analyzes the strategies that Mexico used to overcome the resource constraints it faced amidst the unpredictable political dilemma in the US and at home. In this manner, this paper seeks to provide a blueprint of strategies that other developing countries could employ to overcome their negotiation capacity constraints, especially when they are dealing with developed countries and in uncertain political environments
Abstract:
Health pandemics affect women and men differently, and they can make the existing gender inequalities much worse. COVID-19 is one such pandemic, which can have substantial gendered implications both during and in the post-pandemic world. Its economic and social consequences could deepen the existing gender inequalities and roll back the limited gains made in respect of women's empowerment in the past few decades. The impending global recession, multiple trade restrictions, economic lockdown, and social distancing measures can expose vulnerabilities in social, political, and economic systems, which, in turn, could have a profound impact on women's participation in trade and commerce. The article outlines five main reasons that explain why this health pandemic has put women employees, entrepreneurs, and consumers at the frontline of the struggle. It then explores how free trade agreements can contribute to repairing the harm in the post-pandemic world. In doing so, the author sheds light on various ways in which the existing trade agreements embrace gender equality considerations and how they can be better prepared to help minimize the pandemic-inflicted economic loss to women
Abstract:
The World Trade Organization (WTO) Dispute Settlement System (DSS) is in peril. The Appellate Body (AB) is being held 'hostage' by the very architect and the most frequent user of the WTO DSS, the United States of America. This will bring the whole DSS to a standstill, as the inability of the AB to review appeals will have a kill-off effect on the binding value of Panel rulings. If the most celebrated DSS collapses, the members would not be able to enforce their WTO rights. WTO-inconsistent practices and violations would increase and remain unchallenged. Rights without remedies would soon lose their charm, and we might witness a higher and faster drift away from multilateral trade regulation. This is a grave situation. This piece is an academic attempt to analyse and defuse the key points of criticism against the AB. A comprehensive assessment of the reasons behind this criticism could be a starting point to resolve this gridlock. The first part of this Article investigates the reasons and motivations of the US behind these actions, as we cannot address the problems without understanding them in a comprehensive manner. The second part looks at this issue from a systemic angle, as it seeks to address the debate on whether the WTO resembles common or civil law and whether most of the criticism directed towards judicial activism and overreach is 'much ado about nothing'. The concluding part of this piece briefly looks at the proposals already made by scholars to resolve this deadlock, and it leaves the readers with a fresh proposal to deliberate upon
Abstract:
In recent years, we have witnessed a sharp increase in the number of free trade agreements (FTAs) with gender-related provisions. The key champions of this evolution include Canada, Chile, New Zealand, Australia and Uruguay. These countries have proposed a new paradigm, i.e. a paradigm where FTAs are considered vehicles for achieving the economic empowerment of women. This trend is spreading like a wildfire to other parts of the world. More and more countries are expressing their interest in ensuring that their FTAs are gender-responsive and not simply gender-neutral or gender-blind in nature. The momentum is building, and we can expect many more agreements in the future to include stand-alone chapters or exclusive provisions on gender issues. This article is an attempt to tap into this ongoing momentum, as it puts forward a newly designed self-evaluation maturity framework to measure the gender-responsiveness of trade agreements. The proposed framework is to help policy-makers and negotiators to: (1) measure the gender-responsiveness of trade agreements; (2) identify areas where agreements need critical improvements; and (3) receive recommendations to improve the gender-fabric of trade agreements that they are negotiating or have already negotiated. This is the first academic intervention presenting this type of gender-responsiveness model for trade agreements
Abstract:
Purpose - The World Trade Organisation (WTO) grants rights to its members, and the WTO Dispute Settlement Understanding (DSU) provides a rule-oriented consultative and judicial mechanism to protect these rights in cases of WTO-incompatible trade infringements. However, the benefits of DSU participation come at a cost. These costs are acutely formidable for least developed countries (LDCs), which have small market sizes and trading stakes. No LDC has ever filed a WTO complaint, with the sole exception of the India-Battery dispute filed by Bangladesh against India. This paper looks at the experience of how Bangladesh – so far the only LDC member that has filed a formal WTO complaint – persuaded India to withdraw the anti-dumping duties India had imposed on imports of acid batteries from Bangladesh. Design/methodology/approach - The investigation is grounded on practically informed findings gathered through the authors' work experience and several semi-structured interviews and discussions which the authors have conducted with government representatives from Bangladesh, government and industry representatives from other developing countries, trade lawyers and officials based in Geneva and Brussels, and civil society organisations. Findings - The discussion provides a sound indication of the participation impediments that LDCs can face at the WTO DSU and the ways in which such challenges can be overcome with the help of resources available at the domestic level. It also exemplifies how domestic laws and practices can respond to international legal instruments and impact the performance of an LDC at an international adjudicatory forum. Originality/value - Except for one book chapter and a working paper, there is no literature available on this matter. This investigation is grounded on practically informed findings gathered with the help of original empirical research conducted by the authors
Abstract:
Mexico has employed special methodologies for price determination and the calculation of dumping margins against Chinese imports in almost all anti-dumping investigations. This chapter attempts to explain and analyze the NME-specific procedures employed by Mexican authorities in anti-dumping proceedings against China. It also clarifies the Mexican standpoint on the controversial issue of how the expiry of Section 15(a)(ii) of China's Accession Protocol to the WTO impacts the surviving parts of Section 15 of the Protocol, and whether Mexico has changed its treatment of Chinese imports following the expiry of Section 15(a)(ii) after 12 December 2016
Abstract:
Multiple scholarly works have argued that developing country members of the World Trade Organization (WTO) should enhance their dispute settlement capacity to successfully and cost-effectively navigate the system of the WTO Dispute Settlement Understanding (DSU). It is one thing to be a part of WTO agreements and know the WTO rules, and another to know how to use and take advantage of those agreements and rules in practice. The present investigation seeks to conduct a detailed examination of the latter, with a specific focus on critically examining public-private partnership (PPP) strategies that can enable developing countries to effectively utilize the provisions of the WTO DSU. To achieve this purpose, the article examines how Brazil, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries in Brazil may prompt other developing countries to determine their individual approach towards PPP for the handling of WTO disputes
Abstract:
World Trade Organisation Dispute Settlement Understanding (WTO DSU) is a two-tier mechanism. The first tier is international adjudication and the second tier is domestic handling of trade disputes. Both tiers are interdependent and interconnected. A case that is poorly handled at the domestic level generally stands a relatively lower chance of success at the international level, and hence, the future of WTO litigation is partially predetermined by the manner in which it is handled at the domestic level. Moreover, most of the capacity-related challenges faced by developing countries at the WTO DSU are deeply rooted in the domestic context of these countries, and their solutions can best be found at the domestic level. The present empirical investigation seeks to explore a domestic solution to the capacity-related challenges faced mainly by developing countries, as it examines the model of public-private partnership (PPP). In particular, the article examines how India, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries, along with an analysis of the challenges and potential limitations that such partnerships have faced in India, may prompt other developing countries to review or revise their individual approach towards the future handling of WTO disputes
Abstract:
With the advent of globalization and industrialization, the significance of the WTO DSU, a landmark achievement of the Uruguay Round negotiations, as an international institution of trade dispute governance has expanded tremendously. Given the perception that the 'pendulum' of the DSU is tilted towards developed economies, much to the disadvantage of the developing world, it becomes imperative to devise a strategy within the existing framework to restore the balance. Since the WTO is recognized as an area of public international law while also extending into the sphere of private international law, the approach of public-private partnership, among the other approaches suggested by various experts, can be designed to help developing countries overcome their challenges in using the WTO DSU. This study explores ways in which this partnership can be devised and implemented in the context of developing countries and analyzes the limits developing countries face in implementing this strategy
Resumen:
Las respuestas más comunes al escalamiento percibido del crimen con violencia a través de la mayor parte de América Latina son el aumento del tamaño y los poderes de la policía local y -en la mayoría de los casos- incrementar la participación de las fuerzas armadas para confrontar tanto al crimen común como al organizado. En México el debate se ha visto agudizado por la extensa violencia vinculada a los conflictos entre organizaciones de narcotráfico y entre éstas y las fuerzas de seguridad del gobierno, en las cuales el ejército y la marina han desempeñado papeles importantes. Con base en la World Values Survey y datos del Barómetro de las Américas, examinamos tendencias de la confianza pública en la policía, el sistema judicial y las fuerzas armadas en México entre 1990 y 2010. Aquí preguntamos: ¿Está difundido y generalizado a través de la muestra el apoyo público para emplear a los militares como policías? ¿O existen patrones de apoyo y oposición respecto a la opinión pública? Nuestros hallazgos principales fueron: 1) que las fuerzas armadas clasificaron en primer lugar en relación con la confianza, mientras que la confianza en otras instituciones mexicanas tuvo una tendencia negativa entre 2008 y 2010, además la confianza en los militares aumentó ligeramente; 2) los encuestados respondieron que los militares respetan los derechos humanos más que el promedio y sustancialmente más que la policía o el gobierno en general; 3) el apoyo público para los militares en la lucha contra el crimen es fuerte y está distribuido de manera equitativa a través del espectro ideológico y de los grupos sociodemográficos, y 4) los patrones de apoyo surgen con mayor claridad respecto a percepciones, actitudes y juicios de desempeño. A modo de conclusión consideramos algunas de las implicaciones políticas y de política de nuestros hallazgos.
Abstract:
Typical responses to the perceived escalation of violent crime throughout most of Latin America are to increase the size and powers of the regular police and -in most cases- to expand the involvement by the armed forces to confront both common and organized crime. Participation by the armed forces in domestic policing, in turn, has sparked debates in several countries about the serious risks incurred, especially with respect to human rights violations. In Mexico the debate is sharpened by the extensive violence linked to conflicts among drug-trafficking organizations and between these and the government's security forces, in which the Army and Navy have played leading roles. Using World Values Survey and Americas Barometer data, we examine trends in public confidence in the police, justice system, and armed forces in Mexico over 1990-2010. Using Vanderbilt University's 2010 LAPOP survey we compare levels of trust in various social, political, and government actors, locating Mexico in the broader Latin America context. Here we ask: Is public support for using the military as police widespread and generalized across the sample? Or are there patterns of support and opposition with respect to public opinion? Our main findings are that: 1) the armed forces rank at the top regarding trust, and -while trust in other Mexican institutions tended to decline in 2008-2010- trust in the military increased slightly; 2) respondents indicate that the military respects human rights more than the average and substantially more than the police or government generally; 3) public support for the military in fighting crime is strong and distributed evenly across the ideological spectrum and across socio-demographic groups, and 4) patterns of support emerge more clearly with respect to perceptions, attitudes, and performance judgments. By way of conclusion we consider some of the political and policy implications of our findings
Abstract:
We study a class of boundedly rational choice functions which operate as follows. The decision maker uses two criteria in two stages to make a choice. First, she shortlists the top two alternatives, i.e. two finalists, according to one criterion. Next, she chooses the winner in this binary shortlist using the second criterion. The criteria are linear orders that rank the alternatives. Only the winner is observable. We study the behavior exhibited by this choice procedure and provide an axiomatic characterization of it. We leave as an open question the characterization of a generalization to larger shortlists
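The two-stage procedure described above lends itself to a compact illustration. The following is a minimal sketch under our own modeling assumptions (the function name and the use of key functions to encode the two linear orders are illustrative choices, not the paper's notation):

```python
def two_stage_choice(alternatives, criterion1, criterion2):
    """Boundedly rational two-stage shortlist choice:
    1) shortlist the top two alternatives (the finalists) under criterion1;
    2) return the criterion2-maximal finalist as the winner.

    Each criterion is a key function encoding a linear order
    (a higher key means a better-ranked alternative).
    """
    # Stage 1: keep the two best alternatives according to criterion1.
    finalists = sorted(alternatives, key=criterion1, reverse=True)[:2]
    # Stage 2: the second criterion picks the winner within the shortlist.
    return max(finalists, key=criterion2)
```

For example, if criterion1 ranks a > b > c and criterion2 ranks c > b > a, the winner is b: the criterion2-best alternative c never reaches the shortlist, and only the winner b is observable to an outside analyst — the feature that makes the procedure boundedly rational.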
Abstract:
In this study, we analyzed students' understanding of a complex calculus graphing problem. Students were asked to sketch the graph of a function, given its analytic properties (1st and 2nd derivatives, limits, and continuity) on specific intervals of the domain. The triad of schema development in the context of APOS theory was utilized to study students' responses. Two dimensions of understanding emerged, 1 involving properties and the other involving intervals. A student's coordination of the 2 dimensions is referred to as that student's overall calculus graphing schema. Additionally, a number of conceptual problems were consistently demonstrated by students throughout the study, and these difficulties are discussed in some detail
Abstract:
Array-based comparative genomic hybridization (aCGH) is a high-resolution, high-throughput technique for studying the genetic basis of cancer. The resulting data consist of log fluorescence ratios as a function of the genomic DNA location and provide a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimating the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. We propose a hierarchical Bayesian random segmentation approach for modeling aCGH data that uses information across arrays from a common population to yield segments of shared copy number changes. These changes characterize the underlying population and allow us to compare different population aCGH profiles to assess which regions of the genome have differential alterations. Our method, which we term Bayesian detection of shared aberrations in aCGH (BDSAScgh), is based on a unified Bayesian hierarchical model that allows us to obtain probabilities of alteration states as well as probabilities of differential alterations that correspond to local false discovery rates for both single and multiple groups. We evaluate the operating characteristics of our method via simulations and an application using a lung cancer aCGH data set. This article has supplementary material online
Abstract:
The quadratic and linear cash flow dispersion measures M2 and Ñ are two immunization risk measures designed to build immunized bond portfolios. This paper generalizes these two measures by showing that any dispersion measure is an immunization risk measure and therefore, it sets up a tool to be used in empirical testing. Each new measure is derived from a different set of shocks (changes on the term structure of interest rates) and depends on the corresponding subset of worst shocks. Consequently, a criterion for choosing appropriate immunization risk measures is to take those developed from the most reasonable sets of shocks and the associated subset of worst shocks and then select those that work best empirically. Adopting this approach, this paper then explores both numerical examples and a short empirical study on the Spanish Bond Market in the mid-1990s to show that measures between linear and quadratic are the most appropriate, and amongst them, the linear measure has the best properties. This confirms previous studies on US and Canadian markets that maturity-constrained-duration-matched portfolios also have good empirical behavior
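As a brief illustration of the kind of quantity involved: the quadratic dispersion measure M2 is conventionally the present-value-weighted second moment of a bond's payment times around its duration. The sketch below computes it under assumptions of our own (a flat, continuously compounded yield; the function name is illustrative), not the paper's exact setup:

```python
import math

def duration_and_m2(cashflows, y):
    """Price, duration D, and quadratic cash flow dispersion M2
    of a default-free bond.

    cashflows: iterable of (t, cf) pairs (payment time in years, amount)
    y: flat yield, continuously compounded (an illustrative assumption)

    M2 = sum_i w_i * (t_i - D)^2, where w_i is the present-value weight
    of the i-th cash flow; it measures how dispersed the payments are
    around the duration date.
    """
    pv = [(t, cf * math.exp(-y * t)) for t, cf in cashflows]
    price = sum(v for _, v in pv)
    weights = [(t, v / price) for t, v in pv]
    D = sum(t * w for t, w in weights)               # duration
    M2 = sum(w * (t - D) ** 2 for t, w in weights)   # quadratic dispersion
    return price, D, M2
```

A zero-coupon bond has M2 = 0 (all cash flow arrives exactly at the duration date), which is why duration-matched portfolios built from dispersed cash flows carry the residual immunization risk that dispersion measures of this family are designed to control.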
Abstract:
This paper presents a condition equivalent to the existence of a Riskless Shadow Asset that guarantees a minimum return when the asset prices are convex functions of interest rates or other state variables. We apply this lemma to immunize default-free and option-free coupon bonds and reach three main conclusions. First, we give a solution to an old puzzle: why do simple duration matching portfolios work well in empirical studies of immunization even though they are derived in a model inconsistent with equilibrium and shifts on the term structure of interest rates are not parallel, as assumed? Second, we establish a clear distinction between the concepts of immunized and maxmin portfolios. Third, we develop a framework that includes the main results of this literature as special cases. Next, we present a new strategy of immunization that consists in matching duration and minimizing a new linear dispersion measure of immunization risk
Abstract:
Given modal logics L1 and L2, their lexicographic product L1 x L2 is a new logic whose frames are the Cartesian products of an L1-frame and an L2-frame, but with the new accessibility relations reminiscent of a lexicographic ordering. This article considers the lexicographic products of several modal logics with linear temporal logic (LTL) based on "next" and "always in the future". We provide axiomatizations for logics of the form L x LTL and define cover-simple classes of frames; we then prove that, under fairly general conditions, our axiomatizations are sound and complete whenever the class of L-frames is cover-simple. Finally, we prove completeness for several concrete logics of the form L x LTL
Abstract:
Classic institutionalism claims that even authoritarian and non-democratic regimes would prefer institutions where all members could make advantageous transactions. Thus, structural reform geared towards preventing and combating corruption should be largely preferred by all actors in any given setting. The puzzle, then, is why governments decide to maintain, or even create, inefficient institutions. A perfect example of this paradox is the establishment of the National Anti-corruption System (SNA) in Mexico. This is a watchdog institution, created to fight corruption, which is itself often portrayed as highly corrupted and inefficient. The limited scope of anti-corruption reforms in the country is explained by the institutional setting in which these reforms take place, where political behaviour is highly determined by embedded institutions that privilege centralized decision-making. Mexican reformers have historically privileged those reforms that increase their gains and power, and delayed and boycotted those that negatively affect them. Since anti-corruption reforms adversely affected rent extraction and diminished the power of a set of political actors, the bureaucrats who benefited from the current institutional setting embraced limited reforms or even boycotted them. Thus, to understand failed reforms it is necessary to understand the deep-rooted political institutions that shape the behaviour of political actors. This analysis is important for other modern democracies where powerful bureaucratic minorities are often able to block changes that would be costly to their interests, even if the changes would increase net gains for the country as a whole
Abstract:
In this paper we study the problem of Hamiltonization of nonholonomic systems from a geometric point of view. We use gauge transformations by 2-forms (in the sense of Ševera and Weinstein in Progr Theoret Phys Suppl 144:145–154 2001) to construct different almost Poisson structures describing the same nonholonomic system. In the presence of symmetries, we observe that these almost Poisson structures, although gauge related, may have fundamentally different properties after reduction, and that brackets that Hamiltonize the problem may be found within this family. We illustrate this framework with the example of rigid bodies with generalized rolling constraints, including the Chaplygin sphere rolling problem. We also see through these examples how twisted Poisson brackets appear naturally in nonholonomic mechanics
Abstract:
We propose a novel method for reliably inducing stress in drivers for the purpose of generating real-world participant data for machine learning, using both scripted in-vehicle stressor events and unscripted on-road stressors such as pedestrians and construction zones. On-road drives took place in a vehicle outfitted with an experimental display that led drivers to believe they had prematurely run out of charge on an isolated road. We describe the elicitation method, course design, instrumentation, data collection procedure and the post-hoc labeling of unplanned road events to illustrate how rich data about a variety of stress-related events can be elicited from study participants on-road. We validate this method with data including psychophysiological measurements, video, voice, and GPS data from (N=20) participants. Results from algorithmic psychophysiological stress analysis were validated using participant self-reports. Results of stress elicitation analysis show that our method elicited a stress state in 89% of participants
Abstract:
This paper examines the effectiveness of media literacy interventions in countering misinformation among in-transit migrants in Mexico and Colombia. We conducted experiments to assess whether well-known strategies for fighting misinformation are effective for this understudied yet particularly vulnerable population. We evaluate the impact of digital media literacy tips on migrants' ability to identify false information and their intentions to share migration-related content. We find that these interventions can effectively decrease migrants' intentions to share misinformation. We also find suggestive evidence that asking participants to consider accuracy may inadvertently influence their sharing behavior by acting as a behavioral nudge, rather than simply eliciting their sharing intentions. Additionally, the interventions reduced trust in social media as an information source while maintaining trust in official channels. The findings suggest that incorporating digital literacy tips into official websites could be a cost-effective strategy to reduce misinformation circulation among migrant populations
Abstract:
Do economic incentives explain forced displacement during conflict? This paper examines this question in Colombia, which has had one of the world's most acute situations of internal displacement associated with conflict. Using data on the price of bananas along with data on historical levels of production, I find that price increases generate more forced displacement in municipalities more suitable to produce this good. However, I also show that this effect is concentrated in the period in which paramilitary power and operations reached an all-time peak. Additional evidence shows that land concentration among the rich has increased substantially in districts that produce these goods. These findings are consistent with extensive qualitative evidence that documents the link between economic interests and local political actors who collude with illegal armed groups to forcibly displace locals and appropriate their land, especially in areas with more informal land tenure systems, like those where bananas are grown more frequently
Abstract:
This study was undertaken to explore pre-service teachers' understanding of injections and surjections. There were 54 pre-service teachers specialising in the teaching of Mathematics in Grades 10–12 curriculum who participated in the project. The concepts were covered as part of a real analysis course at a South African university. Questionnaires based on an initial genetic decomposition of the concepts of surjective and injective functions were administered to the 54 participants. Their written responses, which were used to identify the mental constructions of these concepts, were analysed using an APOS (action-process-object-schema) framework and five interviews were carried out. The findings indicated that most participants constructed only Action conceptions of bijection and none demonstrated the construction of an Object conception of this concept. Difficulties in understanding can be related to students' lack of construction of the concepts of functions and sets that are a prerequisite to working with bijections
Resumen:
El autor intenta deducir una teoría poética del escritor mexicano partiendo de su obra, que divide en tres partes; enseguida, tras un interludio –“la noche obscura del poeta”– trata la función del recuerdo como estímulo literario. Y termina con un apunte hacia el popularismo artístico de Alfonso Reyes
Abstract:
The author attempts to formulate a poetic theory of this Mexican writer based on his works, which is divided in three parts; after an interlude –“the poet’s darkest night”– he studies how remembrance works as a literary stimulus and concludes commenting on Alfonso Reyes’ artistic popularism
Abstract:
The cognitive domains of a communication scheme for learning physics are related to a framework based on epistemology, and the planning of an introductory calculus textbook in classical mechanics is shown as an example of application
Abstract:
Uniform inf-sup conditions are of fundamental importance for the finite element solution of problems in incompressible fluid mechanics, such as the Stokes and Navier–Stokes equations. In this work we prove a uniform inf-sup condition for the lowest-order Taylor–Hood pairs Q2×Q1 and P2×P1 on a family of affine anisotropic meshes. These meshes may contain refined edge and corner patches. We identify necessary hypotheses for edge patches to allow uniform stability and sufficient conditions for corner patches. For the proof, we generalize Verfürth’s trick and recent results by some of the authors. Numerical evidence confirms the theoretical results
Abstract:
In this work we present and analyze new inf-sup stable, and stabilised, finite element methods for the Oseen equation in anisotropic quadrilateral meshes. The meshes are formed of closed parallelograms, and the analysis is restricted to two space dimensions. Starting with the lowest-order Q1²×P0 pair, we first identify the pressure components that make this finite element pair non-inf-sup stable, especially with respect to the aspect ratio. We then propose a way to penalise them, both strongly, by directly removing them from the space, and weakly, by adding a stabilisation term based on jumps of the pressure across selected edges. Concerning the velocity stabilisation, we propose an enhanced grad-div term. Stability and optimal a priori error estimates are given, and the results are confirmed numerically
Abstract:
In his landmark article, Richard Morris (1981) introduced a set of rat experiments intended “to demonstrate that rats can rapidly learn to locate an object that they can never see, hear, or smell provided it remains in a fixed spatial location relative to distal room cues” (p. 239). These experimental studies have greatly impacted our understanding of rat spatial cognition. In this article, we address a spatial cognition model primarily based on hippocampus place cell computation where we extend the prior Barrera–Weitzenfeld model (2008) intended to allow navigation in mazes containing corridors. The current work extends beyond the limitations of corridors to enable navigation in open arenas where a rat may move in any direction at any time. The extended work reproduces Morris’s rat experiments through virtual rats that search for a hidden platform using visual cues in a circular open maze analogous to the Morris water maze experiments. We show results with virtual rats comparing them to Morris’s original studies with rats
Abstract:
The study of behavioral and neurophysiological mechanisms involved in rat spatial cognition provides a basis for the development of computational models and robotic experimentation of goal-oriented learning tasks. These models and robotics architectures offer neurobiologists and neuroethologists alternative platforms to study, analyze and predict spatial cognition based behaviors. In this paper we present a comparative analysis of spatial cognition in rats and robots by contrasting similar goal-oriented tasks in a cyclical maze, where studies in rat spatial cognition are used to develop computational system-level models of hippocampus and striatum integrating kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. During training, Hebbian learning and reinforcement learning, in the form of an Actor-Critic architecture, enable robots to learn the optimal route leading to a goal from a designated fixed location in the maze. During testing, robots exploit maximum expectations of reward stored within the previously acquired cognitive map to reach the goal from different starting positions. A detailed discussion of comparative experiments in rats and robots is presented contrasting learning latency while characterizing behavioral procedures during navigation such as errors associated with the selection of a non-optimal route, body rotations, normalized length of the traveled path, and hesitations. Additionally, we present results from evaluating neural activity in rats through detection of the immediate early gene Arc to verify the engagement of hippocampus and striatum in information processing while solving the cyclical maze task, just as robots use our corresponding models of those neural structures
Abstract:
Anticipation of sensory consequences of actions is critical for the predictive control of movement that explains most of our sensory-motor behaviors. Numerous neuroscientific studies in humans provide evidence of anticipatory mechanisms based on internal models. Several robotic implementations of predictive behaviors have been inspired by those biological mechanisms in order to achieve adaptive agents. This paper provides an overview of such neuroscientific and robotic evidence; a high-level architecture of sensory-motor coordination based on anticipatory visual perception and internal models is then introduced; and finally, the paper concludes by discussing the relevance of the proposed architecture within the context of current research in humanoid robotics
Abstract:
The study of spatial memory and learning in rats has inspired the development of multiple computational models that have led to novel robotics architectures. Evaluation of computational models and the resulting robotic architectures is usually carried out at the behavioral level by evaluating experimental tasks similar to those performed with rats. While multiple metrics are defined to evaluate behavioral performance in rats, metrics for robot task evaluation are limited mostly to success/failure and time to complete the task. In this paper we present a set of metrics taken from rat spatial memory and learning evaluation to further analyze performance in robots. The proposed set of metrics, learning latency and the ability to navigate the minimal distance to the goal, should offer the robotics community additional tools to assess the performance and validity of models in biologically-inspired robotic architectures at the task performance level. We also provide a comparative evaluation using these metrics between similar spatial tasks performed by rat and robot in comparable environments
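The two metrics named above can be made concrete with simple definitions. The formulas below are our reading of "learning latency" and "normalized length of the traveled path", not the paper's exact formulations.

```python
import math

# Illustrative rat-inspired robot metrics: learning latency (first trial from
# which a run of consecutive trials all succeed) and normalized path length
# (actual path length / straight-line start-to-goal distance).
def path_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def normalized_path_length(points, goal):
    minimal = math.dist(points[0], goal)
    return path_length(points) / minimal

def learning_latency(success_per_trial, window=3):
    # first trial index from which `window` consecutive trials all succeed
    for i in range(len(success_per_trial) - window + 1):
        if all(success_per_trial[i:i + window]):
            return i
    return None

route = [(0, 0), (0, 3), (4, 3)]              # an L-shaped path to the goal
print(normalized_path_length(route, (4, 3)))  # 7 / 5 = 1.4
print(learning_latency([False, False, True, False, True, True, True]))  # 4
```

A ratio of 1.0 means the robot took the minimal route; higher values quantify detours, matching the rat-behavior metrics the abstract proposes to port.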
Abstract:
In this paper we present a comparative behavioral analysis of spatial cognition in rats and robots by contrasting a similar goal-oriented task in a cyclical maze, where a computational system-level model of rat spatial cognition is used integrating kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. A discussion of experiments in rats and robots is presented contrasting learning latency while characterizing behavioral procedures such as body rotations during navigation and selection of routes to the goal
Abstract:
This paper presents a robot architecture with spatial cognition and navigation capabilities that captures some properties of the rat brain structures involved in learning and memory. This architecture relies on the integration of kinesthetic and visual information derived from artificial landmarks, as well as on Hebbian learning, to build a holistic topological-metric spatial representation during exploration, and employs reinforcement learning by means of an Actor-Critic architecture to enable learning and unlearning of goal locations. From a robotics perspective, this work can be placed in the gap between mapping and map exploitation currently existent in the SLAM literature. The exploitation of the cognitive map allows the robot to recognize places already visited and to find a target from any given departure location, thus enabling goal-directed navigation. From a biological perspective, this study aims to initiate a contribution to experimental neuroscience by providing the system as a tool to test with robots hypotheses concerned with the underlying mechanisms of rats' spatial cognition. Results from different experiments with a mobile AIBO robot inspired by classical spatial tasks with rats are described, and a comparative analysis is provided in reference to the reversal task devised by O'Keefe in 1983
Abstract:
A computational model of spatial cognition in rats is used to control an autonomous mobile robot while solving a spatial task within a cyclic maze. In this paper we evaluate the robot's behavior in terms of place recognition in multiple directions and goal-oriented navigation against the results derived from experimenting with laboratory rats solving the same spatial task in a similar maze. We provide a general description of the bio-inspired model, and a comparative behavioral analysis between rats and robot
Abstract:
In this paper we present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe place representation and recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model by training a robot to find the goal in a maze starting from a fixed location, and by testing it to reach the same target from new starting locations
Abstract:
We present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe target learning and place recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model in different maze configurations by training a robot to find the goal starting from a fixed location, and by testing it to reach the same target from new starting locations
Abstract:
In this paper we present a model designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations dynamically moved in different environments, to build a topological map, and to return home autonomously. We describe robot experimentation results from our tests in a T-maze, an 8-arm radial maze, and an extended maze
Abstract:
In this paper we present a model composed of layers of neurons designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations in different mazes and to return home autonomously by building a topological map of the environment. We describe robotic experimentation results from our tests in a T-maze, an 8-arm radial maze, and a 3-T shaped maze
Abstract:
In this paper we present a biologically-inspired robotic exploration and navigation model based on the neurophysiology of the rat hippocampus that allows a robot to find goals and return home autonomously by building a topological map of the environment. We present simulation and experimentation results from a T-maze testbed and discuss future research
Abstract:
The time-dependent restricted (n + 1)-body problem concerns the study of a massless body (satellite) under the influence of the gravitational field generated by n primary bodies following a periodic solution of the n-body problem. We prove that the satellite has periodic solutions close to the large-amplitude circular orbits of the Kepler problem (comet solutions), and in the case that the primaries are in a relative equilibrium, close to small-amplitude circular orbits near a primary body (moon solutions). The comet and moon solutions are constructed with the application of a Lyapunov–Schmidt reduction to the action functional. In addition, using reversibility techniques, we compute numerically the comet and moon solutions for the case of four primaries following the super-eight choreography
Abstract:
We aim to associate a cytokine profile obtained through data mining with the clinical characteristics of subgroups of patients with advanced non-small-cell lung cancer (NSCLC). Our results provide evidence that complex cytokine networks may be used to identify patient subgroups with different prognoses in advanced NSCLC that could serve as potential biomarkers for best treatment choices
Resumen:
La neumonitis por hipersensibilidad (NH) es una enfermedad inflamatoria difusa del parénquima pulmonar provocada por la inhalación repetida de partículas orgánicas. Las células dendríticas y sus precursores desempeñan un papel importante no sólo como células presentadoras de antígenos, sino también como parte de una red de procesos inmunorregulatorios. Dependiendo de su linaje y estado de diferenciación y activación, las células dendríticas pueden promover una intensa respuesta inmunológica por parte de los linfocitos T o bien producir un estado de anergia. Objetivo: Caracterizar fenotípicamente las células dendríticas de origen mieloide (CDm) y plasmacitoide (CDp) presentes en el lavado bronquioalveolar (LBA) de pacientes con NH en etapas subaguda o crónica, y compararlas con lo observado en pacientes con fibrosis pulmonar idiopática (FPI) y sujetos control
Abstract:
Hypersensitivity pneumonitis (HP) is a diffuse inflammatory disease of lung parenchyma resulting from repetitive inhalation of organic particles. Dendritic cells and their precursors play an important role not only as antigen presenting cells but also as part of an immunoregulatory network. Depending on their lineage and stage of differentiation and activation, dendritic cells can promote a strong T-lymphocyte-mediated immunological response or an anergy state. Objective: To phenotypically characterize myeloid (mDCs) and plasmacytoid (pDCs) dendritic cells recovered in bronchoalveolar lavage from patients with subacute or chronic HP, and to compare the results with those obtained in patients with idiopathic pulmonary fibrosis (IPF) and healthy controls. Methods: BAL cells from 8 patients with subacute HP, 8 with chronic HP, 8 with IPF and 4 healthy subjects were used. The phenotypes of dendritic cell subpopulations were characterized by means of flow cytometry
Abstract:
The monitoring of patients with dementia who receive comprehensive care in day centers allows formal caregivers to make better decisions and provide better care to patients. For instance, cognitive and physical therapies can be tailored based on the current stage of disease progression. In the context of day centers of the Mexican Federation of Alzheimer, this work aims to design and evaluate Alzaid, a technological platform for assisting formal caregivers in monitoring patients with dementia. Alzaid was devised using a participatory design methodology that consisted of eliciting and validating requirements from 22 and 9 participants, respectively, which were unified to guide the construction of a high-fidelity prototype evaluated by 14 participants. The participants were formal caregivers, medical staff, and management. This work contributes a high-fidelity prototype of a technological platform for assisting formal caregivers in monitoring patients with dementia, considering the restrictions and requirements of four Mexican day centers. In general, the participants perceived the prototype as quite likely to be useful, usable, and relevant in the job of monitoring patients with dementia (p-value < 0.05). By designing and evaluating Alzaid, which unifies patient-monitoring requirements from four day centers, this work is a first effort towards a standard monitoring process for patients with dementia in the context of the Mexican Federation of Alzheimer
Abstract:
Many Americans continued some forms of social distancing after the pandemic. This phenomenon is stronger among older persons, less educated individuals, and those who interact daily with persons at high risk from infectious diseases. Regression models fit to individual-level data suggest that social distancing lowered labor force participation by 2.4 percentage points in 2022, 1.2 points on an earnings-weighted basis. When combined with simple equilibrium models, our results imply that the social distancing drag on participation reduced US output by $205 billion in 2022, shrank the college wage premium by 2.1 percentage points, and modestly steepened the cross-sectional age-wage profile
Abstract:
This paper studies how biases in managerial beliefs affect managerial decisions, firm performance, and the macroeconomy. Using a new survey of US managers I establish three facts. (1) Managers are not overoptimistic: sales growth forecasts on average do not exceed realizations. (2) Managers are overprecise: they underestimate future sales growth volatility. (3) Managers overextrapolate: their forecasts are too optimistic after positive shocks and too pessimistic after negative shocks. To quantify the implications, I estimate a dynamic general equilibrium model in which managers of heterogeneous firms use a subjective beliefs process to make forward-looking hiring decisions. Overprecision and overextrapolation lead managers to overreact to firm-level shocks and overspend on adjustment costs, destroying 2.1% to 6.8% of the typical firm's value. Pervasive overreaction leads to excess volatility and reallocation, lowering consumer welfare by 0.5% to 2.3% relative to the rational-expectations equilibrium. These findings suggest overreaction could amplify asset-price and business-cycle fluctuations
Abstract:
As knowledge workers have shifted to hybrid, we're not seeing an equivalent drop in demand for office space. New survey data suggests cuts in office space of 1% to 2% on average. There are three trends driving this: 1) Workers are uncomfortable with density, and the only surefire way to reduce density is to cut person days on site without cutting square footage; 2) Most employees want to work from home on Mondays and Fridays, which means the shift to hybrid affords only meager opportunities to economize on office space; and 3) Employers are reshaping office space to become more inviting social spaces that encourage face-to-face collaboration, creativity, and serendipitous interactions
Abstract:
Drawing on data from the firm-level Survey of Business Uncertainty, we present three pieces of evidence that COVID-19 is a persistent reallocation shock. First, rates of excess job and sales reallocation over 24-month periods (looking back 12 months and ahead 12 months) have risen sharply since the pandemic struck, especially for sales. Second, as of December 2020, firm-level forecasts of sales revenue growth over the next year imply a continuation of recent changes, not a reversal. Third, COVID-19 shifted relative employment growth trends in favor of industries with a high capacity for employees to work from home
Abstract:
Employees want to work from home 2.5 days a week on average, according to a monthly survey of 5,000 Americans. Desires to work from home and cut commuting have strengthened as the pandemic has lingered, and many have become increasingly comfortable with remote interactions. The rapid spread of the Delta variant is also undercutting the drive for a full-time return to the office any time soon. Tight labor markets are a further challenge for firms that want a full-time return
Abstract:
About one-fifth of paid workdays will be supplied from home in the post-pandemic economy, and more than one-fourth on an earnings-weighted basis. In view of this projection, we consider some implications of home internet access quality, exploiting data from the new Survey of Working Arrangements and Attitudes. Moving to high-quality, fully reliable home internet service for all Americans ("universal access") would raise earnings-weighted labor productivity by an estimated 1.1% in the coming years. The implied output gains are $160 billion per year, or $4 trillion when capitalized at a 4% rate. Estimated flow output payoffs to universal access are nearly three times as large in economic disasters like the COVID-19 pandemic. Our survey data also say that subjective well-being was higher during the pandemic for people with better home internet service conditional on age, employment status, earnings, working arrangements, and other controls. In short, universal access would raise productivity, and it would promote greater economic and social resilience during future disasters that inhibit travel and in-person interactions
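The capitalization step in the abstract is standard perpetuity arithmetic: a constant annual flow divided by the discount rate. A quick check of the stated figures:

```python
# Perpetuity arithmetic behind the abstract's figures: a $160 billion annual
# output gain capitalized at a 4% rate yields the stated $4 trillion.
annual_flow_billions = 160.0
discount_rate = 0.04
capitalized_billions = annual_flow_billions / discount_rate
print(capitalized_billions)  # 4000.0 billion, i.e., roughly $4 trillion
```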
Abstract:
We develop several pieces of evidence about the reallocative effects of the COVID-19 shock on impact and over time. First, the shock caused three to four new hires for every ten layoffs from March 1 to mid-May 2020. Second, we project that one-third or more of the layoffs during this period are permanent in the sense that job losers won’t return to their old jobs at their previous employers. Third, firm-level forecasts at a one-year horizon imply rates of expected job and sales reallocation that are two to five times larger from April to June 2020 than before the pandemic. Fourth, full days working from home will triple from 5 percent of all workdays in 2019 to more than 15 percent after the pandemic ends. We also document pandemic-induced job gains at many firms and a sharp rise in cross-firm equity return dispersion in reaction to the pandemic. After developing the evidence, we consider implications for the economic outlook and for policy. Unemployment benefit levels that exceed worker earnings, policies that subsidize employee retention irrespective of the employer’s commercial outlook, and barriers to worker mobility and business formation impede reallocation responses to the COVID-19 shock
Abstract:
Economic uncertainty jumped in reaction to the COVID-19 pandemic, with most indicators reaching their highest values on record. Alongside this rise in uncertainty has been an increase in downside tail-risk reported by firms. This uncertainty has played three roles: first, amplifying the drop in economic activity early in the pandemic; second, slowing the subsequent recovery; and finally, reducing the impact of policy, as uncertainty tends to make firms more cautious in responding to changes in business conditions. As such, the incredibly high levels of uncertainty are a major impediment to a rapid recovery. We also discuss three other factors exacerbating the situation: the need for massive reallocation as COVID-19 permanently reshapes the economy; the rise in working from home, which is impeding firm hiring; and the ongoing medical uncertainty over the extent and duration of the pandemic. Collectively, these conditions are generating powerful headwinds against a rapid recovery from the COVID-19 recession
Abstract:
The Dirichlet process mixture model and more general mixtures based on discrete random probability measures have been shown to be flexible and accurate models for density estimation and clustering. The goal of this paper is to illustrate the use of normalized random measures as mixing measures in nonparametric hierarchical mixture models and point out how possible computational issues can be successfully addressed. To this end, we first provide a concise and accessible introduction to normalized random measures with independent increments. Then, we explain in detail a particular way of sampling from the posterior using the Ferguson–Klass representation. We develop a thorough comparative analysis for location-scale mixtures that considers a set of alternatives for the mixture kernel and for the nonparametric component. Simulation results indicate that normalized random measure mixtures potentially represent a valid default choice for density estimation problems. As a byproduct of this study an R package to fit these models was produced and is available in the Comprehensive R Archive Network (CRAN)
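The Dirichlet process is the best-known special case of the normalized random measures the abstract discusses, and its draws can be sketched by stick-breaking. The snippet below is an illustrative truncated draw used as the mixing measure of a location-scale normal mixture; it is not the Ferguson–Klass sampler the paper explains, and all hyperparameters are arbitrary.

```python
import numpy as np

# Truncated stick-breaking draw of a Dirichlet process (the DP special case of
# a normalized random measure), used to mix a location-scale normal kernel.
rng = np.random.default_rng(1)
alpha, trunc = 1.0, 100

v = rng.beta(1.0, alpha, size=trunc)                     # stick fractions
w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))  # mixture weights
locs = rng.normal(0.0, 3.0, size=trunc)                  # atom locations
scales = rng.gamma(2.0, 0.5, size=trunc)                 # atom scales

# draw observations from the resulting random mixture density
comp = rng.choice(trunc, p=w / w.sum(), size=1000)
x = rng.normal(locs[comp], scales[comp])
print(round(w.sum(), 6))  # weights sum to ~1 at this truncation level
```

Replacing the beta stick fractions with increments of other normalized random measures (e.g., normalized generalized gamma processes) changes the clustering behavior, which is the comparison the paper carries out systematically.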
Resumen:
Objetivo. Caracterizar el impacto de la infección por SARS-CoV-2 en trabajadores de una gran empresa esencial del Área Metropolitana de la Ciudad de México, utilizando prevalencia puntual de infección aguda, prevalencia puntual de infección pasada a través de anticuerpos séricos e incapacidades temporales para el trabajo por enfermedad respiratoria (ITT-ER). Material y métodos. Cuatro encuestas aleatorias, tres durante 2020, antes de la disponibilidad de vacunas, y una en 2021, después de ésta. Desenlaces: prevalencia puntual de infección aguda a través de pruebas de PCR (polymerase chain reaction, por sus siglas en inglés) en saliva, prevalencia puntual de infección pasada a través de anticuerpos séricos contra Covid-19 (niveles de anticuerpos S/N), ITT-ER y prevalencia de síntomas durante los seis meses anteriores. Resultados. La prevalencia de casos positivos para SARS-CoV-2 fue de 1.29-4.88% y, en promedio, la cuarta parte de los participantes presentó anticuerpos prevacunación; más de la mitad de los participantes con ITT-ER tenían anticuerpos. Las posibilidades de tener anticuerpos fueron 6-7 veces mayores entre aquellos con ITT-ER. Conclusiones. Los altos niveles de anticuerpos contra el Covid-19 en la población de estudio reflejan que la cobertura es alta entre los trabajadores de esta industria. Las ITT son una herramienta útil para rastrear epidemias en el lugar de trabajo
Abstract:
Objective. To characterize the impact of SARS-CoV-2 infection in workers from an essential large-scale company in the Greater Mexico City Metropolitan Area using point prevalence of acute infection, point prevalence of past infection through serum antibodies, and respiratory disease short-term disability claims (RD-STDC). Materials and methods. Four randomized surveys, three during 2020, before vaccines' availability, and one after (December 2021). Outcomes: point prevalence of acute infection through saliva PCR (polymerase chain reaction) testing, point prevalence of past infection through serum antibodies against Covid-19, RD-STDC, and prevalence of symptoms during the previous six months. Results. Prevalence of SARS-CoV-2 cases was 1.29-4.88%; on average, a quarter of participants were seropositive pre-vaccination; over half of participants with an RD-STDC had antibodies. The odds of having antibodies were 6-7 times higher among workers with an RD-STDC. Conclusions. The high antibody levels against Covid-19 in this study population reflect that coverage is high among workers in this industry. STDCs are a useful tool to track workplace epidemics
Abstract:
Chaotic properties in the dynamics of Toeplitz operators on the Hardy–Hilbert space H2(D) are studied. Based on previous results of Shkarin and Baranov and Lishanskii, a characterization of different versions of chaos formulated in terms of the coefficients of the symbol for the tridiagonal case are obtained. In addition, easily computable sufficient conditions that depend on the coefficients are found for the chaotic behavior of certain Toeplitz operators
Resumen:
Este artículo realiza una revisión sistemática de treinta y seis taxonomías de riesgos asociados a la Inteligencia Artificial (IA) que se han realizado desde el 2010 hasta la fecha, utilizando como metodología el protocolo Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). El estudio se basa en la importancia de estas para estructurar la investigación de los riesgos y para distinguir y definir amenazas. Ello permite identificar las cuestiones que generan mayor preocupación y, por lo tanto, requieren mejor gobernanza. La investigación permite extraer tres conclusiones. En primer lugar, se observa que la mayoría de los estudios se centran en amenazas como la privacidad y la desinformación, posiblemente debido a su concreción y evidencia empírica existente. Por el contrario, amenazas como los ciberataques y el desarrollo de tecnologías estratégicas son menos citadas, a pesar de su creciente relevancia. En segundo lugar, encontramos que los artículos enfocados en el origen del riesgo tienden a considerar más frecuentemente riesgos extremos en comparación con los trabajos que abordan las consecuencias. Esto sugiere que la literatura ha sabido identificar las potenciales causas de una catástrofe, pero no las formas concretas en las que esta se puede materializar en la práctica. Finalmente, existe una cierta división entre aquellos artículos que tratan daños tangibles presentes y aquellos que cubren daños potenciales futuros. No obstante, varias amenazas se tratan en la mayoría de los artículos de todo el espectro indicando que existen puntos de unión entre clústeres
Abstract:
This article performs a systematic review of thirty-six taxonomies of risks associated with Artificial Intelligence (AI) conducted from 2010 to date, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol as a methodology. The study is motivated by the importance of such taxonomies for structuring risk research and for distinguishing and defining threats. This makes it possible to identify the issues that are of greatest concern and therefore require better governance. Three conclusions can be drawn from the research. First, most studies focus on threats such as privacy and disinformation, possibly due to their concreteness and existing empirical evidence. In contrast, threats such as cyberattacks and the development of strategic technologies are less cited, despite their increasing relevance. Second, we find that articles focused on the origin of risk tend to consider extreme risks more frequently than papers addressing consequences. This suggests that the literature has been able to identify the potential causes of a catastrophe, but not the concrete ways in which it may materialize in practice. Finally, there is some division between articles that deal with present tangible damage and those that cover potential future damage. Nevertheless, several threats are addressed in the majority of articles across the spectrum, indicating that there are commonalities between clusters
Abstract:
This work studies ultra wideband (UWB) communications over multipath residential indoor channels. We study the relationship between the fading margin and the transmitter–receiver separation distance for both line-of-sight and non-line-of-sight scenarios. Impairments such as small-scale fading as well as large-scale fading are considered. Some implications of the results for UWB indoor network design are discussed
Abstract:
We study a repeated game with payoff externalities and observable actions where two players receive information over time about an underlying payoff-relevant state, and strategically coordinate their actions. Players learn about the true state from private signals, as well as the actions of others. They commonly learn the true state (Cripps et al., 2008), but do not coordinate in every equilibrium. We show that there exist stable equilibria in which players can overcome unfavorable signal realizations and eventually coordinate on the correct action, for any discount factor. For high discount factors, we show that players can in addition achieve efficient payoffs
Abstract:
Energy resulting from an impact is manifested through unwanted damage to objects or persons. New materials made of cellular structures have enhanced energy absorption (EA) capabilities. The hexagonal honeycomb is widely known for its space-filling capacity, structural stability, and high EA potential. Additive manufacturing (AM) technologies have proven useful in a vast range of applications. The evolution of these technologies has been studied continuously, with a focus on improving the mechanical and structural characteristics of three-dimensional (3D)-printed models to create complex quality parts that satisfy design and mechanical requirements. In this study, 3D honeycomb structures of the novel material polyethylene terephthalate glycol (PET-G) were fabricated by the fused deposition modeling (FDM) method with different infill density values (30%, 70%, and 100%) and printing orientations (edge, flat, and upright). The effectiveness of the design for EA and the effect of the process parameters of infill density and layer printing orientation were investigated by performing in-plane compression tests, and the set of parameters that produced superior EA was determined by analyzing the area under the curve and the welding between the filament layers in the FDM-printed object. The results showed that the printing parameters implemented in this study considerably affected the mechanical properties of the 3D-printed PET-G honeycomb structure. The structure with the upright printing direction and 100% infill density withstood extended deformation before delamination and fragmentation, and thus showed desirable performance, with a long plateau region in the load-displacement curve and the greatest energy absorption
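The EA comparison in the study rests on the area under the load-displacement curve. With invented data mimicking the elastic rise, long plateau, and densification stages of a honeycomb (not the paper's measurements), the computation is a trapezoidal sum:

```python
import numpy as np

# Energy absorbed in compression = area under the load-displacement curve.
# The data points are illustrative stand-ins for a honeycomb response.
displacement_mm = np.array([0.0, 1.0, 2.0, 6.0, 10.0, 11.0, 12.0])
load_kN = np.array([0.0, 2.0, 2.2, 2.1, 2.3, 4.0, 8.0])

# trapezoidal rule; kN * mm = J
energy_J = float(((load_kN[1:] + load_kN[:-1]) / 2 * np.diff(displacement_mm)).sum())
print(energy_J)  # total absorbed energy in joules
```

A longer plateau at a given load directly enlarges this area, which is why the upright, 100%-infill specimens with extended plateau regions absorb the most energy.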
Abstract:
The question whether a minimum rate of sick pay should be mandated is much debated. We study the effects of this kind of intervention with student subjects in an experimental laboratory setting rich enough to allow for moral hazard, adverse selection, and crowding out of good intentions. Both wages and replacement rates offered by competing employers are reciprocated by workers. However, replacement rates are only reciprocated as long as no minimum level is mandated. Although we observe adverse selection when workers have different exogenous probabilities for being absent from work, this does not lead to a market breakdown. In our experiment, mandating replacement rates actually leads to a higher voluntary provision of replacement rates by employers
Abstract:
Computation of the volume of space required for a robot to execute a sweeping motion from a start to a goal has long been identified as a critical primitive operation in both task and motion planning. However, swept volume computation is particularly challenging for multi-link robots with geometric complexity, e.g., manipulators, due to the non-linear geometry. While earlier work has shown that deep neural networks can approximate the swept volume quantity, a useful parameter in sampling-based planning, general network structures do not lend themselves to outputting geometries. In this paper we train and evaluate the learning of a deep neural network that predicts the swept volume geometry from pairs of robot configurations and outputs discretized voxel grids. We perform this training on a variety of robots from 6 to 16 degrees of freedom. We show that most errors in the prediction of the geometry lie within a distance of 3 voxels from the surface of the true geometry and it is possible to adjust the rates of different error types using a heuristic approach. We also show it is possible to train these networks at varying resolutions by training networks with up to 4x smaller grid resolution with errors remaining close to the boundary of the true swept volume geometry surface
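The voxel-distance evaluation described above can be sketched directly: compare a predicted occupancy grid with the true one and measure each error voxel's distance to the true surface. Everything below (grid sizes, shapes, the brute-force distance computation) is a toy stand-in for the paper's setup.

```python
import numpy as np

# Toy version of the voxel-grid evaluation: distance of each mispredicted
# voxel to the surface of the true swept-volume geometry (brute force).
def surface_voxels(grid):
    pad = np.pad(grid, 1)
    interior = (pad[:-2, 1:-1, 1:-1] & pad[2:, 1:-1, 1:-1] &
                pad[1:-1, :-2, 1:-1] & pad[1:-1, 2:, 1:-1] &
                pad[1:-1, 1:-1, :-2] & pad[1:-1, 1:-1, 2:])
    return np.argwhere(grid & ~interior)   # occupied, not fully surrounded

def error_distances(true_grid, pred_grid):
    surf = surface_voxels(true_grid).astype(float)
    errors = np.argwhere(true_grid != pred_grid).astype(float)
    if errors.size == 0:
        return np.zeros(0)
    # nearest true-surface voxel for each error voxel
    return np.linalg.norm(errors[:, None, :] - surf[None, :, :], axis=2).min(axis=1)

# true geometry: a solid 6x6x6 cube; prediction over-fills one face by 1 voxel
true_grid = np.zeros((12, 12, 12), dtype=bool)
true_grid[3:9, 3:9, 3:9] = True
pred_grid = true_grid.copy()
pred_grid[9, 3:9, 3:9] = True

d = error_distances(true_grid, pred_grid)
print((d <= 3).mean())   # fraction of error voxels within 3 voxels of the surface
```

The "most errors within 3 voxels of the true surface" claim corresponds to this fraction being close to 1 on the evaluation set.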
Abstract:
This article is devoted to the design of robust position-tracking controllers for a perturbed wheeled mobile robot. We address the final objective of pose-regulation in a predefined time, which means that the robot position and orientation must reach desired final values simultaneously in a user-defined time. To do so, we propose the robust tracking of adequate trajectories for position coordinates, enforcing that the robot's heading evolves tangent to the position trajectory and consequently the robot reaches a desired orientation. The robust tracking is achieved by a proportional-integral action or by a super-twisting sliding mode control. The main contribution of this article is a kinematic control approach for pose-regulation of wheeled mobile robots in which the orientation angle is not directly controlled in the closed-loop, which simplifies the structure of the control system with respect to existing approaches. An offline trajectory planning method based on parabolic and cubic curves is proposed and integrated with robust controllers to achieve good accuracy in the final values of position and orientation. The novelty in the trajectory planning is the generation of a set of candidate trajectories and the selection of one of them that favors the correction of the robot's final orientation. Realistic simulations and experiments using a real robot show the good performance of the proposed scheme even in the presence of strong disturbances
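The core planning idea, tracking a position curve whose endpoint tangent equals the desired final heading so orientation is corrected without being controlled directly, can be illustrated with a cubic curve. A cubic Hermite curve stands in here for the paper's parabolic/cubic candidate trajectories; all endpoints and tangents are illustrative.

```python
import numpy as np

# Cubic Hermite position trajectory whose terminal tangent encodes the
# desired final heading; the robot's nonholonomic heading follows the tangent.
def hermite(p0, p1, t0, t1, s):
    s = s[:, None]
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1

p0, p1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])   # start and goal positions
theta_goal = np.pi / 2                                 # desired final heading (+y)
t0 = np.array([2.0, 0.0])                              # initial tangent (+x)
t1 = 2.0 * np.array([np.cos(theta_goal), np.sin(theta_goal)])  # final tangent

s = np.linspace(0.0, 1.0, 201)
path = hermite(p0, p1, t0, t1, s)
dx, dy = path[-1] - path[-2]
final_heading = np.arctan2(dy, dx)
print(final_heading)  # close to theta_goal = pi/2
```

Generating several such candidate curves and picking the one whose terminal tangent best corrects the final orientation mirrors the trajectory-selection step described in the abstract.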
Abstract:
Two robust kinematic controllers for position trajectory tracking of a perturbed wheeled mobile robot are presented. We address a final objective of fixed-time pose-regulation, which means that the robot position and orientation must reach desired final values simultaneously in a user-defined time. To achieve that, we propose the robust tracking of adequate trajectories for position, which drives the robot to get a desired final orientation due to the nonholonomic motion constraint. Hence, the main contribution of the paper is a complete strategy to define adequate reference trajectories as well as robust controllers to track them in order to enforce the pose-regulation of a wheeled mobile robot in a desired time. Realistic simulations show the good performance of the proposed scheme even in the presence of strong disturbances
Abstract:
Correlations of measures of percentages of white coat color, five measures of production, and two measures of reproduction were obtained from 4293 first-lactation Holsteins from eight Florida dairy farms. Percentages of white coat color were analyzed as recorded and transformed by an extension of Box-Cox procedures. Statistical analyses were by derivative-free restricted maximum likelihood (DFREML) with an animal model. Phenotypic and genetic correlations of white percentage (not transformed) were with milk yield, 0.047 and 0.097; fat yield, 0.002 and 0.004; fat percentage, -0.047 and -0.090; protein yield, 0.024 and 0.048; protein percentage, -0.070 and -0.116; days open, -0.012 and -0.065; and calving interval, -0.007 and -0.029. Changes in magnitude of correlations were very small for all variables except days open. Genetic and phenotypic correlations of transformed values with days open were -0.027 and -0.140. Modest positive correlated responses would be expected for white coat color percentage following direct selection for milk, fat, and protein yields, but selection for fat and protein percentages, days open, or calving interval would lead to small decreases
Abstract:
Based on a computer search we identify all majority operations on {0, 1, 2, 3, 4} with values in {0, 1} on the injective triples that generate a minimal clone. We classify these functions up to an equivalence extending conjugacy (isomorphism) and count the number of majority operations in the generated clones
Abstract:
As part of a project to identify all maximal centralising monoids on a four-element set, we determine all centralising monoids witnessed by unary or by idempotent binary operations on a four-element set. Moreover, we show that every centralising monoid on a set with at least four elements witnessed by the Mal’cev operation of a Boolean group operation is always a maximal centralising monoid, i.e., a co-atom below the full transformation monoid. On the other hand, we also prove that centralising monoids witnessed by certain types of permutations or retractive operations can never be maximal
Abstract:
Motivated by reconstruction results by Rubin, we introduce a new reconstruction notion for permutation groups, transformation monoids and clones, called automatic action compatibility, which entails automatic homeomorphicity. We further give a characterization of automatic homeomorphicity for transformation monoids on arbitrary carriers with a dense group of invertibles having automatic homeomorphicity. We then show how to lift automatic action compatibility from groups to monoids and from monoids to clones under fairly weak assumptions. We finally employ these theorems to get automatic action compatibility results for monoids and clones over several well-known countable structures, including the strictly ordered rationals, the directed and undirected version of the random graph, the random tournament and bipartite graph, the generic strictly ordered set, and the directed and undirected versions of the universal homogeneous Henson graphs
Abstract:
We investigate the standard context, denoted by K(Ln), of the lattice Ln of partitions of a positive integer n under the dominance order. Motivated by the discrete dynamical model to study integer partitions by Latapy and Duong Phan and by the characterization of the supremum (and infimum) irreducible partitions of n by Brylawski, we show how to construct the join-irreducible elements of Ln+1 from those of Ln. We employ this construction to count the number of join-irreducible elements of Ln, and confirm that the number of objects (and attributes) of K(Ln) has order Θ(n²). We also discuss the embeddability of K(Ln) into K(Ln+1), with special emphasis on n=9
Abstract:
We consider finitary relations (also known as crosses) that are definable via finite disjunctions of unary relations, i.e. subsets, taken from a fixed finite parameter set Γ. We prove that whenever Γ contains at least one non-empty relation distinct from the full carrier set, there is a countably infinite number of polymorphism clones determined by relations that are disjunctively definable from Γ. Finally, we extend our result to finitely related polymorphism clones and countably infinite sets Γ. These results address an open problem raised in Creignou, N., et al. Theory Comput. Syst. 42(2), 239–255 (2008), which is connected to the complexity analysis of the satisfiability problem of certain multiple-valued logics studied in Hähnle, R. Proc. 31st ISMVL 2001, 137–146 (2001)
Abstract:
C-clones are polymorphism sets of so-called clausal relations, a special type of relations on a finite domain, which first appeared in connection with constraint satisfaction problems in work by Creignou et al. from 2008. We completely describe the relationship regarding set inclusion between maximal C-clones and maximal clones. As a main result we obtain that for every maximal C-clone there exists exactly one maximal clone in which it is contained. A precise description of this unique maximal clone, as well as a corresponding completeness criterion for C-clones is given
Abstract:
We show how to reconstruct the topology on the monoid of endomorphisms of the rational numbers under the strict or reflexive order relation, and the polymorphism clone of the rational numbers under the reflexive relation. In addition we show how automatic homeomorphicity results can be lifted to polymorphism clones generated by monoids
Abstract:
Our goal is to model the joint distribution of a series of 4 × 2 × 2 × 2 contingency tables for which some of the data are partially collapsed (i.e., aggregated in as few as two dimensions). More specifically, the joint distribution of four clinical characteristics in breast cancer patients is estimated. These characteristics include estrogen receptor status (positive/negative), nodal involvement (positive/negative), HER2-neu expression (positive/negative), and stage of disease (I, II, III, IV). The joint distribution of the first three characteristics is estimated conditional on stage of disease, and we propose a dynamic model for the conditional probabilities that lets them evolve as the stage of disease progresses. The dynamic model is based on a series of Dirichlet distributions whose parameters are related by a Markov prior structure (called a dynamic Dirichlet prior). This model makes use of information across disease stages (known as “borrowing strength”) and provides a way of estimating the distribution of patients with particular tumor characteristics. In addition, since some of the data sources are aggregated, a data augmentation technique is proposed to carry out a meta-analysis of the different datasets
Abstract:
We introduce the logics GLPΛ, a generalization of Japaridze’s polymodal provability logic GLPω where Λ is any linearly ordered set representing a hierarchy of provability operators of increasing strength. We shall provide a reduction of these logics to GLPω yielding among other things a finitary proof of the normal form theorem for the variable-free fragment of GLPΛ and the decidability of GLPΛ for recursive orderings Λ. Further, we give a restricted axiomatization of the variable-free fragment of GLPΛ
Resumen:
Las empresas familiares representan la mayoría de las organizaciones en México y participan en una gran variedad de industrias. Las hay de todos tamaños, incluso dentro de las más grandes del país. En la Bolsa Mexicana de Valores (BMV), el 70% de las empresas que emiten acciones son familiares (Belausteguigoitia, 2012). Se han realizado investigaciones en diversos países que comparan los rendimientos entre empresas familiares y no familiares (Villalonga y Amit, 2006). Faltaba en México un estudio comparativo de esta índole, que permitiera conocer el desempeño de las organizaciones familiares. Este trabajo ofrece un valor adicional, ya que compara los rendimientos durante años de crisis y estabilidad. Los resultados preliminares indican que durante años de relativa inestabilidad financiera (por la crisis de 2008), las organizaciones familiares muestran un rendimiento significativamente mayor que las no familiares, mientras que en años de estabilidad los rendimientos tienden a ser semejantes. Además de exponer resultados pormenorizados durante este período, se plantean algunas hipótesis que explican el hecho de que las organizaciones familiares se desempeñen mejor en tiempos de crisis
Abstract:
Family businesses represent most organizations in Mexico and participate in a wide variety of industries. They come in all sizes, even among the largest in the country. In the Mexican Stock Exchange (BMV), 70% of the companies that issue shares are family businesses (Belausteguigoitia, 2012). Research has been conducted in various countries comparing returns between family and non-family businesses (Villalonga and Amit, 2006). A comparative study of this nature, one that would reveal the performance of family organizations, was lacking in Mexico. This work offers additional value as it compares returns during years of crisis and stability. Preliminary results indicate that during years of relative financial instability (due to the 2008 crisis), family organizations show significantly higher returns than non-family ones, while in years of stability the returns tend to be similar. In addition to presenting detailed results for this period, some hypotheses are put forward to explain why family organizations perform better in times of crisis
Resumen:
Este artículo analiza la relación entre compartir conocimiento y el comportamiento pro-organizacional no ético (CPE), así como el potencial efecto amplificador de dos factores: la resistencia al cambio de los empleados y la percepción del clima político de la organización
Abstract:
This paper aims to investigate the relationship of knowledge sharing with unethical pro-organizational behavior (UPB) and the potential augmenting effects of two factors: employees' dispositional resistance to change and perceptions of organizational politics
Resumo:
Este artigo analisa a relação entre compartilhar o conhecimento e comportamento pró-organizacional antiético (CPA), bem como o potencial efeito ampliador de dois fatores: a resistência a mudança de funcionários e a percepção do clima político da organização
Abstract:
Family businesses are organizations with a strong emotional charge. For this reason, the family dimension, which exerts great influence over the business, must be properly channeled within the company so that its impact is positive. Below I present some practical ideas that can help prevent conflicts. As one might expect, these ideas, in addition to reducing the potential for conflict, can improve the running of family organizations
Resumen:
El capítulo identifica y analiza los elementos esenciales de cualquier modelo de descarbonización y sugiere estrategias de implementación para llevarla a cabo con éxito. El texto empieza por definir descarbonización, cero emisiones netas de dióxido de carbono y cero emisiones netas de gases de efecto invernadero (GEI) y explicar su importancia. A continuación analiza los elementos que cualquier modelo de descarbonización y de cero emisiones netas de GEI debe contener: (i) eficiencia energética; (ii) descarbonización del sector eléctrico; (iii) sustitución del uso de combustibles fósiles por electricidad en los sectores donde la electrificación sea posible y uso de combustibles limpios en los sectores donde no lo sea; (iv) mantenimiento y creación de sumideros de carbono para capturar las emisiones restantes; y (v) control de las emisiones antropogénicas de otros gases de efecto invernadero, para el caso de cero emisiones netas de GEI. El capítulo analiza tres elementos fundamentales para el éxito de la descarbonización: el momento en el que se toman y ejecutan las decisiones (timing); la aceptabilidad política de la división de los costos y los beneficios; y la credibilidad de las metas y las medidas de políticas públicas para lograrlas. El capítulo finaliza con una reflexión sobre las condiciones que podrían hacer que las acciones de mitigación de emisiones ofrecidas y realizadas por los países estuvieran más cerca de lo que se necesita para cumplir con la meta del Acuerdo de París (que el aumento de temperatura no pase de 2 grados centígrados y esté lo más cerca posible a 1.5 grados centígrados)
Abstract:
The chapter identifies and analyzes the essential elements of any decarbonization model and suggests implementation strategies to carry it out successfully. The text starts by defining decarbonization, net-zero carbon dioxide emissions, and net-zero greenhouse gas (GHG) emissions, explaining their significance. It proceeds to analyze the key components that any decarbonization and net-zero GHG emissions model should include: (i) energy efficiency; (ii) decarbonization of the electricity sector; (iii) substitution of fossil fuel use with electricity in sectors where electrification is possible and the use of clean fuels in sectors where it is not; (iv) maintenance and creation of carbon sinks to capture the remaining emissions; and (v) control of anthropogenic emissions of other greenhouse gases, in the case of net-zero GHG emissions. Next, three fundamental elements for the success of decarbonization are analyzed: timing; the political acceptability of the division of costs and benefits; and the credibility of the goals and policy measures to achieve them. The chapter concludes with a reflection on the conditions that could bring the mitigation actions offered and carried out by countries closer to what is needed to comply with the goal of the Paris Agreement (that the increase in temperature not exceed 2 degrees Celsius and stay as close as possible to 1.5 degrees Celsius)
Abstract:
Price-based climate change policy instruments, such as carbon taxes or cap-and-trade systems, are known for their potential to generate desirable results such as reducing the cost of meeting environmental targets. Nonetheless, carbon pricing policies face important economic and political hurdles. Powerful stakeholders tend to obstruct such policies or dilute their impacts. Additionally, costs are borne by those who implement the policies or comply with them, while benefits accrue to all, creating incentives to free ride. Finally, costs must be paid in the present, while benefits only materialize over time. This chapter analyses the political economy of the introduction of a carbon tax in Mexico in 2013 with the objective of learning from that process in order to facilitate the eventual implementation of an effective cap-and-trade system in Mexico. Many of the lessons in Mexico are likely to be applicable elsewhere. As countries struggle to meet the goals of international environmental agreements, it is of utmost importance that we understand the conditions under which it is feasible to implement policies that reduce carbon emissions
Abstract:
The Global International Waters Assessment (GIWA) was created to help develop a priority setting mechanism for actions in international waters. Apart from assessing the severity of environmental problems in ecosystems, the GIWA's task is to analyze potential policy actions that could solve or mitigate these problems. Given the complex nature of the problems, understanding their root causes is essential to develop effective solutions. The GIWA provides a framework to analyze these causes, which is based on identifying the factors that shape human behavior in relation to the use (direct or indirect) of aquatic resources. Two sets of factors are analyzed. The first one consists of social coordination mechanisms (institutions). Faults in these mechanisms lead to wasteful use of resources. The second consists of factors that do not cause wasteful use of resources per se (poverty, trade, demographic growth, technology), but expose and magnify the faults of the first group of factors. The picture that comes out is that diagnosing simple generic causes, e.g. poverty or trade, without analyzing the case specific ways in which the root causes act and interact to degrade the environment, will likely ignore important links that may put the effectiveness of the recommended policies at risk. A summary of the causal chain analysis for the Colorado River Delta is provided as an example
Abstract:
In this article, a distributed control system for robotic applications is presented. First, the robot control theory based on task functions is outlined; then a model for robotic applications is proposed. This philosophy is more general than the classical hierarchical structure and allows smart-sensor feedback at the servo level. Next, a software and hardware architecture is proposed to implement such control algorithms. As all system activity depends on communication between tasks, it is necessary to design a real-time communication system, a topic addressed in this paper
Abstract:
This paper presents a proportional-integral passivity-based control (PI-PBC) approach for a system consisting of a proton-exchange membrane fuel cell as the primary energy source and a super-capacitor as an auxiliary energy storage device. Its objectives are to regulate the output and super-capacitor voltages while smoothing variations in the energy extracted from the cell. The controller design leverages the difference in time-scales, enabling the system dynamics to be partitioned into fast currents and slow voltages. It follows a current-mode control framework, where both inner and outer loops are developed using the PI-PBC methodology. The inner loop adjusts the duty cycles to track current reference trajectories, while the outer loop generates these references to regulate voltages. Additionally, an adaptive law based on Immersion and Invariance theory enhances the robustness of the outer loop controller. Numerical simulations validate the proposed approach, demonstrating stability and precise regulation despite pulsating energy demand
Abstract:
In this paper, a controller is designed to regulate the output voltage of a fuel-cell (FC) system comprising a proton-exchange membrane FC feeding a purely resistive load through a boost converter. The controller aims to maintain voltage regulation regardless of uncertainties in the resistive load. Leveraging the monotonicity of the FC polarization curve, it is demonstrated that the non-linear system can be controlled with a simple proportional-integral (PI) action through the PI-passivity-based control (PI-PBC) methodology. The result is subsequently extended to an adaptive version, enabling it to address parametric uncertainties, including inductor parasitic resistance, load variations, and fuel cell polarization curve parameters. The overall system is proved to be stable by regulating the output voltage under parametric uncertainty. Experimental results validate the proposed controller
Abstract:
In this article, we address the issue of voltage regulation of a proton-exchange membrane fuel cell (PEMFC) coupled to an uncertain load via a DC-DC boost converter. We demonstrate that a straightforward proportional-integral (PI) action designed using the passivity-based control (PBC) approach can regulate the voltage of the PEMFC boost converter system in spite of the intrinsic nonlinearities in the current-voltage behavior of the PEMFC. We show that for all positive values of the controller gains, the voltage converges to its set point. Moreover, a parameter estimator is afterward proposed with the Immersion and Invariance (I&I) theory, allowing the PI-PBC to regulate the output voltage of the system as the load changes
Abstract:
This paper describes a double proportional-integral passivity-based control design to coordinate the operation of a proton-exchange membrane fuel cell system supported by two energy storage devices, such as a supercapacitor and a battery. Considering the singular perturbation theory, a timescale separation is applied to decouple the current and voltage dynamics and synthesize two control loops: an inner current loop and an outer voltage loop. Both current and voltage dynamics are controlled by exploiting the passive properties of each subsystem through a simple proportional-integral action over the passive output. The overall control objectives are to ensure load and super-capacitor voltage regulation and smooth variations in fuel cell current despite variations in energy demand. Numerical results demonstrate the correct performance of the closed-loop system despite pulsating variations in energy demand
Abstract:
We present a controller for a power generation system composed of a fuel cell connected to a boost converter which feeds a resistive load. The controller aims to regulate the output voltage of the converter, regardless of sudden changes of the load and the fuel cell voltage. Leveraging monotonicity, we prove that the nonlinear system can be controlled by means of a simple passivity-based PI. We afterward extend the result to an adaptive version, allowing the controller to deal with parameter uncertainties. This adaptive design is based on an indirect control approach with parameter identification performed by a "hybrid" estimator, which combines two techniques: the gradient-descent and the immersion-and-invariance algorithms. The overall system is proven to be stable, with the output voltage regulated to its reference. Furthermore, realistic simulation results validate our proposal
Abstract:
In this article, the problem of online parameter estimation of a proton exchange membrane fuel cell (PEMFC) polarization curve, that is, the static relation between the voltage and the current of the PEMFC, is addressed and solved. The task of designing this estimator, even offline, is complicated by the fact that the uncertain parameters enter the curve in a highly nonlinear fashion, namely in the form of nonseparable nonlinearities. We consider several scenarios for the model of the polarization curve, starting from the standard full model and including several popular simplifications of this complicated mathematical function. In all cases, separable regression equations are derived, either linearly or nonlinearly parameterized, which are instrumental for the implementation of the parameter estimators. We concentrate our attention on online estimation schemes for which, under suitable excitation conditions, global parameter convergence is ensured. Due to these global convergence properties, the estimators are robust to unavoidable additive noise and structural uncertainty. Moreover, since the schemes are online, they are able to track (slow) parameter variations that occur during the operation of the PEMFC. These two features, unavailable in time-consuming offline data-fitting procedures, make the proposed estimators helpful for online time-saving characterization of a given PEMFC, and for the implementation of fault-detection procedures and model-based adaptive control strategies. Simulation and experimental results that validate the theoretical claims are presented
Abstract:
In this paper, a multi-loop adaptive controller based on the interconnection and damping assignment passivity-based control approach is detailed for output voltage regulation of a hybrid system comprising a proton exchange membrane fuel cell energy generation system and a hybrid energy storage system consisting of supercapacitors and batteries. The control scheme relies on the design of two control loops: an outer loop for voltage regulation through current reference generation, and an inner loop for current reference tracking through duty cycle generation. Furthermore, an adaptive law based on immersion and invariance theory is designed to enhance the outer-loop behavior through unknown load approximation. Finally, numerical simulation results show the correct performance of the adaptive multi-loop controller when sudden load changes are induced in the system
Abstract:
In this work, we establish the concept of reversing symmetry in the three-body problem on the sphere, a novel approach that has not been previously explored. We introduce three reversing symmetries: one valid for arbitrary masses, and two that require two equal masses. We also provide a thorough characterization of their fixed points, which are crucial for understanding the dynamics of the system due to their connection with the symmetric periodic orbits of the system. Using two reversing symmetries, we numerically compute a choreography in the three-body problem on the sphere, a particular type of symmetric periodic orbit. This orbit is closely related to the classical figure-eight choreography, a well-known symmetric periodic orbit in the Newtonian planar three-body problem
Abstract:
We study symmetric periodic orbits near collision in a nonautonomous restricted planar four-body problem. The restricted problem consists of a massless particle moving under the gravitational influence due to three bodies with the same positive mass (the primaries), following the figure-eight choreography. We use regularized coordinates, in order to deal numerically with motions near collision between the massless particle and one of the primaries. By means of reversing symmetries, we characterize the symmetric periodic orbits near collision. The initial conditions for these orbits were computed by solving numerically some boundary value problems. We explain theoretically, and confirm numerically, how different parts of the diagram of initial conditions are related due to the symmetry of the figure-eight choreography
Abstract:
In this paper, we study a three-dimensional system of differential equations which is a generalization of the system introduced by Yu and Wang (Eng Technol Appl Sci Res 3:352-358, 2013), a continuation of the study of chaotic attractors [see Yu and Wang (Eng Tech Appl Sci Res 2:209-215, 2012)]. We show that these systems admit a zero-Hopf non-isolated equilibrium point at the origin and prove the existence of a limit cycle emanating from it. We illustrate our results with some numerical simulations
Abstract:
We prove that all non-degenerate relative equilibria of the planar Newtonian n–body problem can be continued to spaces of constant curvature κ, positive or negative, for small enough values of this parameter. We also compute the extension of some classical relative equilibria to curved spaces using numerical continuation. In particular, we extend Lagrange's triangle configuration with different masses to both positive and negative curvature spaces
Abstract:
We study a particular (1+2n)-body problem, formed by a massive body and 2n equal small masses. Since this problem is related to Maxwell's ring solutions, we call the massive body the planet and the other 2n masses satellites. Our goal is to obtain doubly symmetric orbits in this problem. By studying the reversing symmetries of the equations of motion, we reduce the set of possible initial conditions that lead to such orbits, and compute the 1-parameter families of time-reversible invariant tori. The initial conditions of the orbits were determined as solutions of a boundary value problem with one free parameter; in this way we find, analytically and explicitly, a new involution, which to our knowledge is a new result. The numerical solutions of the boundary value problem were obtained using pseudo-arclength continuation. For the numerical analysis we used a mass ratio of 3.5×10⁻⁴ between each satellite and the planet, for n = 2, 3, 4, 5, 6. We show numerically that the succession of families we obtained approaches the Maxwell solutions as n increases, and we give a simple argument for why this should happen in this configuration
Abstract:
By using analytical and numerical tools, we show the existence of families of quasiperiodic orbits (also called relative periodic orbits) emanating from a kite configuration in the planar four-body problem with three equal masses. Relative equilibria are periodic solutions where all particles rotate uniformly around the center of mass in the inertial frame, that is, the system behaves as a rigid body; in rotating coordinates these solutions are in general quasiperiodic. We introduce a new coordinate system which measures (in the planar four-body problem) how far an arbitrary configuration is from a kite configuration. Using these coordinates and the Lyapunov center theorem, we obtain families of quasiperiodic orbits, and by using symmetry arguments, we obtain periodic ones, all of them emanating from a kite configuration
Abstract:
In recent research on financial crises, large exogenous shocks to total factor productivity (TFP) are used as the driving force accounting for large output falls. TFP fell 3% after the Korean 1997 financial crisis. We find evidence that the large fall in TFP is mostly due to a sectoral reallocation of labor from the more productive manufacturing and construction sectors to the less productive wholesale trade sector, the public sector and agriculture. We construct a two-sector model that accounts for the labor reallocation. The model has a consumption sector and an investment sector. Firms face sector-specific working capital constraints, which we calibrate with data from financial statements. The rise in interest rates makes inputs more costly. The model accounts for 42% of the TFP fall. The model also accounts for 53% of the fall in GDP. It is broadly consistent with the post-crisis behavior of the Korean economy
Abstract:
Using variation in firms' exposure to their CEOs resulting from hospitalization, we estimate the effect of CEOs on firm policies, holding firm-CEO matches constant. We document three main findings. First, CEOs have a significant effect on profitability and investment. Second, CEO effects are larger for younger CEOs, in growing and family-controlled firms, and in human-capital-intensive industries. Third, CEOs are unique: the hospitalization of other senior executives does not have similar effects on performance. Overall, our findings demonstrate that CEOs are a key driver of firm performance, which suggests that CEO contingency plans are valuable
Abstract:
This paper uses a unique dataset from Denmark to investigate the impact of family characteristics on corporate decision making and the consequences of these decisions for firm performance. We focus on the decision to appoint either a family or external chief executive officer (CEO). The paper uses variation in CEO succession decisions that results from the gender of a departing CEO's firstborn child. This is a plausible instrumental variable (IV), as male first-child firms are more likely to pass on control to a family CEO than are female first-child firms, but the gender of the first child is unlikely to affect firms' outcomes. We find that family successions have a large negative causal impact on firm performance: operating profitability on assets falls by at least four percentage points around CEO transitions. Our IV estimates are significantly larger than those obtained using ordinary least squares. Furthermore, we show that family-CEO underperformance is particularly large in fast-growing industries, industries with a highly skilled labor force, and relatively large firms. Overall, our empirical results demonstrate that professional, nonfamily CEOs provide extremely valuable services to the organizations they head
Abstract:
This paper presents a two-dimensional convex irregular bin packing problem with guillotine cuts. The problem combines the challenges of tackling the complexity of packing irregular pieces, guaranteeing guillotine cuts that are not always orthogonal to the edges of the bin, and allocating pieces to bins that are not necessarily of the same size. This problem is known as a two-dimensional multi bin size bin packing problem with convex irregular pieces and guillotine cuts. Since pieces are separated by means of guillotine cuts, our study is restricted to convex pieces. A beam search algorithm is described, which is successfully applied to both the multi and single bin size instances. The algorithm is competitive with the results reported in the literature for the single bin size problem and provides the first results for the multi bin size problem
Abstract:
The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure is. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.
Abstract:
Opportunistic electoral fiscal policy cycle theory suggests that all subnational officials will raise fiscal spending during elections. Ideological partisan fiscal policy cycle theory suggests that only left-leaning governments will raise election year fiscal spending, with right-leaning parties choosing the reverse. This article assesses which of these competing logics applies to debt policy choices. Cross-sectional time-series analysis of yearly loan acquisition across Mexican municipalities (on statistically matched municipal subsamples to balance creditworthiness across left- and right-leaning governments) shows that all parties engage in electoral policy cycles, but not in the way originally thought. It also shows that different parties favored different types of loans, although not always according to partisan predictions. Both electoral and partisan logics thus shape debt policy decisions (in contrast to fiscal policy, where these logics are mutually exclusive) because debt policy involves decisions on multiple dimensions, about the total and type of loans
Abstract:
This research focuses on a hyperbolic system that describes bidisperse suspensions, consisting of two types of small particles dispersed in a viscous fluid. The dependence of solutions on the relative position of contact manifolds in the phase space is examined. The wave curve method serves as the basis for the first and second analyses. The former involves the classification of elementary waves that emerge from the origin of the phase space. Analytical solutions to prototypical Riemann problems connecting the origin with any point in the state space are provided. The latter focuses on semi-analytical solutions for Riemann problems connecting any state in the phase space with the maximum packing concentration line, as observed in standard batch sedimentation tests. When the initial condition crosses the first contact manifold, a bifurcation occurs. As the initial condition approaches the second manifold, another structure appears to undergo bifurcation, although it does not represent an actual bifurcation according to the triple shock rule. The study reveals important insights into the behavior of solutions in relation to these contact manifolds. This research sheds light on the existence of emerging quasi-umbilic points within the system, which can potentially lead to new types of bifurcations as crucial elements of the elliptic/hyperbolic boundary in the system of partial differential equations. The implications of these findings and their significance are discussed
Abstract:
This contribution is a condensed version of an extended paper, where a contact manifold emerging in the interior of the phase space of a specific hyperbolic system of two nonlinear conservation laws is examined. The governing equations are modelling bidisperse suspensions, which consist of two types of small particles differing in size and viscosity that are dispersed in a viscous fluid. Based on the calculation of characteristic speeds, the elementary waves with the origin as left Riemann datum and a general right state in the phase space are classified. In particular, the dependence of the solution structure of this Riemann problem on the contact manifold is elaborated
Abstract:
We study the connection between the global liquidity crisis and the severe credit crunch experienced by finance companies (SOFOLES) in Mexico using firm-level data between 2001 and 2011. Our results provide supporting evidence that, as a result of the liquidity shock, SOFOLES faced severely restricted access to their main funding sources (commercial bank loans, loans from other organizations, and public debt markets). After controlling for the potential endogeneity of their funding, we find that the liquidity shock explains 64 percent of SOFOLES' credit contraction during the recent financial crisis (2008-2009). We use our estimates to disentangle supply from demand factors as determinants of the credit contraction. After controlling for the large decline in loan demand during the financial crisis, our findings suggest that supply factors (such as nonperforming loans and lower liquidity buffers) also played a significant role. Finally, we find that financial deregulation implemented in 2006 may have amplified the effects of the global liquidity shock
Abstract:
We develop a two-country, three-sector model to quantify the effects of Korean trade policies for structural change from 1963 through 2000. The model features non-homothetic preferences, Armington trade, proportional import tariffs and export subsidies, and is calibrated to match sectoral value added data on Korean production and trade. Korea's tariff liberalization increased imports and trade, especially agricultural imports, accelerating de-agriculturalization and intensifying industrialization. Korean subsidy liberalization lowered exports and trade, especially industrial exports, attenuating industrialization. Thus, while individually powerful agents for structural change, Korea's tariff and subsidy reforms offset each other. Subsidy reform dominated quantitatively; lower trade, higher agricultural and lower industrial employment shares, and slower industrialization were observed than in a counterfactual economy with no post-1963 policy reform
Abstract:
In this article, we address the problem of adaptive state observation of linear time-varying systems with delayed measurements and unknown parameters. Our new developments extend the results reported in our recent works. The case with known parameters has been studied by many researchers. However, in this article we show that the generalized parameter estimation-based observer design provides a very simple solution for the unknown parameter case. Moreover, when this observer design technique is combined with the dynamic regressor extension and mixing estimation procedure, the estimated state and parameters converge in fixed time under extremely weak excitation assumptions
Abstract:
We consider an otherwise conventional monetary growth model in which spatial separation and limited communication create a transactions role for currency, and stochastic relocation gives rise to financial intermediaries. In this framework we consider how changes in fiscal and monetary policy, and in reserve requirements, affect inflation, capital formation, and nominal interest rates. There is also considerable scope for multiple equilibria; we show how reserve requirements that never bind along actual equilibrium paths can play an important role in avoiding undesirable equilibria. Finally, we demonstrate that changes in (apparently) nonbinding reserve requirements can have significant, real effects
Abstract:
Efficient modeling of probabilistic dependence is one of the most important areas of research in decision analysis. Several approximations to joint probability distributions have been developed with the intent of capturing the probabilistic dependence among variables. However, the accuracy of these methods in a wide range of settings is unknown. In this paper, we develop a methodology to test the accuracy of several approximating distributions
Abstract:
We characterize dominant-strategy incentive compatibility with multidimensional types. A deterministic social choice function is dominant-strategy incentive compatible if and only if it is weakly monotone (W-Mon). The W-Mon requirement is the following: If changing one agent's type (while keeping the types of other agents fixed) changes the outcome under the social choice function, then the resulting difference in utilities of the new and original outcomes evaluated at the new type of this agent must be no less than this difference in utilities evaluated at the original type of this agent
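The W-Mon requirement stated in words above admits a compact formal statement; the symbols below (f for the social choice function, u for the utility function, t_i for agent i's type) are our shorthand for the quantities named in the abstract:

```latex
% Weak monotonicity (W-Mon): for every agent $i$, every profile of other
% agents' types $t_{-i}$, and every pair of types $t_i, t_i'$, writing
% $a = f(t_i, t_{-i})$ for the original outcome and $b = f(t_i', t_{-i})$
% for the new outcome,
\[
  u(b, t_i') - u(a, t_i') \;\ge\; u(b, t_i) - u(a, t_i).
\]
```

Dominant-strategy incentive compatibility of a deterministic social choice function is then equivalent to this inequality holding for every agent and every such pair of types.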
Abstract:
International organizations play an increasingly central role in contemporary international lawmaking, yet their role in sources theory is still underexplored. Most accounts focus on one of two questions: They either ask to what extent international organizations can generate legally binding obligations for external actors and what role so-called "soft law" plays in this regard, or they explore to what extent international organizations are bound by the sources of international law and can thus be held accountable. In this contribution, I use the 2030 Agenda on Sustainable Development to show that one of the most important, though often overlooked, effects of the law generated by international organizations is the effect that it has within the organization itself. The 2030 Agenda on Sustainable Development established a new paradigm for all parts of the UN system and has had effects in all its areas of work, including the UN's external activities and its interaction with member states. While declarations such as the 2030 Agenda might thus not be directly legally binding upon member states, they have important normative effects within and beyond the UN system that are hitherto unaccounted for. This raises questions of equality and legitimacy that are touched upon in closing
Abstract:
Germany's two-year membership of the UN Security Council ended on 31 December 2020. Having started with big expectations, it was hit by the hard realities of increasingly divisive world politics in the time of a global pandemic. While Germany fell short of its ambitions on the two thematic issues of Women, Peace and Security, and Climate and Security, it can point to some successes, in particular with regard to Sudan. These successes did not, however, increase Germany's prospects of securing a permanent seat on the Council for the foreseeable future. This section provides an overview of Germany's activities on the Council and ends with a brief evaluation
Abstract:
As the only international organization that aspires to be universal both in terms of its membership and of the policy fields in which it intervenes, the United Nations (UN) occupies a unique position in international lawmaking. Focusing on the UN's political and judicial or quasi-judicial organs does not, however, fully capture the organization's lawmaking activities. Instead, much of the UN's impact on international law today can be traced back to its civil servants. In this paper, I argue that international lawmaking today is best understood as processual and fluid, and that in contemporary lawmaking thus understood, the UN Secretariat plays an important role. I then propose two ways in which international civil servants add to international lawmaking: they engage in executive interpretation of broad and elusive norms, and they act as an interface between various actors on different governance levels. This raises novel legitimacy challenges for the international legal order
Abstract:
Germany's two-year membership in the UN Security Council ended on 31 December 2020. Starting with big expectations, it hit the hard realities of increasingly divisive world politics in times of a global pandemic. Nevertheless, Germany can point to some notable successes. Every eight years, Germany applies for a non-permanent membership in the Security Council. The recent term was the sixth time that Germany sat on the Council. It had applied with an ambitious program: strengthening the women, peace and security agenda; putting the link between climate change and security squarely on the Council's agenda; strengthening the humanitarian system; and lastly, revitalizing the issue of disarmament and arms control. It chose those four thematic issues to make its mark, besides the Council's day-to-day business dealing with country situations. Has Germany been successful?
Abstract:
When actors express conflicting views about the validity or scope of norms or rules in relation to other norms or rules in the international sphere, they often do so in the language of international law. This contribution argues that international law's hermeneutic acts as a common language that cuts across spheres of authority and can thus serve as a conflict management tool for interface conflicts. Often, this entails resorting to an international court. While acknowledging that courts cannot provide permanent solutions to the underlying political conflict, I submit that court proceedings are interesting objects of study that promote our understanding of how international legal argument operates as a conflict management device. I distinguish three dimensions of common legal form, using the well-known EC-Hormones case as illustration: a procedural, argumentative, and substantive dimension. While previous scholarship has often focused exclusively on the substantive dimension, I argue that the other two dimensions are equally important. In concluding, I reflect on a possible explanation as to why actors are disposed to resort to international legal argument even if this is unlikely to result in a final solution: there is a specific authority claim attached to international law qua law
Abstract:
The International Court of Justice (ICJ) occupies a special position amongst international courts. With its quasi-universal membership and ability to apply in principle the whole body of international law, it should be well-placed to adjudicate its cases with a holistic view on international law. This article examines whether the ICJ has lived up to this expectation. It analyses the Court's case load in the 21st century through the lens of inter-legality as the current condition of international law. With regard to institutional inter-legality, the authors observe an increase of inter-State proceedings based on largely the same facts that are initiated both before the ICJ and other courts and identify an increasing need to address such parallel proceedings. With regard to substantive inter-legality, the article provides analyses of ICJ cases in the fields of consular relations and human rights, as well as international environmental law. The authors find that the ICJ is rather reluctant in situating norms within their broader normative environment and restrictively applies only those rules its jurisdiction is based on. The Court does not make use of its abilities to adjudicate cases holistically and thus falls short of expectations raised by its own members 20 years ago
Abstract:
Contemporary international law often presents itself as an almost impenetrable thicket of overlapping legal regimes that materialize through multilateral treaties at the global, regional and sub-regional levels, customary law and other regulatory orders. Often, overlaps between different regimes manifest themselves as constellations of norms that appear to be conflictual. Valentin Jeutner's book entitled Irresolvable Norm Conflicts in International Law, based on his doctoral thesis defended at Cambridge University in 2015, is the latest in a recent series of monographs that address norm conflicts in international law. On his own account, his work differs from other studies in that it explores certain constellations of public international law where 'the legal order confronts legal subjects with seemingly impossible expectations', where 'a state of legal superposition' exists (at 6). As indicated in the title, Jeutner is not concerned with norm conflicts in general but, rather, with a specific subset of conflicts: those that are, legally speaking, irresolvable. Jeutner proposes that such irresolvable conflicts ought to be called 'legal dilemmas' and argues that such dilemmas ought to be solved by the sovereign actor facing a dilemma: the state. His book is informed by deontic logic and formulates an abstract theory of the legal dilemma as a novel concept for international law, which the author explicitly introduces as a stipulative definition - a term of art (at 19)
Abstract:
Hart devoted little attention to the rule of adjudication, and so has the specialized literature. The purpose of this paper is to go beyond the scant indications Hart offered on the rule of adjudication and to detail the role it plays within his conception of law. The method chosen is essentially reconstructive: the aim is not to draw inspiration from Hart in order to elaborate one's own notion of the rule of adjudication, but rather to highlight the potential, as well as the limits, of this kind of secondary rule. To that end, the connections between the rule of adjudication, on the one hand, and coercion and legal interpretation, on the other, are first explored: the objective is to outline the theoretical position of judges, which emerges, in particular, from examining their (different) tasks in hard and clear cases. This theoretical position is then subjected to criticism; paying particular attention to the problem of the finality and infallibility of judicial decisions, it is shown that Hart conceived the application of law in an overly declarative way
Abstract:
H.L.A. Hart did not pay much attention to the rule of adjudication, and neither did scholars. This paper aims to go beyond what Hart explicitly says about it and to give an account of its role within his concept of law. The perspective will be reconstructive, since the goal is not to develop an original concept of the rule of adjudication, inspired by Hart's theory of law, but rather to shed light on the potential, but also the limits, of this kind of secondary rule. Therefore, the article will first explore the interrelation between the rule of adjudication, on the one hand, and coercion and legal interpretation, on the other: the goal is to outline the theoretical position of judges, which becomes clear when analyzing their (different) tasks in easy and hard cases. Then, this position is put under criticism; by examining, in particular, the well-known problem of the infallibility and finality of judicial decisions, it is shown that Hart considered the judicial application of law in an overly declarative way
Abstract:
Consider an arbitrarily large population at the present time, originated at an unspecified arbitrarily large time in the past, where individuals within the same generation independently reproduce forward in time, sharing a common offspring distribution that may vary across generations. In other words, the reproduction is driven by a Galton-Watson process in a varying environment. The genealogy of the current generation, traced backward in time, is uniquely determined by the coalescent point process (Ai, i ≥ 1), where Ai denotes the coalescent time between individuals i and i + 1. In general, this process lacks the Markov property. In a constant environment, Lambert and Popovic (2013) proposed a Markov process of point measures to reconstruct the coalescent point process. We provide a counterexample showing that their process lacks the Markov property. The main contribution of this work is to propose a vector-valued Markov process (Bi, i ≥ 1) that can reconstruct the genealogy, with finite information for every i. Additionally, in the case of linear fractional offspring distributions, we establish that the variables of the coalescent point process (Ai, i ≥ 1) are independent and identically distributed
Abstract:
For a continuous state branching process with two types of individuals which are subject to selection and density dependent competition, we characterize the joint evolution of population size, type configurations and genealogies as the unique strong solution of a system of SDEs. Our construction is achieved in the lookdown framework and provides a synthesis as well as a generalization of cases considered separately in two seminal papers by Donnelly and Kurtz (Ann. Appl. Probab. 9 (1999) 1091-1148; Ann. Probab. 27 (1999) 166-205), namely fluctuating population sizes under neutrality, and selection with constant population size. As a conceptual core in our approach we introduce the selective lookdown space which is obtained from its neutral counterpart through a state-dependent thinning of "potential" selection/competition events whose rates interact with the evolution of the type densities. The updates of the genealogical distance matrix at the "active" selection/competition events are obtained through an appropriate sampling from the selective lookdown space. The solution of the above mentioned system of SDEs is then mapped into the joint evolution of population size and symmetrized type configurations and genealogies, that is, marked distance matrix distributions. By means of Kurtz' Markov mapping theorem, we characterize the latter process as the unique solution of a martingale problem. For the sake of transparency we restrict the main part of our presentation to a prototypical example with two types, which contains the essential features. In the final section we outline an extension to processes with multiple types including mutation
Abstract:
The nested Kingman coalescent describes the ancestral tree of a population undergoing neutral evolution at the level of individuals and at the level of species, simultaneously. We study the speed at which the number of lineages descends from infinity in this hierarchical coalescent process and prove the existence of an early-time phase during which the number of lineages at time t decays as 2γ/(ct²), where c is the ratio of the coalescence rates at the individual and species levels, and the constant γ ≈ 3.45 is derived from a recursive distributional equation for the number of lineages contained within a species at a typical time
Abstract:
In this paper, we study the genealogical structure of a Galton-Watson process with neutral mutations. Namely, we extend in two directions the asymptotic results obtained in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697]. In the critical case, we construct the version of the model in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697], conditioned not to be extinct. We establish a version of the limit theorems in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697], when the reproduction law has an infinite variance and is in the domain of attraction of an α-stable distribution, both for the unconditioned process and for the process conditioned to nonextinction. In the latter case, we obtain the convergence (after re-normalization) of the allelic sub-populations towards a tree-indexed CSBP with immigration
Abstract:
We consider the compact space of pairs of nested partitions of N, where by analogy with models used in molecular evolution, we call "gene partition" the finer partition and "species partition" the coarser one. We introduce the class of nondecreasing processes valued in nested partitions, assumed Markovian and with exchangeable semigroup. These processes are said to be simple when each partition only undergoes one coalescence event at a time (but possibly at the same time). Simple nested exchangeable coalescent (SNEC) processes can be seen as the extension of Λ-coalescents to nested partitions. We characterize the law of SNEC processes as follows. In the absence of gene coalescences, species blocks undergo Λ-coalescent type events and in the absence of species coalescences, gene blocks lying in the same species block undergo i.i.d. Λ-coalescents. Simultaneous coalescence of the gene and species partitions is governed by an intensity measure νs on (0,1] × M₁([0,1]) providing the frequency of species merging and the law from which the frequencies of genes merging in each coalescing species block are (independently) drawn. As an application, we also study the conditions under which a SNEC process comes down from infinity
Abstract:
Survey research suggests that managers and employees see remote work very differently. Managers are more likely to say it harms productivity, while employees are more likely to say it helps. The difference may be commuting: Employees consider hours not spent commuting in their productivity calculations, while managers don't. The answer is clearer communication and policies, and for many companies the best policy will be managed hybrid with two to three mandatory days in office
Abstract:
Many CEOs are publicly gearing up for yet another return-to-office push. Privately, though, executives expect remote work to keep on growing, according to a new survey. That makes sense: Employees like it, the technology is improving, and - at least for hybrid work - there seems to be no loss of productivity. Despite the headlines, executives expect both hybrid and fully remote work to keep increasing over the next five years
Abstract:
This brief deals with the problem of online parameter identification of the parameters of the dynamic model of a photovoltaic (PV) array connected to a power system through a power converter. It has been shown in the literature that when interacting with switching power converters, the dynamic model is able to better account for the PV array operation compared to the classical five-parameter static model of the array. While there are many results on identification of the parameters of the latter model, to the best of our knowledge, no one has provided a solution for the aforementioned more complex dynamic model, since it concerns the parameter estimation of a nonlinear, underexcited system with unmeasurable state variables. Achieving such an objective is the main contribution of this brief. We propose a new parameterization of the dynamic model, which, combined with the powerful identification technique of dynamic regressor extension and mixing (DREM), ensures a fast and accurate online estimation of the unknown parameters. Realistic numerical examples via computer simulations are presented to assess the performance of the proposed approach, even being able to track the parameter variations when the system changes operating point
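DREM, which recurs in several of the abstracts above, turns one vector linear regression into decoupled scalar regressions: stack delayed (or filtered) copies of the regression into a square system, then multiply by the adjugate of the extended regressor so each parameter obeys its own scalar equation. The sketch below is illustrative only; the discrete-time setup, delay-based extension, synthetic data, and normalized gradient gain are our assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def adj2(M):
    """Adjugate of a 2x2 matrix: adj(M) @ M = det(M) * I."""
    return np.array([[M[1, 1], -M[0, 1]], [-M[1, 0], M[0, 0]]])

rng = np.random.default_rng(0)
theta = np.array([2.0, -1.0])          # unknown parameters (ground truth)
T = 300
phi = rng.normal(size=(T, 2))          # regressor samples
y = phi @ theta                        # noise-free measurements y_t = phi_t^T theta

theta_hat = np.zeros(2)
for t in range(1, T):
    # Extension: stack the current and a one-step-delayed regression (square system)
    Phi = np.vstack([phi[t], phi[t - 1]])
    Y = np.array([y[t], y[t - 1]])
    # Mixing: multiply by adj(Phi), so delta * theta_i = (adj(Phi) @ Y)_i
    # with the common scalar regressor delta = det(Phi)
    delta = np.linalg.det(Phi)
    Yv = adj2(Phi) @ Y
    # Normalized gradient step, decoupled per parameter
    theta_hat += delta * (Yv - delta * theta_hat) / (1.0 + delta ** 2)

print(theta_hat)  # close to [2, -1]
```

The decoupling is what relaxes the excitation requirement: each parameter error contracts by the factor 1/(1 + delta_t^2) whenever delta_t is nonzero, without requiring the vector regressor phi to be persistently exciting in the classical sense.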
Abstract:
In this paper we consider the problem of leaderless consensus for networks of fully actuated Euler-Lagrange agents perturbed by unknown additive disturbances. The network is an undirected weighted graph with time delays. The proposed controller has a PD structure that incorporates, in a certainty-equivalent way, the estimate of the unknown disturbance. The design of the disturbance estimator proceeds along the following steps. First, we derive a regression equation that turns out to be nonlinearly parameterized, but with an injective mapping. Second, we propose to use a recently introduced least-squares plus dynamic regressor extension algorithm that allows us to estimate the unknown frequencies under extremely weak excitation assumptions. In this way, we derive a sufficient condition on the proportional and derivative gains of the controller to ensure that the systems globally and asymptotically converge to a consensus position
Abstract:
In this paper we provide two significant extensions to the recently developed parameter estimation-based observer design technique for state-affine systems. First, we consider the case when the full state of the system is reconstructed in spite of the presence of unknown, time-varying parameters entering into the system dynamics. Second, we address the problem of reduced order observers with finite convergence time. For the first problem, we propose a simple gradient-based adaptive observer that converges asymptotically under the assumption of generalised persistent excitation. For the reduced order observer we invoke the advanced dynamic regressor extension and mixing parameter estimator technique to show that we can achieve finite convergence time under the weak interval excitation assumption. Simulation results that illustrate the performance of the proposed adaptive observers are given. These include an unobservable system, an example reported in the literature, and the widely popular, and difficult to control, single-ended primary inductor converter
Abstract:
Wind turbines are often controlled to harvest the maximum power from the wind, which corresponds to the operation at the top of the bell-shaped power coefficient graph. Such a mode of operation may be achieved implementing an extremum seeking data-based strategy, which is an invasive technique that requires the injection of harmonic disturbances. Another approach is based on the knowledge of the analytic expression of the power coefficient function, an information usually unreliably provided by the turbine manufacturer. In this paper we propose a globally exponentially convergent on-line estimator of the parameters entering into the windmill power coefficient function. This corresponds to the solution of an identification problem for a nonlinear, nonlinearly parameterized, underexcited system. To the best of our knowledge we have provided the first solution to this challenging, practically important, problem
Abstract:
In this paper we address the problem of adaptive state observation of affine-in-the-states time-varying systems with delayed measurements and unknown parameters. We further develop the results proposed in [Bobtsov et al. 2021a] and [Bobtsov et al. 2021c]. The case with known parameters has been studied by many researchers (see [Sanz et al. 2019, Bobtsov et al. 2021b] and references therein) where, similarly to the approach adopted here, the system is treated as a linear time-varying system. We show that the parameter estimation-based observer (PEBO) design proposed in [Ortega et al. 2015, 2021] provides a very simple solution for the unknown parameter case. Moreover, when PEBO is combined with the dynamic regressor extension and mixing (DREM) estimation technique [Aranovskiy et al. 2016, Ortega et al. 2019], the estimated state converges in fixed time with extremely weak excitation assumptions
Abstract:
The problem of effective use of Phasor Measurement Units (PMUs) to enhance power systems awareness and security is a topic of key interest. The central question to solve is how to use these new measurements to reconstruct the state of the system. In this article, we provide the first solution to the problem of (globally convergent) state estimation of multimachine power systems equipped with PMUs and described by the fourth-order flux-decay model. This article is a significant extension of our previous result, where this problem was solved for the simpler third-order model, for which it is possible to recover algebraically part of the unknown state. Unfortunately, this property is lost in the more accurate fourth-order model, and we are confronted with the problem of estimating the full state vector. The design of the observer relies on two recent developments proposed by the authors, a parameter estimation based approach to the problem of state estimation and the use of the Dynamic Regressor Extension and Mixing (DREM) technique to estimate these parameters. The use of DREM allows us to overcome the problem of lack of persistent excitation that stymies the application of standard parameter estimation designs. Simulation results illustrate the latter fact and show the improved performance of the proposed observer with respect to a locally stable gradient-descent-based observer
Abstract:
In this article, we propose a PI passivity-based controller, applicable to a large class of switched power converters, that ensures global state regulation to a desired equilibrium point. A solution to this problem requires full state-feedback, which makes it practically unfeasible. To overcome this limitation we construct a state observer that is implementable with measurements that are available in practical applications. The observer reconstructs the state in finite-time, ensuring global convergence of the PI. An adaptive version of the observer, where some parameters of the converter are estimated, is also proposed. The excitation requirement for the observer is very weak and is satisfied in normal operation of the converters. Realistic simulation results illustrate the excellent performance and robustness vis-à-vis noise and parameter uncertainty of the proposed output-feedback PI
Abstract:
In this paper we address the problem of state observation of linear time-varying (LTV) systems with delayed measurements, which has attracted the attention of many researchers (see Sanz et al. (2019) and references therein). We show that the parameter estimation-based observer (PEBO) design proposed in Ortega, Bobtsov, Nikolaev, Schiffer, and Dochain (2021) and Ortega, Bobtsov, Pyrkin, and Aranovskiy (2015) provides a very simple solution to the problem with reduced prior knowledge. Moreover, when PEBO is combined with the dynamic regressor extension and mixing (DREM) estimation technique (Aranovskiy, Bobtsov, Ortega, & Pyrkin, 2017; Ortega, Gerasimov, Barabanov, & Nikiforov, 2019), the estimated state converges in fixed time with extremely weak excitation assumptions
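As a loose illustration of the DREM idea (not the papers' construction; the delay-based extension and all names below are my own choices), a scalar regression can be "extended" with a delayed copy and "mixed" by the adjugate so that each unknown parameter obeys its own scalar equation:

```python
import math

# Illustrative DREM sketch: scalar regression y(t) = phi(t)^T theta, theta in R^2.
# Extension: stack the regression with a delayed copy to get a 2x2 regressor Phi.
# Mixing: multiply by adj(Phi), so det(Phi) * theta_i = Y_i componentwise.

theta = [2.0, -1.0]                      # unknown parameters (ground truth)
phi = lambda t: [math.sin(t), math.cos(t)]
y = lambda t: phi(t)[0] * theta[0] + phi(t)[1] * theta[1]

t, d = 1.3, 0.7                          # sample time and extension delay
Phi = [phi(t), phi(t - d)]               # extended 2x2 regressor
Y = [y(t), y(t - d)]                     # extended measurement

det = Phi[0][0] * Phi[1][1] - Phi[0][1] * Phi[1][0]   # = sin(d), nonzero here
# adj(Phi) @ Y equals det * theta componentwise (the "mixing" step)
Y1 = Phi[1][1] * Y[0] - Phi[0][1] * Y[1]
Y2 = -Phi[1][0] * Y[0] + Phi[0][0] * Y[1]

theta_hat = [Y1 / det, Y2 / det]
print(theta_hat)   # recovers [2.0, -1.0] up to floating point
```

In the papers the extension uses stable LTI filters and the resulting scalar regressions feed gradient estimators; the adjugate trick shown here is what decouples the parameters.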
Abstract:
How many dimensions adequately characterize voting on U.S. trade policy? How are these dimensions to be interpreted? This paper seeks those answers in the context of voting on the landmark 1988 Omnibus Trade and Competitiveness Act. The paper takes steps beyond the existing literature. First, using a factor-analytic approach, the dimension issue is examined to determine whether subsets of roll call votes on trade policy are correlated. A factor-analytic result allows the use of a limited number of votes for this purpose. Second, a structural model with latent variables is used to find what economic and political factors comprise these dimensions. The study yields two main findings. More than one dimension determines voting in the Senate, with the main dimension driven by economic interest, not ideology. Although two dimensions are required to fully account for House voting, one dimension dominates. That dimension is driven primarily by party. Based on reported evidence, and a growing consensus in the congressional studies literature, this finding is attributed to interest-based leadership that evolves in order to solve collective action problems faced by individual legislators
Abstract:
This paper presents the design of a Proportional-Integral Passivity-based Controller (PI-PBC) for a current source inverter feeding a resistive load. Thanks to the definition of a new passive output, the closed-loop system is shown to be globally asymptotically stable. This result solves the internal stability problem reported for these power converters. To robustify the control algorithm, the paper also includes the design of a parameter estimation scheme for the parasitic resistances and the load conductance. Numerical simulations are carried out to validate the control algorithm. The simulation stage compares the behaviour using the averaged model of the power converter and a more realistic switching model, including the three-phase implementation
Abstract:
At the World Trade Organization (WTO), nothing is agreed until everything is agreed and until everyone agrees at the negotiating tables, and that 'magic' moment has been difficult to arrive at. Some WTO Members have argued that if all Members cannot move ahead together with the acceptance of new rules, the Members who are able and willing to move ahead should be provided with the required space to do so. Some Members have indeed chosen to push ahead as they have recently sought progress in negotiations through the Joint Statement Initiatives (JSIs). The JSI proponents claim that JSIs can contribute to building a more responsive and relevant WTO - which will be critical to restoring global trade and economic growth in the wake of the COVID-19 crisis. Others have staunchly opposed such plurilateral attempts at trade liberalization on various grounds, often labelling them as attempts to circumvent the WTO's core tenets of multilateralism. The article contributes to this debate, as the authors assess different routes through which JSIs can be added to the WTO acquis and the WTO-compatibility of each of these routes. It then assesses the possible detrimental impact that JSIs can have on the essence and fabric of the multilateral trading system (MTS)
Abstract:
The World Trade Organization (WTO) has three main functions: (i) it provides a negotiation forum where Members can negotiate new agreements and understandings, (ii) it provides a judicial forum where trade disputes between countries can be settled, and (iii) it acts as an executive forum for the administration and application of the WTO agreements, including capacity-building and training in this respect. Currently, it is only performing its executive function, as the other two functions remain stalled. The authors in this article analyse two challenges that have contributed to paralysing the WTO's legislative and judicial functions. With this assessment, the authors suggest that the 'real elephant in the room', i.e., the root cause behind these challenges, is the avoidable 'consensus-based decision-making'
Abstract:
For a multilateral system to be sustainable, it is important to have several escape clauses which can allow countries to protect their national security concerns. However, when these escape windows are too wide or ambiguous, defining their ambit and scope becomes challenging yet crucial to ensure that they are not open to misuse. The recent Panel Ruling in Russia-Measures Concerning Traffic in Transit is the very first attempt by the WTO to clarify the scope and ambit of National Security Exception. In this paper, we argue that the Panel has employed a combination of an objective and a subjective approach to interpret this exception. This hybrid approach to interpret GATT Article XXI (b) provides a systemic balance between the sovereign rights of the members to invoke the security exception and their right to free and open trade. But has this Ruling opened Pandora's box? In this paper, we address this issue by providing an in-depth analysis of the Panel's decision
Abstract:
Network data arises through the observation of relational information between a collection of entities, for example, friendships (relations) amongst a sample of people (entities). Traditionally, statistical models of such data have been developed to analyse a single network, that is, a single collection of entities and relations. More recently, attention has shifted to analysing samples of networks. A driving force has been the analysis of connectome data, arising in neuroscience applications, where a single network is observed for each patient in a study. These models typically assume, within each network, the entities are the units of observation, that is, more data equates to including more entities. However, an alternative paradigm considers relations-such as edges or paths-as the observational units, exemplified by email exchanges or user navigations across a website. This interaction network framework has generally been applied to single networks, without extending to the case where multiple such networks are observed, for instance, analysing navigation patterns from many users. Motivated by this gap, we propose a new Bayesian modelling framework to analyse such data. Our approach is based on practitioner-specified distance metrics between networks, allowing us to parameterise models analogous to Gaussian distributions in network space, using location and scale parameters. We address the key challenge of defining meaningful distances between interaction networks, proposing two new metrics with theoretical guarantees and practical computation strategies
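The paper's two metrics come with theoretical guarantees; as a loose illustration only (function and variable names below are my own, not the paper's), a naive symmetric-difference distance between interaction networks represented as edge multisets might look like:

```python
from collections import Counter

def edge_distance(net_a, net_b):
    """Count edges (with multiplicity) present in one network but not the other."""
    ca, cb = Counter(net_a), Counter(net_b)
    return sum(((ca - cb) + (cb - ca)).values())

# Two users' navigation networks over the same website pages
nav1 = [("home", "search"), ("search", "item"), ("item", "cart")]
nav2 = [("home", "search"), ("search", "item"), ("item", "home")]
print(edge_distance(nav1, nav2))  # 2: each network has one edge the other lacks
```

With any such distance in hand, a location parameter is a network minimizing the summed distances to the sample, and a scale parameter governs how fast probability decays with distance, mimicking a Gaussian in network space.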
Abstract:
This essay is concerned with computation as a tool for the analysis of mathematical models in economics. It is our contention that the use of high-performance computers is likely to play the same substantial role in providing understanding of economic models that it does with regard to models in the physical and biological sciences. The main thrust of our commentary is that numerical simulations of mathematical models are in certain respects like experiments performed in a laboratory, and that this view imposes conditions on the way they are carried out and reported
Abstract:
Our research aims to help children with autism improve social interactions through discrete messages received on a waist-worn smart band. In this paper, we describe the design and development of an interactive system on a Microsoft Band 2 and present the results of an evaluation with three students, which give us positive evidence that this form of support can increase the quantity and quality of their social interactions
Abstract:
The Historia Roderici Campidocti is a medieval Hispano-Latin work that narrates the most significant events in the brilliant military career of Rodrigo Díaz de Vivar, El Cid Campeador. In this note, we present some comments on the appearance and function of birds in that work, as well as in other texts on the Cid, in order to establish possible relationships and influences among them based on a common topos
Abstract:
The Historia Roderici Campidocti is a medieval Hispano-Latin work relating the most significant events in the military career of Rodrigo Díaz de Vivar, El Cid Campeador. In this article, we comment on the appearance and role of birds in this work and in other Cidian literature in order to establish their potential relationships and influences
Abstract:
In this article, we present some new results for the design of PID passivity-based controllers (PBCs) for the regulation of port-Hamiltonian (pH) systems. The main contributions of this article are: (i) new algebraic conditions for the explicit solution of the partial differential equation required in this design; (ii) revealing the deleterious impact of the dissipation obstacle that limits the application of the standard PID-PBC to systems without pervasive dissipation; (iii) the proposal of a new PID-PBC which is generated by two passive outputs, one with relative degree zero and the other with relative degree one. The first output ensures that the PID-PBC is not hindered by the dissipation obstacle, while the relative degree of the second passive output allows the inclusion of a derivative term. Making the procedure more constructive and removing the requirement on the dissipation significantly extends the realm of application of PID-PBC. Moreover, allowing the possibility of adding a derivative term to the control, enhances its transient performance
Abstract:
Equilibrium stabilization of nonlinear systems via energy shaping is a well-established, robust, passivity-based controller design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations, which is usually a difficult task. In this paper a new, fully constructive, procedure to shape the energy for a class of port-Hamiltonian systems that obviates the solution of partial differential equations is proposed. Proceeding from the well-known passive, power shaping output we propose a nonlinear static state-feedback that preserves passivity of this output but with a new storage function. A suitable selection of a controller gain makes this function positive definite, hence it is a suitable Lyapunov function for the closed loop. The resulting controller may be interpreted as a classical PI; connections with other standard passivity-based controllers are also identified
Abstract:
Equilibrium stabilisation of nonlinear systems via energy shaping is a well-established, robust, passivity-based controller design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations. In this paper a new, fully constructive, procedure to shape the energy for a class of port-Hamiltonian systems that obviates the solution of partial differential equations is proposed. Proceeding from the well-known passive, power shaping output we propose a nonlinear static state-feedback that preserves passivity of this output but with a new storage function. This function contains some tuning gains used to ensure it is positive definite, hence a suitable Lyapunov function for the closed-loop. Connections with other standard passivity-based controllers are indicated and it is shown that the new controller design is applicable to two benchmark examples
Abstract:
In this Action-Process-Object-Schema (APOS) study, we aim to study the extent to which results from our previous research on student understanding of two-variable functions can be replicated in a different institutional context. We conducted the experience at a university in another country and with a different instructor than in previous studies. The experience consisted of comparing two sections of the same course; one taught through lectures and the other using activities designed with APOS theory and the ACE didactical strategy (Activities, Class discussion, Exercises). We show the results of a comparison of students' performance in both groups and give evidence of the generalizability of our previously obtained research results and the possible replication of didactic aspects across institutions. We found that using the APOS theory didactical approach favors a deeper understanding of two-variable functions. We also found that factors other than the activity sets and teaching strategy affected the replication
Abstract:
In this paper we provide a necessary and sufficient condition for the solvability of inhomogeneous Cauchy-Riemann type systems where the datum consists of continuous C-valued functions, and we describe its general solution by embedding the system in an appropriate quaternionic setting
Abstract:
This paper aims to study several boundary value properties, to derive a Poincaré–Bertrand formula and to solve some Dirichlet-type problems for the 2D quaternionic metaharmonic layer potentials
Abstract:
The Cimmino system offers a natural and elegant generalization to four-dimensional case of the Cauchy–Riemann system of first order complex partial differential equations. Recently, it has been proved that many facts from the holomorphic function theory have their extensions onto the Cimmino system theory. In the present work a Poincaré–Bertrand formula related to the Cauchy–Cimmino singular integrals over piecewise Lyapunov surfaces in R4 is derived with recourse to arguments involving quaternionic analysis. Furthermore, this paper obtains some analogues of the Hilbert formulas on the unit 3-sphere and on the 3-dimensional space for the theory of Cimmino system
Abstract:
Starting from Sinclair's 1976 work [6] on automatic continuity of linear operators on Banach spaces, we prove that sequences of intertwining continuous linear maps are eventually constant with respect to the separating space of a fixed linear map. Our proof uses a gliding hump argument. We also consider aspects of continuity of linear functions between locally convex spaces and prove that such a linear function T from the locally convex space X to the locally convex space Y is continuous whenever the separating space G(T) is the zero vector in Y and for which X and Y satisfy conditions for a closed graph theorem
Abstract:
We prove an extension of the Pareto optimization criterion to locally complete locally convex vector spaces to guarantee the existence of fixed points of set-valued maps
Abstract:
In this article we discuss the relationship between three types of locally convex spaces: docile spaces, Mackey first countable spaces, and sequentially Mackey first countable spaces. More precisely, we show that docile spaces are sequentially Mackey first countable. We also show the existence of sequentially Mackey first countable spaces that are not Mackey first countable, and we characterize Mackey first countable spaces in terms of normability of certain inductive limits
Abstract:
Bisectors of line segments are quite simple geometrical objects. Despite their simplicity, they have many surprising and useful properties. As metric objects, the shape of bisectors depends upon the metric considered. This article discusses geometric properties of bisectors of line segments in the plane, when the bisectors are taken with respect to the usual p-norms. Although the shape of bisectors changes as their defining p-norm varies, it is shown that the bisectors share exactly three points (or infinitely many points in exceptional cases determined by the orientation of the base line segment)
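One of the shared points is easy to exhibit concretely: the midpoint of the segment is equidistant from both endpoints in every p-norm, since m - a = b - m. A quick numeric check (illustrative only; the helper name is my own):

```python
# Verify that the midpoint of a segment lies on the p-norm bisector
# for several values of p, including the limiting max-norm case.
def p_norm(v, p):
    if p == float("inf"):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

a, b = (0.0, 0.0), (3.0, 1.0)
m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
for p in (1, 1.5, 2, 3, float("inf")):
    da = p_norm((m[0] - a[0], m[1] - a[1]), p)
    db = p_norm((b[0] - m[0], b[1] - m[1]), p)
    assert abs(da - db) < 1e-12   # equidistant in every p-norm
```

The article's stronger claim, that generic bisectors share exactly three points, concerns the two additional intersection points away from the segment, which do depend on the segment's orientation.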
Abstract:
Ekeland's variational principle and the existence of critical points of dynamical systems, also known as multiobjective optimization, have been proved in the setting of locally complete spaces. In this article we prove that these two properties can be deduced one from the other under certain convexity conditions
Abstract:
In this paper we prove Ekeland's variational principle in the setting of locally complete spaces for functions that are lower semicontinuous from above and bounded below. We use this theorem to prove Caristi's fixed point theorem in the same setting, also for lower semicontinuous functions
Abstract:
We define a generalization of Mackey first countability and prove that it is equivalent to being docile. A consequence of the main result is to give a partial affirmative answer to an old question of Mackey regarding arbitrary quotients of Mackey first countable spaces. Some applications of the main result to spaces such as inductive limits are also given
Abstract:
For a partition P of a set C we prove that if P maximizes the generalized utilitarian social welfare function, then P is a Pareto optimal partition. We also give a condition under which the maximizing partition is unique
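The first claim can be checked by brute force on a toy instance; the underlying reasoning is that any Pareto improvement would strictly increase the utility sum, contradicting maximality. A sketch with made-up utilities (the utility function and all names here are hypothetical, not from the paper):

```python
from itertools import combinations

def partitions(s):
    """Yield all partitions of the list s as lists of frozensets."""
    s = list(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for block_rest in combinations(rest, k):
            block = frozenset((first,) + block_rest)
            remaining = [x for x in rest if x not in block]
            for p in partitions(remaining):
                yield [block] + p

agents = [0, 1, 2]

def utility(i, part):
    # hypothetical preference: agent i likes sharing a block with (i+1) % 3
    my_block = next(b for b in part if i in b)
    return 2.0 if (i + 1) % 3 in my_block else 1.0 / len(my_block)

parts = [tuple(p) for p in partitions(agents)]
best = max(parts, key=lambda p: sum(utility(i, p) for i in agents))

def dominates(p, q):   # p Pareto-dominates q
    return all(utility(i, p) >= utility(i, q) for i in agents) and \
           any(utility(i, p) > utility(i, q) for i in agents)

assert not any(dominates(p, best) for p in parts)   # best is Pareto optimal
```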
Abstract:
More than half of the students in the Latin American and Caribbean region are below PISA level 1, which means that the majority of the students in our region cannot identify information and carry out routine procedures according to direct instructions in explicit situations. There have been some good experiences in each country to reverse the depicted situation, but it is not enough, and this is not happening in all countries. I will talk about these experiences. In all of them, professional mathematicians need to help teachers gain the necessary knowledge and become more effective instructors who can raise the standard of every student
Abstract:
In this paper we prove an extension of Ekeland’s variational principle in the setting of locally complete spaces. We also present an equilibrium version of the Ekeland-type variational principle, a Caristi-Kirk type fixed point theorem for multivalued maps, and a Takahashi minimization theorem; we then prove that they are equivalent
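For orientation, a standard statement of the classical principle in complete metric spaces, which the locally convex results above and below extend, reads:

```latex
% Ekeland's variational principle (classical metric-space form)
\textbf{Theorem (Ekeland).} Let $(X,d)$ be a complete metric space and let
$f\colon X \to \mathbb{R}\cup\{+\infty\}$ be proper, lower semicontinuous and
bounded below. Given $\varepsilon > 0$, let $x_0 \in X$ satisfy
$f(x_0) \le \inf_X f + \varepsilon$. Then for every $\lambda > 0$ there exists
$x_\lambda \in X$ such that
\begin{align}
  f(x_\lambda) &\le f(x_0), \qquad d(x_0, x_\lambda) \le \lambda,\\
  f(z) &> f(x_\lambda) - \tfrac{\varepsilon}{\lambda}\, d(z, x_\lambda)
        \quad \text{for all } z \ne x_\lambda.
\end{align}
```

The locally convex extensions replace completeness of the metric with local completeness and, in some of the papers listed here, weaken lower semicontinuity or replace the distance perturbation with subadditive, strictly increasing continuous functions.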
Abstract:
In this paper we study the Pareto optimization problem in the framework of locally convex spaces with the restricted assumption that only some related sets are locally complete
Abstract:
We prove an extension of Ekeland’s variational principle to locally complete spaces which uses subadditive, strictly increasing continuous functions as perturbations
Abstract:
We prove that for the inductive limit of sequentially complete spaces, regularity or local completeness implies the Banach Disk Closure Property (BDCP) (an inductive limit enjoys the BDCP if all Banach disks in the steps of the inductive limit are such that their closures, with respect to the inductive limit topology, are Banach disks as well). In particular, we obtain that for an inductive limit of sequentially complete spaces, regularity is equivalent to the BDCP plus an “almost regularity” condition
Abstract:
We study the heredity of local completeness and the strict Mackey convergence property from the locally convex space E to the space of absolutely p-summable sequences on E, lp(E) for 1 ≤ p < ∞
Abstract:
Any inductive limit of bornivorously webbed spaces is sequentially complete iff it is regular
Abstract:
We define the lp,q-summability property and study the relations between the lp,q-summability property, Banach-Mackey spaces and locally complete spaces. We prove that, for c0-quasibarrelled spaces, being Banach-Mackey and being locally complete are equivalent. The last section is devoted to the study of the CS-closed sets introduced by Jameson and Kakol
Abstract:
In this paper we consider the Sobolev-Slobodeckij spaces W^(m,p)(R,E), where E is a strict (LF)-space, m ∈ (0, ∞) \ N and p ∈ [1, ∞). We prove that W^(m,p)(R,E) has the approximation property provided E has it; furthermore, if E is a Banach space with the strict approximation property, then W^(m,p)(R,E) has this property
Abstract:
This is a study of the relationship between the concepts of Mackey, ultrabornological, bornological, barrelled, and infrabarrelled spaces and the concept of fast completeness. An example of a fast complete but not sequentially complete space is presented.
Abstract:
The notion of "praxeology" from the anthropological theory of the didactic (ATD) can be used as a framework to approach what has recently been called the networking of theories in mathematics education. Theories are interpreted as research praxeologies, and different modalities of "dialogues" between research praxeologies are proposed, based on alternatively considering the main features and proposals of one theory from the perspective of the other. To illustrate this networking methodology, we initiate a dialogue between APOS (action-process-object-schema) and the ATD itself. It starts from the theoretical component of both research praxeologies followed by the technological and technical ones. Both dialogue modalities and the resulting insights are illustrated, and the elements of APOS and the ATD that the dialogue can promote and develop are underlined. The results found indicate that a complete dialogue taking into account all components of research praxeologies appears as an unavoidable step in the networking of research praxeologies
Abstract:
We study transaction costs for making deposits within the privatized pension system in Mexico. We analyze an expansion of access channels for additional (voluntary) contributions at 7-Eleven stores, followed by a media campaign providing information on this policy and persuasive messages to save. We estimate a differential 6-9 percent increase in the volume of transactions post-policy in municipalities with 7-Eleven relative to those without. However, due to smaller deposits compared to pre-policy sizes, we find modest effects on the flow of savings. Contribution size was not just smaller for marginal savers, but also decreased significantly for some inframarginal savers
Abstract:
This paper proves the large deviation principle for a class of non-degenerate small noise diffusions with discontinuous drift and with state-dependent diffusion matrix. The proof is based on a variational representation for functionals of strong solutions of stochastic differential equations and on weak convergence methods.
Abstract:
We consider the construction of Markov chain approximations for an important class of deterministic control problems. The emphasis is on the construction of schemes that can be easily implemented and which possess a number of highly desirable qualitative properties. The class of problems covered is that for which the control is affine in the dynamics and with quadratic running cost. This class covers a number of interesting areas of application, including problems that arise in large deviations, risk-sensitive and robust control, robust filtering, and certain problems in computer vision. Examples are given as well as a proof of convergence
Abstract:
Morphometric datasets only convey useful information about variation when measurement landmarks and relevant anatomical axes are clearly defined. We propose that anatomical axes of 3D digital models of bones can be standardized prior to measurement using an algorithm that automatically finds a universal geometric alignment among sampled bones. As a case study, we use teeth of "prosimian" primates. In this sample, equivalent occlusal planes are determined automatically using the R-package auto3dgm. The area of projection into the occlusal plane for each tooth is the measurement of interest. This area is used in computation of a shape metric called relief index (RFI), the natural log of the square root of crown area divided by the square root of occlusal plane projection area. We compare mean and variance parameters of area and RFI values computed from these automatically orientated tooth models with values computed from manually orientated tooth models. According to our results, the manual and automated approaches yield extremely similar mean and variance parameters. The only differences that plausibly modify interpretations of biological meaning slightly favor the automated treatment because a greater proportion of differences among subsamples in the automated treatment are correlated with dietary differences. We conclude that—at least for dental topographic metrics—automated alignment recovers a variance pattern that has meaning similar to previously published datasets based on manual data collection. Therefore, future applications of dental topography can take advantage of automatic alignment to increase objectivity and repeatability
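The relief index formula stated in the abstract is straightforward to compute; the sketch below uses invented area values purely for illustration (they are not measurements from the study):

```python
import math

# Relief index as defined in the abstract:
# RFI = ln( sqrt(crown area) / sqrt(occlusal-plane projection area) ),
# which simplifies to 0.5 * ln(crown_area / projection_area).
def relief_index(crown_area, projection_area):
    return math.log(math.sqrt(crown_area) / math.sqrt(projection_area))

rfi = relief_index(crown_area=12.5, projection_area=8.0)  # made-up areas
print(round(rfi, 4))  # 0.5 * ln(12.5 / 8.0) ≈ 0.2231
```

A flat crown (projection area equal to crown area) gives RFI = 0, and taller, more sculpted crowns give larger values, which is what links the metric to diet.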
Abstract:
In "Why Democracy Protests Do Not Diffuse," we examine whether or not countries are significantly more likely to experience democracy protests when one or more of their neighbors recently experienced a similar protest. Our goal in so doing was not to attack the existing literature or to present sensational results, but to evaluate the extent to which the existing literature can explain the onset of democracy protests more generally. In addition to numerous studies attributing to diffusion the proliferation of democracy protests in four prominent waves of contention in Europe (1848, 1989, and early 2000s) and in the Middle East and North Africa (2011), there are multiple academic studies, as well as countless articles in the popular press, claiming that democracy protests have diffused outside these well-known regions and periods of contention (e.g., Bratton and van de Walle 1992; Weyland 2009; della Porta 2017). There are also a handful of cross-national statistical analyses that hypothesize that anti-regime contention, which includes but is not limited to democracy protests, diffuses globally (Braithwaite, Braithwaite, and Kucik 2015; Gleditsch and Rivera 2017; Escribà-Folch, Meseguer, and Wright 2018). Herein, we discuss what we can and cannot conclude from our analysis about the diffusion of democracy protests and join our fellow forum participants in identifying potential areas for future research. Far from closing this debate, we hope our article will stimulate further conversations and analyses about the theoretical and empirical bases of contention, diffusion, and democratization
Abstract:
One of the primary international factors proposed to explain the geographic and temporal clustering of democracy is the diffusion of democracy protests. Democracy protests are thought to diffuse across countries, primarily, through a demonstration effect, whereby protests in one country cause protests in another based on the positive information that they convey about the likelihood of successful protests elsewhere and, secondarily, through the actions of transnational activists. In contrast to this view, we argue that, in general, democracy protests are not likely to diffuse across countries because the motivation for and the outcome of democracy protests result from domestic processes that are unaffected or undermined by the occurrence of democracy protests in other countries. Our statistical analysis supports this argument. Using daily data on the onset of democracy protests around the world between 1989 and 2011, we find that in this period, democracy protests were not significantly more likely to occur in countries when democracy protests had occurred in neighboring countries, either in general or in ways consistent with the expectations of diffusion arguments
Abstract:
In this chapter, we briefly discuss key dynamical features of a generalised Schnakenberg model. This model sheds light on the morphogenesis of the plant root hair initiation process. Our discussion focuses on the plant Arabidopsis thaliana, which is a prime model plant for researchers and is experimentally well characterized. Here, relationships between physical attributes and biochemical interactions that occur at a sub-cellular level are revealed. The model consists of a two-component non-homogeneous reaction-diffusion system, which takes into account an on-and-off switching process of small G-protein family members called Rho-of-Plants (ROPs). This interaction is catalysed by the plant hormone auxin, which is known to play a crucial role in many morphogenetic processes in plants. Upon applying semi-strong theory and performing numerical bifurcation analysis, together with time-step simulations, we present results from a thorough analysis of the dynamics of spatially localised structures in 1D and 2D spatial domains. These dynamical features are found to give rise to a wide variety of patterns. Key features of the full analysis are discussed
Abstract:
In this manuscript we discuss one of the contributions that dynamical systems theory offers to the understanding of the growth of a population of individuals with limited resources. In this context, we briefly explore some of the ideas that make it possible to understand two mechanisms of vital importance in ecology: competition and interaction with the resources of the environment. Our approach consists of presenting the key results published in [1] by R. Dilao and T. Domingos for the dynamics of a generalization of the Leslie model. We also present the results of some numerical experiments in which a delay in the evolution variable is introduced into the dynamics of resource growth, together with a simple competition dynamics among the members of the population
Abstract:
This article describes the essential features that determine chaotic behaviour in dynamical systems. To this end, the analysis of the Logistic Equation is reproduced. The concepts of fractal dimension and self-similarity are also illustrated by means of the Mandelbrot set. Sensitivity to initial conditions is exemplified by the Lorenz system. Finally, the necessary ingredients of the Ruelle–Takens–Newhouse route to chaos are presented in the Barrio–Varea–Aragón–Maini reaction-diffusion system
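The sensitivity to initial conditions discussed in this abstract is easy to demonstrate with the logistic map itself; a minimal sketch (the parameter choices below are mine, not from the article):

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x)
# at r = 4 (the fully chaotic regime): two orbits starting 1e-9 apart
# become macroscopically different within a few dozen iterations.
def logistic_orbit(x0, r=4.0, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)
max_gap = max(abs(x - y) for x, y in zip(a, b))
assert max_gap > 0.1   # nearby orbits separate: a hallmark of chaos
```

The separation grows roughly like 2^n (the Lyapunov exponent at r = 4 is ln 2), so an initial gap of 1e-9 saturates to order one after about 30 iterations.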
Abstract:
This chapter gives an introduction to the conditions that spontaneously produce localized structures in reaction-diffusion systems whose kinetic part contains quadratic and cubic nonlinearities, in the parameter regions where this phenomenon occurs. A brief sample of examples of patterns of a localized nature in physical and biological systems is presented. We then explore the conditions that give rise to the Turing bifurcation and the elementary properties of the mechanism known as 'homoclinic snaking'. The main objective of this chapter is to present the ingredients that capture the essence of both machineries and thus induce the appearance of localized structures in systems that generally produce extended patterns. A generalized Schnakenberg-type system with source and loss terms in one and two spatial dimensions is used as an example
Abstract:
A generalized Schnakenberg reaction-diffusion system with source and loss terms and a spatially dependent coefficient of the nonlinear term is studied both numerically and analytically in two spatial dimensions. The system has been proposed as a model of hair initiation in the epidermal cells of plant roots. Specifically, the model captures the kinetics of a small G-protein ROP, which can occur in active and inactive forms, and whose activation is believed to be mediated by a gradient of the plant hormone auxin. Here the model is made more realistic with the inclusion of a transverse coordinate. Localized stripe-like solutions of active ROP occur for high enough total auxin concentration and lie on a complex bifurcation diagram of single- and multipulse solutions. Transverse stability computations, confirmed by numerical simulation, show that, apart from a boundary stripe, these one-dimensional (1D) solutions typically undergo a transverse instability into spots. The spots so formed typically drift and undergo secondary instabilities such as spot replication. A novel two-dimensional (2D) numerical continuation analysis is performed that shows that the various stable hybrid spot-like states can coexist. The parameter values studied lead to a natural, singularly perturbed, so-called semistrong interaction regime. This scaling enables an analytical explanation of the initial instability by describing the dispersion relation of a certain nonlocal eigenvalue problem. The analytical results are found to agree favorably with the numerics. Possible biological implications of the results are discussed
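As a rough illustration of the model class studied above, the sketch below takes one explicit finite-difference step of the classical 1D Schnakenberg kinetics; the kinetics, parameter values, and discretization are generic textbook assumptions, not the generalized system (with source, loss, and spatially dependent terms) or the values of the paper:

```python
def schnakenberg_step(u, v, dt, dx, Du, Dv, a, b):
    """One explicit Euler step of the classical 1D Schnakenberg system
        u_t = Du*u_xx + a - u + u^2 v,   v_t = Dv*v_xx + b - u^2 v
    with zero-flux (Neumann) boundary conditions."""
    n = len(u)
    def lap(w, i):
        left = w[i - 1] if i > 0 else w[1]           # reflective boundary
        right = w[i + 1] if i < n - 1 else w[n - 2]
        return (left - 2.0 * w[i] + right) / dx**2
    un = [u[i] + dt * (Du * lap(u, i) + a - u[i] + u[i]**2 * v[i]) for i in range(n)]
    vn = [v[i] + dt * (Dv * lap(v, i) + b - u[i]**2 * v[i]) for i in range(n)]
    return un, vn

# The spatially uniform state u* = a + b, v* = b/(a + b)^2 is a fixed point
# of the kinetics, so a uniform field should not move under one step.
a_par, b_par = 0.1, 0.9
u0 = [a_par + b_par] * 50
v0 = [b_par / (a_par + b_par) ** 2] * 50
u1, v1 = schnakenberg_step(u0, v0, dt=1e-3, dx=0.1, Du=1.0, Dv=10.0, a=a_par, b=b_par)
```

Pattern-forming (Turing) instabilities of the uniform state appear when the diffusion ratio Dv/Du is large enough; studying how localized stripes and spots bifurcate from such states is the subject of the paper.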
Abstract:
A mathematical analysis is undertaken of a Schnakenberg reaction-diffusion system in one dimension with a spatial gradient governing the active reaction. This system has previously been proposed as a model of the initiation of hairs from the root epidermis of Arabidopsis, a key cellular-level morphogenesis problem. This process involves the dynamics of the small G-proteins, Rhos of plants, which bind to form a single localized patch on the cell membrane, prompting cell wall softening and subsequent hair growth. A numerical bifurcation analysis is presented as two key parameters, involving the cell length and the overall concentration of the auxin catalyst, are varied. The results show hysteretic transitions from a boundary patch to a single interior patch and to multiple patches whose locations are carefully controlled by the auxin gradient. The results are confirmed by an asymptotic analysis using semistrong interaction theory, leading to closed form expressions for the patch locations and intensities. A close agreement between the numerical bifurcation results and the asymptotic theory is found for biologically realistic parameter values. Insight into the initiation of transition mechanisms is obtained through a linearized stability analysis based on a nonlocal eigenvalue problem. The results provide further explanation of the recent agreement found between the model and biological data for both wild-type and mutant hair cells
Abstract:
Subcritical Turing bifurcations of reaction-diffusion systems in large domains lead to spontaneous onset of well-developed localized patterns via the homoclinic snaking mechanism. This phenomenon is shown to occur naturally when balancing source and loss effects are included in a typical reaction-diffusion system, leading to a super- to subcritical transition. Implications are discussed for a range of physical problems, arguing that subcriticality leads to naturally robust phase transitions to localized patterns
Abstract:
Transmission switching has proven to be a highly useful post-contingency recovery technique by allowing power system operators increased levels of control through leveraging the topology of the power system. However, transmission switching remains only implemented in limited capacity because of concerns over computational complexity, uncertainty of performance in AC systems, and scalability to real-world, large-scale systems. We propose a heuristic which uses a sophisticated guided undersampling procedure combined with logistic regression to accurately identify transmission switching actions to reduce post-contingency AC power flow violations. The proposed heuristic was tested on real-world, large-scale AC power system data and consistently identified optimal or near optimal transmission switching actions. Because the proposed heuristic is computationally inexpensive, addresses an AC system, and is validated on real-world large-scale data, it directly addresses the aforementioned issues regarding transmission switching implementation
Abstract:
This paper assesses the impact of the North American Free Trade Agreement on Mexican manufacturing plants' output prices and markups. We distinguish between Mexican goods that are exported and those sold domestically, and decompose their prices separately into markups and marginal costs. We then analyze how these components were affected by the reductions in Mexican output tariffs, intermediate input tariffs, and U.S. tariffs on Mexican exports. We find that domestically sold products saw a decline in prices as Mexican plants faced more competition and gained access to cheaper inputs. Prices of exported goods fell only slightly as plants increased their markups in response to a favorable competitive environment due to declines in U.S. tariffs
Abstract:
We use new administrative data from Ecuador to study the welfare effects of the misallocation of procurement contracts caused by political connections. We show that firms that form links with the bureaucracy through their shareholders experience an increased probability of being awarded a government contract. We develop a novel sufficient statistic - the average gap in revenue productivity and capital share of revenue - to measure the efficiency effects, in terms of input utilization, of political connections. Our framework allows for heterogeneity in quality, productivity, and non-constant marginal costs. We estimate political connections create welfare losses ranging from 2 to 6% of the procurement budget
Abstract:
We evaluate whether the market reacts rationally to profit warnings by testing for subsequent abnormal returns. Warnings fall into two classes: those that include a new earnings forecast, and those that offer only the guidance that earnings will be below current expectations. We find significant negative abnormal returns in the first three months following both types of warning. There is also evidence that underreaction is more pronounced when the disclosure is less precise. Abnormal returns are significantly more negative following disclosures that offer only qualitative guidance than when a new earnings forecast is included
Abstract:
Nowadays, the tendency to replace conventional fossil-based plastics is increasing considerably; there is a growing trend towards alternatives that involve the development of plastic materials derived from renewable sources, which are compostable and biodegradable. Even so, only 1.5 % of total plastic production belongs to the still small bioplastics market, although these materials with a partial or full composition from biomass are rapidly expanding. A very interesting field of investigation is currently being developed in which the disposal and processing of the final products are evaluated in terms of reducing environmental harm. This review presents a compilation of polyethylene (PE) types, their uses, and current problems in PE waste management and recycling. In particular, this review focuses on the capabilities to synthesize bio-based PE from natural and renewable sources as a replacement for the raw material derived from petroleum. Recent studies on the degradation of different types of PE, with weight losses ranging from 1 to 47 %, are also summarized, together with the techniques used and the main changes observed after degradation. Finally, perspectives are presented on renewable and non-renewable polymers according to their non-degradable, biodegradable, or compostable behavior, including recent composting studies on PE. The review also contributes to the 3R approach to responsible PE waste management and to the advancement towards an environmentally friendly PE
Abstract:
In this work we perform a first study of basic invariant sets of the spatial Hill's four-body problem, using both analytical and numerical approaches. This system depends on a mass parameter m in such a way that the classical Hill's problem is recovered when m=0. Regarding the numerical work, we perform a numerical continuation in the Jacobi constant C, for several values of the mass parameter m, by applying a classical predictor-corrector method, together with a high-order Taylor method with variable step and order and automatic differentiation techniques, to specific boundary value problems related to the reversing symmetries of the system. The solutions of these boundary value problems define initial conditions of symmetric periodic orbits. Some of the results were obtained starting from periodic orbits of Hill's three-body problem. The numerical explorations reveal that a second distant disturbing body has a relevant effect on the stability of the orbits and the bifurcations among these families. We have also found some new families of periodic orbits that do not exist in the classical Hill's three-body problem; these families have some desirable properties from a practical point of view
Abstract:
The circular restricted four-body problem studies the dynamics of a massless particle under the gravitational force produced by three point masses that follow circular orbits with constant angular velocity; the configuration of these circular orbits forms an equilateral triangle for all time. This problem can be considered as an extension of the celebrated restricted three-body problem. In this work we investigate the orbits which emanate from some equilibrium points. In particular, we focus on the horseshoe-shaped orbits (in the rotating frame), which are well known in the restricted three-body problem. We study some families of symmetric horseshoe orbits and show their relation with a family of the so-called Lyapunov orbits
Abstract:
In this work we perform numerical explorations of some families of planar periodic orbits in the Hill approximation of the restricted four-body problem. This approximation is obtained by performing a symplectic scaling which sends the two massive bodies to infinity, by the means of expanding the potential as a power series depending on the mass of the smallest primary, and taking the limit as this mass tends to zero. The limiting Hamiltonian depends only on the relative mass of the second smallest primary. The resulting dynamics shares similarities with both the restricted three-body problem and the restricted four-body problem. We focus on certain families of symmetric periodic orbits of the infinitesimal particle, for some values of the mass parameter. We explore the evolution of these families as the Jacobi constant, or, equivalently, the energy, is varied continuously, and provide details on the horizontal and vertical stability of each family
Abstract:
The legal and economic analysis presented here empirically tests the theoretical framework advanced by Kugler, Verdier, and Zenou (2003) and Buscaglia (1997). This paper goes beyond the prior literature by focusing on the empirical assessment of the actual implementation of the institutional deterrence and prevention mechanisms contained in the United Nations’ Convention against Transnational Organized Crime (Palermo Convention). A sample of 107 countries that have already signed and/or ratified the Convention was selected. The paper verifies that the most effective implemented measures against organized crime are mainly founded on four pillars: (i) the introduction of more effective judicial decision-making control systems causing reductions in the frequencies of abuses of procedural and substantive discretion; (ii) higher frequencies of successful judicial convictions based on evidentiary material provided by financial intelligence systems aimed at the systematic confiscation of assets in the hands of criminal groups and under the control of “licit” businesses linked to organized crime; (iii) the attack against high level public sector corruption (that is captured and feudalized by organized crime) and (iv) the operational presence of government and/or non-governmental preventive programs (funded by the private sector and/or governments and/or international organizations) addressing technical assistance to the private sector, educational opportunities, job training programs and/or rehabilitation (health and/or behavioral) of youth linked to organized crime in high-risk areas (with high-crime, high unemployment, and high poverty)
Abstract:
This paper investigates IS flexibility as a way to meet competitive challenges in hypercompetitive industries by enabling firms to implement strategies by adapting their operations processes quickly to fast-paced changes. It presents a framework for IS flexibility as a multidimensional construct. Through a literature review and initial case study analysis, factors to assess flexibility in each dimension are constructed. Findings of an exploratory study conducted to test the framework are reported. Based on the above, it is argued that IS flexibility in hypercompetitive industries must be pursued from a holistic perspective to understand how it may be exploited to achieve a competitive advantage
Abstract:
Guided by the relationship between the breadth-first walk of a rooted tree and its sequence of generation sizes, we are able to include immigration in the Lamperti representation of continuous-state branching processes. We provide a representation of continuous-state branching processes with immigration by solving a random ordinary differential equation driven by a pair of independent Lévy processes. Stability of the solutions is studied and gives, in particular, limit theorems (of a type previously studied by Grimvall, Kawazu and Watanabe, and by Li) and a simulation scheme for continuous-state branching processes with immigration. We further apply our stability analysis to extend Pitman’s limit theorem concerning Galton–Watson processes conditioned on total population size to more general offspring laws
Abstract:
Several industries deal with imperfect and perishable raw materials that need to be cut in order to assemble products. Managing the purchasing, cutting, and sequencing decisions is a challenging problem that spans both inventory control and production process management, where decisions are usually optimised separately. In this research we develop a dynamic programming model that determines these joint decisions. The inspiration comes from furniture companies where the raw material consists of sheets of plywood that may contain imperfections that need to be avoided when cutting pieces to build the assembled furniture. We evaluate decisions regarding the layout of pieces onto the plywood, dispatching policies of ordered furniture, and the quality of the raw material. The model is useful to perform cost analysis for different scenarios. Variations on the expected demand, purchasing, ordering, disposal, and holding costs and on the initial inventory are considered. Experiments conclude that focusing on quality becomes more important than the age of the plywood sheet if the companies implement a production flow that cuts and sorts all pieces before assembling the furniture. Cost analysis confirms that a just-in-time policy in which little inventory is kept results in important savings when compared with the standard operation of the companies
Abstract:
In this paper we tackle a two-dimensional cutting problem to optimize the use of raw material in a furniture company. Since the material used to produce pieces of furniture comes from a natural source, the plywood sheets may present defects that affect the total plywood that can be used in a single sheet. The heuristic presented in this research deals with these defects and presents the best way to handle them. It also considers the use of the plywood sheets in the long-term planning of the company, since purchases of raw material are usually made only at certain periods of time and must last for several weeks. Experimental results show how an intelligent cutting plan and selection of the plywood sheets considerably reduce the amount of raw material needed compared with the current operation of the company, and guarantee that the purchased sheets last throughout the planning period, regardless of the available area to cut pieces on each plywood sheet
Abstract:
In this work we present a two-dimensional cutting and packing problem that optimizes the use of raw material in a furniture factory. Since the material comes from a natural source, as is the case for plywood sheets, it may present defects. These defects reduce the amount of raw material available in each sheet, since they cannot always be included in the final pieces. The heuristic presented in this work shows the best way to deal with the defects. We also consider the use of the plywood sheets in long-term planning, given that raw material purchases are normally made periodically and the stock must be sufficient to meet several weeks of demand
Abstract:
In this work, we consider a batching machine that can process several jobs at the same time. Batches have a restricted batch size, and the processing time of a batch is equal to the largest processing time among all jobs within the batch. We solve the bi-objective problem of minimizing the maximum lateness and the number of batches. This objective is relevant as we are interested in meeting due dates and minimizing the cost of handling each batch. Our aim is to find the Pareto-optimal solutions by using an epsilon-constraint method on a new mathematical model that is enhanced with a family of valid inequalities and constraints that avoid symmetric solutions. Additionally, we present a biased random-key genetic algorithm to approximate the optimal Pareto points of larger instances in reasonable time. Experimental results show the efficiency of our methodologies
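The epsilon-constraint idea used above can be sketched generically for a finite bi-objective minimization problem: minimize one objective subject to a bound on the other, record the solution, tighten the bound, and repeat. The toy enumeration below is only a didactic sketch of the scheme over an explicit solution list, not the paper's model-based method:

```python
def epsilon_constraint(solutions, f1, f2):
    """Enumerate the Pareto front of a finite bi-objective minimization
    problem with the epsilon-constraint scheme: minimize f1 subject to
    f2(x) <= eps, then tighten eps past the solution found and repeat."""
    front = []
    eps = max(f2(s) for s in solutions)
    while True:
        feasible = [s for s in solutions if f2(s) <= eps]
        if not feasible:
            break
        best = min(feasible, key=lambda s: (f1(s), f2(s)))  # break ties on f2
        front.append(best)
        eps = f2(best) - 1      # assumes integer-valued f2, as with batch counts
    return front

# Toy instance: each point is a solution, the objectives are its coordinates.
points = [(1, 5), (2, 3), (3, 1), (4, 4)]
front = epsilon_constraint(points, lambda s: s[0], lambda s: s[1])
```

In the paper's setting f1 and f2 would be maximum lateness and number of batches, with each constrained solve performed by a MIP solver rather than by enumeration.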
Abstract:
The cross entropy method was initially developed to estimate rare event probabilities through simulation, and has been adapted successfully to solve combinatorial optimization problems. In this paper we aim to explore the viability of using cross entropy methods for the vehicle routing problem. Published implementations for this problem have only considered a naive route-splitting scheme over a very limited set of instances. In addition to presenting a better route-splitting algorithm, we design a cluster-first/route-second approach. We provide computational results to evaluate these approaches, and discuss their advantages and drawbacks. The innovative method developed in this paper to generate clusters may be applied in other settings. We also suggest improvements to the convergence of the general cross entropy method
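For readers unfamiliar with the technique, the cross entropy method for optimization can be sketched on a toy binary problem: sample solutions from a parametric distribution, keep the best (elite) samples, and move the distribution toward them. The objective, parameters, and update rule below are generic illustrations, not the routing implementation of the paper:

```python
import random

def cross_entropy_binary(score, n, iters=50, samples=100, elite=10, alpha=0.7, seed=1):
    """Cross-entropy method over binary vectors: sample from independent
    Bernoulli(p_j) distributions, keep the highest-scoring (elite) samples,
    and move each p_j toward the elite mean with smoothing factor alpha."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(iters):
        pop = [[1 if rng.random() < pj else 0 for pj in p] for _ in range(samples)]
        pop.sort(key=score, reverse=True)      # best samples first
        elites = pop[:elite]
        for j in range(n):
            mean = sum(e[j] for e in elites) / elite
            p[j] = alpha * mean + (1 - alpha) * p[j]
    return [1 if pj > 0.5 else 0 for pj in p]

# Toy objective (one-max style): recover a hidden 10-bit target vector.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = cross_entropy_binary(lambda x: -sum(a != b for a, b in zip(x, target)), 10)
```

For vehicle routing the sampled objects would be tours or cluster assignments and the distribution a transition or assignment matrix, but the sample/elite/update loop is the same.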
Abstract:
We address the problem of scheduling a single batching machine to minimize the maximum lateness with a constraint restricting the batch size. A solution for this NP-hard problem is defined by a selection of jobs for each batch and an ordering of those batches. As an alternative, we choose to represent a solution as a sequence of jobs. This approach is justified by our development of a dynamic program to find a schedule that minimizes the maximum lateness while preserving the underlying job order. Given this solution representation, we are able to define and evaluate various job-insert and job-swap neighborhood searches. Furthermore we introduce a new neighborhood, named split–merge, that allows multiple job inserts in a single move. The split–merge neighborhood is of exponential size, but can be searched in polynomial time by dynamic programming. Computational results with an iterated descent algorithm that employs the split–merge neighborhood show that it compares favorably with corresponding iterated descent algorithms based on the job-insert and job-swap neighborhoods
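The order-preserving idea described above can be illustrated with a small dynamic program over prefixes of the job sequence that keeps Pareto-optimal (finish time, maximum lateness) states; this is a didactic sketch under the stated batching rules, not the authors' algorithm:

```python
def min_max_lateness(p, d, B):
    """Minimum maximum lateness when batching a FIXED job sequence.
    Consecutive jobs may be grouped into batches of size at most B; a
    batch's processing time is the maximum processing time of its jobs,
    and every job in a batch completes when the batch does.  States are
    Pareto-optimal (finish time, max lateness) pairs over prefixes."""
    n = len(p)
    states = {0: {(0, float("-inf"))}}           # after batching 0 jobs
    for i in range(1, n + 1):
        cur = set()
        for j in range(max(0, i - B), i):        # last batch = jobs j..i-1
            bt = max(p[j:i])                      # batch processing time
            for (t, L) in states[j]:
                tt = t + bt                       # new finish time
                LL = max(L, max(tt - d[k] for k in range(j, i)))
                cur.add((tt, LL))
        # keep only non-dominated (finish time, max lateness) pairs
        states[i] = {(t, L) for (t, L) in cur
                     if not any(t2 <= t and L2 <= L and (t2, L2) != (t, L)
                                for (t2, L2) in cur)}
    return min(L for (_, L) in states[n])

# Three jobs in fixed order with processing times p and due dates d, batch
# capacity 2: batching the last two jobs together keeps every job on time.
best_lateness = min_max_lateness([2, 3, 1], [2, 5, 6], B=2)
```

Evaluating a job sequence this way is what makes neighborhoods defined on sequences, such as job-insert, job-swap, and split–merge, well defined.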
Abstract:
Expert systems are designed to solve complex problems by reasoning with and about specialized knowledge like an expert. The design of concrete is a complex task that requires expert skills and knowledge. Even when given the proportions of the ingredients used, predicting the exact behavior of concrete is not a trivial task, even for experts, because other factors that are hard to control or foresee also exert some influence over the final properties of the material. This paper presents some of our attempts to build a new expert system that can design different types of concrete (hydraulic, bacterial, cellular, lightweight, high-strength, architectural, etc.) for different environments. The system also optimizes the use of additives and cement, which are the most expensive raw materials used in the manufacture of concrete
Abstract:
The arrival of modern brain imaging technologies has provided new opportunities for examining the biological essence of human intelligence as well as the relationship between brain size and cognition. Thanks to these advances, we can now state that the relationship between brain size and intelligence has never been well understood. This view is supported by findings showing that cognition is correlated more with brain tissues than sheer brain size. The complexity of cellular and molecular organization of neural connections actually determines the computational capacity of the brain. In this review article, we determine that while genotypes are responsible for defining the theoretical limits of intelligence, what is primarily responsible for determining whether those limits are reached or exceeded is experience (environmental influence). Therefore, we contend that the gene-environment interplay defines the intelligent quotient of an individual
Abstract:
The ability to create, use and transfer knowledge may allow the creation or improvement of new products or services. But knowledge is often tacit: It lives in the minds of individuals, and therefore, it is difficult to transfer it to another person by means of the written word or verbal expression. This paper addresses this important problem by introducing a methodology, consisting of a four-step process that facilitates tacit to explicit knowledge conversion. The methodology utilizes conceptual modeling, thus enabling understanding and reasoning through visual knowledge representation. This implies the possibility of understanding concepts and ideas, visualized through conceptual models, without using linguistic or algebraic means. The proposed methodology is conducted in a metamodel-based tool environment whose aim is efficient application and ease of use
Abstract:
Purpose- The purpose of this paper is to devise a crowdsourcing methodology for acquiring and exploiting knowledge to profile unscheduled transport networks for the design of efficient routes for public transport trips. Design/methodology/approach- This paper analyzes daily travel itineraries within Mexico City provided by 610 public transport users. In addition, a statistical analysis of quality-of-service parameters of the public transport systems of Mexico City was also conducted. From the statistical analysis, a knowledge base was consolidated to characterize the unscheduled public transport network of Mexico City. Then, by using a heuristic search algorithm for finding routes, public transport users are provided with efficient routes for their trips. Findings- The findings of the paper are as follows. A crowdsourcing methodology can be used to characterize complex and unscheduled transport networks. In addition, the knowledge of the crowds can be used to devise efficient routes for trips (using public transport) within a city. Moreover, the design of routes for trips can be automated by SmartPaths, a mobile application for public transport navigation. Research limitations/implications- The data collected from the public transport users of Mexico City may vary through the year. Originality/value- The significance and novelty of the present work lie in its being the earliest effort to make use of a crowdsourcing approach for profiling unscheduled public transport networks to design efficient routes for public transport trips
Abstract:
KAMET II Concept Modeling Language (CML) is a consistent visual language with high usability and flexibility, devised to acquire and organize knowledge from different sources in a very intuitive way. Similar recent work suggesting visual tools to support knowledge acquisition (KA) processes, like Cmaptools and ICONKAT, consists of closed environments that cannot be easily translated to more popular frameworks like Protégé. On the other hand, languages for the Semantic Web used for KA, like the Extensible Markup Language (XML), are designed for machine interpretation without considering the users' interaction. KAMET II CML, on the contrary, cares about the input facilities for constructing knowledge models without disregarding their complexity, and it is compatible with commercial methodologies. We describe and demonstrate the advantages of KAMET II CML by proving its consistency and formality using Concept Algebra, a mathematical structure for the formal treatment of concepts and their algebraic relations, operations, and associative rules. We perform a direct transformation of KAMET II CML diagnosis models into Concept Network (CN) diagrams making use of Concept Algebra. As a result, KAMET II CML models are compatible with regular ontology representations and can be shared and used by other systems without adding complexity
Abstract:
The demand for Knowledge Management in organizations that are out-performing their peers through above-average growth in intellectual capital and wealth creation has led to a growing community of IT people who have adopted the idea of building Corporate or Organizational Memory Information Systems (OMIS). Such a system acknowledges the dynamics of organizational environments, in which the traditional design of information systems does not cope adequately with these organizational aspects. The successful development of such a system requires a careful analysis of the essential factors for providing a cost-effective solution that will be accepted by the employees/users and can evolve in the future. This paper proposes a nine-layered framework for improving an OMIS implementation plan in order to support the effort to capture, share, and preserve the Organizational Memory (OM). The purpose of this framework is to gain a better understanding of how certain factors are critical to the successful application of OMIS, and of how to design suitable OMIS that turn the scattered, diverse knowledge of people into well-documented knowledge assets ready for deposit and reuse to benefit the whole organization
Abstract:
Knowledge acquisition (KA) is considered today a cognitive process that involves both dynamic modeling and knowledge generation activities. We understand that KA should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper presents some of our attempts to develop a knowledge acquisition methodology that builds a bridge between two important fields: knowledge acquisition and knowledge management. KAMET II (Cairó and Guardati, 2012), the evolution of KAMET, represents a modern approach to creating diagnosis-specialized knowledge models and knowledge-based systems (KBS) that are more efficient
Abstract:
The knowledge acquisition (KA) process is not "mining from the expert’s head" and writing rules for building knowledge-based systems (KBS), as it was 20 years ago when KA was often confused with knowledge elicitation activity, and modern engineering tools did not exist. The KA process has definitely changed. Today knowledge acquisition is considered a cognitive process that involves both dynamic modeling and knowledge generation activities. KA should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper presents some of our attempts to build a new knowledge acquisition methodology that brings together and includes all of these ideas. KAMET II, the evolution of KAMET (Cairó, 1998), represents a modern approach to creating diagnosis-specialized knowledge models that can be run by Protégé 2000, the open source ontology editor and knowledge-based framework
Abstract:
Work on electronic negotiation has motivated the development of systems with strategies specifically designed to establish protocols for buying and selling goods on the Web. On the one hand, there are systems where agents interact with users through dialogues and animations, helping them to find products while learning from their preferences to plan future transactions. On the other hand, there are systems that employ knowledge bases to determine the context of the interactions and to define the boundaries inherently established by e-Commerce. This paper introduces the idea of developing an agent with both capabilities: negotiation and interaction in an e-Commerce application via virtual reality (with a view to applying it in the Latin-American market, where both the technological gap and an inappropriate approach to motivating electronic transactions are important factors). We address these issues by presenting a negotiation strategy that allows the interaction between an intelligent agent and a human consumer with a Latin-American idiosyncrasy and by including a graphical agent to assist the user on a virtual basis. We think this may reduce the impact of the gap created by this new technology
Abstract:
The human brain is undoubtedly the most impressive, complex, and intricate organ that has evolved over time. It is also probably the least understood, and for that reason, the one that is currently attracting the most attention. In fact, the number of comparative analyses that focus on the evolution of brain size in Homo sapiens and other species has increased dramatically in recent years. In neuroscience, no other issue has generated so much interest and been the topic of so many heated debates as the difference in brain size between socially defined population groups, both its connotations and implications. For over a century, external measures of cognition have been related to intelligence. However, it is still unclear whether these measures actually correspond to cognitive abilities. In summary, this paper must be reviewed with this premise in mind
Abstract:
The knowledge acquisition (KA) process has evolved over the last years. Today KA is considered a cognitive process that involves both dynamic modeling and knowledge generation activities. It should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper shows some of our attempts to build a new knowledge acquisition methodology that collects and includes all of these ideas. KAMET II, the evolution of KAMET [1], represents a modern approach to building diagnosis-specialized knowledge models that can be run by Protégé
Abstract:
A number of research efforts have been devoted to deploying agent technology applications in the field of Agent-Mediated Electronic Commerce. On the one hand, there are applications that simplify electronic transactions, such as intelligent search engines and browsers, learning agents, recommender agent systems, and agents that share knowledge. Thanks to the development and availability of agent software, e-commerce can use more than just telecommunications and online data processing. On the other hand, there are applications that include negotiation as part of their core activities, such as the information systems field with negotiation support systems, the multi-agent systems field with searching, trading, and negotiation agents, and the market design field with electronic auctions. Although negotiation is an important business activity, it has not been studied extensively in either a traditional business or an e-commerce context. This paper introduces the idea of developing an agent with negotiation capabilities applied to the Latin American market, where both the technological gap and an inappropriate approach to motivating electronic transactions are important factors. We address these issues by presenting a negotiation strategy that allows the interaction between an intelligent agent and consumers with a Latin American idiosyncrasy
Abstract:
Nowadays, Information Technologies (IT) are part of practically every aspect of life and business. The widespread use of IT has led to both positive and negative consequences in the areas where it is applied. One of these areas is the implementation of Knowledge Management Systems (KMS). The application of certain technologies within a KMS pursues the efficient and effective realization of specific functions within the system. However, it is quite common to face deficient or erroneous knowledge creation due to an incorrect utilization of these technologies. An incorrect application can range from a lack of definition of the role of a technology to a perception of IT as itself a solution to knowledge creation problems. The incorrect application of technological solutions to KM issues can lead to deficient creation of knowledge, an inability to update knowledge because of the rapid evolution of technologies, and even to erroneous decision-making. To address this issue, this paper states the importance of understanding the influence of a KMS in an organization and of developing awareness of the role of IT both within such an organization and in society. Next, a description of different functions in KMS and the adequate technologies to correctly implement them is given, taking into account the knowledge structure and capabilities of the enterprise. Finally, the significance of a KMS and a social ecosystem to support the implementation of IT is acknowledged, both to make the best of an investment in technology and to achieve a consistent knowledge-creating structure
Abstract:
The role of football in society has changed substantially over the past twenty years, becoming to an even larger extent the daily topic of millions of people around the world. Nowadays, attention is drawn both to the needs of fans and, obviously, to the business. The game has evolved greatly in terms of physical performance, while few changes are observed in its tactical aspect. The main goal of professional soccer clubs is always to possess the best players while, with some rare exceptions, forgetting about the team. This paper demonstrates that the practice of football, as well as of other sports, should not ignore the basic requirements necessary to build a team, and it also presents ways to convey the value of knowledge management and its benefits for all stakeholders. Some of the key points discussed in this paper are the theory of team building, the different roles of group members, the transfer of know-how from generation to generation, and how all of these combined can make a difference
Abstract:
Traditional organizations' lack of Knowledge Leadership has become a major issue. Our proposal is a new type of team manager: the Knowledge Leader, who performs several functions of the Knowledge Champions and extends the notion of CKO into the teamwork context. This paper intends to clarify the relevance and necessity of a Knowledge Leader in every workgroup in organizations. Knowledge Leaders keep Knowledge Management efforts aligned to business strategies, making organizations competent. Without its leaders, a company will turn into a hollow shell
Abstract:
A few years ago, many companies were convinced that they could reach positions of competitive advantage through investments in information technologies (ITs). This was certainly true for a while; however, ITs by themselves are no longer able to provide organizations with such a position. One of the reasons is the tendency for information technologies to become commodities; hence any competitor with enough purchasing power can replicate the technological deployment of the leader, thereby destroying the position of advantage. This work explores a new source of competitive advantage, knowledge management, and its relationship with ITs. The paper also shows that companies should regard ITs and KM not as competitors but as coordinated efforts when trying to reach a position of competitive advantage
Abstract:
The overwhelming world situation has profoundly disturbed researchers, scientists and citizens, making them wonder about the vital element that guarantees quality of life. While authorities and politicians struggle to provide for their citizens, idealists dream, cliché as it may be, of a more prosperous world for everyone. Information technology and development alone have proved to be necessary but insufficient. Quite surprisingly, it has transpired that knowledge is the most precious asset any entity has. The importance of knowledge has been confirmed given that it is actually capable of providing an enduring solution to cities' main problems. Following this trend, in the 1990s the link between knowledge and cities was born, along with the concept of the Knowledge City (KC). Around this idea, since 2004 Monterrey's government has implemented the program International Knowledge City, a project based upon actions specifically designed to add value to the city through knowledge creation and innovation. Its objective was, and still is, to bring Monterrey into the age of knowledge whilst transforming its landscapes and people. Hence the objective of this paper: to determine whether or not Monterrey is becoming a KC. To that end, this paper follows Ergazaki's methodology, explained in A unified methodological approach for the development of knowledge cities, applied to Monterrey's knowledge infrastructure, and also incorporates functional concepts derived from Knowledge Management. Furthermore, to complete the analysis, a comparison with the internationally admired KC of Singapore is included. The results indicate that Monterrey is a developing KC, improving by the minute
Abstract:
This paper presents an approach to automatic translation in two steps: recognition (syntactic) followed by interpretation (semantic). In the recognition phase, we use volatile grammars, which represent an innovation over logic-based grammars. For the interpretation phase, we use componential analysis and the laws of integrity. Componential analysis defines the sense of the lexical parts. The rules of integrity, on the other hand, are in charge of refining, improving and optimizing the translation. We applied this framework to the general analysis of Romance languages and to the automatic translation of texts from Spanish into other neo-Latin languages and vice versa
Abstract:
This paper presents a series of conventions and tools for the general analysis of Romance languages and for the automatic translation of texts from Spanish into other neo-Latin languages and vice versa. The work involves two well-defined phases: recognition (syntactic) and interpretation (semantic). In the recognition phase, we use volatile grammars, which represent an innovation over logic-based grammars. For the interpretation phase, we use componential analysis and the laws of integrity. Componential analysis defines the sense of the lexical parts that constitute the sentence through a collection of semantic features. The rules of integrity, on the other hand, have the goal of refining, improving and optimizing the translation
Abstract:
Different methodologies have been developed to solve various tasks such as classification, design, planning, scheduling and diagnosis. Diagnosis is a task whose desired output is the identification of a malfunction in a system. KAMET (Knowledge-Acquisition Methodology) is a knowledge engineering methodology aimed exclusively at diagnosis tasks. In this article KAMET II, the second version of KAMET, is presented with the objective of describing its most important characteristics as well as its modeling notation, which will subsequently be necessary for the knowledge bases, Problem-Solving Methods (PSMs) and the knowledge model specification. KAMET is a model-based methodology designed to manage knowledge acquisition from multiple knowledge sources. The methodology provides a mechanism by means of which knowledge acquisition is achieved in an incremental fashion and in a cooperative environment. One important feature is the specification used to describe knowledge-based systems independently of their implementation. A four-component architecture is presented to achieve this goal and to allow component separation and, consequently, component reuse
Abstract:
Problem-solving methods are ready-made software components that can be assembled with domain knowledge bases to create application systems. In this paper, we describe this relationship and how it can be used in a principled manner to construct knowledge systems. We have developed ontologies for two purposes: first, to describe knowledge bases and problem-solving methods as independent components that can be reused in different application systems; and second, to mediate knowledge between the two kinds of components when they are assembled in a specific system. We present our methodology and a set of associated tools that have been created to support developers in building knowledge systems and that have been used to conduct problem-solving method reuse
Abstract:
Information systems with an intelligent or knowledge component are now prevalent and include knowledge-based systems, intelligent agents, and knowledge management systems. These systems are capable of explaining their reasoning or justifying their behavior. Empirical studies, mainly with knowledge-based systems, are reviewed and linked to a theoretical and practical base. The present paper has two main objectives: a) to present a negotiation strategy that allows interaction between an intelligent agent and a human consumer. This kind of negotiation is adapted to the Latin American market and idiosyncrasy, where an appropriate tool to perform automated negotiations over the Web does not exist. b) To include animations in order to show an agent that represents an actual person. This incorporation aims to reduce the impact and gap created by the new technology. The agent presented can find an optimal path to achieve its goal using its mental states and libraries designed for the business roles
Abstract:
The Knowledge-Acquisition (KA) community's needs demand more effective ways to elicit knowledge in different environments. Methodologies like CommonKADS [8], MIKE [1] and VITAL [6] are able to produce knowledge models using their respective Conceptual Modeling Languages (CML). However, sharing and reuse are nowadays a must-have in knowledge engineering (KE) methodologies and domain-specific KA tools, in order to permit Knowledge-Based System (KBS) developers to work faster with better results and to give them the chance to produce and utilize reusable Open Knowledge Base Connectivity (OKBC)-constrained models. This paper presents the KAMET II Methodology, the diagnosis-specialized version of KAMET [2,3], as an alternative for creating knowledge-intensive systems while attacking KE-specific risks. We describe here one of the most important characteristics of KAMET II: the use of Protégé 2000 for implementing its CML models through ontologies
Abstract:
Knowledge engineering (KE) is no longer “mining from the expert’s head” as it was in the first generation of knowledge-based systems (KBS). Modern KE is considered more of a modeling activity. Models are useful because of the incomplete access that the knowledge engineer has to the expert’s knowledge, and because not all of that knowledge is necessary to reach most project goals. KAMET II, the evolution of KAMET (Knowledge-Acquisition Methodology), is a modern approach for building diagnosis-specialized knowledge models. This new diagnosis methodology pursues the objective of being a complete, robust methodology that leads knowledge engineers and project managers (PM) to build powerful KBS by providing the appropriate modeling tools and by reducing KE-specific risks. KAMET II not only encompasses the conceptual modeling tools but also presents the adaptation to the implementation tool Protégé 2000 [6] for visual modeling and knowledge-base editing. However, only the methodological part is presented in this paper
Abstract:
In recent years technology has experienced exponential growth, creating bridges between research areas that were independent from each other a few years ago. This paper focuses on an application combining three research areas: virtual environments, intelligent agents and museum web pages. The application consists of a virtual visit to a museum guided by an intelligent agent. The reactive agent implemented responds in real time to the user's requests, such as precise information about the artwork, the authors' biographies and other interesting facts such as the museum's history and regional knowledge of the country where the museum is located. Our agent is capable of providing different layers of data, distinguishing between adults, children and young people as well as between local and foreign users. The agent has some autonomy during the visit and permits users to make their own choices. The virtual environment allows a semi-interactive visit through the museum's architecture. The user follows a pre-defined museum tour, but is able to perform interactive actions such as zooms, free views of the museum's structure and free views of the artwork, and may advance, go back, or terminate the visit at any time
Abstract:
Software project mistakes represent a loss of millions of dollars to thousands of companies all around the world. The software projects that somehow ran off course share a common problem: risks became unmanageable. There are a number of conjectures we can draw from the high failure rate: bad management procedures, an inept manager in charge, managers not assessing risks, poor or inadequate methodologies being used, etc. Some of them might apply to some cases, or all, or none; it is almost impossible to think in absolute terms when a software project is an ad hoc solution to a given problem. Nevertheless, there is an ongoing effort in the knowledge engineering (KE) community to isolate risk factors and provide remedies for runaway projects; unfortunately, we are not there yet. This work aims to express some general conclusions for risk assessment of software projects, particularly, but not limited to, those involving knowledge acquisition (KA)
Abstract:
At the beginning of the 1980s, the Artificial Intelligence (AI) community showed little interest in research on methodologies for the construction of knowledge-based systems (KBS) and for knowledge acquisition (KA). The main idea was the rapid construction of prototypes with LISP machines, expert system shells, and so on. Over time, the community saw the need for a structured development of KBS projects, and KA was recognized as the critical stage and the bottleneck in the construction of KBS. Many publications on KA have appeared since then. However, very few have focused on formal plans to manage knowledge acquisition from multiple knowledge sources. This paper addresses this important problem. KAMET is a formal, model-based plan designed to manage knowledge acquisition from multiple knowledge sources. The objective of KAMET is to improve the knowledge acquisition phase and the knowledge modeling process, making them more efficient.
Abstract:
Over time, Expert Systems (ES) have been applied to different areas of knowledge. In telecommunications, the application of ESs has not been as prominent as in other areas; however, some important applications have appeared lately, especially those focused on failure provisioning, monitoring and diagnosis [6]. In this article, we introduce DIFEVS, an ES for the remote diagnosis and correction of failures in satellite communications ground stations. DIFEVS also considerably reduces the time elapsed between the initial moment of a failure and the moment it is solved
Abstract:
Knowledge Acquisition (KA) in the 90s was recognized as a critical stage in the construction of Knowledge-Based Systems (KBS) and as the bottleneck for their development. Nowadays a lot of material is published on KA, most of which is focused on difficulties encountered in the process of knowledge elicitation, on the tools for KA, and on the verification and validation of Knowledge Bases (KB). A very limited number of these publications, however, have emphasized the need for formal plans to deal with knowledge elicitation from single and multiple experts. In this paper we propose a model-based methodology to deal with knowledge elicitation from multiple experts. The method provides a solid mechanism to accomplish KA in an incremental manner, by stages, and in a cooperative environment
Abstract:
During the past decades, many methods have been developed for the creation of Knowledge-Based Systems (KBS). Among these methods, probabilistic networks have proven to be an important tool for working with probability-measured uncertainty. However, the quality of probabilistic networks depends on correct knowledge acquisition and modeling. KAMET is a model-based methodology designed to manage knowledge acquisition from multiple knowledge sources [1] that leads to a graphical model representing causal relations. Up to now, all inference methods developed for these models have been rule-based and therefore eliminate most of the probabilistic information. We present a way to combine the benefits of Bayesian networks and KAMET and to reduce their problems. To achieve this, we show a transformation that generates directed acyclic graphs, the basic structure of Bayesian networks [2], and conditional probability tables from KAMET models. Thus, inference methods for probabilistic networks may be used on KAMET models
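The transformation this abstract describes can be loosely illustrated in code. The sketch below is not the paper's actual algorithm: the list-of-pairs representation of a KAMET causal model and the uniform placeholder probabilities are assumptions made purely for illustration. It shows the two artifacts a Bayesian network needs, a directed acyclic graph and one conditional probability table (CPT) per node:

```python
from itertools import product

def build_dag(relations):
    """relations: iterable of (cause, effect) pairs standing in for a KAMET
    causal model. Returns {node: sorted list of parent nodes} (a DAG,
    assuming the input relations contain no cycles)."""
    nodes = {n for pair in relations for n in pair}
    return {n: sorted({c for c, e in relations if e == n}) for n in nodes}

def uniform_cpt(parents):
    """Placeholder CPT: one row per True/False assignment of the parents,
    each giving P(node=True | assignment) = 0.5. A real transformation
    would derive these probabilities from the knowledge model."""
    return {assignment: 0.5 for assignment in product([False, True], repeat=len(parents))}

# Hypothetical diagnosis-style model: two symptoms cause one diagnosis node.
relations = [("fever", "diagnosis"), ("cough", "diagnosis")]
dag = build_dag(relations)
cpts = {node: uniform_cpt(dag[node]) for node in dag}
```

With this structure in hand, any standard Bayesian-network inference routine that accepts a DAG plus CPTs could be run over what was originally a rule-oriented causal model.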
Abstract:
Aluminum matrix composites (AMCs) reinforced with aluminum diboride (AlB2) particles are obtained through a casting process. A mixture design experiment combined with a split-split plot experiment helped assess the significance of the effects of cold work on precipitation hardening prior to aging. Both cold work and aging yielded higher microhardness of the composite matrix, which is further increased by higher levels of boron and copper. Microstructure analysis showed a good distribution of reinforcements and revealed a grain subdivision pattern due to cold work. Tensile tests helped corroborate the microhardness measurements. Fracture surface analysis showed a predominantly mixed brittle–ductile mode
Abstract:
This paper introduces a database of electoral precinct-level election returns for Mexican municipal elections between 1994 and 2019. This database includes: (i) electoral precinct-level votes for each electoral coalition, the coalitions of the incumbent mayor and incumbent state governor, and the four most popular political parties; (ii) electoral precinct-level valid and total votes, the number of registered voters, and turnout; (iii) the partisan composition and municipal-level votes of the incumbent and runner-up electoral coalitions from the previous election; and (iv) the partisan composition of the state-level incumbent governor. This paper outlines the organization of this data, its sources, and key variables, and describes the processes used to standardize the data. This database has the potential to support the cross-sectional and longitudinal study of local Mexican elections over two decades using fine-grained precinct-level electoral returns that enable panel and regression discontinuity analyses
Abstract:
In this paper, we report the results of comparing how people make sense of health information when receiving it via two different media: an application for a mobile device and a printed pamphlet. The study was motivated by the 2009 outbreak of the A(H1N1) influenza and the need to educate the general public using all possible media to disseminate symptoms and treatments in a timely manner. In this study, we investigate the influence of the medium on the sensemaking process when processing health information that has to be comprehended quickly and thoroughly, as it can be life-saving. We propose recommendations based on the results obtained
Abstract:
This article presents a "glocal" method of comparative constitutional interpretation. In the debate on the judicial use of foreign ideas, transnationalists claim to propose a simultaneously global and local approach. However, they perpetuate the methodological nationalism of globalists and localists by assuming nations as their primary units of analysis. In contrast, this article advances a truly glocal theory of judicial interpretation. The glocal is the product of a constant interplay between the global and the local, from the inception of an idea to its practical judicial application. This approach follows a three-step process. First, it provides a multiscale toolkit to demonstrate that ideas may have never been purely national in the first place but are the result of plural hybridizations. Second, it uncovers the units that generate and disseminate constitutional knowledge: trans-territorial networks united by thematically shared beliefs rather than by nationality or a global mission. Third, it equips judges with the ability to glocalize or customize the idea, not as an exercise of national differentiation but as a strategy to make it epistemically familiar and more politically appealing to the network. In this way, the article critically engages with the debate on constitutional transplants, challenging its nationalist bias
Abstract:
Jurisprudence has assumed, until recently, the idealized individual rationality of apex judges. Nevertheless, these judges do not solve cases alone, do not necessarily engage in oral deliberation, and are not ideally wise. Instead, institutional procedures, caseloads, and epistemic constraints influence how courts solve cases. However, it is unclear what epistemic standards could replace this idealized model. This article proposes a framework that addresses this issue based on a virtue reliabilist approach and offers a solution to the problem of properly understanding the epistemic agency of apex courts. We propose a hybrid approach combining limited, precedent-based artificial intelligence automation for routine issues with a reflective approach to controversial ones. In this framework, automated tools and judges are proxies that filter routine issues from exceptional ones, paving the way for voting and realistic deliberation that is epistemically adequate
Resumen:
En marzo de 2021 se reformó la Constitución mexicana para transitar a un sistema de precedentes. Esta enmienda establece que las "razones" de las sentencias de la Suprema Corte serán obligatorias para los tribunales inferiores. Sin embargo, la reforma se enmarca en una arraigada práctica de tesis jurisprudenciales, i. e., enunciados abstractos identificados por la misma Corte al resolver un caso. Además, no hay consenso sobre qué son estas razones y por qué deberían ser vinculantes. El objetivo de este artículo es identificar las posibles concepciones de razones para revelar los distintos roles de la Corte en la creación del derecho judicial. Se utilizan nociones de la ratio decidendi del common law como herramientas de introspección para identificar cuatro modelos de creación del derecho en la práctica mexicana, a saber: legislación judicial, reglas implícitas, justificaciones político-morales, y categorías sociales. Aunque la primera concepción parece ser la dominante, las alternativas amplían el abanico para entender cómo es que la Corte crea derecho dependiendo del contexto interpretativo en que opere
Abstract:
In March 2021, the Mexican Constitution was amended to transition to a system of precedents. This amendment mandates that the "reasons" of Supreme Court rulings be binding on the lower courts. However, the reform is rooted in a long-standing practice of case law doctrine, i.e., abstract statements the Court itself makes when deciding a case. Moreover, there is no consensus as to what these reasons are and why they should be binding. The objective of this article is to identify the possible notions of reasons in order to explore the Court's different roles in shaping judicial law. Concepts of the common law ratio decidendi are used as tools of insight to identify four models of lawmaking in Mexican practice, namely: judicial legislation, implicit rules, moral-political justifications, and social categories. Although the first model seems to prevail, the others offer a broader understanding of how the Court creates law depending on the interpretative context in which it operates
Resumen:
Este artículo busca mostrar que la concepción dominante sobre los órganos constitucionales autónomos (OCA) está equivocada. Estos organismos no son completa ni igualmente independientes del resto de poderes del Estado. Al contrario, se caracterizan por tener una variedad de diseños que les dan diferentes niveles de autonomía. Entender esta variedad de diseños es el primer paso para identificar las causas, reales o percibidas, sobre los defectos del sistema actual de equilibrio institucional, así como las consecuencias que un régimen bien diseñado podría generar. Hasta este momento, la discusión generalmente se ha limitado a si deben existir o no este tipo de organismos. A partir de este trabajo, los debates podrían abordar nuevos cuestionamientos sobre cómo deben diseñarse y cuánta autonomía habría que otorgarles
Abstract:
This article argues that the mainstream conception of autonomous constitutional agencies is mistaken. These agencies are neither completely nor equally independent from the other branches of government. On the contrary, there is a variety of institutional designs that grant them different levels of autonomy. Acknowledging this variety of designs is the first step toward understanding the causes, real or perceived, of the flaws in the current system of institutional balance, as well as the consequences that a well-designed regime could generate. Until now, most of the debate has been limited to the convenience or undesirability of having this kind of agency. This article may be the starting point for discussions on how decision makers should design these agencies and how much autonomy they should grant them
Abstract:
Coherentists fail to distinguish between the individual revision of a conviction and the intersubjective revision of a rule. This paper fills this gap. A conviction is a norm that, according to an individual, ought to be ascribed to a provision. By contrast, a rule is a judicially ascribed norm that controls a case and is protected by the formal principles of competence, certainty, and equality. A revision of a rule is the invalidation or modification of such a judicially ascribed norm, provided that the judge meets the burden of argumentation of formal principles. Thus, judges can revise their convictions without changing the law
Abstract:
The 'new NAFTA' agreement between Canada, Mexico, and the United States maintained the system for binational panel judicial review of antidumping and countervailing duty determinations of domestic government agencies. In US-Mexico disputes, this hybrid system brings together Spanish and English-speaking lawyers from the civil and the common law to solve legal disputes applying domestic law. These panels raise issues regarding potential bicultural, bilingual, and bijural (mis)understandings in legal reasoning. Do differences in language, legal traditions, and legal cultures limit the effectiveness of inter-systemic dispute resolution? We analyze all of the decisions of NAFTA panels in US-Mexico disputes regarding Mexican antidumping and countervailing duty determinations and the profiles of the corresponding panelists. This case study tests whether one can actually comprehend the 'other'. To what extent can a common law, English-speaking lawyer understand and apply Mexican law, expressed in Spanish and rooted in a distinct legal culture?
Resumen:
Desde el Derecho Comparado, se hace un análisis y una defensa del control de la constitucionalidad de la jurisprudencia. Se sostiene que este tipo de control es razonable, excepcionalmente, si se entiende la jurisprudencia como reglas prima facie protegidas por los principios formales de competencia, igualdad, certeza y jerarquía, no como reglas estrictas sustentadas solo en el principio de jerarquía, como la Suprema Corte mexicana las ha entendido
Abstract:
From the perspective of Comparative Law, the paper argues in favor of constitutional review of precedents. It claims that this type of review is reasonable in exceptional circumstances if precedents are understood as prima facie rules safeguarded by the formal principles of competence, equality, certainty and hierarchy, not as strict rules grounded only in the principle of hierarchy, as the Mexican Supreme Court has understood them
Abstract:
Amid a democratic crisis in Mexico, independent candidacies emerged as a way to broaden political participation. However, to secure registration for the 2018 elections, the Instituto Nacional Electoral (INE) deployed an app without considering differences of class, ethnicity/race, language, time, geography, and digital skills. These exclusions became visible in the legal challenges surrounding the candidacy of Marichuy, an indigenous Nahua woman who sought registration as an independent candidate for the presidency. In this chapter we analyze this vertical, mono-cultural use of information and communication technologies (ICT), justified by a discourse of "modernity", as an expression of techno-colonialism. We thus contribute to understanding the formation of a digital sub-citizenship manifested in legal, participatory, and political subordination and stigmatization, in which technocratic values are prioritized over the broadening of democratic channels
Resumen:
En este artículo se discute la derrotabilidad del precedente constitucional desde una perspectiva analítica y normativa. Analíticamente, se sostiene que la derrotabilidad es una propiedad contingente de los precedentes que se manifiesta cuando nuevas normas adscritas reducen o eliminan el campo de aplicación del precedente original. Normativamente, se propone que la derrotabilidad es una colisión entre los principio formales de igualdad y seguridad jurídica y principios de justicia sustantiva. Una norma se derrota cuando circunstancias o fuentes constitucionalmente relevantes ausentes en el precedente, pero presentes en el caso posterior, justifican emitir una nueva norma que funciona como excepción o invalidación de la norma anterior. De manera más práctica, se proponen cuatro técnicas argumentativas que hacen manifiesta la derrotabilidad de los precedentes. Estas técnicas son la distinción de casos, la circunscripción, la inaplicación y la desaplicación del precedente. Así, el artículo busca contribuir al debate teórico y práctico que ha surgido a partir de que el Pleno de la Suprema Corte resolviera la C.T. 299/2013
Abstract:
This article discusses the defeasibility of constitutional precedent from an analytical and a normative perspective. Analytically, it argues that defeasibility is a contingent property of precedents that manifests itself when newly ascribed norms reduce or eliminate the scope of the original precedent. Normatively, it proposes that defeasibility is a collision between the formal principles of equality and legal certainty and principles of substantive justice. A rule is defeated when circumstances or constitutionally relevant sources absent in the precedent, but present in the later case, justify issuing a new rule that functions as an exception to, or an override of, the previous rule. More practically, four argumentative techniques are proposed that make the defeasibility of precedents manifest. These techniques are the distinction of cases, circumscription, inapplicability, and deapplication of the precedent. Thus, the article seeks to contribute to the theoretical and practical debate that has emerged since the Plenary of the Supreme Court resolved C.T. 299/2013
Abstract:
This article analyses the migration of the common law doctrine of precedent to civil law constitutionalism. Using the case study of Mexico and Colombia, it suggests how this doctrine should be tailored to the civil law context. Historically, the civil law tradition adhered to the doctrine of jurisprudence constante that grants relative persuasiveness to precedents, once they are reiterated. However, the trend is to consider single constitutional precedents as binding. Universalist judges are borrowing common law concepts to interpret precedents joining the global trend while particularists consider such migration a foreign imposition that distorts the civil law theory of sources. This article takes a dialogical approach and occupies a middle ground between universalist and particularist approaches. The doctrine of precedent should be adopted, but it must also be reconfigured considering three distinctive features of the civil law: (a) canonical rationes decidendi; (b) precedent overproduction; and (c) a fragmented judiciary
Resumen:
Este artículo tiene como objetivo informar los resultados de la puesta a prueba de una propuesta didáctica para la enseñanza de la optimización dinámica, en particular del cálculo de variaciones. El diseño de la propuesta se hizo con base en la teoría APOE y se puso a prueba en una institución de enseñanza superior. Los resultados obtenidos del análisis de las respuestas de los estudiantes a un cuestionario y una entrevista ponen de manifiesto que los estudiantes muestran concepciones proceso y, en ocasiones, objeto de los conceptos abstractos de esta disciplina como resultado de su aplicación, aunque se detectaron algunas dificultades que resultaron difíciles de superar para dichos alumnos
Abstract:
The purpose of this paper is to present the results of a research study on a didactical proposal to teach Dynamical Optimization, in particular, Calculus of Variations. The proposal design was based on APOS theory and was tested at a Mexican private university. Results obtained from the analysis of students' responses to a questionnaire and an interview show that students construct process conceptions and, in some cases, object conceptions of this discipline's abstract concepts. Some difficulties, however, proved hard for these students to overcome
Resumen:
Se presenta un estado del conocimiento sobre los avances que se han producido en el campo de la investigación en educación matemática, con respecto a la enseñanza y aprendizaje del concepto de espacio vectorial. Para organizar la revisión se utilizaron dos preguntas guía: ¿qué obstáculos para el aprendizaje del concepto de espacio vectorial se han identificado? y ¿qué sugerencias didácticas se han hecho para favorecer el aprendizaje con significado del concepto de espacio vectorial? Además de proporcionar respuesta a estas preguntas, el análisis de los resultados obtenidos ofrece una síntesis del conocimiento actual y una perspectiva acerca de posibles áreas para investigaciones futuras relacionadas con la enseñanza y el aprendizaje del concepto de espacio vectorial
Abstract:
A state of the art is presented on the advances that have taken place in the field of mathematics education research in connection to the teaching and learning of the concept of vector space. Two guiding questions were used to organize the review: what learning obstacles related to the concept of vector space have been identified? and what teaching proposals have been made to promote a meaningful learning of the concept of vector space? In addition to providing answers to these questions, the analysis of the obtained results offers a synthesis of current knowledge and a perspective on possible areas for future research related to teaching and learning of the concept of vector space
Abstract:
This paper proposes a research framework for studying the connections, realized and potential, between unstructured data (UD) and cybersecurity and internal controls. In the framework, cybersecurity and internal control goals determine the tasks to be conducted. The task influences the types of UD to be accessed and the types of analysis to be done, which in turn influence the outcomes that can be achieved. Patterns in UD are relevant for cybersecurity and internal control, but UD poses unique challenges for its analysis and management. This paper discusses some of these challenges including veracity, structuralizing, bias, and explainability
Abstract:
This paper proposes a cybersecurity control framework for blockchain ecosystems, drawing from risks identified in the practitioner and academic literature. The framework identifies thirteen risks for blockchain implementations, ten common to other information systems and three risks specific to blockchains: centralization of computing power, transaction malleability, and flawed or malicious smart contracts. It also proposes controls to mitigate the risks identified; some were identified in the literature and some are new. Controls that apply to all types of information systems are adapted to the different components of the blockchain ecosystem
Abstract:
Context. Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives and other types of techniques must be employed, such as the steepest descent/ascent method or more sophisticated methods such as those based on evolutionary algorithms. Aims. We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets, by taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods. The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results. Applying our algorithm to optimizing some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors
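The asexual construction described under Methods can be sketched in a few lines. The toy implementation below is not the authors' code: the population size, mutation schedule, and test function are illustrative assumptions. It keeps an unencoded real-valued population and builds each generation by mutating only the best survivors.

```python
import random
import math

def aga_maximize(f, lo, hi, pop_size=40, n_keep=5, generations=60, seed=1):
    """Minimal AGA-style sketch: real-valued individuals (no encoding),
    with each new generation built by mutating the best survivors only
    (asexual reproduction, no crossover)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    spread = (hi - lo) / 2.0
    for _ in range(generations):
        pop.sort(key=f, reverse=True)            # best individuals first
        elite = pop[:n_keep]                     # survivors reproduce asexually
        pop = elite[:]
        while len(pop) < pop_size:
            parent = rng.choice(elite)
            child = parent + rng.gauss(0.0, spread)
            pop.append(min(hi, max(lo, child)))  # keep inside the search box
        spread *= 0.9                            # tighten mutations over time
    best = max(pop, key=f)
    return best, f(best)

# Multimodal test function with several local maxima; global max near x ~ 0.31
f = lambda x: math.sin(5 * x) * math.exp(-0.1 * x * x)
x_best, f_best = aga_maximize(f, -4.0, 4.0)
```

Because the elite is copied unchanged into the next generation, the best candidate found so far is never lost, so fitness is monotone over generations.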
Abstract:
This paper deals with research that aims to explore the strategies that favor the understanding of the representation of parametric curves in the plane. We report the results of a three-year teaching experience with college students in which we explore and explain the difficulties students face and the strategies they use when working with problems that involve parameterization
Abstract:
This article proposes a more nuanced method to assess the accuracy of preelection polls in competitive multiparty elections. Relying on data from the 2006 and 2012 presidential campaigns in Mexico, we illustrate some shortcomings of commonly used statistics to assess survey bias when applied to multiparty elections. We propose the use of a Kalman filter-based method that uses all available information throughout an electoral campaign to determine the systematic error in the estimates produced for each candidate by all polling firms. We show that clearly distinguishing between sampling and systematic biases is a requirement for a robust evaluation of polling firm performance, and that house effects need not be unidirectional within a firm's estimates or across firms
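The filtering idea can be illustrated with a minimal scalar local-level model. This is a simplification of the paper's method: the state-noise variance, starting values, and toy data below are assumptions, and house effects are omitted. Latent candidate support follows a random walk, and each poll is a noisy observation of it.

```python
import numpy as np

def kalman_poll_track(polls, obs_var, state_var=0.5, x0=30.0, p0=100.0):
    """Scalar local-level Kalman filter: latent candidate support x_t
    follows a random walk; each poll is a noisy observation of x_t.
    `polls` and `obs_var` are per-poll estimates and sampling variances."""
    x, p = x0, p0
    estimates = []
    for y, r in zip(polls, obs_var):
        p = p + state_var       # predict: random-walk drift widens uncertainty
        k = p / (p + r)         # Kalman gain: weight given to the new poll
        x = x + k * (y - x)     # update the support estimate
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Toy campaign: true support 35%, polls scattered by sampling noise
rng = np.random.default_rng(0)
polls = 35.0 + rng.normal(0.0, 2.0, size=40)
est = kalman_poll_track(polls, obs_var=[4.0] * 40)
```

The filtered path pools all polls published so far, which is the sense in which the method "uses all available information throughout an electoral campaign"; a systematic (house) bias would appear as a persistent gap between a firm's polls and the filtered path.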
Resumen:
El estudio de la conducta legislativa en la Cámara de Diputados durante el periodo de 1998 a 2006 presenta un problema potencialmente serio: no todos los votos han sido publicados en la Gaceta Parlamentaria. Analizamos la naturaleza de estos datos en aras de explorar la representatividad de la muestra de votos disponibles y las posibles repercusiones de este problema en los análisis legislativos existentes. Para esto, aprovechamos la aparición de una página de Internet que registra la totalidad de los votos de la Cámara de Diputados desde el año 2006 en paralelo con el sistema existente desde 1998. Mediante la exploración de los mecanismos que generan la omisión de votos y la comparación de distintas estimaciones del comportamiento legislativo, concluimos que los votos no publicados merman la precisión de estimadores de uso común pero no introducen ningún tipo de sesgo. A la par, hacemos pública una base de datos para el estudio del Congreso mexicano
Abstract:
This paper examines the nature of the data available for studying legislative behavior in Mexico. In particular, we evaluate a potentially serious problem: only a subset of roll-call votes have been released for the critical transition period of 1998-2006. We test whether this subset is a representative sample of all votes, and thus suitable for study, or whether it is biased in a way that misleads scholarship. Our research strategy takes advantage of a partial overlap between two roll call vote reporting sources by the Chamber of Deputies: the site with partial vote disclosure, created in 1998 and still in place today; and the site with universal vote disclosure since 2006 only. An examination of the data generation and publication mechanisms, comparing different estimations of legislative behavior, reveals that omitted votes reduce the precision of estimates but do not introduce bias. Scholarship of the lower chamber can therefore proceed with data that we make public with the publication of the paper
Abstract:
We propose a generalized 3D shape descriptor for the efficient classification of 3D archaeological artifacts. Our descriptor is based on a multi-view approach to curvature features, consisting of the following steps: pose normalization of 3D models, local curvature descriptor calculation, construction of the 3D shape descriptor using the multi-view approach and curvature maps, and dimensionality reduction by random projections. We generate two descriptors from two different paradigms: 1) handcrafted, wherein the descriptor is manually designed for object feature extraction and directly passed on to the classifier; and 2) machine-learnt, in which the descriptor automatically learns the object features through a pretrained deep neural network model (VGG-16) for transfer learning before being passed on to the classifier. These descriptors are applied to two different archaeological datasets: 1) a non-public Mexican dataset, represented by a collection of 963 3D archaeological objects from the Templo Mayor Museum in Mexico City that includes anthropomorphic sculptures, figurines, masks, ceramic vessels, and musical instruments; and 2) a 3D pottery content-based retrieval benchmark dataset consisting of 411 objects. Once the multi-view descriptors are obtained, we evaluate their effectiveness by using the following object classification schemes: k-nearest neighbor, support vector machine, and structured support vector machine. Our descriptors' classification results are compared against five popular 3D descriptors in the literature, namely, rotation invariant spherical harmonic, histogram of spherical orientations, signature of histograms of orientations, symmetry descriptor, and reflective symmetry descriptor
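The final dimensionality-reduction step can be illustrated with a generic Gaussian random projection, a standard Johnson-Lindenstrauss-style construction; the descriptor sizes below are hypothetical, not those of the paper.

```python
import numpy as np

def random_project(X, k, seed=0):
    """Reduce d-dimensional descriptors to k dimensions with a Gaussian
    random projection: pairwise distances between descriptors are
    approximately preserved with high probability (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))  # scaled Gaussian matrix
    return X @ R

# Toy high-dimensional "multi-view descriptors" for a handful of objects
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2048))   # 10 objects, 2048-D descriptors
Z = random_project(X, k=256)      # compressed to 256-D
```

Since the projection is data-independent, it is cheap and avoids fitting a reduction model per dataset, while distance-based classifiers such as k-nearest neighbor behave almost the same on the compressed descriptors.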
Abstract:
Delta-hedged option returns consistently decrease in the volatility of volatility changes (volatility uncertainty), for both implied and realized volatilities. We provide a thorough investigation of the underlying mechanisms, including model-risk and gambling-preference channels. Uncertainty of both volatilities amplifies the model risk, leading to a higher option premium charged by dealers. Volatility of volatility-increases, rather than that of volatility-decreases, contributes to the effect of implied volatility uncertainty, supporting the gambling-preference channel. We further strengthen this channel by examining the effects of option end-users' net demand and lottery-like features, and by decomposing implied volatility changes into systematic and idiosyncratic components
Abstract:
In this paper, we explore the interplay of virus contact rate, virus production rates, and initial viral load during early HIV infection. First, we consider an early HIV infection model formulated as a bivariate branching process and provide conditions for its criticality, R0 > 1. Using dimensionless rates, we show that the criticality condition R0 > 1 defines a threshold on the target cell infection rate in terms of the infected cell removal rate and virus production rate. This result has motivated us to introduce two additional models of early HIV infection under the assumption that the virus contact rate is proportional to the target cell infection probability (denoted by V/(V+θ)). Using the second model, we show that the length of the eclipse phase of a newly infected host depends on the target cell infection probability, and the corresponding deterministic equations exhibit bistability. Indeed, occurrence of viral invasion in the deterministic dynamics depends on R0 and the initial viral load V0. If the viral load is small enough, e.g., V0 ≪ θ, then there will be extinction regardless of the value of R0. On the other hand, if the viral load is large enough, e.g., V0 ≫ θ, and R0 > 1, then there will be infection. Of note, V0 ≈ θ corresponds to a threshold regime above which the virus can invade. Finally, we briefly discuss between-cell competition of viral strains using a third model. Our findings may help explain the HIV population bottlenecks during within-host progression and host-to-host transmission
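A minimal caricature of the bistability claim (not the paper's full model; the functional form and all parameter values below are illustrative assumptions) uses a single ODE whose per-capita growth rate scales with the infection probability V/(V+θ). For b > c it has two attractors, extinction and invasion, separated by the threshold V* = c·θ/(b − c).

```python
# Illustrative bistable caricature:  dV/dt = b*V*(V/(V+theta)) - c*V
# Small V0 decays to 0 (extinction); V0 above the threshold grows (invasion).
def simulate(V0, b=2.0, c=1.0, theta=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of the toy viral-load ODE."""
    V = V0
    for _ in range(steps):
        V += dt * (b * V * V / (V + theta) - c * V)
    return V

# With b=2, c=1, theta=1 the threshold is V* = 1
low = simulate(0.5)   # starts below V*: infection dies out
high = simulate(2.0)  # starts above V*: viral load grows
```

This mirrors the abstract's statement that the outcome depends on the initial viral load relative to θ, not only on R0.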
Abstract:
In this paper, we consider a fractionally integrated multi-level dynamic factor model (FI-ML-DFM) to represent commonalities in the hourly evolution of realized volatilities of several international exchange rates. The FI-ML-DFM assumes common global factors active during the 24 h of the day, accompanied by intermittent factors, which are active at mutually exclusive times. We propose determining the number of global factors using a distance among the intermittent loadings. We show that although the bulk of the common dynamics of exchange rate realized volatilities can be attributed to global factors, there are non-negligible effects of intermittent factors. The effect of COVID-19 on realized volatility comovements is stronger on the first global-in-time factor, which shows a permanent increase in level. The effects on the second global factor and on the intermittent factors active when the EU, UK and US markets are operating are transitory, lasting for approximately a year after the start of the pandemic. Finally, there seems to be no effect of the pandemic on either the third global factor or the intermittent factor active when the markets in Asia are operating
Abstract:
The MoProSoft Integral Tool, or HIM for its name in Spanish, is a Web-designed system to support monitoring the MoProSoft, a software process model defined as part of a strategy to encourage the software industry in Mexico. The HIM-assistant is a system added to the HIM, whose main objectives are to provide a guide for the automated use of the MoProSoft and to improve the aid provided to HIM users. To reach these objectives, elements from software engineering along with two areas of artificial intelligence, multiagent systems and case-based reasoning, were applied to develop the HIM-assistant. The task involved the HIM-assistant analysis and design phases using the MESSAGE methodology, as well as the development of the system and the performance of tests. The major importance of the work lies in the integration of different areas to fulfill the objectives, using existing progress instead of developing a totally new solution
Abstract:
The Ministry of Social Development in Mexico is in charge of creating and assigning social programmes targeting specific needs in the population for the improvement of the quality of life. To better target the social programmes, the Ministry aims to find clusters of households with the same needs based on demographic characteristics as well as poverty conditions of the household. The available data consist of continuous, ordinal, and nominal variables, all of which come from a non-i.i.d., complex-design survey sample. We propose a Bayesian nonparametric mixture model that jointly models a set of latent variables, as in an underlying variable response approach, associated with the observed mixed-scale data and accommodates the different sampling probabilities. The performance of the model is assessed via simulated data. A full analysis of socio-economic conditions of households in the Mexican State of Mexico is presented
Abstract:
The A-RIO AQM mechanism has recently been introduced as a viable component for implementing the AF Per-Hop Behavior in DiffServ architectures. A-RIO has been thoroughly studied and compared against other AQM algorithms in terms of fairness, performance, and setting complexity. In this paper, we extend these studies by analyzing how A-RIO behaves when faced with some well-known parameters that affect TCP performance: RTT delay, packet size, and the presence of unresponsive flows. Our study is based on extensive ns-2 simulations in settings considering under- and over-provisioned networks
Abstract:
Active queue management (AQM) mechanisms manage queue lengths by dropping packets when congestion is building up; end-systems can then react to such losses by reducing their packet rate, hence avoiding severe congestion. They are also very useful for the differentiated forwarding of packets in the DiffServ architecture. Many studies have shown that setting the parameters of an AQM algorithm may prove difficult and error-prone, and that the performance of AQM mechanisms is very sensitive to network conditions. The Adaptive RIO mechanism (A-RIO) [16] addresses both issues. It requires a single parameter, the desired queuing delay and adjusts its internal dynamics accordingly. A-RIO has been thoroughly evaluated in terms of delay response and network utilization [16] but no study has been conducted in order to evaluate its behaviour in terms of fairness. By way of ns-2 simulations, this paper examines A-RIO's ability to fairly share the network's resources (bandwidth) between the flows contending for those resources. Using Jain's fairness index as our performance metric, we compare the bandwidth distribution among flows obtained with A-RIO and with RIO
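Jain's fairness index, the performance metric used here, is simple to compute: it equals 1 when all flows receive the same bandwidth and tends toward 1/n when a single flow monopolizes the link. A minimal sketch (the throughput values are made up):

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    1.0 means a perfectly fair bandwidth split among the n flows;
    values near 1/n mean one flow takes almost everything."""
    n = len(throughputs)
    s = sum(throughputs)
    s2 = sum(x * x for x in throughputs)
    return (s * s) / (n * s2)

equal = jain_index([10.0, 10.0, 10.0, 10.0])   # perfectly fair split -> 1.0
skewed = jain_index([37.0, 1.0, 1.0, 1.0])     # one flow dominates -> ~0.29
```

Comparing this index for A-RIO and RIO over the same set of contending flows is exactly the kind of comparison the paper's simulations report.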
Abstract:
Following Davie's example of a Banach space failing the approximation property (1973), we show how to construct a Banach space E which is asymptotically Hilbertian and fails the approximation property. Moreover, the space E is shown to be a subspace of a space with an unconditional basis which is "almost" a weak Hilbert space and which can be written as the direct sum of two subspaces all of whose subspaces have the approximation property
Abstract:
The paper studies probability forecasts of inflation and GDP by monetary authorities. Such forecasts can contribute to central bank transparency and reputation building. Problems with principal and agent make the usual argument for using scoring rules to motivate probability forecasts confused; however, their use to evaluate forecasts remains valid. Public comparison of forecasting results with a “shadow” committee is helpful to promote reputation building and thus serves the motivational role. The Brier score and its Yates-partition of the Bank of England’s forecasts are compared with those of a group of non-bank experts
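The Brier score mentioned above is the mean squared difference between forecast probabilities and binary outcomes; lower is better. A minimal sketch with made-up forecasts (the Yates partition further decomposes this score into calibration and resolution terms and is not reproduced here):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0 = event did not occur, 1 = it occurred); lower is better."""
    n = len(forecasts)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n

# Toy example: probability forecasts that inflation exceeds its target
forecasts = [0.9, 0.7, 0.2, 0.4]
outcomes = [1, 1, 0, 1]
score = brier_score(forecasts, outcomes)
```

A perfect forecaster (probability 1 on every event that occurs, 0 otherwise) scores 0, which is why comparing a central bank's score against a "shadow" committee's score is informative.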
Abstract:
Studies of strategic sophistication in experimental normal form games commonly assume that subjects' beliefs are consistent with independent choice. This paper examines whether beliefs are consistent with correlated choice. Players play a sequence of 2x2 normal form games with distinct opponents and no feedback. Another set of players, called predictors, report a likelihood ranking over possible outcomes. A substantial proportion of the reported rankings are consistent with the predictors believing that the choice of actions in the 2x2 game are correlated. Predictions seem to be correlated around focal outcomes and the extent of correlation over action profiles varies systematically between games (i.e., prisoner's dilemma, stag hunt, coordination, and strictly competitive)
Abstract:
This study reports a laboratory experiment wherein subjects play a hawk-dove game. We try to implement a correlated equilibrium with payoffs outside the convex hull of Nash equilibrium payoffs by privately recommending play. We find that subjects are reluctant to follow certain recommendations. We are able to implement this correlated equilibrium, however, when subjects play against robots that always follow recommendations, including in a control treatment in which human subjects receive the robot "earnings." This indicates that the lack of mutual knowledge of conjectures, rather than social preferences, explains subjects' failure to play the suggested correlated equilibrium when facing other human players
Abstract:
This paper presents a model in which a durable goods monopolist sells a product to two buyers. Each buyer is privately informed about his own valuation. Thus all players are imperfectly informed about market demand. We study the monopolist's pricing behavior as players' uncertainty regarding demand vanishes in the limit. In the limit, players are perfectly informed about the downward-sloping demand. We show that in all games belonging to a fixed and open neighborhood of the limit game there exists a generically unique equilibrium outcome that exhibits Coasian dynamics and in which play lasts for at most two periods. A laboratory experiment shows that, consistent with our theory, outcomes in the Certain and Uncertain Demand treatments are the same. Median opening prices in both treatments are roughly at the level predicted and considerably below the monopoly price. Consistent with Coasian dynamics, these prices are lower for higher discount factors. Demand withholding, however, leads to more trading periods than predicted
Resumen:
Se propone un nuevo paradigma educativo para México, a partir de una visión amplia, crítica e innovadora que corresponda a la realidad, necesidades y circunstancias del país
Abstract:
In this article, we propose a new educational paradigm for Mexico based on an encompassing, critical, and innovative vision conforming with the country’s realities, needs, and circumstances
Resumen:
Un recuerdo del gran escritor y personaje, don Ramón del Valle Inclán: su personalidad estrafalaria y consistente; su pertenencia –y amistad– a la Generación del 98, y la creación estética del esperpento, que se refleja en sus obras, particularmente en las más famosas, Luces de Bohemia y Tirano Banderas
Abstract:
This article is dedicated to the memory of the great writer and protagonist, Don Ramón del Valle Inclán. We pay tribute to his outlandish yet consistent personality, his association and friendship with the Generation of ’98, and the creation of the esperpento, present in his works, notably in the most famous ones, Luces de Bohemia and Tirano Banderas
Abstract:
Universality is a desirable feature in any system. For decades, elusive measurements of three-phase flows have yielded countless permeability models that describe them. However, the equations governing the solution of water and gas co-injection have a robust structure. This universal structure holds for Riemann problems in green oil reservoirs. In the past we established a large class of three-phase flow models including convex Corey permeability, Stone I and Brooks-Corey models. These models share the property that characteristic speeds become equal at a state somewhere in the interior of the saturation triangle. Here we construct a three-phase flow model with unequal characteristic speeds in the interior of the saturation triangle, equality occurring only at a point of the boundary of the saturation triangle. Yet the solution for this model still displays the same universal structure, which favors the two possible embedded two-phase flows of water-oil or gas-oil. We focus on showing this structure under the minimum conditions that a permeability model must meet. This finding is a guide to seeking a purely three-phase flow solution maximizing oil recovery
Abstract:
In 1977 Korchinski presented a new type of shock discontinuity in conservation laws. These singular solutions were coined δ-shocks since there is a time dependent Dirac delta involved. A naive description is that such δ-shock is of the overcompressive type: a single shock wave belonging to both families, the four characteristic lines of which impinge into the shock itself. In this work, we open the fan of solutions by studying two-family waves without intermediate constant states but possessing central rarefactions or comprising δ-shocks
Resumen:
This manuscript is dedicated to the memory of Antonmaria Minzoni; we discuss one of his contributions to the world of mathematics. Curiously, it cannot be found in any scientific journal nor in his own books. Among his contributions to science, << >> gave me these results to include in my undergraduate thesis, thereby turning it into what he considered a thesis sensu stricto. At the end of the last century, an article caught Minzoni's attention with a numerical result for a problem in general relativity. That result admits an asymptotic approximation, which we describe here. To do so, we first reconstruct part of the history and the equations that make up Einstein's theory of general relativity. We will see how the collapse of matter is possible in a special configuration with the possibility of generating a black hole
Abstract:
For a family of Riemann problems for systems of conservation laws, we construct a flux function that is scalar and is capable of describing the Riemann solution of the original system
Abstract:
We discuss the solution for commonly used models of the flow resulting from the injection of any proportion of three immiscible fluids such as water, oil, and gas in a reservoir initially containing oil and residual water. The solutions supported in the universal structure generically belong to two classes, characterized by the location of the injection state in the saturation triangle. Each class of solutions occurs for injection states in one of the two regions, separated by a curve of states for most of which the interstitial speeds of water and gas are equal. This is a separatrix curve because on one side water appears at breakthrough, while gas appears for injection states on the other side. In other words, the behavior near breakthrough is flow of oil and of the dominant phase, either water or gas; the non-dominant phase is left behind. Our arguments are rigorous for the class of Corey models with convex relative permeability functions. They also hold for Stone’s interpolation I model [5]. This description of the universal structure of solutions for the injection problems is valid for any values of phase viscosities. The inevitable presence of an umbilic point (or of an elliptic region for the Stone model) seems to be the cause of this universal solution structure. This universal structure was perceived recently in the particular case of quadratic Corey relative permeability models and with the injected state consisting of a mixture of water and gas but no oil [5]. However, the results of the present paper are more general in two ways. First, they are valid for a set of permeability functions that is stable under perturbations, the set of convex permeabilities. Second, they are valid for the injection of any proportion of three rather than only two phases that were the scope of [5]
Abstract:
Flow of three fluids in porous media is governed by a system of two conservation laws. Shock solutions are described by curves in state space, which is the saturation triangle of the fluids. We study a certain bifurcation locus of these curves, which is relevant for certain injection problems. Such a structure arises, for instance, when water and gas are injected in a mature reservoir either to dislodge oil or to sequestrate CO2. The proof takes advantage of a certain wave curve to ensure that the waves in the flow are a rarefaction preceded by a shock, which is in turn preceded by a constant two-phase state (i.e., it lies at the boundary of the saturation triangle). For convex permeability models of Corey type, the analysis reveals further details, such as the number of possible two-phase states that correspond to the above-mentioned shock, whatever the left state of the latter is within the saturation triangle
Resumen:
In number theory, the study of prime numbers is of central relevance. Hilbert is known to have believed that this theory would always remain the purest part of mathematics; the turn came with cryptography and, in turn, the search for ever larger prime numbers. Throughout history, from 1952 to the present day, 32 of the 33 largest recorded primes have been so-called Mersenne primes; only from 1989 to 1992 did the number 391581·2^216193−1 break this rule. Since 1996 all the results have come from the collective GIMPS project. This work proposes a simple proof of the theorem through which we know the Mersenne primes, without sophisticated number-theoretic tools. The proof is accessible and so simple that it allows us to go a bit further and generalize the Mersenne primes by exhibiting a large part of the family that was hidden
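The abstract's simple proof is not reproduced in this listing, but the theorem in question underlies the classical Lucas-Lehmer test, the test GIMPS runs to certify Mersenne primes 2^p - 1. A minimal sketch:

```python
def lucas_lehmer(p):
    """Classical Lucas-Lehmer test for Mersenne numbers: M_p = 2**p - 1
    is prime iff s_{p-2} == 0 (mod M_p), where s_0 = 4 and
    s_{k+1} = s_k**2 - 2."""
    if p == 2:
        return True          # M_2 = 3 is prime; the recurrence starts at p = 3
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents p < 20 for which 2**p - 1 is a Mersenne prime
mersenne_prime_exponents = [p for p in range(2, 20) if lucas_lehmer(p)]
```

Note that a prime exponent is not sufficient: p = 11 is prime, yet M_11 = 2047 = 23 × 89 is composite, and the test detects this.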
Abstract:
Is ignition or extinction the fate of an exothermic chemical reaction occurring in a bounded region within a heat conductive solid consisting of a porous medium? In the spherical case, the reactor is modeled by a system of reaction-diffusion equations that reduces to a linear heat equation in a shell, coupled at the internal boundary to a nonlinear ODE modeling the reaction region. This ODE can be regarded as a boundary condition. This model allows the complete analysis of the time evolution of the system: there is always a global attractor. We show that, depending on physical parameters, the attractor contains one or three equilibria. The latter case has special physical interest: the two equilibria represent attractors ("extinction" or "ignition") and the third equilibrium is a saddle. The whole system is well approximated by a single ODE, a "reduced" model, justifying the "heat transfer coefficient" approach of chemical engineering
Abstract:
This paper provides evidence on the difficulty of expanding access to credit through large institutions. We use detailed observational data and a large-scale countrywide experiment to examine a large bank's experience with a credit card that accounted for approximately 15% of all first-time formal sector borrowing in Mexico in 2010. Borrowers have limited credit histories and high exit risk: a third of all study cards are defaulted on or canceled during the 26-month sample period. We use a large-scale randomized experiment on a representative sample of the bank's marginal borrowers to test whether contract terms affect default. We find that large experimental changes in interest rates and minimum payments do little to mitigate default risk. We also use detailed data on purchases and payments to construct a measure of bank revenue per card and find it is generally low and difficult to predict (using machine learning methods), perhaps explaining the bank's eventual discontinuation of the product. Finally, we show that borrowers generating a favorable credit history are much more likely to switch banks, providing suggestive evidence of a lending externality. Taken together, these facts highlight the difficulty of increasing financial access using large formal sector financial organizations
Abstract:
This paper analyzes the existence and extent of downward nominal wage rigidities in the Mexican labor market using data from the administrative records of the Mexican Social Security Institute (IMSS). This establishment-level, panel dataset allows us to track workers employed with the same firm, observe their wage profiles and calculate the nominal-wage changes they experience over time. Based on the estimated density functions of nominal wage changes, we are able to calculate some standard tests of nominal wage rigidity that have been proposed in the literature. Furthermore, we extend these tests to take into account the presence of minimum wage laws that may affect the distribution of nominal wage changes. The densities and tests calculated using these data are similar to those obtained using administrative data from other countries, and constitute a significant improvement over the measures of nominal wage rigidities obtained from household survey data. We document the importance of minimum wages in the Mexican labor market, as evidenced by the large fraction of minimum wage earners and the indexation of wage changes to the minimum wage increases. We find considerably more nominal wage rigidity than previous estimates obtained for Mexico using data from the National Urban Employment Survey (ENEU) suggest, but lower than that reported for developed countries by other studies that use comparable data
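As a minimal illustration (not one of the paper's formal tests), downward nominal rigidity shows up as a spike at exactly zero, together with missing mass just below zero, in the distribution of nominal wage changes. The shares involved can be computed as:

```python
def rigidity_stats(wage_changes):
    """Shares of workers with zero, negative, and positive nominal wage changes.

    A large 'zero' share combined with a small 'negative' share is the usual
    descriptive symptom of downward nominal wage rigidity.
    """
    n = len(wage_changes)
    zero = sum(1 for d in wage_changes if d == 0) / n
    neg = sum(1 for d in wage_changes if d < 0) / n
    return {"zero": zero, "negative": neg, "positive": 1.0 - zero - neg}
```

Applied to year-over-year wage changes of job stayers, the function gives the raw ingredients that the formal literature tests refine (e.g., by comparing the mass below zero against a counterfactual flexible distribution).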
Abstract:
Autonomous agents (AAs) are capable of evaluating their environment from an emotional perspective by implementing computational models of emotions (CMEs) in their architecture. A major challenge for CMEs is to integrate the cognitive information projected from the components included in the AA's architecture. In this chapter, a scheme for modulating emotional stimuli using appraisal dimensions is proposed. In particular, the proposed scheme models the influence of cognition on appraisal dimensions by modifying the limits of fuzzy membership functions associated with each dimension. The computational scheme is designed to facilitate, through input and output interfaces, the development of CMEs capable of interacting with cognitive components implemented in a given cognitive architecture of AAs. A proof of concept based on real-world data is carried out, providing empirical evidence that the proposed mechanism can properly modulate the emotional process
Abstract:
In this paper we present a mechanism to model the influence of agents’ internal and external factors on the emotional evaluation of stimuli in computational models of emotions. We propose the modification of configurable appraisal dimensions (such as desirability and pleasure) based on influencing factors. As part of the presented mechanism, we introduce influencing models to define the relationship between a given influencing factor and a given set of configurable appraisal dimensions utilized in the emotional evaluation phase. Influencing models translate factors’ influences (on the emotional evaluation) into fuzzy logic adjustments (e.g., a shift in the limits of fuzzy membership functions), which allow biasing the emotional evaluation of stimuli. We implemented a proof-of-concept computational model of emotions based on real-world data about individuals’ emotions. The obtained empirical evidence indicates that the proposed mechanism can properly affect the emotional evaluation of stimuli while preserving the overall behavior of the model of emotions
Abstract:
In this paper we introduce the concept of configurable appraisal dimensions for computational models of emotions of affective agents. Configurable appraisal dimensions are adjusted based on internal and/or external factors of influence on the emotional evaluation of stimuli. We developed influencing models to define the extent to which influencing factors should adjust configurable appraisal dimensions. Influencing models define a relationship between a given influencing factor and a given set of configurable appraisal dimensions. Influencing models translate the influence exerted by internal and external factors on the emotional evaluation into fuzzy logic adjustments, e.g., a shift in the limits of fuzzy membership functions. We designed and implemented a computational model of emotions based on real-world data about emotions to evaluate our proposal. Our empirical evidence suggests that the proposed mechanism properly influences the emotional evaluation of stimuli of affective agents
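The adjustment mechanism described in the three abstracts above, shifting the limits of fuzzy membership functions to bias an appraisal dimension, can be sketched as follows; the triangular membership function and the particular shift rule are illustrative assumptions, not the papers' exact design:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with limits a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shift_limits(limits, influence):
    """Shift all limits of a membership function by an influencing factor in [-1, 1].

    The factor (e.g., a personality trait score) translates into a displacement
    of the fuzzy set, biasing the subsequent emotional evaluation of stimuli.
    """
    a, b, c = limits
    delta = influence * 0.25 * (c - a)   # illustrative gain, an assumption
    return (a + delta, b + delta, c + delta)
```

For instance, a "desirability" value of 0.5 has full membership under limits (0.2, 0.5, 0.8), but only partial membership once a positive influencing factor shifts the set, which is exactly the kind of bias the mechanism introduces.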
Abstract:
In this paper, we present a computational model of emotions based on the context of an Integrative Framework designed to model the interaction of cognition and emotion. In particular, we devise mechanisms for assigning an emotional value to events perceived by autonomous agents using a set of appraisal variables. Defined as fuzzy sets, these appraisal variables model the influence of cognition on emotion assessment. We do this by changing the limits of the fuzzy membership functions associated with each appraisal variable. In doing so, we aim to provide agents with a degree of emotional intelligence. We also defined a case study involving three agents, two with different personalities (as a cognitive component) and one without a personality, to explore their reactions to the same stimulus, obtaining a different emotion for each agent. We noticed that emotions are biased by the interaction of cognitive and affective information, suggesting the elicitation of more precise emotions
Abstract:
This paper presents a model to optimize inventory balancing in a bike-sharing system. It is applied with real Ecobici data from a specific set of stations in the Polanco area of Mexico City to perform static rebalancing (over a fixed period of the day). A satisfaction function is defined that accounts for the probability of finding available bicycles and free docks at each station. A weighted function of satisfaction and the total route time for loading and unloading with a single vehicle is optimized by solving a mixed-integer linear program. The results suggest that inventory rebalancing in a bike-sharing system can be optimized with minimal resources
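A satisfaction function of the kind described, combining the probability of finding a bicycle with the probability of finding a free dock, might look like this under an assumed Poisson demand model; the paper's exact function and its MILP embedding may differ:

```python
import math

def p_at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

def satisfaction(bikes, capacity, lam_pickups, lam_dropoffs, w=0.5):
    """Weighted probability that users find a bike and that users find a free dock.

    lam_pickups / lam_dropoffs are assumed Poisson rates over the planning period;
    w weights pickup satisfaction against drop-off satisfaction.
    """
    p_bike = 1.0 - p_at_least(bikes + 1, lam_pickups)          # pickups fit in stock
    p_dock = 1.0 - p_at_least(capacity - bikes + 1, lam_dropoffs)  # drop-offs fit
    return w * p_bike + (1 - w) * p_dock
```

A half-full station then scores near 1 under moderate demand, while an empty or full station is penalized on one side, which is what drives the rebalancing objective.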
Resumen:
A cinco siglos del encuentro entre dos mundos, que dio inicio a un periodo histórico en el cual se gestaron los cimientos del México actual, se analizan varios aspectos del proceso de creación y consolidación de Nueva España, su papel dentro de la monarquía hispánica y la Iglesia católica, así como sus posibles consecuencias en el desarrollo de la cultura mexicana
Abstract:
Five centuries after the encounter between two worlds, which began a historical period in which the foundations of today’s Mexico were developed, several aspects of the process of creation and consolidation of New Spain, its role within the Hispanic monarchy and the Catholic church are analyzed, as well as its possible consequences in the flourishing of Mexican culture
Abstract:
The purpose of this study is to analyze how the value generated from labor (intellectual and manual) and from natural resources (which also contribute value) is generated (gross domestic product), allocated (national income), distributed (disposable income), used (spending and saving), and accumulated (wealth); that is, the purpose is to study inequality in the distribution of the value generated in the economy from the standpoint of the objective theory of value, rather than measuring subjective inequality of well-being (happiness) through consumption (utility). While the issue of capabilities and freedoms (equality of opportunity) is considered important, the broader framework of the need to fulfill human rights is taken as the reference. Accordingly, more weight is given to the urgent measures that must be taken ex ante, since these would hold the solution to the problems of poverty and inequality in the countries of Latin America and the Caribbean. The benefits generated by society must be distributed fairly, and all of its members must be granted the full enjoyment of human rights, in order to build a more just world
Resumen:
La mayoría de los investigadores sobre la desigualdad en México han concluido que, aunque la inequidad en los ingresos es muy alta, su tendencia es a la baja. Han llegado a esta conclusión porque han utilizado las cifras oficiales de ingresos de las encuestas de hogares, sin hacer ninguna corrección. Por el contrario, en este estudio se propone un ajuste a los datos de ingresos provenientes de las encuestas de ingresos y gastos de los hogares, basado en las cuentas nacionales. Las cifras ajustadas muestran que la desigualdad es alta y creciente, debido a las políticas públicas implementadas desde mediados de la década de 1980. Por lo tanto, debemos atrevernos a pensar de manera diferente y cambiar el curso económico del país
Abstract:
Most inequality researchers in Mexico have concluded that although income inequality is very high, there is a downward trend in this inequality. They have come to this conclusion because they have used official income figures from household surveys without making any correction. In contrast, this study proposes an adjustment to income data from household income and expenditure surveys, based on national accounts. Adjusted figures show that inequality is high and increasing, due to public policies implemented since the mid-1980s. Therefore, we must dare to think differently and change the economic course of the country
Abstract:
In 2014, the country's total wealth amounted to 76.7 trillion pesos. Households held 37% of it; the government managed 23%, private firms 19%, public enterprises 9%, the rest of the world 7%, and financial institutions 5%. Under an equal distribution, each household would have, on average, 900,000 pesos in physical assets (houses, land, cars, and various household goods) and financial assets (money and financial investments), an amount that would be more than enough for people to live comfortably: close to 400,000 pesos per adult, on average. Unfortunately, the distribution is highly unequal. Two thirds of the wealth is in the hands of the richest 10% of the country, and the very rich 1% hold more than a third. The Gini coefficient of wealth is therefore 0.79. The distribution of financial assets is even more unequal: 80% is owned by the richest 10%. In 2015 there were only 211,000 brokerage accounts held by Mexicans, with a total investment of 16 trillion pesos, 22% of the national wealth. Eleven percent of the accounts hold investments above 500 million pesos and together account for 79.5% of total investment. That is, 23,000 people (assuming one account per person) hold 80% of the investment in the Mexican Stock Exchange. This is why Mexico appears in the Forbes list, as well as in the reports prepared by the financial institutions that manage wealth funds, which see the country as a market to serve. Over the last eleven years, between 2003 and 2014, the country's wealth grew at an average annual rate of 7.9% in real terms, so Mexico doubled its wealth between 2004 and 2014. By contrast, gross domestic product grew by a meager 2.6% per year on average over the same period. [...]
Abstract:
This study proposes a methodology to adjust the data of Mexican household income and expenditure surveys to national accounts information, in order to have reliable data for studying income and wealth inequality, especially among the highest-income sectors. Using a method that allocates incomes appropriately without affecting poverty measures, it is estimated that inequality in Mexico is significantly higher than has been estimated to date. The share of total current income concentrated by the richest 10% of Mexican families rises from 35% to 62%, raising the Gini coefficient from 0.45 to 0.68. Likewise, the richest 1% of families concentrate 22.8% of total income, with an average income of 625,000 pesos per month. Using the Pareto function, this paper also concludes that the income share of the richest 1% of households rises to 34.2%, with average incomes of 973,000 pesos per month, while the richest 0.1% of families (just over 31,000) account for 19% of income, with average earnings of 5,000,000 pesos per month. The study also shows that if only the allocation of primary income is considered, thereby excluding transfers, the richest 10% concentrate 66%, raising the Gini coefficient to 0.73
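The Pareto step mentioned above rests on a textbook property: under a Pareto income tail with shape α > 1, the top fraction p of households receives the share p^(1 − 1/α) of total income. A sketch of that property, not the study's full methodology:

```python
import math

def top_share(p, alpha):
    """Income share of the top fraction p under a Pareto tail with shape alpha > 1."""
    return p ** (1.0 - 1.0 / alpha)

def invert_alpha(p, share):
    """Shape alpha implied by an observed top-p income share (with p < share < 1)."""
    return 1.0 / (1.0 - math.log(share) / math.log(p))
```

For example, with α = 2 the top 1% receives 0.01^0.5 = 10% of income; inverting the formula recovers the tail shape implied by an observed top share such as the 34.2% reported above.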
Resumen:
Se analizan las principales ideas del libro El capital en el siglo xxi de Thomas Piketty: las fuerzas que inciden en la desigualdad de la riqueza y el ingreso; la primera ley del capitalismo y el aumento en la proporción del capital en la economía; la segunda ley del capitalismo y el aumento en la proporción de riqueza respecto al ingreso nacional; la desigualdad en los ingresos del trabajo y del capital; el cambio en la composición de la riqueza y las recomendaciones del autor para salvar la globalización y el sistema de mercado de los problemas sociales y políticos que la inequidad ha producido. Se confrontan estas ideas con México. Se concluye que el libro de Piketty puede ayudar a diseñar un mejor futuro para México, siempre y cuando nos atrevamos a pensar diferente
Abstract:
In this article, we will analyze the main ideas from Thomas Piketty’s book, Capital in the Twenty-First Century. They consist of the following: the elements causing inequality in wealth and income, the first law of capitalism and the increase of the proportion of capital in the economy, the second law of capitalism and the increase in the proportion of wealth with respect to national income, the inequalities in the division of income of labor and capital, and the changing forms of wealth. Moreover, the author gives recommendations to save both globalization and the market economy from the social and political problems brought on by inequality. All these ideas are contrasted with the Mexican economy and it is concluded that Piketty’s contributions are helpful to devise a better future for our country, as long as we dare to think differently
Resumen:
El problema del hambre en México es aún más grave de lo que se piensa. Quien no sufre subnutrición, está mal nutrido; la población de México está famélica u obesa; tan sólo el 14% tiene una nutrición adecuada. Una de las causas más importantes es el cambio en el entorno alimenticio, producto de la apertura comercial de México con Estados Unidos. Como parte del tlcan han llegado a México una gran cantidad de productos alimenticios procesados que han provocado una “epidemia” de diabetes en el país, que ocupa el primer lugar entre las causas de defunciones. La “Cruzada contra el Hambre” es insuficiente: se requiere implantar políticas públicas más contundentes, como un impuesto a la importación de productos alimenticios procesados, si en verdad se desea enfrentar el reto
Abstract:
Hunger in Mexico is an even more serious problem than commonly thought. Those who are not undernourished are malnourished; the Mexican population is either starving or obese, and only 14% of the population has good nutritional habits. The main cause is the change in the food environment due to the opening of commercial borders between Mexico and the United States. As part of NAFTA, a great variety of processed food products has arrived, causing a diabetes epidemic; diabetes is now the leading cause of death in our country. Thus, the "Crusade against Hunger" is insufficient, and more forceful public policies are called for, namely a tax on imported processed food
Resumen:
Se analiza la tesis del proceso de individualización de Ulrich Beck, sociólogo alemán contemporáneo, y se le compara con la realidad empírica de México. Para el autor, la modernidad reflexiva es un reflejo de la política y la tecnología; se trata de una revolución de las consecuencias a partir de tres ámbitos: ingreso, empleo y familia. A diferencia de Beck, se propone una distinción entre individualización, producto del Estado de bienestar, y la que surge de su desmantelamiento
Abstract:
In this article, we analyze the individualization thesis of Ulrich Beck, a contemporary German sociologist, comparing it with Mexico's empirical reality. He proposes that reflexive modernization is a reflection of politics and technology: a revolution of consequences in three spheres: income, employment, and family. In contrast to Beck, we propose a distinction between individualization that is a product of the Welfare State and that which results from its dismantling
Abstract:
Bike Sharing Systems (BSS) are an integral part of multimodal transport systems in urban areas. These systems have many advantages, such as low cost, environmental friendliness, and flexibility. Nevertheless, the lack of bicycles and the lack of spaces to drop them off discourage people from using a BSS. Thus, we propose a Saturation Index ($SI$) to track the number of bicycles at a station over a period of time (seconds, minutes, hours); based on the $SI$, the supply and demand levels are known for every station. With those levels, the number of bicycles to be moved by truck among the stations is computed by solving a transshipment model. Finally, a route that minimizes the travel distance of the truck among the stations is computed using a heuristic. To test our approach, we used the data set of the ECOBICI system in Mexico City and programmed a computational application based on R and Python. The results show that during the day a station can change from supplying bicycles to demanding them. Thus, the proposed approach must be run every time the managers want to balance the system
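The pipeline described above (Saturation Index, then supply/demand levels, then moves) can be sketched as follows; the greedy matching below is a simplified stand-in for the paper's transshipment model and routing heuristic:

```python
def saturation_index(bikes, capacity):
    """SI in [0, 1]: fraction of docks occupied at a station."""
    return bikes / capacity

def rebalancing_moves(stations, target_si=0.5):
    """Greedy sketch: move bikes from over-saturated to under-saturated stations.

    stations maps a name to (bikes, capacity); returns (src, dst, n) truck moves
    that bring every station to (roughly) the target saturation.
    """
    surplus, deficit = [], []
    for name, (bikes, cap) in stations.items():
        diff = bikes - round(target_si * cap)
        if diff > 0:
            surplus.append([name, diff])
        elif diff < 0:
            deficit.append([name, -diff])
    moves = []
    while surplus and deficit:
        src, dst = surplus[0], deficit[0]
        n = min(src[1], dst[1])
        moves.append((src[0], dst[0], n))
        src[1] -= n
        dst[1] -= n
        if src[1] == 0:
            surplus.pop(0)
        if dst[1] == 0:
            deficit.pop(0)
    return moves
```

A transshipment LP would additionally minimize truck distance over these moves; the greedy version only illustrates how SI levels translate into load/unload quantities.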
Abstract:
In this paper we predict the overall withdrawal of Ecobici bicycles for a given day. The principal problem addressed was the lack of data available to understand certain behavior related to Ecobici demand at some times of the day. However, with the information available on the Ecobici website, we were able to fit a time series model that helps us forecast total withdrawals in the short term. This can be used to estimate the overall demand for bicycles at a given hour of the day, in order to identify the times of day when demand exceeds availability. The principal motivation of our analysis is to predict the demand at each Ecobici station, so this model serves as a benchmark for future analyses
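As an illustration of such a short-term benchmark, a plain AR(1) autoregression can be fitted to the series of total withdrawals; the paper's actual model specification is not given here, so this is only a sketch:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1]; returns (c, phi)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    return my - phi * mx, phi

def forecast(series, steps, model=None):
    """Iterate the fitted AR(1) forward from the last observation."""
    c, phi = model or fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

With hourly withdrawal counts as `series`, `forecast(series, k)` gives the next k hours of expected demand, which can then be compared against availability.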
Abstract:
In this article, we examine how collective notions of belonging and imagination become a fertile terrain upon which transnational websites can sustain certain social practices across national boundaries that would be otherwise difficult. Drawing on field work carried out in the United States and Mexico, and using transnational imagination as our analytical lens, we observed three phenomena that are closely related to the use of a transnational website by a migrant community. First, the transnational website under study was a place for a collective imaginary rather than just for the circulation of news. Also, through transnational imagination, migrants can make claims about their status in their community of origin. Moreover, the website is instrumental in harmonizing the various views of the homelands’ realities. Finally, the website can inspire us to look beyond dyadic forms of communication
Abstract:
When somebody dies, her or his presence might still be hanging around others' lives on Social Networking Sites (SNS). Several factors might influence the way we perceive this digital presence after the death, including personal and religious beliefs. In this work, we present results from a study aimed at examining the differences between the way we perceive an online profile before and after the owner of the profile has died. In the few weeks following the death, the digital presence seems to become more salient, although this salience might be only transient. Our findings not only highlight the importance of studying this area but also raise further questions that need to be addressed in order to better understand this topic
Abstract:
Let τ be a hereditary torsion theory on Mod-R. For a right τ-full R-module M, we establish that [τ, τ ∨ ξ(M)] is a Boolean lattice; we find necessary and sufficient conditions for the interval [τ, τ ∨ ξ(M)] to be atomic, and we give conditions for the atoms to be of some specific type in terms of the internal structure of M. We also prove that there are lattice isomorphisms between the lattice [τ, τ ∨ ξ(M)] and the lattice of τ-pure fully invariant submodules of M, under the additional assumption that M is absolutely τ-pure. With the aid of these results, we obtain a decomposition of a τ-full and absolutely τ-pure R-module M as a direct sum of τ-pure fully invariant submodules N and N′ with different atomic characteristics on the intervals [τ, τ ∨ ξ(N)] and [τ, τ ∨ ξ(N′)], respectively
Resumen:
Objetivo. Describir y analizar el gasto de la Secretaría de Salud asociado con iniciativas de comunicación social de las campañas de prevención de enfermedades transmitidas por vectores (Zika, chikunguña y dengue) y la evaluación de impacto o resultados. Material y métodos. La información se obtuvo de 690 contratos de prestación de servicios de comunicación social (2015-2017), asociados con dos declaraciones de emergencia epidemiológica (EE-2-2015 y EE-1-2016). Resultados. Se concluye una débil evaluación de impacto del gasto público. No existe evidencia suficiente que demuestre la correspondencia del gasto en comunicación social con la efectividad y cumplimiento de las campañas. Conclusiones. Los hallazgos permiten definir recomendaciones para vigilar, transparentar y hacer más eficiente el gasto público. Existe información pública sobre el gasto; sin embargo, es necesario garantizar mecanismos de transparencia, trazabilidad de contratos y evaluación de impacto de las campañas
Abstract:
Objective. To describe and analyze the expenditure of the Ministry of Health associated with social communication campaigns for the prevention of vector-borne diseases (Zika, chikungunya, and dengue). Materials and methods. The information was obtained through the analysis of 690 contracts for the provision of social communication services (2015-2017). The analyzed contracts are linked to two epidemiological emergency declarations (EE-2-2015, EE-1-2016). Results. The analysis finds weak evaluation of the impact of public spending. There is not enough evidence to show that social communication spending corresponds to the effectiveness and fulfillment of the campaigns' goals. Conclusions. The findings allow us to define recommendations for monitoring public spending. There are platforms that provide information on social communication spending; however, it is necessary to guarantee transparency, accountability, and governance among those responsible for social communication and official advertising, and to strengthen outcome evaluation mechanisms
Abstract:
We introduce a high-dimensional factor model with time-varying loadings. We cover both stationary and nonstationary factors to broaden the range of applications. We propose an estimation procedure based on two stages. First, we estimate the common factors by principal components. In the second stage, treating the estimated factors as observed, the time-varying loadings are estimated by an iterative generalized least squares procedure using wavelet functions. We investigate finite-sample performance through Monte Carlo simulations. Finally, we apply the model to study electricity prices and loads in the Nord Pool power market
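Stage one of the procedure, extracting a common factor by principal components, can be sketched as follows; this toy version computes only the leading factor with constant loadings via power iteration, whereas the paper's loadings are time-varying and estimated by wavelet-based GLS:

```python
def first_factor(X, iters=200):
    """Leading principal component of the rows of X (a list of observations).

    Returns (loadings, factor_scores): the top eigenvector of the sample
    covariance matrix and the projection of each demeaned observation onto it.
    """
    n, k = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(k)]
    # Sample covariance matrix of the k series
    C = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / n
          for j in range(k)] for i in range(k)]
    v = [1.0] * k                      # power iteration for the top eigenvector
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    factors = [sum((row[j] - means[j]) * v[j] for j in range(k)) for row in X]
    return v, factors
```

In the two-stage scheme, `factors` would then be treated as observed regressors when estimating the loadings period by period.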
Abstract:
The expansion of democracy in the world has been paradoxically accompanied by a decline of political trust. By looking at the trends in political trust in new and stable democracies over the last 20 years, and their possible determinants, we claim that an observable decline in trust reflects the post-honeymoon disillusionment rather than the emergence of a more critical citizenry. However, the first new democracies of the ‘third wave’ show a significant reemergence of political trust after democratic consolidation. Using data from the World Values Survey and the European Values Survey, we develop a multivariate model of political trust. Our findings indicate that political trust is positively related to well-being, social capital, democratic attitudes, political interest, and external efficacy, suggesting that trust responds to government performance. However, political trust is generally hindered by corruption permissiveness, political radicalism and postmaterialism. We identify differences by region and type of society in these relationships, and discuss the methodological problems inherent to the ambiguities in the concept of political trust
Abstract:
In this paper, we present an agent-based simulation (ABS) model of a synthetic biology system that captures substrates and transfers them amongst a group of enzymes until reaching an acceptor enzyme, which generates a substrate-channeling event (SCE). In particular, we analyze the number of simulation cycles required to reach a pre-specified number of SCEs varying the system composition, which is given by the number of enzymes of two types: donor and acceptor. The results show an efficient frontier that generates the desired number of SCEs with the minimum number of cycles and the lowest acceptor:donor ratio for a given density of enzymes in the system. This frontier is characterized by an exponential function to define the system composition that would minimize the number of cycles to generate the desired SCEs. The output of the ABS confirms that compositions obtained by this function are highly efficient
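A toy version of such a simulation, counting the cycles needed to accumulate a target number of channeling events for a given donor/acceptor composition, might look like this; the update rules are illustrative assumptions, not the paper's model:

```python
import random

def run_abs(donors, acceptors, target_events, grid=100, seed=1):
    """Toy agent-based sketch of substrate channeling.

    A substrate lands on a random grid slot each cycle; landing on a donor
    enzyme captures it, and subsequently landing on an acceptor produces a
    substrate-channeling event (SCE). Returns cycles needed for target_events.
    """
    rng = random.Random(seed)
    enzymes = ["D"] * donors + ["A"] * acceptors   # system composition
    events = cycles = 0
    carrying = False                               # substrate held by a donor
    while events < target_events:
        cycles += 1
        hit = rng.randrange(grid)
        kind = enzymes[hit] if hit < len(enzymes) else None
        if kind == "D":
            carrying = True
        elif kind == "A" and carrying:
            events += 1
            carrying = False
    return cycles
```

Sweeping `donors` and `acceptors` at fixed enzyme density and recording the cycles per composition is the kind of experiment from which an efficient frontier like the one described could be traced.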
Resumen:
Siguiendo las recomendaciones de la Organización para la Cooperación y Desarrollo Económico, hemos utilizado la información de patentes contenida en la base de datos de consulta PATENTSCOPE como un indicador de innovación tecnológica con el fin de analizar la participación de los inventores mexicanos en las solicitudes de patentes en un periodo de veinte años (1995-2015). Se realizó el análisis tomando en cuenta el género de los inventores para contrastar la participación de hombres y mujeres. Se muestran algunos indicadores tales como participación, contribución y presencia. Los resultados del estudio establecen que las inventoras mexicanas participan en los títulos de patentes con un equipo de inventores pequeño a mediano (de acuerdo al número de inventores enlistados), mientras los inventores mexicanos tienden a hacerlo en solitario. Se establece también que el área tecnológica en la que los inventores mexicanos hombres y mujeres tienen mayor participación es la de química y metalurgia. Los resultados revelan disparidades de género que deberían atenderse mediante políticas públicas para alcanzar las Metas del Milenio y las Metas de Sustentabilidad y Desarrollo establecidas por la ONU, así como promover la equidad de género en las actividades relacionadas con ciencia y tecnología
Abstract:
Following the recommendations of the Organisation for Economic Co-operation and Development, we have used patent data in the PATENTSCOPE database as an indicator of technological innovation in order to analyze Mexican inventors' involvement in patent filing over a 20-year period (1995-2015). The analysis was disaggregated by gender to observe patterns and trends in the participation of both male and female inventors. Indicators such as participation, contribution, and presence are shown. Findings reveal that Mexican female inventors more often apply for patent titles within a small to medium-sized team, while male inventors tend to apply alone. It was also found that the technological area in which both male and female Mexican inventors participate most, including all its subareas, is chemistry and metallurgy. The results reveal gender disparities that should be addressed by Mexican public policy in order to meet the United Nations Millennium Development Goals and Sustainable Development Goals, and to promote gender equity in science- and technology-related activities
Resumo:
Seguindo as recomendações da Organização para a Cooperação e Desenvolvimento Econômico, temos utilizado a informação de patentes contida na base de dados de consulta PATENTSCOPE como um indicador de inovação tecnológica com a finalidade de analisar a participação dos inventores Mexicanos nas solicitações de patentes em um período de vinte anos (1995-2015). Foi realizada a análise levando em consideração o género dos inventores para contrastar a participação de homens e mulheres. São mostrados alguns indicadores tais como participação, contribuição e presença. Os resultados do estudo estabelecem que as inventoras Mexicanas participam nos títulos de patentes com uma equipe de inventores entre pequena a média (de acordo com o número de inventores listados), por sua vez, os inventores Mexicanos tendem a fazê-lo à sós. Foi estabelecido também que as áreas tecnológicas nas quais os inventores Mexicanos, homens e mulheres, têm maior participação são as de Química e Metalurgia. Os resultados revelam disparidades de género que deveriam ser atendidas mediante políticas públicas para alcançar as Metas do Milênio e as Metas de Sustentabilidade e Desenvolvimento estabelecidas pela ONU, assim como promover a equidade de género nas atividades relacionadas com Ciência e Tecnologia
Abstract:
University teaching in times of the COVID-19 pandemic has been exposed to new challenges and requirements. The fundamental challenge is to show that institutions designed around, and rooted in, a face-to-face model can offer the same educational quality in a digital format. The challenges come from students, who do not see added value in digital distance education, and, on the other hand, from employers and families, who strongly question the price they are paying in tuition for human resources now educated remotely. Faced with this problem, there are two opposite attitudes: to wait or to adapt. The institutions that have decided to wait believe that this period is temporary and that the adjustments they must make amount to presenting a mitigation plan, in the hope of returning to normal face-to-face instruction. Other institutions have decided to accept that the changes are deeper, and that the pandemic is an opportunity to restructure and rethink their educational model. This paper presents some routes for pursuing this second attitude through a model called Effective Education also Digital (EED)
Resumen:
El trabajo propone distinguir entre el razonamiento práctico moral y el razonamiento jurídico, a partir de dos criterios. El primero es que el razonamiento práctico moral es del dominio exclusivo de la razón práctica y que carece, así, de exigencias de la razón teórica. El segundo criterio es que el razonamiento jurídico posee un cierto grado de lo que se denomina racionalidad de método. El razonamiento práctico moral carece de esta clase de racionalidad. La tesis se argumenta a partir de la distinción entre razón teórica y razón práctica. Por un lado, el razonamiento teórico tiene relación directa con el conocimiento acerca del mundo. Por otro, el razonamiento práctico se ocupa de decidir el curso de acción que se ha de elegir en un caso concreto. Aunque relacionadas, las racionalidades son regidas por diferentes criterios de evaluación. Las diferentes exigencias de razón permiten advertir, también, que ciertos razonamientos jurídicos son pasibles de ciertos sentidos de racionalidad que no son aplicables al razonamiento moral
Abstract:
The article proposes to distinguish moral practical reasoning from legal reasoning on the basis of two criteria. The first criterion is that moral practical reasoning belongs exclusively to the domain of practical reason and thus carries no requirements from the domain of theoretical reason. The second criterion is that legal reasoning possesses, to a degree, a type of rationality called method rationality, which cannot be said to be present in moral practical reasoning at all. The thesis is argued from the distinction between theoretical reason and practical reason. On the one hand, theoretical reasoning is directly concerned with knowledge about the world; on the other, practical reasoning is concerned with deciding the course of action to be taken in a particular case. Although related, the two rationalities are governed by different evaluation criteria. These differing requirements of reason also reveal that certain senses of rationality are applicable to legal reasoning but not to its moral counterpart
Abstract:
In this paper we propose a theoretical model of an ITS (Intelligent Tutoring System) capable of improving and updating computer-aided navigation based on Bloom's taxonomy. For this we use the Bayesian Knowledge Tracing algorithm, performing adaptive control of the navigation among different levels of cognition in online courses. These levels are defined by a taxonomy of educational objectives with a hierarchical order in terms of the control that some processes have over others, known as Marzano's taxonomy, which takes into account the metacognitive system responsible for the creation of goals as well as strategies to fulfill them. The main improvements of this proposal are: 1) an adaptive transition between individual assessment questions determined by levels of cognition; 2) a student model based on the initial responses of a group of learners, which is then adjusted to the ability of each learner; 3) the promotion of metacognitive skills such as goal setting and self-monitoring through the estimation of the attempts required to pass each level. One level of Marzano's taxonomy was left in the hands of the human teacher, making clear that a distinction must be drawn between the tasks in which an ITS can be an important aid and those in which it would be more difficult
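The Bayesian Knowledge Tracing update at the core of such a proposal can be sketched in a few lines. The parameter values below (prior, slip, guess, transition) are illustrative placeholders, not the paper's calibrated values:

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: posterior mastery after
# each observed answer, followed by a learning transition. All parameters
# are hypothetical defaults, not values from the paper.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One BKT step: Bayesian update on the answer, then a learning step."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # The student may also have learned from the attempt itself.
    return posterior + (1 - posterior) * p_transit

p = 0.3  # prior mastery, e.g. estimated from a group of learners' initial responses
for answer in (True, True, False, True):
    p = bkt_update(p, answer)
print(round(p, 3))
```

The estimated mastery `p` would then drive the adaptive transitions between cognition levels.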
Abstract:
Efficient public transport systems rely on origin-destination matrix (ODM) estimation to accurately capture passenger travel patterns, enabling adjustments to frequencies and lines as needed. In this study, we address the ODM estimation problem by employing multiple bi-level programs that consider an outdated ODM and observed passenger flows on specific transit line arcs. Additionally, we consider various optional data types, including boarding and alighting data as well as the structure of the outdated ODM and passenger flows, used all together, separately, in combination, or not at all. We reformulate these bi-level programs into single-level models and use a commercial solver to address the problem on benchmark instances. Our analysis focuses on the impact of incorporating different types of information into the estimation process, leading to valuable insights. We find that considering all the data types leads to higher accuracy than any subset of them. In particular, focusing only on boarding and alighting data leads to improvements in the estimation process, whereas considering only the structure of the outdated ODM and passenger flows leads to reduced accuracy compared to not incorporating either. The latter highlights the significance of data selection in ODM estimation
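A minimal single-level analogue of this estimation idea can be sketched as a regularized least-squares fit: adjust the OD vector toward the observed arc flows while staying close to the outdated ODM. The assignment matrix, flows, and outdated ODM below are invented toy numbers, and the paper's bi-level programs are far richer:

```python
import numpy as np

# Toy ODM adjustment: find x minimizing ||A x - f||^2 + lam ||x - x0||^2,
# where A maps OD-pair demand to arc flows, f holds observed passenger
# flows, and x0 is the outdated ODM. All numbers are hypothetical.
A = np.array([[1.0, 1.0, 0.0],    # arc 1 is used by OD pairs 1 and 2
              [0.0, 1.0, 1.0]])   # arc 2 is used by OD pairs 2 and 3
f = np.array([120.0, 90.0])       # observed flows on the two arcs
x0 = np.array([70.0, 40.0, 30.0]) # outdated ODM

lam = 0.5  # weight on staying close to the outdated ODM
# Closed-form solution via the normal equations of the regularized problem.
x = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ f + lam * x0)
print(np.round(x, 1), np.round(A @ x, 1))
```

The updated matrix reproduces the observed arc flows much more closely than the outdated one while remaining anchored to its structure.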
Abstract:
The efficiency of the planning process in public transport is represented through different measures, which are practically impossible to optimize simultaneously. This study defines a bi-objective optimization problem for transit network design to analyze the trade-off between minimizing travel times and reducing monetary costs for passengers (a trade-off not previously addressed in the literature) while considering a hard constraint on operational costs. Indeed, the minimization of monetary costs for passengers is relevant in transport systems without a completely integrated fare system, where passengers may pay for each trip leg; thus, modeling monetary costs for users is essential when referring to the system's accessibility and route choice. To achieve our goal, we implement an epsilon-constraint algorithm capable of obtaining high-quality approximations of the Pareto front for benchmark instances in hours of computational time, which is reasonable for strategic planning problems. Numerical results show that the conflict between both objectives is evident, and it is possible to identify the lines most useful for optimizing each objective, yielding relevant information for the decision-making process. Finally, we perform a sensitivity analysis on the budget parameter of our optimization problem, showing the classic trade-off between operational costs and the level of service in terms of travel time and monetary cost
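The epsilon-constraint idea can be illustrated on a toy bi-objective routing choice, minimizing total travel time subject to a sweeping cap on total fares. All figures below are hypothetical; the paper's instances involve full transit network design under an operator budget:

```python
from itertools import product

# Each passenger group can be routed over one of several options,
# given as (travel_time, fare). Numbers are made up for illustration.
options = [
    [(30, 4), (45, 2)],           # group 1: fast-but-pricey vs slow-but-cheap
    [(20, 5), (25, 3), (40, 1)],  # group 2
]

def outcomes(options):
    """All achievable (total_time, total_fare) pairs."""
    return {(sum(o[0] for o in c), sum(o[1] for o in c))
            for c in product(*options)}

def epsilon_constraint(options):
    """Minimize time subject to fare <= eps, sweeping eps over fare levels."""
    pts = outcomes(options)
    front = set()
    for eps in sorted({fare for _, fare in pts}):
        feasible = [p for p in pts if p[1] <= eps]
        front.add(min(feasible))  # lexicographic: min time, then min fare
    # Drop any dominated points collected along the sweep.
    return sorted(p for p in front
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in front))

print(epsilon_constraint(options))
```

The sweep recovers the full Pareto front of the toy problem, making the time-versus-fare conflict explicit.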
Resumen:
Este artículo sostiene que la llegada del Antropoceno requiere un cambio en el significado y alcance de la responsabilidad. Con base en Hans Jonas y Bruno Latour, sostengo que la responsabilidad es una característica definitoria de la humanidad que, no obstante, está acechada por su opuesto. Si ser responsable es primariamente ser receptivo a lo Otro, entonces la cultura de 'responsabilidad personal' que prevalece hoy en día es una traición tanto a la humanidad como a la Tierra. Cuando Jonas formuló tales ideas en 1979, el 'sistema tierra' no era ni un campo de estudio científico ni una cuestión de preocupación existencial. Pocos académicos lo tomaron en serio. Sin embargo, desarrollos recientes en el pensamiento científico, legal y ambiental han validado su visión. Para probar esta hipótesis, retomo a Latour, quien fue un cuidadoso lector-y crítico-de Jonas. Ambos pensadores consideraron que la creencia modernista de que solo los humanos son fuentes de reclamos morales válidos es un error que debe ser corregido. A medida que la Tierra hoy 'reacciona' a nuestras intervenciones con fenómenos climáticos extremos y enfermedades zoonóticas, su mensaje resuena en círculos cada vez mayores. El Antropoceno trastoca una era en la que solo algunos humanos tenían permitido hablar. Ahora debemos enseñarnos a escuchar y responder a otros seres vivos y a generaciones futuras. Sostengo que esta capacidad es el corazón de regímenes emergentes de responsabilidad planetaria
Abstract:
This paper argues that the coming of the Anthropocene requires a shift in the meaning and scope of responsibility. Drawing on Hans Jonas and Bruno Latour, I argue that responsibility is a defining feature of humanity which is nevertheless haunted by its opposite. Indeed, if to be responsible is primarily to be responsive to the claim of the Other, then the culture of 'personal responsibility' that prevails today is a betrayal of both humanity and the Earth. When Jonas formulated such thoughts in 1979 the 'Earth system' was neither a field of scientific study, nor a matter of existential concern. Few scholars took him seriously. However, recent developments in scientific, legal, and environmental thought have vindicated his vision. To test this hypothesis I turn to Latour, who was a careful reader-and critic-of Jonas. Both thinkers regarded the modernist belief that only humans are sources of valid moral claims as an error that ought to be corrected. As the Earth today 'reacts' to our interventions with extreme weather and zoonotic diseases, their message is resounding in growing circles. The Anthropocene upends an era in which only (some) humans were allowed to speak. Now we must teach ourselves how to listen and respond to other living beings and future generations. This responsiveness, I argue, will form the core of emerging regimes of planetary responsibility
Resumen:
El artículo ofrece una lectura de La Condición Humana (1958) a la luz de Vita Activa (1961). Tras hacer un recorrido por la recepción de La Condición Humana desde su publicación en 1958, argumento que se ha marginado a la obra como un todo para extraer de ella fragmentos y supuestos ‘modelos’ de la política. Considerada como un todo, la obra de Arendt es una reflexión acerca de las condiciones de posibilidad de nuestras experiencias de sentido. Como es evidente sobre todo en la versión alemana, se trata, así, de una investigación fenomenológica. Argumento que tanto la fuerza crítica como la aparente ceguera que encontramos en Arendt se deben a las premisas fenomenológicas de su pensamiento
Abstract:
The article offers a reading of The Human Condition (1958) in light of Vita Activa (1961). After tracing the reception of The Human Condition since its publication in 1958, I argue that scholarship on Arendt has marginalized the work as a whole to extract fragments for supposed ‘models’ of politics. Considered as a whole, Arendt’s work is a reflection on the conditions of possibility of our experiences of meaning. As is evident above all in the German version, it is thus a phenomenological investigation. I argue that both the critical force and the apparent blindness that we find in Arendt are due to the phenomenological premises of her thought
Resumo:
O artigo oferece uma leitura da Condição Humana (1958) à luz de Vita Activa (1961). Depois de fazer um percurso pela recepção da Condição Humana desde sua publicação em 1958, o artigo argumenta que essa recepção marginalizou o trabalho como um todo para extrair fragmentos e supostos "modelos" da política. Considerado como um todo, o trabalho de Arendt é uma reflexão sobre as condições de possibilidade de nossas experiências de significado. Como é evidente acima de tudo na versão alemã, é, assim, uma investigação fenomenológica. Argumento que tanto a força crítica quanto a aparente cegueira que encontramos em Arendt se devem às premissas fenomenológicas de seu pensamento
Abstract:
Leo Strauss has been read as the author of a paradoxically nonpolitical political philosophy. This reading finds extensive support in Strauss’s work, notably in the claim that political life leads beyond itself to contemplation and in the limits this imposes on politics. Yet the space of the nonpolitical in Strauss remains elusive. The “nonpolitical” understood as the natural, Strauss suggests, is the “foundation of the political”. But the meaning of “nature” in Strauss is an enigma: it may refer either to the “natural understanding” of commonsense, or to nature “as intended by natural science,” or to “unchangeable and knowable necessity.” As a student of Husserl, Strauss sought both to retrieve and radically critique both the “natural understanding” and the “naturalistic” worldview of natural science. He also cast doubt on the very existence of an unchangeable nature. The true sense of the nonpolitical in Strauss, I shall argue, must rather be sought in his embrace of the trans-finite goals of philosophy understood as rigorous science. Nature may be the nonpolitical foundation of the political, but we can only ever approximate nature asymptotically. The nonpolitical remains as elusive in Strauss as the ordinary. To approximate both we need to delve deeper into his understanding of Husserl
Abstract:
Scholars of International Relations (IR) confront the unenviable task of conceiving and representing the world as a whole. Philosophy has deemed this impossible since the time of Kant. Today's populist reaction against "globalism" suggests that it is imprudent. Yet IR must persevere in its quest to diagnose emerging global realities and fault lines. To do so without stoking populist fears and mythologies, I argue, IR must enter into dialogue with the new realism in philosophy, and in particular with its ontological pluralism. The truth of what unites and divides us today is not one-dimensional, as the image of a networked world of "open" or "closed" societies suggests. Beyond anonymous networks, there are principles such as sovereignty; there are systemic dynamics of inclusion/exclusion, and there is the power of justifications
Abstract:
Leo Strauss has been understood as one of the foremost critics of Heidegger, and as having provided an alternative to his thought: against Heidegger’s Destruktion of Plato and Aristotle, Strauss enacted a recovery; against Heidegger’s “historicist turn,” Strauss rediscovered a superior alternative in the “Socratic turn.” This paper argues that, rather than opposing or superseding Heidegger, Strauss engaged Heidegger dialectically. On fundamental philosophical problems, Strauss both critiqued Heidegger and retrieved the kernel of truth contained in Heidegger’s position. This method is based on Strauss’s zetetic conception of philosophy, which has deep roots in Heidegger’s 1922 reading of Aristotle’s Metaphysics
Resumen:
Desde la década de 1990, el mundo ha visto una proliferación de estados de excepción. La detención indefinida de "combatientes enemigos", la acelerada construcción de barreras fortificadas entre países y el aumento de la población mundial considerada "ilegal" ejemplifican la tendencia mundial a normalizar las excepciones. Donald Trump es el caso más burdo -no por ello menos alarmante- de esta tendencia
Abstract:
Since the 1990s, the world has seen a proliferation of states of exception. The indefinite detention of "enemy combatants", the accelerated construction of fortified barriers between countries, and the growth of the world population deemed "illegal" exemplify this global tendency to normalize exceptions. Donald Trump is the crudest case of this trend, though no less alarming for it
Abstract:
This work analyzes the effect of wall geometry when a reaction-diffusion system is confined to a narrow channel. In particular, we study the entropy production density in the reversible Gray-Scott system. Using an effective diffusion equation that considers modifications by the channel characteristics, we find that the entropy density changes its value but not its qualitative behavior, which helps explore the structure-formation space
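The kind of channel-modified effective diffusion the abstract refers to can be illustrated with Zwanzig's classical correction for a symmetric 2D channel of half-width w(x), namely D(x) = D0 / (1 + w'(x)^2 / 3). The sinusoidal channel profile below is an illustrative choice, not one from the paper:

```python
import numpy as np

# Zwanzig's position-dependent effective diffusion coefficient for a
# symmetric 2D channel of half-width w(x). Profile and parameters are
# hypothetical, chosen only to show the geometric correction.
D0 = 1.0
x = np.linspace(0.0, 2 * np.pi, 1001)
w = 1.0 + 0.5 * np.sin(x)   # half-width of a sinusoidally corrugated channel
wp = np.gradient(w, x)      # numerical derivative w'(x)
D_eff = D0 / (1.0 + wp**2 / 3.0)

# Confinement can only slow the projected longitudinal diffusion.
print(round(float(D_eff.min()), 4), bool(D_eff.max() <= D0))
```

Feeding such a D(x) into the projected one-dimensional equation is what lets the channel geometry alter the entropy production density without changing its qualitative behavior.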
Abstract:
We study a reaction-diffusion system within a long channel in the regime in which the projected Fick-Jacobs-Zwanzig operator for confined diffusion can be used. We find that under this approximation, Turing instability conditions can be modified by the channel geometry. The dispersion relation, the range of unstable modes where pattern formation occurs, and the spatial structure of the patterns themselves change as functions of the geometric parameters of the channel. This occurs for all three channels analyzed, for which the values of the projected operators can be found analytically. For the reaction term, we use the well-known Schnakenberg kinetics
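A linear-stability sketch shows how the Turing-unstable band of wavenumbers depends on the effective diffusivities for Schnakenberg kinetics, u_t = a - u + u²v + Du u_xx, v_t = b - u²v + Dv v_xx. The parameter values below are illustrative; in the paper the diffusivities are set by the projected Fick-Jacobs-Zwanzig operator for each channel geometry:

```python
import numpy as np

# Dispersion relation for Schnakenberg kinetics with constant effective
# diffusivities. Parameters a, b, Du, Dv are hypothetical.
a, b = 0.1, 0.9
Du, Dv = 1.0, 10.0

u0 = a + b          # homogeneous steady state
v0 = b / u0**2
J = np.array([[-1 + 2 * u0 * v0, u0**2],
              [-2 * u0 * v0,    -u0**2]])  # kinetics Jacobian at (u0, v0)

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 diag(Du, Dv)."""
    M = J - k**2 * np.diag([Du, Dv])
    return np.linalg.eigvals(M).real.max()

k = np.linspace(0.0, 2.0, 400)
sigma = np.array([growth_rate(ki) for ki in k])
unstable = k[sigma > 0]
print(round(float(unstable.min()), 3), round(float(unstable.max()), 3))
```

Changing Du and Dv (as the channel geometry does through the projection) shifts and resizes this band, which is exactly how confinement alters the patterns.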
Abstract:
Auctioneers often face the decision of whether to bundle two or more different objects before selling them. Under a Vickrey auction (or any other revenue-equivalent auction form), there is a unique critical number for each pair of objects such that when the number of bidders is below that critical number the seller strictly prefers a bundled sale, and when there are more bidders the seller prefers unbundled sales. This property holds even when a given bidder's valuations for the objects are correlated. The results are proved using a quantile-based mathematical technique that can be extremely useful for similar analyses
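The critical-number property can be checked numerically in the simplest special case of independent U(0,1) valuations (an illustrative assumption; the paper's result also covers correlated valuations). Second-price revenue is the second-highest bid, for each object separately or for the bundle:

```python
import numpy as np

# Monte Carlo sketch of the bundling trade-off under a Vickrey auction:
# with few bidders the bundle raises more revenue, with many bidders
# separate sales win. Uniform valuations are an illustrative choice.
rng = np.random.default_rng(0)

def revenues(n_bidders, n_sims=200_000):
    v = rng.random((n_sims, n_bidders, 2))           # valuations for objects A, B
    sep = np.sort(v, axis=1)[:, -2, :].sum(axis=1)   # 2nd-highest bid per object
    bun = np.sort(v.sum(axis=2), axis=1)[:, -2]      # 2nd-highest bundle value
    return sep.mean(), bun.mean()

sep2, bun2 = revenues(2)
sep10, bun10 = revenues(10)
print(bun2 > sep2, sep10 > bun10)  # bundling helps only when bidders are scarce
```

For two bidders, expected separate revenue is 2(n-1)/(n+1) = 2/3, while the bundle fetches roughly 0.77; by ten bidders the ordering has reversed, in line with the critical-number result.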
Abstract:
We examine the evolution of adult female heights in twelve Latin American countries during the second half of the twentieth century based on demographic health surveys and related surveys compiled from national and international organizations. Only countries with more than one survey were included, allowing us to cross-examine surveys and correct for biases. We first show that average height varies significantly across countries, from 148.3 cm in Guatemala to 158.8 cm in Haiti. The evolution of heights over these decades mirrors indicators of human development, showing a steady increase of 2.6 cm from the 1950s to the 1990s. Such gains compare favorably with other developing regions of the world, but less so with recently developed countries. Height gains were not evenly distributed in the region, however. Countries that achieved higher levels of income, such as Brazil, Chile, Colombia and Mexico, gained on average 0.9 cm per decade, while countries with shrinking economies, such as Haiti and Guatemala, gained only 0.25 cm per decade
Abstract:
The credibility of election outcomes hinges on the accuracy of vote tallies. We provide causal evidence on the drivers and the downstream consequences of variation in the quality of vote tallies. Using data for the universe of polling stations in Mexico in five national elections, we document that over 40% of polling-station-level tallies display inconsistencies. Our evidence strongly suggests these inconsistencies are nonpartisan. Using data for more than 1.5 million poll workers, we show that lower educational attainment, higher workload, and higher complexity of the tally cause more inconsistencies. Finally, using an original survey of close to 80,000 poll workers together with detailed administrative data, we find that inconsistencies cause recounts and recounts lead to lower trust in electoral institutions. We discuss policy implications
Abstract:
A synthesis is presented of recent work by the authors and others on the formation of localized patterns, isolated spots, or sharp fronts in models of natural processes governed by reaction-diffusion equations. Contrasting with the well-known Turing mechanism of periodic pattern formation, a general picture is presented in one spatial dimension for models on long domains that exhibit subcritical Turing instabilities. Localized patterns naturally emerge in generalized Schnakenberg models within the pinning region formed by bistability between the patterned state and the background. A further long-wavelength transition creates parameter regimes of isolated spots which can be described by semi-strong asymptotic analysis. In the species-conservation limit, another form of wave pinning leads to sharp fronts. Such fronts can also arise given only one active species and a weak spatial parameter gradient. Several important applications of this theory within natural systems are presented, at different length scales: cellular polarity formation in developmental biology, including root-hair formation, leaf pavement cells, and keratocyte locomotion; and the transitions between vegetation states on continental scales. Philosophical remarks are offered on the connections between different pattern formation mechanisms and on the benefit of subcritical instabilities in the natural world
Abstract:
This work presents a comparison of different deep learning models for the reconstruction of artistic images from compact representations generated using Principal Component Analysis. The reconstruction models correspond to different types of Convolutional Neural Networks. Our results show that the statistics captured by the principal components transformation are enough to obtain good approximations in the reconstruction process, especially in terms of color and object visual features, even when using compact representations whose length is only about 1% of the original image space's total number of features
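The first stage of this pipeline, compression to and reconstruction from a compact PCA code of about 1% of the original dimensionality, can be sketched on synthetic data. Plain inverse PCA stands in here for the paper's CNN reconstruction models, and all dimensions are illustrative:

```python
import numpy as np

# PCA compress-and-reconstruct sketch on synthetic "images": keep only k
# principal components and measure the relative reconstruction error.
rng = np.random.default_rng(1)
n, d, k = 500, 1024, 10             # 500 samples, 1024 "pixels", 10 components (~1%)
latent = rng.normal(size=(n, 8))    # low-rank structure plus pixel noise
X = latent @ rng.normal(size=(8, d)) + 0.1 * rng.normal(size=(n, d))

mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
codes = (X - mu) @ Vt[:k].T         # compact representation, shape (n, k)
X_hat = codes @ Vt[:k] + mu         # reconstruction from the compact codes

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(round(float(rel_err), 3))
```

When the data carry low-dimensional structure, as natural and artistic images do, even this tiny code retains most of the signal, which is the statistical fact the CNN decoders exploit.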
Abstract:
Huntington's disease (HD) is a neurological disorder characterized by a reduction in medium spiny neurons in the brain. Currently, there is no cure for HD, and treatment relies on symptomatic therapy. The generation of stem cells, including neural, mesenchymal, and pluripotent, through conventional strategies or direct cell reprogramming, has revolutionized HD therapy research. Due to their unique ability to differentiate into a variety of cells, self-renew, and grow, stem cells have become an area of interest for treating various complex and unresolved neurodegenerative disorders. Nanotechnology has emerged as a novel approach with great potential for treating HD with reduced side effects. Nanoparticles (NPs) can act as nanovehicles for delivering therapeutic agents, including siRNAs, stem cells, neurotrophic factors, and different drugs. Additionally, NPs can be used as an alternative treatment based on their antioxidant and reactive oxygen species-scavenging properties, which protect neuronal cells. Some NPs even interfere with the aggregation of mutant Huntingtin protein during neurodegenerative processes. This review focuses on the most studied NPs for treating HD, including polymeric NPs, lipid-based NPs such as liposomes and solid lipid NPs, and metal/metal oxide NPs. The combination of NPs with stem cell therapy has great potential for neurodegenerative disease diagnosis and treatment. NPs have been used to manage the cellular microenvironment, improve the efficiency of cell and drug delivery to the brain, and enhance stem cell transplant survival. Understanding the characteristics of the different NPs is essential for applying them for therapeutic purposes. In this study, the biology of HD as well as the benefits and drawbacks of using NPs and stem cell therapy for its treatment are discussed
Abstract:
Paclitaxel (PTX) is an effective broad-spectrum anticancer agent originally obtained from the bark of the Taxus brevifolia tree. It belongs to the diterpene taxanes, now among the most widely used chemotherapy drugs, with indications that include ovarian, breast, non-small-cell lung, head and neck, and urologic cancers as well as Kaposi's sarcoma. Clinical studies have proven its anticancer effects against ovarian, lung, and breast cancer. The compound is difficult to use because of its limited solubility, its propensity to recrystallize after dilution, and cosolvent-induced toxicity. In such circumstances, nanotechnology and nanoparticles offer several benefits over free pharmaceuticals: an improved drug half-life, decreased toxicity, and targeted, selective drug delivery. Nano-drugs can accumulate in tissues through enhanced permeability and retention, increasing anticancer activity while keeping toxicity in healthy tissues low. This article provides information on paclitaxel's chemical composition, formulations, mode of action, and toxicity, and discusses the value of paclitaxel-loaded nanoparticles, their possibilities, and their potential for the future. To enhance the pharmacodynamic and pharmacokinetic characteristics of paclitaxel as a chemotherapeutic, this review summarizes the current status of ongoing therapeutic developments in nanotechnology. More importantly, a thorough review of the potential of various nanocarriers, including polymeric, lipid-based, inorganic, and carbon-based nanostructures, is provided to comparatively assess their applicability to PTX delivery. Our goal is to demonstrate how these different types of nanocarriers can contribute to improving the therapeutic efficiency of PTX as a chemotherapeutic agent
Resumen:
Esta nota de investigación presenta el Estudio Panel México 2006, un proyecto que permite estudiar la conducta electoral de los mexicanos. El estudio consiste en una encuesta tipo panel de tres rondas de entrevistas a 2,400 mexicanos adultos realizadas entre octubre de 2005 y julio de 2006, así como un extenso análisis de contenido de información noticiosa y anuncios políticos en televisión. En éste se registran cambios en las opiniones políticas de los mexicanos, los cuales pueden ser analizados a la luz de los eventos e información de las campañas presidenciales, así como del contexto político posterior a la elección. El estudio panel documenta qué tipo de votantes cambiaron su preferencia electoral y ofrece elementos para determinar por qué ocurrieron dichos cambios
Abstract:
This research note discusses the methods and key findings of the Mexico 2006 Panel Study, a multi-investigator project on Mexican voting behavior. The panel study consists of a three-wave panel survey of 2,400 Mexican adults conducted between October 2005 and July 2006. It also includes an extensive content analysis of television news and advertisements. The Mexico 2006 Panel Study permits assessing the impact of events and information flows, along with the post-electoral context, on the political attitudes of Mexicans. These surveys document which types of voters changed their electoral preferences during the campaign and offer a host of variables to explain why these changes occurred
Abstract:
Long time existence and uniqueness of solutions to the Yang-Mills heat equation is proven over a compact 3-manifold with smooth boundary. The initial data is taken to be a Lie algebra valued connection form in the Sobolev space H1. Three kinds of boundary conditions are explored: Dirichlet type, Neumann type, and Marini boundary conditions. The last is a nonlinear boundary condition, specified by setting the normal component of the curvature to zero on the boundary. The Yang-Mills heat equation is a weakly parabolic nonlinear equation. We use gauge symmetry breaking to convert it to a parabolic equation and then gauge transform the solution of the parabolic equation back to a solution of the original equation. A priori estimates are developed by first establishing a gauge invariant version of the Gaffney-Friedrichs inequality. A gauge invariant regularization procedure for solutions is also established. Because of the weak parabolicity, uniqueness holds upon imposition of boundary conditions on only two of the three components of the connection form. This work is motivated by possible applications to quantum field theory
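In standard notation (a sketch of the commonly used form, not the paper's own display), the flow and the Marini condition for a connection form A with curvature F(A) read:

```latex
% Yang-Mills heat flow for a connection form A(t) with curvature F(A):
\frac{\partial A}{\partial t} \;=\; -\,d_A^{*}\,F(A), \qquad A(0)=A_0\in H^{1},
% Marini boundary condition: the normal component of the curvature
% vanishes on the boundary (\nu denotes the outward unit normal):
\left. i_{\nu}\,F(A)\right|_{\partial M} \;=\; 0.
```

The weak parabolicity mentioned above reflects the gauge invariance of this flow, which is what the gauge symmetry breaking step removes.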
Abstract:
We study the set of eigenvalues of the Bochner Laplacian on a geodesic ball of an open manifold M, and find lower estimates for these eigenvalues when M satisfies a Sobolev inequality. We show that we can use these estimates to demonstrate that the set of harmonic forms of polynomial growth over M is finite dimensional, under sufficient curvature conditions. We also study in greater detail the dimension of the space of bounded harmonic forms on coverings of compact manifolds
Abstract:
In this paper we consider the Hodge Laplacian on differential k-forms over smooth open manifolds M^N, not necessarily compact. We find sufficient conditions under which the existence of a family of logarithmic Sobolev inequalities for the Hodge Laplacian is equivalent to the ultracontractivity of its heat operator. We will also show how to obtain a logarithmic Sobolev inequality for the Hodge Laplacian when there exists one for the Laplacian on functions. In the particular case of Ricci curvature bounded below, we use the Gaussian type bound for the heat kernel of the Laplacian on functions in order to obtain a similar Gaussian type bound for the heat kernel of the Hodge Laplacian. This is done via logarithmic Sobolev inequalities and under the additional assumption that the volume of balls of radius one is uniformly bounded below
Resumen:
En el presente artículo se examina la relación de la Filosofía del derecho de Hegel con la filosofía práctica aristotélica. Con ello se pretende mostrar, por una parte, que algunas de las tesis y motivos centrales de la filosofía del derecho hegeliana se entienden de mejor forma trayendo a primer plano ciertos planteamientos aristotélicos y, por otro lado, que dichos planteamientos son objeto de una reinterpretación y reelaboración por parte de Hegel ante ciertas exigencias históricas y filosóficas del contexto moderno. En particular, se examina la relación entre estos dos autores atendiendo a dos puntos concretos: la metodología holística de una filosofía práctica -entendida de modo amplio- y la teoría de la motivación ética
Abstract:
In this article I examine the relation of Hegel's Philosophy of Right with the Aristotelean practical philosophy. My aim with this is to show, on the one hand, that some of the theses and central motifs of the Hegelian philosophy of right are better understood if one brings certain Aristotelean ideas to the forefront of the discussion and, on the other hand, that these ideas are subjected to a reinterpretation by Hegel due to the historical and philosophical demands of the Modern Age. In particular, the relationship between these two authors is analyzed by examining two specific subjects: the holistic methodology of a practical philosophy -broadly construed- and the theory of ethical motivation
Resumen:
Mi objetivo en este artículo es estudiar el papel de la belleza como símbolo de la moralidad en la Kritik der Urteilskraft. En primer lugar, realizo una comparación entre el esquematismo de los conceptos y el proceso del razonamiento analógico. Lo que sostengo es que el razonamiento analógico y simbólico conduce en Kant a una consideración sobre los objetos bajo las condiciones establecidas por el agente reflexivo mismo: la tesis que mostraré con ello es que los objetos simbólicos contribuyen a los propios propósitos de reflexión de uno sobre una temática particular. Posteriormente, explico por qué Kant considera que la belleza es un símbolo adecuado de la moralidad en vistas de la existencia de un interesante número de similitudes entre el ámbito de lo estético y de lo moral. Finalmente, discuto por qué únicamente la libertad puede exhibirse por medio de un símbolo según la estética kantiana, y por qué las otras ideas de la razón --Dios y el alma-- están mucho más asociadas con la experiencia de lo sublime
Abstract:
My aim in this article is to study the role of beauty as a symbol of morality within Kant's Kritik der Urteilskraft. First, I draw a comparison between the schematism of concepts and the process of analogical reasoning. I maintain that analogical and symbolic reasoning leads in Kant to a consideration of objects under the conditions set by the reflective agent himself: the thesis I thereby prove is that symbolic objects serve one's own purposes of reflection on a given topic. I then explain why Kant considers beauty an adequate symbol of morality, in view of the considerable number of similarities between the aesthetic and the moral realms. Lastly, I clarify why only freedom can be exhibited by means of a symbol in Kantian aesthetics, and why the other two ideas of reason --namely God and the soul-- are much more associated with the experience of the sublime
Resumen:
Este artículo analiza la interpretación de Ricoeur sobre el concepto de Anerkennung en la filosofía temprana de Hegel. Se discute la importancia primordial de este concepto y se evalúa el significado del mismo dentro de un orden ético y político
Abstract:
This article analyzes Ricoeur's interpretation of Hegel's Anerkennung (Recognition) in his early philosophical thought. The paramount importance of this notion is discussed and its meaning is analyzed in an ethical and political order
Abstract:
Despite the presidential promise to create a universal health system, the lack of objectives and strategies has caused serious problems. Without coordination among institutions, no real progress can be made in this area
Abstract:
In recent years, the Mexican state has broken its promise to protect and guarantee the fundamental right to health. Its omissions must not be ignored
Resumen:
El presente artículo tiene como objetivo demostrar que el sistema público de salud en México atraviesa su más severa crisis de las últimas décadas en el momento que debe dar respuesta a la pandemia de Sars-Cov-2. Para poder comprender el deterioro del sistema público de salud se describen las causas principales que han derivado en su debilitamiento, donde destacan la corrupción de la pasada administración y la reforma inconclusa del sistema de salud de la actual administración. Posteriormente, se explica la importancia de los órganos encargados de dar respuesta a la emergencia sanitaria, los Acuerdos de mayor impacto emitidos al comienzo de la pandemia y sus desaciertos, para demostrar cómo el conjunto de factores descritos y explicados afectan los derechos humanos, principalmente el goce efectivo del derecho a la protección de la salud que se vulnera aún más frente a la pandemia. El propósito último del estudio es demostrar la debilidad del sistema público de salud para responder a la emergencia sanitaria y la urgencia que existe de fortalecer el sistema público de salud para cumplir con el derecho a la protección de la salud de acuerdo con los principios establecidos en la Constitución
Abstract:
The objective of this article is to demonstrate that the public health system in Mexico is going through its most severe crisis in recent decades at the very time it must respond to the SARS-CoV-2 pandemic. In order to understand the deterioration of the public health system, the main causes that have resulted in its weakening are described, underlining the corruption that took place during the previous administration and the unfinished health system reform of the current administration. Subsequently, the importance of the institutions in charge of responding to the health emergency, the Agreements with the greatest impact issued at the beginning of the pandemic and their limitations are explained, to demonstrate how these affected human rights, mainly the effective enjoyment of the right to health protection. The ultimate purpose of the study is to demonstrate the weakness of the public health system in addressing the health emergency and the urgency of strengthening it in order to comply with the right to health protection in accordance with the principles established in the Constitution
Resumen:
Objetivo. Analizar la cobertura en salud de cáncer pulmonar en México y ofrecer recomendaciones al respecto. Material y métodos. Mediante la conformación de un grupo multidisciplinario se analizó la carga de la enfermedad relativa al cáncer de pulmón y el acceso al tratamiento médico que ofrecen los diferentes subsistemas de salud en México. Resultados. Se documentan desigualdades importantes en la atención del cáncer de pulmón entre los distintos subsistemas de salud que sugieren acceso y cobertura en salud variable, tanto a los tratamientos tradicionales como a las innovaciones terapéuticas existentes, y diferencias en la capacidad de los prestadores de servicios de salud para garantizar el derecho a la protección de la salud sin distinciones. Conclusión. Se hacen recomendaciones sobre la necesidad de mejorar las acciones para el control del tabaco, el diagnóstico temprano y la inclusión de terapias innovadoras y la homologación entre los diferentes prestadores públicos de servicios de salud a través del financiamiento con la recaudación de impuestos al tabaco
Abstract:
Objective. To analyze health coverage of lung cancer in Mexico and offer recommendations in this regard. Materials and methods. Through the formation of a multidisciplinary group, we analyzed the burden of disease attributable to lung cancer and the access to medical treatment offered by the different public health subsystems in Mexico. Results. Important inequalities in lung cancer care are documented among the different public health subsystems. Our data suggest differential access and coverage, both to traditional treatments and to existing therapeutic innovations, as well as differences in the capacity of health service providers to guarantee the right to health protection without distinction. Conclusions. Recommendations are made on the need to improve actions for tobacco control, early diagnosis of lung cancer, the inclusion of innovative therapies, and homologation among the different public health service providers through financing via tobacco taxes
Abstract:
Priority setting is the process through which a country's health system establishes the drugs, interventions, and treatments it will provide to its population. Our study evaluated the priority-setting legal instruments of Brazil, Costa Rica, Chile, and Mexico to determine the extent to which each reflected the following elements: transparency, relevance, review and revision, and oversight and supervision, according to Norman Daniels's accountability for reasonableness framework and Sarah Clark and Albert Weale's social values framework. The elements were analyzed to determine whether priority setting, as established in each country's legal instruments, is fair and justifiable. While all four countries fulfilled these elements to some degree, there was important variability in how they did so. This paper aims to help these countries analyze their priority-setting legal frameworks to determine which elements need to be improved to make priority setting fair and justifiable
Abstract:
In 2010, the Mexican government implemented a multi-sector agreement to prevent obesity. In response, the Ministries of Health and Education launched a national school-based policy to increase physical activity, improve nutrition literacy, and regulate school food offerings through nutritional guidelines. We studied the Guidelines’ negotiation and regulatory review process, including government collaboration and industry response. Within the government, conflicting positions were evident: the Ministries of Health and Education supported the Guidelines as an effective obesity-prevention strategy, while the Ministries of Economics and Agriculture viewed them as potentially damaging to the economy and job generation. The food and beverage industries opposed and delayed the process, arguing that regulation was costly, with negative impacts on jobs and revenues. The proposed Guidelines suffered revisions that lowered standards initially put forward. We documented the need to improve cross-agency cooperation to achieve effective policymaking. The ‘siloed’ government working style presented a barrier to efforts to resist industry's influence and strong lobbying. Our results are relevant to public health policymakers working in childhood obesity prevention
Resumen:
Este artículo tiene tres objetivos: presentar las dificultades que plantea la exigibilidad del derecho a la salud en México; dar cuenta de los principales problemas que se están presentando en lo que, de modo general, podemos llamar “relaciones entre derecho y salud” y, por último, ofrecer algunas propuestas para alcanzar una relación eficaz entre ambas materias
Abstract:
This article pursues three goals: to describe the obstacles that the full enforceability of the right to health faces in Mexico; to present the main problems arising in what, generally speaking, could be referred to as “relations between law and health”; and finally to offer some proposals in order to achieve an effective relation between the two fields
Abstract:
How do legislators develop reputations to further their individual goals in environments with limited space for personalization? In this article, we evaluate congressional behavior by legislators with gubernatorial expectations in a unitary environment where parties control political activities and institutions hinder individualization. By analyzing the process of drafting bills in Uruguay, we demonstrate that deputies with subnational executive ambition tend to bias legislation towards their districts, especially those from small and peripheral units. Findings reinforce the importance of incorporating ambition to legislative studies and open a new direction towards the analysis of multiple career patterns within a specific case
Abstract:
This article addresses the stability properties of a simple economy (characterized by a one-dimensional state variable) when the representative agent, confronted by trajectories that are divergent from the steady state, performs transformations in that variable in order to improve forecasts. We find that instability continues to be a robust outcome for transformations such as differencing and detrending the data, the two most typical approaches in econometrics to handle nonstationary time series data. We also find that inverting the data, a transformation that can be motivated by the agent reversing the time direction in an attempt to improve her forecasts, may lead the dynamics to a perfect-foresight path
Abstract:
This article examines dynamics in a model where agents forecast a one dimensional state variable through ordinary least squares regressions on the lagged values of the state variable. We study the stability properties of alternative transformations of the state variable, such as taking logarithms, which the agent can endogenously set forth. Surprisingly, for the considered class of economies, we found that the transformations that an econometrician would attempt are destabilizing, whereas alternative transformations, which an econometrician would never consider, such as convex transformations, are stabilizing. Therefore, we ironically find that in our set-up, an active agent who is concerned about learning the economy's dynamics and who in an attempt to improve forecasting transforms the state variable using standard transformations, is more likely to deviate from the steady state than a passive agent
Abstract:
Consider a one step forward looking model where agents believe that the equilibrium values of the state variable are determined by a function whose domain is the current value of the state variable and whose range is the value for the subsequent period. An agent’s forecast for the subsequent period uses the belief, where the function that is chosen is allowed to depend on the current realization of an extrinsic random process, and is made with knowledge of the past values of the state variable but not the current value. The paper provides (and characterizes) the conditions for the existence of sunspot equilibria for the model described
Abstract:
We show that a perfect correlated equilibrium distribution of an N-person game, as defined by Dhillon and Mertens (1996) can be achieved using a finite number of copies of the strategy space as the message space
Abstract:
We reformulate the local stability analysis of market equilibria in a competitive market as a local coordination problem in a market game, where the map associating market prices to best-responses of all traders is common knowledge and well-defined both in and out of equilibrium. Initial expectations over market variables differ from their equilibrium values and are not common knowledge. This results in a coordination problem as traders use the structure of the market game to converge back to equilibrium. We analyse a simultaneous move and a sequential move version of the market game and explore the link with local rationalizability
Abstract:
This paper introduces, within the framework of a simple example, the notion of a subjective temporary equilibrium. The underlying relation linking forecasts to equilibrium values of the state variable is linear. However, agents perceive a non-linear law that governs the rate of adjustment between successive periods and forecast using linear approximations to the non-linear law of motion. This is shown to generate a non-linear law of motion for the state variable with the feature that the agent's model correctly describes the period-wise evolution of the economy. The resulting non-linear law of motion is referred to as a subjective temporary equilibrium as its specification is determined largely by the subjective beliefs of the agents regarding the dynamics of the system. In a subjective equilibrium, agents' forecasts are generated by taking linear approximations to a correctly specified law of motion and the forecasts may accordingly be interpreted as being boundedly rational in a first-order sense. There exist specifications that admit the possibility of cyclical behaviour
Abstract:
This paper provides conditions for the almost sure convergence of the least squares learning rule in a stochastic temporary equilibrium model, where regressions are performed on the past values of the endogenous state variable. In contrast to earlier studies (Evans and Honkapohja, 1998; Marcet and Sargent, 1989), which were local analyses, the dynamics are studied from a global viewpoint, which allows one to obtain an almost sure convergence result without employing projection facilities
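The flavor of such a least squares learning rule can be sketched in a toy simulation. The model and all parameter values below are hypothetical, chosen only for illustration and not taken from the paper: agents re-estimate the autoregressive coefficient of the state by recursive least squares, and the realized law of motion depends on their current estimate, so the learning dynamics are self-referential.

```python
import random

# Toy least-squares learning sketch (hypothetical parameters): the
# realized law of motion is
#   x_{t+1} = (a + b * beta_t) * x_t + noise,
# where beta_t is the agents' current estimate from regressing the state
# on its own lagged value. The self-referential fixed point solves
# beta* = a + b * beta*, i.e. beta* = a / (1 - b) = 0.4 here.
random.seed(0)

a, b = 0.2, 0.5      # structural parameters
beta, R = 0.0, 1.0   # initial estimate and second-moment estimate
x = 1.0

for t in range(1, 20001):
    x_next = (a + b * beta) * x + random.gauss(0.0, 0.1)
    # Recursive least squares update with decreasing gain 1/t
    R += (x * x - R) / t
    beta += x * (x_next - beta * x) / (t * R)
    x = x_next

print(beta)  # should settle near the fixed point 0.4
```

With these (stable) parameter choices the estimate converges toward the fixed point from any moderate starting value; the paper's contribution is precisely to establish such convergence globally and almost surely, rather than by local analysis.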
Abstract:
Background and objective: This paper presents Alzheed, a mobile application for monitoring patients with Alzheimer's disease at day centers, as well as a set of design recommendations for the development of healthcare mobile applications. The Alzheed project was conducted at the “Dorita de Ojeda” Day Center, which focuses on the care of patients with Alzheimer's disease. Materials and methods: A software design methodology based on participatory design was employed for the design of Alzheed. This methodology is both iterative and incremental and consists of two main iterative stages: evaluation of low-fidelity prototypes and evaluation of high-fidelity prototypes. Low-fidelity prototypes were evaluated by 11 of the day center's healthcare professionals (involved in the design of Alzheed), whereas high-fidelity prototypes were evaluated, using a questionnaire based on the technology acceptance model (TAM), by the same healthcare professionals plus 30 senior psychology undergraduate students uninvolved in the design of Alzheed. Results: Healthcare professional participants perceived Alzheed as extremely likely to be useful and extremely likely to be usable, whereas senior psychology undergraduate students perceived Alzheed as quite likely to be useful and quite likely to be usable. In particular, the median and mode of the TAM questionnaire were 7 (extremely likely) for healthcare professionals and 6 (quite likely) for psychology students (for both constructs: perceived usefulness and perceived ease of use). One-sample Wilcoxon signed-rank tests were performed to confirm the significance of the median for each construct
Abstract:
To update a public transportation origin-destination (OD) matrix, the link choice probabilities by which a user transits along the transit network are usually calculated beforehand. In this work, we reformulate the problem of updating OD matrices and simultaneously update the link proportions as an integer linear programming model based on partial knowledge of the transit segment flow along the network. We propose measuring the difference between the reference and the estimated OD matrices with linear demand deficits and excesses and simultaneously having slight deviations from the link probabilities to adjust to the observed flows in the network. In this manner, our integer linear programming model is more efficient in solving problems and is more accurate than quadratic or bilevel programming models. To validate our approach, we build an instance generator based on graphs that exhibit a property known as a "small-world phenomenon" and mimic real transit networks. We experimentally show the efficiency of our model by comparing it with an Augmented Lagrangian approach solved by a dual ascent and multipliers method. In addition, we compare our methodology with other instances in the literature
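The deficit/excess linearization at the heart of the model can be illustrated on a toy network. Everything below (the two-link network, the link proportions, the observed flows, and the reference demands) is invented for illustration, and the sketch enforces exact flow matching by brute force rather than solving the paper's richer integer linear program, which also lets the link proportions deviate:

```python
from itertools import product

# Toy OD-matrix updating sketch (hypothetical 2-link, 2-OD-pair network).
# Link usage proportions p[link][od]: OD pair 0 uses both links,
# OD pair 1 uses only link 1.
p = [[1, 0],
     [1, 1]]
observed = [7, 12]   # observed transit segment (link) flows
reference = [5, 5]   # reference OD demands

best, best_cost = None, None
for d in product(range(21), repeat=2):  # candidate integer OD demands
    flows = [sum(p[l][j] * d[j] for j in range(2)) for l in range(2)]
    if flows != observed:
        continue  # keep only demands reproducing the observed flows
    # L1 distance to the reference, written as excess + deficit --
    # the split that makes the objective linear in an ILP formulation
    cost = sum(max(d[j] - reference[j], 0) + max(reference[j] - d[j], 0)
               for j in range(2))
    if best_cost is None or cost < best_cost:
        best, best_cost = d, cost

print(best, best_cost)  # the unique feasible demand here is (7, 5), cost 2
```

In an actual solver one would introduce nonnegative excess and deficit variables with demand = reference + excess - deficit and minimize their sum, which is exactly what the brute-force cost above computes.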
Abstract:
This paper presents an optimization approach for the Intermodal Timetable Synchronization Problem that can be used to explore the impact of holding decisions and departure flexibility in an isolated and integrated way for different scenarios. It also introduces the concept of fuzzy synchronization to protect the solutions against variability in travel times caused by external factors, such as congestion. The methodology was applied to the transit network of Metrorrey in Monterrey, Mexico, assuming three planning periods along a weekday. It was found that considering holding decisions leads to significant improvements when generating synchronous timetables at transfer stops. Furthermore, three different types of variability were simulated, and their effects on four different fuzzy synchronization functions were evaluated. Thus, we provide a tool to analyze different scenarios and to support decision-makers in the planning stage of public transport systems. The analysis can be extended considering other types of travel time variability and nonlinear synchronization functions
Abstract:
We show that firms' left-tail risk positively predicts future returns of crash insurance. We proxy crash insurance with bear spreads, an option trading strategy that profits when extreme negative returns occur. Crash insurance for high (low) left-tail risk firms earns positive (negative) returns, suggesting that the downside protection it provides is not adequately priced. Our results are mainly explained by two types of underreaction: volatility underreaction in high left-tail risk portfolios and underreaction to the persistence of left-tail risk. Disagreement partially explains our results but a risk-based approach does not
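As a quick illustration of why a bear spread serves as crash insurance, the payoff below shows that the strategy pays in full only for extreme negative moves (the strikes and terminal prices are hypothetical, and premiums are ignored):

```python
def bear_put_spread_payoff(s_t, k_hi=100.0, k_lo=90.0):
    """Payoff at expiry of a bear put spread: long a put struck at k_hi,
    short a put struck at k_lo < k_hi (premiums ignored here)."""
    return max(k_hi - s_t, 0.0) - max(k_lo - s_t, 0.0)

# Payoff is capped at k_hi - k_lo deep in the left tail,
# and zero when the stock finishes above k_hi.
print(bear_put_spread_payoff(80.0))   # 10.0: full payoff in the left tail
print(bear_put_spread_payoff(95.0))   # 5.0: partial payoff
print(bear_put_spread_payoff(105.0))  # 0.0: no crash, no payoff
```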
Abstract:
We develop a novel method to decompose a straddle into two assets: a volatility risk asset and a jump risk asset. Using the price ratio of the jump risk asset to the straddle, we create a forward-looking measure (S-jump) that captures the stock price jump risk anticipated by the option market. We show that S-jump substantially increases before earnings announcements and strongly predicts the size and the probability of earnings-induced stock price jumps. We also find that S-jump amplifies the earnings response coefficient. Our jump risk asset captures the run-up and run-down return patterns observed for straddles around earnings announcements
Abstract:
We present an analytical framework and evidence that characterize the historical patterns of Mexico's manufacturing exports and its participation in Global Value Chains (GVCs). We use this framework to guide an empirical analysis in which we identify sectors with the highest export potential as a result of nearshoring. We also estimate the orders of magnitude of the potential impacts of this process on Mexico's manufacturing exports and GDP. Finally, we discuss factors that could have an influence on the size of these effects, including an elastic supply of skilled labor, an institutional framework that promotes contract enforcement, cost effective and reliable energy supply, and strong and widespread connectivity through transportation and communication networks
Abstract:
COVID-19 heightened women's exposure to gender-based and intimate partner violence, especially in low-income and middle-income countries. We tested whether edutainment interventions shown to successfully combat gender-based and intimate partner violence when delivered in person can be effectively delivered using social (WhatsApp and Facebook) and traditional (TV) media. To do so, we randomized the mode of implementation of an intervention conducted by an Egyptian women's rights organization seeking to support women amid COVID-19 social distancing. We found WhatsApp to be more effective in delivering the intervention than Facebook but no credible evidence of differences across outcomes between social media and TV dissemination. Our findings show little credible evidence that these campaigns affected women's attitudes towards gender or marital equality or on the justifiability of violence. However, the campaign did increase women's knowledge, hypothetical use and reported use of available resources
Abstract:
Let A be an expansive linear map in Rd. Approximation properties of shift-invariant subspaces of L2(Rd) when they are dilated by integer powers of A are studied. Shift-invariant subspaces providing approximation order α or density order α associated to A are characterized. These characterizations impose certain restrictions on the behavior of the spectral function at the origin, expressed in terms of the concept of point of approximate continuity. The notions of approximation order and density order associated to an isotropic dilation turn out to coincide with the classical ones introduced by de Boor, DeVore and Ron. This is no longer true when A is anisotropic. In this case the A-dilated shift-invariant subspaces approximate the anisotropic Sobolev space associated to A and α. Our main results are also new when the shift-invariant subspace S is generated by translates of a single function. The obtained results are illustrated by some examples
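For orientation, the classical notion due to de Boor, DeVore and Ron can be stated informally as follows (this is a paraphrase from memory of the standard isotropic definition, not a quotation from the paper):

```latex
% Classical (isotropic) approximation order of a shift-invariant space
% S \subset L^2(\mathbb{R}^d): with the dilated spaces
% S^h = \{ f(\cdot / h) : f \in S \}, the space S provides approximation
% order \alpha if for every f in the Sobolev space W_2^{\alpha}(\mathbb{R}^d),
\operatorname{dist}_{L^2}\bigl(f, S^h\bigr)
  \le C\, h^{\alpha}\, \|f\|_{W_2^{\alpha}(\mathbb{R}^d)}
  \qquad \text{for all } h > 0 .
```

The paper's anisotropic version replaces the scalar dilations h by integer powers of the expansive map A, which is why the classical and the A-associated notions coincide exactly when A is isotropic.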
Abstract:
In this article, we consider the problem of voltage regulation of a proton exchange membrane fuel cell connected to an uncertain load through a boost converter. We show that, in spite of the inherent nonlinearities in the current-voltage behavior of the fuel cell, the voltage of a fuel cell/boost converter system can be regulated with a simple proportional-integral (PI) action designed following the passivity-based control approach. The system under consideration consists of a DC-DC converter interfacing a fuel cell with a resistive load. We show that the output voltage of the converter converges to its desired constant value for all of the system's initial conditions, with convergence ensured for all positive values of the PI gains. This latter feature facilitates the usually difficult task of tuning the gains of the PI. An immersion and invariance parameter estimator is afterward proposed which allows the operation of the PI passivity-based control when the load is unknown, maintaining the output voltage at the desired level. The stable operation of the overall system is proved and the approach is validated with extensive numerical simulations considering real-life scenarios, where robust behavior in spite of load variations and the presence of noise is obtained
Abstract:
In this note, we address the problem of parameter identification of nonlinear, input affine dissipative systems. It is assumed that the supply rate, the storage and the internal dissipation functions may be expressed as nonlinearly parameterized regression equations where the mappings (depending on the unknown parameters) satisfy a monotonicity condition; this encompasses a large class of physical systems, including passive systems. We propose to estimate the system parameters using the "power-balance" equation, which is the differential version of the classical dissipation inequality, with a new estimator that ensures global, exponential parameter convergence under the very weak assumption of interval excitation of the power-balance equation regressor. The benefits of the proposed approach are illustrated with an example
Abstract:
In this paper, we propose a new control scheme for a wind energy conversion system connected to a solid-state transformer-enabled distribution microgrid. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load which is connected to the distribution grid dc bus. The scheme combines a classical PI placed, in a nested-loop configuration, with a passivity-based controller. Guaranteeing stability and endowed with disturbance rejection properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power from the wind is extracted - maximising the generator efficiency. The fast response of the closed-loop system makes possible to operate under fast-changing wind speed conditions. To assess and validate the controller performance and robustness under parameter variations, realistic simulations comparing our proposal with a classical PI scheme are included
Abstract:
A controller, based on passivity, for a wind energy conversion system connected to a dc bus is proposed. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load. Guaranteeing stability and endowed with adaptive properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power from the wind is extracted - and maximizes the generator efficiency. The fast response of the closed-loop system makes it possible to operate under fast-changing wind speed conditions. To assess the controller performance, realistic simulation results are included
Abstract:
In this note, an observer-based feedback control for tracking trajectories of the yaw and lateral velocities in a vehicle is proposed. The considered model consists of the vehicle’s longitudinal, lateral and yaw velocity dynamics together with its roll dynamics. First, an observer for the vehicle lateral velocity, roll angle and roll velocity is proposed. Its design is based on the well-known Immersion & Invariance technique and the Super-Twisting Algorithm. Tuning conditions on the observer gains are given such that the observation errors globally asymptotically converge to zero provided that the yaw velocity reference is persistently excited. Next, a feedback control law depending on the observer estimates is designed using the Output Regulation technique. It is shown that the tracking errors converge to zero as the observation errors decay. To assess the performance of the controller, numerical simulations are performed where the stable operation of the closed-loop system is verified
Abstract:
The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load and one constant voltage source, which is used to form the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice as they are easier to tune and simpler than other existing model-based methods. Realistic switching-based simulations have been performed to assess the performance of the proposed controller
Abstract:
This paper deals with the problem of trajectory tracking of a class of bilinear systems with time-varying measurable disturbance, namely, systems of the form ẋ(t) = [A + Σi ui(t)Bi]x(t) + d(t). We identify, via a linear matrix inequality, a set of matrices {A, Bi} for which it is possible to ensure global tracking of (admissible, differentiable) trajectories with a simple linear time-varying PI controller. Instrumental to establish the result is the construction of an output signal with respect to which the incremental model is passive. The result is applied to the boost and the modular multilevel converter for which experimental results are given
Abstract:
This paper deals with the problem of trajectory tracking of a class of bilinear systems with time-varying measurable disturbance, namely, systems of the form ẋ(t) = [A + Σi ui(t)Bi]x(t) + d(t). A set of matrices {A, Bi} has been identified, via a linear matrix inequality, for which it is possible to ensure global tracking of (admissible, differentiable) trajectories with a simple linear time-varying PI controller. Instrumental to establish the result is the construction of an output signal with respect to which the incremental model is passive. The result is applied to the Interleaved Boost and the Modular Multilevel Converter for which experimental results are given
Abstract:
The problem of controlling small-scale wind turbines providing energy to the grid is addressed in this paper. The overall system consists of a wind turbine plus a permanent magnet synchronous generator connected to a single-phase ac grid through a passive rectifier, a boost converter, and an inverter. The control problem is challenging for two reasons. First, the dynamics of the plant are described by a highly coupled set of nonlinear differential equations. Since the range of operating points of the system is very wide, classical linear controllers may yield below par performance. Second, due to the use of a simple generator and power electronic interface, the control authority is quite restricted. In this paper we present a high performance, nonlinear, passivity-based controller that ensures asymptotic convergence to the maximum power extraction point together with regulation of the dc link voltage and grid power factor to their desired values. The performance of the proposed controller is compared via computer simulations against the industry standard partial linearizing and decoupling plus PI controllers
Abstract:
A goal in the study of dynamics on the interval is to understand the transition to positive topological entropy. There is a conjecture from the 1980s that the only route to positive topological entropy is through a cascade of period doubling bifurcations. We prove this conjecture in natural families of smooth interval maps, and use it to study the structure of the boundary of mappings with positive entropy. In particular, we show that in families of mappings with a fixed number of critical points the boundary is locally connected, and for analytic mappings that it is a cellular set
Abstract:
This study adds to creativity research by investigating the connection between employees' ruminations about life-threatening crises and their creative work behavior, with a specific focus on the mediating role of their experiences of personal life-to-work conflict and the moderating role of their resilience in this connection. Cross-sectional survey data (N = 710) gathered from employees who operate in the health-related distribution sector indicate that a key factor that underpins the connection between persistent worries about deadly crises and thwarted creativity at work is that personal life worries spill over into the work domain, but this detrimental effect is less salient if employees can bounce back from difficult situations. For organizations, this investigation reveals a critical conduit, personal life-induced work strain, through which employees' struggles to ban negative thoughts about excruciating crises translate into a lower propensity to develop novel ideas for organizational improvement. It also reveals how organizations can mitigate this risk by making their workforce more resilient
Abstract:
Purpose - The purpose of this study is to draw from conservation of resources theory to examine how employees' experience of resource-draining interpersonal conflict might diminish the likelihood that they engage in championing behaviour. Its specific focus is on the mediating effect of their motivation to leave the organization and the moderating effect of their peer-oriented social interaction in this connection. Design/methodology/approach - The research hypotheses are empirically assessed with quantitative survey data gathered from 632 employees who work in a large Mexican-based pharmacy chain. The statistical analyses involved an application of the Process macro, which enabled concurrent estimations of the direct, mediating and moderating effects predicted by the proposed conceptual framework. Findings - Emotion-based tensions in co-worker relationships decrease employees' propensity to mobilize support for innovative ideas, because employees make plans to abandon their jobs. This mediating role of turnover intentions is mitigated when employees maintain close social relationships with their co-workers. Practical implications - For organizational practitioners, this study identifies a core explanation (i.e. employees want to quit the company) for why frustrations with emotion-based quarrels can lead to a reluctance to promote novel ideas - ideas that otherwise could add to organizational effectiveness. It also highlights how this harmful process can be avoided if employees maintain good, informal relationships with their colleagues. Originality/value - For organizational scholars, this study explicates why and when employees' experience of interpersonal conflict translates into complacent work behaviours, in the form of tarnished idea championing. It also identifies informal peer relationships as critical contingency factors that disrupt this negative dynamic
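The core indirect effect described in this abstract (interpersonal conflict raises turnover intentions, which in turn reduce idea championing) can be illustrated with a minimal product-of-coefficients sketch on simulated data. The coefficients, the data-generating process, and the plain bivariate OLS estimator below are illustrative assumptions, not the study's actual Process-macro output:

```python
import random

random.seed(42)
n = 5000

def slope(x, y):
    # OLS slope for a single predictor: cov(x, y) / var(x).
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical data-generating process: interpersonal conflict raises
# turnover intentions (path a > 0), which lower championing (path b < 0).
conflict = [random.gauss(0, 1) for _ in range(n)]
turnover = [0.5 * c + random.gauss(0, 1) for c in conflict]
championing = [-0.4 * t + random.gauss(0, 1) for t in turnover]

a = slope(conflict, turnover)      # path a: conflict -> turnover intentions
b = slope(turnover, championing)   # path b: turnover intentions -> championing
indirect = a * b                   # indirect (mediated) effect of conflict
```

With this hypothetical process, the estimate of a lands near 0.5, b near -0.4, and the indirect effect a*b near -0.2, mirroring the sign pattern the abstract reports: a positive path into turnover intentions and a negative path into championing.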
Abstract:
This research investigates how employees' perceptions of role ambiguity might inhibit their propensity to engage in organizational citizenship behaviour (OCB), with a particular focus on the potential buffering roles of two personal resources in this process: political skill and organizational identification. Survey data collected from a manufacturing organization indicate that role ambiguity diminishes OCB, but this effect is attenuated when employees are equipped with political skill and have a strong sense of belonging to their organization. The buffering role of organizational identification also is particularly strong when employees have adequate political skills, suggesting the reinforcing, buffering roles of these two personal resources. Organizations that want to foster voluntary work behaviours, even if they cannot provide clear role descriptions for their employees, should nurture adequate personal resources within their employee ranks
Abstract:
This paper investigates how employees' experience of workplace incivility may steer them away from idea championing, with a special focus on the mediating role of their desire to quit their jobs and the moderating role of their dispositional self-control. Data collected from employees who work in a large retail organization reveal that an important reason that exposure to rude workplace behaviors reduces employees' propensity to champion innovative ideas is that they make concrete plans to leave. This mediating effect is mitigated, however, when employees are equipped with high levels of self-control. For organizations, this study accordingly pinpoints desires to seek alternative employment as a critical factor by which irritations about resource-draining incivility may escalate into a reluctance to add to organizational effectiveness through dedicated championing efforts. It also indicates how this escalation can be avoided, namely, by ensuring employees have access to pertinent personal resources
Abstract:
Purpose. The purpose of this research is to examine how employees' experience of career dissatisfaction might curtail their organizational citizenship behavior, as well as how this detrimental effect might be mitigated by employees' access to valuable peer-, supervisor- and organizational-level resources. The frustrations stemming from career dissatisfaction might be better contained in the presence of these resources, such that employees are less likely to respond to this resource-depleting work circumstance by staying away from extra-role activities
Abstract:
This study examines how employees' exposure to interpersonal conflict might reduce their creative behaviour, with a particular focus on how this negative connection might be mitigated by their access to pertinent personal and contextual resources. Using data from employees who work in the construction sector, the empirical findings reveal that interpersonal conflict thwarts creativity, but this effect is weaker when employees can more easily control their emotions, have clear job descriptions, and are satisfied with how their employer communicates with them. Empathy exhibits a weak effect. This study accordingly reveals different means through which organizations can overcome the challenge that arises when emotion-based conflicts steer employees away from creativity
Abstract:
Anchored in conservation of resources theory, this study considers how employees' experience of job stress might reduce their organizational citizenship behaviors (OCB), as well as how this negative relationship might be buffered by employees' access to two personal resources (passion for work and adaptive humor) and two contextual resources (peer communication and forgiving climate). Data from a Mexican-based organization reveal that felt job stress diminishes OCB, but the effect is subdued at higher levels of the four studied resources. This study accordingly adds to extant research by elucidating when the actual experience of job stress is more or less likely to steer employees away from OCB - that is, when they have access to specific resources that hitherto have been considered direct enablers of such efforts instead of buffers of employees' negative behavioral responses to job stress
Abstract:
Purpose. The purpose of this paper is to consider how employees' perceptions of psychological contract breach, due to their sense that their organization has not kept its promises, might diminish their creative behavior. Yet access to two critical personal resources (emotion regulation and humor skills) might buffer this negative relationship. Design/methodology/approach. Survey data were collected from employees in a large organization in the automobile sector. Findings. Employees' beliefs that their employer has not come through on its promises diminish their engagement in creative activities. The effect is weaker among employees who can more easily control their emotions and who use humor in difficult situations. Practical implications. For organizations, the results show that the frustrations that come with a sense of broken promises can be contained more easily to the extent that their employee bases can rely on pertinent personal resources. Originality/value. This investigation provides a more comprehensive understanding of when perceived contract breach steers employees away from productive work activities, in the form of creativity. This damaging effect is less prominent when employees possess skills that enable them to control negative emotions or can use humor to cope with workplace adversity
Abstract:
This study investigates how employees' perceptions of work overload might reduce their creative behaviours and how this negative relationship might be buffered by employees' access to three energy-enhancing resources: their passion for work, their ability to share emotions with colleagues, and their affective commitment to the organization. Data from a manufacturing organization reveal that work overload reduces creative behaviour, but the effect is weaker with higher levels of passion for work, emotion sharing, and organizational commitment. The buffering effects of emotion sharing and organizational commitment are particularly strong when they combine with high levels of passion for work. These findings indicate how organizations marked by adverse work conditions, due to excessive workloads, can mitigate the likelihood that employees avoid creative behaviours
Abstract:
Drawing from conservation of resources theory, this article investigates the relationship between job control (a critical job resource) and idea championing, as well as how this relationship may be augmented by stressful work conditions that can lead to resource losses, such as conflicting work role demands and psychological contract violations. With quantitative data collected from employees of an organization that operates in the chemical sector, this study reveals that job control increases the propensity to champion innovative ideas. This effect is especially salient when employees experience high levels of role conflict and psychological contract violations. For organizations, the results demonstrate that giving employees more control over whether they invest in championing activities will be most beneficial when those employees also face resource-draining work conditions, in the form of either incompatible role expectations or unfulfilled employer obligations
Abstract:
Based on the job demands–resources model, this study considers how employees’ perceptions of organizational politics might reduce their engagement in organizational citizenship behavior. It also considers the moderating role of two contextual resources and one personal resource (i.e., supervisor transformational leadership, knowledge sharing with peers, and resilience) and argues that they buffer the negative relationship between perceptions of organizational politics and organizational citizenship behavior. Data from a Mexican-based manufacturing organization reveal that perceptions of organizational politics reduce organizational citizenship behavior, but the effect is weaker with higher levels of transformational leadership, knowledge sharing, and resilience. The buffering role of resilience is particularly strong when transformational leadership is low, thus suggesting a three-way interaction among perceptions of organizational politics, resilience, and transformational leadership. These findings indicate that organizations marked by strongly politicized internal environments can counter the resulting stress by developing adequate contextual and personal resources within their ranks
Abstract:
Drawing from the job demands-resources model, this study considers how task conflict reduces employees' job satisfaction, as well as how the negative task conflict-job satisfaction relationship might be buffered by supervisors' transformational leadership and employees' personal resources. Using data from a large organization, the authors show that task conflict reduces job satisfaction, but this effect is weaker at higher levels of transformational leadership, tenacity, and passion for work. The buffering roles of the two personal resources (tenacity and passion for work) are particularly salient when transformational leadership is low. These findings indicate that organizations marked by task-related clashes can counter the accompanying stress by developing adequate leadership and employee resources within their ranks
Abstract:
Purpose - This article investigates how employees' perceptions of role ambiguity might increase their turnover intentions and how this harmful effect might be buffered by employees' access to relevant individual (innovation propensity), relational (goodwill trust), and organizational (procedural justice) resources. The findings reveal that role ambiguity enhances turnover intentions, but this effect diminishes at higher levels of innovation propensity, goodwill trust, and procedural justice. Uncertainty due to unclear role descriptions decreases in the presence of these resources, so employees are less likely to respond to this adverse work situation with enhanced turnover intentions
Abstract:
We add to human resource literature by investigating how the contribution of task conflict to employee creativity depends on employees' learning orientation and their goal congruence with organizational peers. We postulate a positive relationship between task conflict and employee creativity and predict that this relationship is augmented by learning orientation but attenuated by goal congruence. We also argue that the mitigating effect of goal congruence is more salient among employees who exhibit a low learning orientation. Our results, captured from employees and their supervisors in a large, Mexican-based organization, confirm these hypotheses. The findings have important implications for human resource managers who seek to foster creativity among employees
Abstract:
This study considers how employees' tenacity might enhance their propensity to engage in knowledge exchange with organizational peers, as well as how the positive tenacity-knowledge exchange relationship is invigorated by two types of role conflict: within-work and between work and family. Using data from a large Mexican organization in the logistics sector, this study shows that tenacity increases knowledge exchange, and this effect is stronger at higher levels of within-work and work-family role conflict. The invigorating role of within-work role conflict is particularly salient when work-family role conflict is high. These findings inform organizations that the application of personal energy to knowledge-enhancing activities is particularly useful when employees encounter severe workplace adversity because of conflicting role demands
Abstract:
Purpose: Drawing from conservation of resources theory and affective events theory, this article examines the hitherto unexplored relationship between employees' tenacity levels and problem-focused voice behavior, as well as how this relationship may be augmented when employees encounter adversity in relationships with peers or in the organizational climate in general. Design/Methodology/Approach: The study draws on quantitative data collected through a survey administered to employees and their supervisors in a large manufacturing organization. Findings: Tenacity increases the likelihood of speaking up about problem areas, and this relationship is strongest when peer relationships are characterized by low levels of goal congruence and trust (relational adversity) or when the organization does not support change (organizational adversity). The augmenting effect of organizational adversity on the usefulness of tenacity is particularly salient when it combines with high relational adversity, which underscores the critical role of tenacity for spurring problem-focused voice behavior when employees negatively appraise different facets of their work environment simultaneously. Implications: The results inform organizations that the allocation of personal energy to reporting organizational problems is perceived as particularly useful by employees when they encounter significant adversity in their work environments. Originality/Value: This study extends research on voice behavior by providing a better understanding of the likelihood that employees speak up about problem areas, according to their levels of tenacity, and explicating when this influence of tenacity tends to be more prominent
Abstract:
This conceptual article centers on the relationship between intergenerational strategy involvement and family firms' innovation pursuits, a relationship that may be contingent on the nature of the interactions among family members who belong to different generations. The focus is the potential contingency roles of two conflict management approaches (cooperative and competitive) and two dimensions of social capital (goal congruence and trust), in the context of intergenerational interactions. The article theorizes that although cooperative conflict management may invigorate the relationship between intergenerational strategy involvement and innovation pursuits, competitive conflict management likely attenuates it. Moreover, it proposes that both functional and dysfunctional roles for social capital might arise with regard to the contribution of intergenerational strategy involvement to family firms' innovation pursuits. This article thus provides novel insights into the opportunities and challenges that underlie the contributions of family members to their firm's innovative aspirations when more than one generation participates in the firm's strategic management
Abstract:
This study investigates how employees' perceptions of adverse work conditions might discourage innovative behavior and the possible buffering roles of relational resources. Data from a Mexican-based organization reveal that perceptions of work overload negatively affect innovative behavior, but this effect is attenuated by greater knowledge sharing and interpersonal harmony. Further, although perceived organizational politics lead to lower innovative behavior when relational resources are low, they increase this behavior when resources are high. Organizations that seek to adopt innovative ideas in the presence of adverse work conditions thus should create relational conduits that can mitigate the associated stress
Abstract:
We develop and test a motivational framework to explain the intensity with which individuals sell entrepreneurial initiatives within their organizations. Initiative selling efforts may be driven by several factors that hitherto have not been given full consideration: initiative characteristics, individuals’ anticipation of rewards, and their level of dissatisfaction. On the basis of a survey in a mail service firm of 192 managers who proposed an entrepreneurial initiative, we find that individuals’ reported intensity of their selling efforts with respect to that initiative is greater when they (1) believe that the organizational benefits of the initiative are high, (2) perceive that the initiative is consistent with current organizational practices (although this effect is weak), (3) believe that their immediate organizational environment provides extrinsic rewards for initiatives, and (4) are satisfied with the current organizational situation. These findings extend previous expectancy theory-based explanations of initiative selling (by considering the roles of initiative characteristics and that of initiative valence for the proponent) and show the role of satisfaction as an important motivational driver for initiative selling
Abstract:
In this study, we examine the role of individuals' commitment in small and medium-sized firms. More specifically, we argue that employees will commit themselves to their firm based on their current work status in the firm, their perception of the organizational climate, and the firm's entrepreneurial orientation. We also examine how individuals' commitment affects the actual effort they exert vis-à-vis their firm. The study's hypotheses are tested by applying quantitative analyses to survey data collected from 863 Mexican small and medium-sized businesses. We found that individuals' position and tenure in the firm, their perception of psychological safety and meaningfulness, and the firm's entrepreneurial orientation all are positively related to organizational commitment. We also found a positive relationship between organizational commitment and effort. Finally, our findings show that organizational commitment mediates the relationship between many of the predictor variables and effort. We discuss the limitations and implications of our findings and provide directions for future research
Abstract:
Participation in social programs, such as clubs and other social organizations, results from a process in which an agent learns about the requirements, benefits, and likelihood of acceptance related to a program, applies to be a participant, and, finally, is accepted or rejected. We propose a model of this participation process and provide an application of the model using data from a social program in Mexico. Our empirical analysis illustrates that decisions at each stage of the process are responsive to expectations about the decisions and outcomes at the subsequent stages and that knowledge about the program can have a significant impact on participation outcomes. JEL codes: I38, D83. Keywords: program participation, take-up, information acquisition, targeting, undercoverage, leakage
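The staged participation process this abstract describes (learn, apply, be accepted, with forward-looking behavior at each stage) can be sketched as a simple sequential probability model. The functional forms and all parameter values below are illustrative assumptions, not the authors' specification:

```python
import math

def logistic(x):
    """Map a real-valued index into a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def participation_probability(info, benefit, eligibility):
    # Stage 1: the agent learns about the program (driven by information).
    p_learn = logistic(2.0 * info - 1.0)
    # Stage 3, anticipated: acceptance chance given program eligibility.
    p_accept = logistic(3.0 * eligibility - 1.5)
    # Stage 2: the agent applies when the expected gain (benefit weighted by
    # the anticipated acceptance chance) outweighs a fixed application cost,
    # capturing the responsiveness to expected outcomes at later stages.
    p_apply = logistic(4.0 * benefit * p_accept - 2.0)
    return p_learn * p_apply * p_accept

# Better information about the program raises take-up, holding benefits
# and eligibility fixed -- the "knowledge" channel noted in the abstract.
low_info = participation_probability(info=0.2, benefit=0.8, eligibility=0.9)
high_info = participation_probability(info=0.9, benefit=0.8, eligibility=0.9)
```

The design choice worth noting is that the application stage depends on the anticipated acceptance probability, so expectations about later stages feed back into earlier decisions, as in the paper's sequential framing.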
Abstract:
The judicialization of social rights is a reality in Latin America; however, little has been said about this phenomenon in Mexico or about the role of the Mexican Supreme Court (Suprema Corte de Justicia de la Nación, SCJN) in advancing an effective guarantee of the right to health. In several countries, courts have adopted either an active role in defining health policy and protecting the right to health or a passive one. Studying the ways in which health-related cases are resolved in Mexico enables us to evaluate the role of the SCJN when ruling for or against this right. This article aims to determine whether the SCJN, through the analysis of its rulings, is or could be a catalyst for change in the healthcare system. This article reports on the results of a systematic content analysis of twenty-two SCJN rulings, examining the claimants, their claims as understood by the SCJN, and the elements considered by the justices in their decision-making process. The analysis of the way in which the SCJN ruled in these cases demonstrates that the SCJN must be uniform and consistent in applying constitutional and conventional principles to improve the predictability of its decisions and to be innovative in responding to the new requirements posed by economic, social, and cultural rights. The SCJN could strengthen its capacity to promote structural reforms where laws or policies are inconsistent with constitutional or conventional standards by maintaining a middle ground with respect to the executive and legislative branches
Abstract:
Objective. To understand what key Mexican stakeholders think about the judicialization of the right to health in Mexico. Materials and methods. Thirty semi-structured interviews were conducted at different settings in Mexico City with representatives of the judiciary, the legislative power, the Health Sector (HS), the pharmaceutical industry, academia and nongovernmental organizations from May 2017 to August 2018. Interviews were transcribed and analyzed based on different categories of interest. Results. Opinions about the judicialization of the right to health differ. Tensions exist between those who see its potential effect as a game changer for the HS and those who perceive it as an illegitimate interference of the judiciary. There is no coordinated strategy between sectors to promote change in the HS. Conclusions. Opinions about the judicialization of the right to health in Mexico differ. There are tensions between those who see its potential effect as a game changer for the HS and those who perceive it as an illegitimate interference of the judiciary. Others argue that there is no coordinated strategy between sectors to promote change in the HS. All agree that judicialization in Mexico is a reality
Abstract:
The decreasing breastfeeding rate in Mexico is of public health concern. In this paper we discuss an innovative regulatory approach - Performance-Based Regulation - and its application to improving breastfeeding rates. This approach forces industry to take responsibility for the lack of breastfeeding and its consequences; failure to comply with the targets results in financial penalties. Applying performance-based regulation as a strategy to improve breastfeeding is feasible because the breast-milk substitutes market is an oligopoly, so it is easy to identify the contribution of each market participant; the regulation's target population is clearly defined; the regulatory standard is clear and can be easily evaluated; and sanctions for infringement can be defined under objective parameters. Recommendations: modify public policy, establish concertation agreements with the industry, create dissuasive sanctions, strengthen enforcement activities, and align every action with the International Code of Marketing of Breast-milk Substitutes
Abstract:
What would have happened if a relatively looser fisheries policy had been implemented in the European Union (EU)? Using Bayesian methods a Dynamic Stochastic General Equilibrium (DSGE) model is estimated to assess the impact of the European Common Fisheries Policy (CFP) on the economic performance of a Galician (northwest of Spain) fleet highly dependent on the EU Atlantic southern stock of hake. Our counterfactual analysis shows that if a less effective CFP had been implemented during the period 1986–2012, fishing opportunities would have increased, leading to an increase in labour hours of 4.87%. However, this increase in fishing activity would have worsened the profitability of the fleet, dropping wages and the rental price of capital by 6.79% and 0.88%, respectively. Welfare would also be negatively affected since, in addition to the increase in hours worked, consumption would have fallen by 0.59%
Abstract:
Extreme weather events are becoming more frequent and intense due to climate change. Yet, little is known about the relationship between exposure to extreme events, subjective attribution of these events to climate change, and climate policy support, especially in the Global South. Combining large-scale natural and social science data from 68 countries (N = 71,922), we develop a measure of exposed population to extreme weather events and investigate whether exposure to extreme weather and subjective attribution of extreme weather to climate change predict climate policy support. We find that most people support climate policies and link extreme weather events to climate change. Subjective attribution of extreme weather was positively associated with policy support for five widely discussed climate policies. However, exposure to most types of extreme weather event did not predict policy support. Overall, these results suggest that subjective attribution could facilitate climate policy support
Abstract:
Science is crucial for evidence-based decision-making. Public trust in scientists can help decision makers act on the basis of the best available evidence, especially during crises. However, in recent years the epistemic authority of science has been challenged, causing concerns about low public trust in scientists. We interrogated these concerns with a preregistered 68-country survey of 71,922 respondents and found that in most countries, most people trust scientists and agree that scientists should engage more in society and policymaking. We found variations between and within countries, which we explain with individual- and country-level variables, including political orientation. While there is no widespread lack of trust in scientists, we cannot discount the concern that lack of trust in scientists by even a small minority may affect considerations of scientific evidence in policymaking. These findings have implications for scientists and policymakers seeking to maintain and increase trust in scientists
Abstract:
In this work, we study composition operators on the little Lipschitz space L0 of a rooted tree T, defined as the subspace of the Lipschitz space L consisting of the complex-valued functions f on T such that lim_{|v|→∞} |f(v) - f(v^-)| = 0, where v^- is the vertex adjacent to the vertex v in the path from the root to v and |v| denotes the number of edges from the root to v. Specifically, we give a complete characterization of the self-maps φ of T for which the composition operator Cφ is bounded, and we estimate its operator norm. In addition, we study the spectrum of Cφ and the hypercyclicity of the operators λCφ for λ ∈ ℂ
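In display form, the space and operator described in this abstract read as follows; this is a reconstruction from the abstract's notation, where v^- denotes the parent of the vertex v:

```latex
\mathcal{L}_0 \;=\; \Bigl\{\, f : T \to \mathbb{C} \;\Bigm|\; \lim_{|v|\to\infty} \bigl| f(v) - f(v^{-}) \bigr| = 0 \,\Bigr\},
\qquad (C_\varphi f)(v) \;=\; f\bigl(\varphi(v)\bigr),
```

so the defining condition asks that the increments of f along edges vanish toward the boundary of the tree, and the paper characterizes the self-maps φ of T for which C_φ maps this space boundedly into itself.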
Abstract:
Evidence strongly supports that, to improve breastfeeding practices, it is necessary to strengthen actions of promotion, protection and support. To achieve this goal, a multisectoral national policy must be established that includes elements such as the design, implementation, monitoring and evaluation of programs and policies, funding for action and research, advocacy to develop political will, and the promotion of breastfeeding from the national to the municipal level, all coordinated by a central authority. Mexico has only recently initiated a reform process to establish a National Strategy for Breastfeeding Action. This strategy is the result not only of consistent scientific evidence on the clear and strong benefits of breastfeeding for population health and the development of human capital, but also of alarming data on the deterioration of breastfeeding practices in the country. The comprehensive implementation of the National Strategy for Breastfeeding Action, including the establishment of a national committee, intra- and inter-sectoral coordination of actions, the setting of clear goals, monitoring of the International Code of Marketing of Breast-Milk Substitutes, and funding for these actions, is the pending responsibility of the country's public health agenda
Abstract:
Climate change is a core issue for sustainable development and exacerbates inequality. However, in both the WTO and the climate regime, disagreements over differential treatment have hampered efforts to address international inequalities in a way that facilitates effective responses to global issues. Sustainable globalization requires bridging the disparities between developed and developing countries in their capacities to address such matters of global concern. However, differential treatment now functions as a distraction from the global issues it was supposed to address. Cognitive biases distort perceptions regarding the climate crisis and the value of multilateralism. To what extent can cognitive science inform decision making by States? How do we change paradigms (cognitive background assumptions), which limit the options that decision-making elites in developed and developing countries perceive as useful and worth considering? To what extent do cognitive biases and cultural cognition create a barrier to multilateral cooperation on issues of global concern?
Abstract:
According to the 2018 Intergovernmental Panel on Climate Change (IPCC) report, if global warming continues to increase at the current rate (the so-called business-as-usual scenario), global temperatures are likely to reach 1.5 °C between 2030 and 2052. Limiting global warming to 1.5 °C is affordable and feasible but requires immediate action. Climate-related risks for natural and human systems depend on the magnitude and rate of warming, geographic location, levels of development and vulnerability, and on the choices and implementation of adaptation and mitigation options. Climate-related risks to health, livelihoods, food security, water supply, human security, and economic growth are projected to increase with global warming of 1.5 °C and to increase further with 2 °C. Estimates of the global emissions outcome of the mitigation ambitions currently stated under the Paris Agreement imply global warming of about 3 °C by 2100, with warming continuing afterward. The climate risks that we set out below are very serious. The worst-case scenario is catastrophic. Additionally, climate risks create challenges and opportunities for the financial industry, particularly insurance. In this chapter we first set out the risks posed by climate change, and then we seek some possible solutions to reduce emissions (often referred to as climate change mitigation) and to adapt to climate change. We show that financial markets can be harnessed to reduce emissions through market mechanisms. In particular, markets can put a price on risk. That allows insurance/reinsurance companies to step in with appropriate solutions and create incentives for emissions reductions and climate change adaptation. Emissions reductions will reduce the risk of catastrophic climate change. However, some climate change is already inevitable. Therefore, adaptation is essential
Abstract:
The effects of accelerating climate change will have a destabilizing impact on trade negotiations, particularly for the worst-affected developing countries. The climate crisis will make it more difficult to make concessions in crucial areas, such as agriculture and intellectual property rights, due to its effects on agricultural yields and the increased need for technology to adapt to a warming climate and to reduce greenhouse gas emissions. Rising sea levels, droughts, floods, and killer heat waves will provoke mass migration, with impacts on domestic politics that make trade concessions more difficult. In this context, multilateral trade negotiations are unlikely to advance in a significant way and megaregional trade agreements will become increasingly difficult to join. The result will be a warming world that is divided between those included in and those excluded from the megaregional trade regime. This will also hamper efforts to slow and to adapt to the climate crisis, due to the key role that international trade plays in addressing both
Abstract:
The Appellate Body is considered the jewel in the crown of the WTO dispute settlement system. However, since blocking the re-appointment of Jennifer Hillman to the Appellate Body, the United States has become increasingly assertive in its efforts to control judicial activism at the WTO. This was a hot topic in the corridors at the eleventh WTO Ministerial Conference, in Buenos Aires. This article examines judicial activism in the Appellate Body, and discusses the efforts of the United States to constrain the Appellate Body in this context. It also analyses US actions and proposals regarding the dispute settlement systems of NAFTA, in order to place the WTO debate in a wider context. It concludes that reforms are necessary to break the negative feedback loop between deadlock in multilateral trade negotiations and judicial activism
Abstract:
In two early WTO cases, the Appellate Body found a failure to engage in negotiations to be arbitrary or unjustifiable discrimination under the GATT Article XX chapeau. Subsequent jurisprudence has not applied a negotiation requirement. Instead, it analyzes whether discrimination is arbitrary or unjustifiable by focusing on the cause of the discrimination, or the rationale put forward to explain its existence, which would exclude a duty to negotiate in many circumstances. The issue of whether there is a duty to negotiate is a systemic issue for international economic law. The Article XX chapeau language appears in other WTO agreements and in other international economic law treaties, including those that address environmental protection, regional trade and international investment. This article argues that there is no such duty in WTO law
Abstract:
The purpose of multilateral disciplines on subsidies is to avoid trade distortions, in order to increase production efficiencies through competition. However, this objective may be defeated due to defects in the structure of the WTO Agreement on Subsidies and Countervailing Measures and the resulting interpretations of WTO tribunals in cases involving clean energy subsidies. These defects, together with inefficient design of energy markets, could slow the transition to clean energy sources. However, the necessary reforms to the multilateral subsidies regime and energy markets are unlikely to be implemented any time soon, in the absence of a successful formula for multilateral negotiations. In this environment, the private sector will have to take the lead in making the transition to clean energy sources, in order to mitigate the disastrous effects of climate change to the extent that this goal remains attainable
Abstract:
Mexican energy reforms open the energy sector to foreign participation via different types of contracts, some of which may qualify as investments under North American Free Trade Agreement (NAFTA) Chapter 11. Mexican NAFTA reservations exclude some Mexican regulation from the scope of application of specific obligations in Chapter 11, such as those regarding performance requirements, most-favoured-nation treatment, and national treatment. However, Mexico’s legislative restrictions on foreign investors’ right to pursue investor-state arbitration are not covered by its NAFTA reservations and should not affect access to NAFTA Chapter 11 and Mexico cannot invoke its domestic laws to justify a violation of its international obligations. Moreover, Mexico’s reservations do not prevent the application of obligations regarding fair and equitable treatment and expropriation
Abstract:
This article analyzes how International Investment Agreements (IIAs) might constrain the ability of governments to adopt climate change measures. This article will consider how climate change measures can either escape the application of IIA obligations or be justified under exceptions. First, this article considers the role of treaty structure in preserving regulatory autonomy. Then, it analyzes the role that general scope provisions can play in excluding environmental regulation from the scope of application of IIAs. Next, this article will consider how the limited incorporation of environmental exceptions into IIAs affects their interpretation and application in cases involving environmental regulation. The article then analyzes non-discrimination obligations, the minimum standard of treatment for foreign investors and obligations regarding compensation for expropriation. This analysis shows that tribunals can exclude environmental regulation from the scope of application of specific obligations as well
Abstract:
This article analyzes how treaty structure affects regulatory autonomy by shifting the point in a treaty at which tribunals address public interest regulation. The article also analyzes how trade and investment treaties use a variety of structures that influence their interpretation and the manner in which they address public interest regulation. WTO and investment tribunals have addressed public interest regulation in provisions regarding a treaty’s scope of application, obligations and public interest exceptions. The structure of treaties affects a tribunal’s degree of deference to regulatory choices. In treaties that do not contain general public interest exceptions, tribunals have excluded public interest regulation from the scope of application of the treaty as a whole or the scope of application of specific obligations. If treaty parties wish to preserve a greater degree of regulatory autonomy, they can limit the general scope of application of a treaty, or the scope of application of specific obligations, which places the burden of proof on the complainant. In cases where the complexity of the facts or the law makes the burden of proof difficult to meet, this type of treaty structure makes it more difficult to prove treaty violations and thus preserves regulatory autonomy to a greater degree
Abstract:
Debates between developing and developed countries over access to technology to mitigate or adapt to climate change tend to overlook the importance of plant varieties. Climate change will increase the importance of the development of new plant varieties that can adapt to changing climatic conditions. This article compares Intellectual Property Rights (IPRs) for plant varieties in the World Trade Organization (WTO) Trade-Related Aspects of Intellectual Property (TRIPS) Agreement, the UPOV Convention and the Convention on Biological Diversity (CBD). It concludes that TRIPS Article 27.3(b) provides an appropriate degree of flexibility regarding the policy options available to confront climate change
Abstract:
Multilingualism is a sensitive and complex subject in a global organisation such as the World Trade Organization (WTO). In the WTO legal texts, there is a need for full concordance, not simply translation. This article begins with an overview of the issues raised by multilingual processes at the WTO in the negotiation, drafting, translation, interpretation and litigation phases. It then compares concordance procedures in the WTO, European Union and United Nations and proposes new procedures to prevent and correct linguistic discrepancies in the WTO legal texts. It then categorises linguistic differences among the WTO legal texts and considers the suitability of proposed solutions for each category. The conclusion proposes an agenda for further work at the WTO to improve best practices based on the experience of the WTO and other international organisations and multilingual governments
Abstract:
This article examines three issues: (1) the use, over time, of facemasks in a public setting to prevent the spread of a respiratory disease for which the mortality rate is unknown; (2) the difference between the responses of male and female subjects in a public setting to unknown risks; and (3) the effectiveness of mandatory and voluntary public health measures in a public health emergency. The use of facemasks to prevent the spread of respiratory diseases in a public setting is controversial. At the height of the influenza epidemic in Mexico City in the spring of 2009, the federal government of Mexico recommended that passengers on public transport use facemasks to prevent contagion. The Mexico City government made the use of facemasks mandatory for bus and taxi drivers, but enforcement procedures differed for these two categories. Using an evidence-based approach, we collected data on the use of facemasks over a 2-week period. In the specific context of the Mexico City influenza outbreak, these data showed that mask usage rates mimicked the course of the epidemic and that there was a gender difference in compliance rates among metro passengers. Moreover, there was no significant difference in compliance with mandatory and voluntary public health measures: the effect of the mandatory measures was diminished by insufficiently severe penalties, by the lack of market forces to create compliance incentives, and by political influence sufficient to diminish enforcement. Voluntary compliance was diminished by lack of trust in the government
Abstract:
This article analyzes several unresolved issues in World Trade Organization (WTO) law that may affect the WTO-consistency of measures that are likely to be taken to address climate change. How should the WTO deal with environmental subsidies under the General Agreement on Tariffs and Trade (GATT), the Agreement on Agriculture and the Subsidies and Countervailing Measures (SCM) Agreement? Can the general exceptions in GATT Article XX be applied to other agreements in Annex 1A? Are processing and production methods relevant to determining the issue of ‘like products’ in GATT Articles I and III, the SCM Agreement, the Antidumping Agreement and the TBT Agreement? What is the scope of paragraphs b and g in GATT Article XX and the relationship between these two paragraphs? What is the relationship between GATT Article XX and multilateral environmental agreements in the context of climate change? How should Article 2 of the TBT Agreement be interpreted and applied in the context of climate change? The article explores these issues
Resumen:
En este trabajo se aplica la técnica de extracción de señales para realizar la desestacionalización de dos series de tiempo observadas. En ambos casos se considera explícitamente la simplificación del modelo ARMA y la estimación de los componentes de la serie. Para una de las series se obtiene una desestacionalización claramente aceptable, pero la segunda conduce a concluir que no cualquier serie, aparentemente estacional, debe ser desestacionalizada
Abstract:
This paper applies the theory of signal extraction to carry out the deseasonalization of two observed time series. For both series, we explicitly consider the ARMA model simplification and the estimation of the series components. For one series we obtain a reasonable deseasonalization, but the other case allows us to conclude that not every, apparently seasonal, series should be deseasonalized
Resumen:
En este trabajo se da cuenta del estado y la evolución geográfica de las ciencias sociales, a través del proceso de desconcentración en México de los integrantes del Sistema Nacional de Investigadores (SNI). Las preguntas que se busca responder y que sirven como dimensiones de análisis son: ¿Cuál es la distribución de los investigadores dentro de la República? ¿Cómo se distribuyen por sexo en las regiones geográficas del país? ¿Cómo es la composición por nivel del SNI en cada zona? ¿Existen cambios en la composición disciplinar de esta área del conocimiento y, de ser así, cuáles han sido?
Abstract:
In this paper, the state and geographical evolution of the social sciences is traced through the process of deconcentration of the members of the National System of Researchers (SNI) in Mexico. The questions to be answered, which serve as dimensions of analysis, are: What is the geographical distribution of researchers across the country? How are they distributed by gender across the country's geographical regions? What is the composition by SNI level in each area? Have there been changes in the disciplinary composition of this area of knowledge and, if so, which ones?
Abstract:
The authors offer a report showing the deconcentration of the members of the Sistema Nacional de Investigadores, formerly located predominantly in the Metropolitan Area of Mexico City, and how the various regions of the country show an ever greater participation in original, high-quality scientific and technological research
Abstract:
This article examines a calculus graphing schema and the triad stages of schema development from Action-Process-Object-Schema (APOS) theory. Previously, the authors studied the underlying structures necessary for students to process concepts and enrich their knowledge, thus demonstrating various levels of understanding via the calculus graphing schema. This investigation built on this previous work by focusing on the thematization of the schema with the intent to expose those possible structures acquired at the most sophisticated stages of schema development. Results of the analyses showed that these successful calculus students, who appeared to be operating at varying stages of development of their calculus graphing schemata, illustrated the complexity involved in achieving thematization of the schema, thus demonstrating that thematization was possible
Abstract:
We classify all planar four-body central configurations where two pairs of the bodies are on parallel lines. Using Cartesian coordinates, we show that the set of four-body trapezoid central configurations with positive masses forms a two-dimensional surface where two symmetric families, the rhombus and isosceles trapezoid, are on its boundary. We also prove that, for a given position of the bodies, in some cases a specific order of the masses determines the geometry of the configuration, namely acute or obtuse trapezoid central configuration. We also prove the existence of non-symmetric trapezoid central configurations with two pairs of equal masses
Abstract:
Let T be a second-order arithmetical theory, Λ a well-order, λ < Λ and X ⊆ ℕ. We use [λ|X]_T^Λ φ as a formalization of "φ is provable from T and an oracle for the set X, using ω-rules of nesting depth at most λ". For a set of formulas Γ, define predicative oracle reflection for T over Γ (Pred-O-RFN_Γ(T)) to be the schema asserting that, if X ⊆ ℕ, Λ is a well-order and φ ∈ Γ, then for all λ < Λ, [λ|X]_T^Λ φ → φ. In particular, define predicative oracle consistency (Pred-O-Cons(T)) as Pred-O-RFN_{0=1}(T). Our main result is as follows. Let ATR_0 be the second-order theory of Arithmetical Transfinite Recursion, RCA_0* be Weakened Recursive Comprehension and ACA be Arithmetical Comprehension with Full Induction. Then, ATR_0 = RCA_0* + Pred-O-Cons(RCA_0*) = RCA_0* + Pred-O-RFN_{Π¹₂}(ACA). We may even replace RCA_0* by the weaker ECA_0, the second-order analogue of Elementary Arithmetic. Thus we characterize ATR_0, a theory often considered to embody Predicative Reductionism, in terms of strong reflection and consistency principles
Abstract:
In the generalized Russian cards problem, the three players Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of a+b+c cards. Players only know their own cards and what the deck of cards is. Alice and Bob are then required to communicate their hand of cards to each other by way of public messages. For a natural number k, the communication is said to be k-safe if Cath does not learn whether or not Alice holds any given set of at most k cards that are not Cath’s, a notion originally introduced as weak k-security by Swanson and Stinson. An elegant solution by Atkinson views the cards as points in a finite projective plane. We propose a general solution in the spirit of Atkinson’s, although based on finite vector spaces rather than projective planes, and call it the ‘geometric protocol’. Given arbitrary c, k>0, this protocol gives an informative and k-safe solution to the generalized Russian cards problem for infinitely many values of (a,b,c) with b=O(ac). This improves on the collection of parameters for which solutions are known. In particular, it is the first solution which guarantees k-safety when Cath has more than one card
Abstract:
In the generalized Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of size a + b + c. Alice and Bob must then communicate their entire hand to each other, without Cath learning the owner of a single card she does not hold. Unlike many traditional problems in cryptography, however, they are not allowed to encode or hide the messages they exchange from Cath. The problem is then to find methods through which they can achieve this. We propose a general four-step solution based on finite vector spaces, and call it the “colouring protocol”, as it involves colourings of lines. Our main results show that the colouring protocol may be used to solve the generalized Russian cards problem in cases where a is a power of a prime, c = O(a^2) and b = O(c^2). This improves substantially on the set of parameters for which solutions are known to exist; in particular, it had not been shown previously that the problem could be solved in cases where the eavesdropper has more cards than one of the communicating players
Abstract:
We find a new and simple inversion formula for the 2D Radon transform (RT), making direct use of the shearlet system and of well-known properties of the RT. Since the continuum theory of shearlets has a natural translation to the discrete setting, we also obtain a computable algorithm that recovers a digital image from noisy samples of the 2D Radon transform while preserving edges. A well-known RT inversion in the applied harmonic analysis community is the biorthogonal curvelet decomposition (BCD). The BCD uses an intertwining relation of differential (unbounded) operators between functions in the Euclidean and Radon domains. Hence the BCD is ill-posed, since the inverse is unstable in the presence of noise. In contrast, our inversion method makes use of an intertwining relation of integral transformations with very smooth kernels and compact support away from the origin in the Fourier domain, i.e. bounded operators. Therefore, we obtain at least the same asymptotic behavior of the mean-square error as the BCD (and its shearlet version) for the class of cartoon-like functions. Numerical simulations show that our inverse surpasses, by far, the inverse based on the BCD. Our algorithm uses only fast transformations
Abstract:
We built an infrared vision system to be used as the real time 3D motion sensor in a prototype low cost, high precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. In this paper we present our choice of technology including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high bandwidth computer communication with the cameras and real time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high precision, real time requirements of neuronavigation systems we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications
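The triangulation step mentioned in the abstract above can be sketched in a few lines. The following is a generic linear (DLT) stereo triangulation under assumed pinhole camera matrices; it is not the authors' implementation, and all camera parameters and coordinates below are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pinhole views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two illustrative cameras: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 10 cm baseline

X_true = np.array([0.05, -0.02, 1.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers X_true up to numerical precision
```

In a real tracking system the pixel coordinates are noisy and the projection matrices come from a calibration step, so the SVD solution is a least-squares estimate rather than an exact recovery.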
Abstract:
This paper evaluates the finite sample performance of two methods for selecting the power transformation when applying seasonal adjustment with the X-13ARIMA-SEATS package: the automatic method, based on the Akaike Information Criterion (AIC), and Guerrero's method, which is based on the minimization of a coefficient of variation in order to choose a power transformation. For this purpose, we generate time series with different Data Generating Processes, considering traditional Monte Carlo experiments as well as additive and multiplicative time series with linear and time-varying deterministic trends. We also illustrate the performance of both approaches with an empirical application, by seasonally adjusting the Mexican Global Economic Activity Indicator and its components. The results of different statistical tests indicate that Guerrero's method is more adequate than the automatic one. We conclude that using Guerrero's method generates better results when seasonally adjusting time series with the X-13ARIMA-SEATS program
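Guerrero's criterion, as commonly described in the literature, picks the power that makes the within-group variability most stable relative to the group level. The sketch below is a minimal didactic illustration of that idea, not the paper's implementation or the X-13ARIMA-SEATS code; the grid of candidate powers and the simulated series are assumptions.

```python
import numpy as np

def guerrero_lambda(series, group_len, lambdas=np.linspace(-1.0, 2.0, 61)):
    """Pick a Box-Cox power by Guerrero's criterion: choose the lambda that
    minimizes the coefficient of variation of s_h / m_h**(1 - lambda), where
    m_h and s_h are the mean and standard deviation of consecutive groups
    of the series (e.g. one group per year for monthly data)."""
    x = np.asarray(series, dtype=float)
    n_groups = len(x) // group_len
    groups = x[:n_groups * group_len].reshape(n_groups, group_len)
    m = groups.mean(axis=1)
    s = groups.std(axis=1, ddof=1)
    best_lam, best_cv = None, np.inf
    for lam in lambdas:
        ratio = s / m ** (1.0 - lam)
        cv = ratio.std(ddof=1) / ratio.mean()
        if cv < best_cv:
            best_lam, best_cv = lam, cv
    return best_lam

# A series whose variability grows with its level: the criterion should pick
# a lambda near 0, i.e. the log transformation.
rng = np.random.default_rng(0)
t = np.arange(240)
level = np.exp(0.02 * t)
y = level * (1.0 + 0.1 * rng.standard_normal(t.size))
print(guerrero_lambda(y, 12))
```

For an additive series with level-independent noise the criterion instead favors powers near 1, i.e. no transformation.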
Abstract:
This article presents a new method to reconcile direct and indirect deseasonalized economic time series. The proposed technique uses a Combining Rule to merge, in an optimal manner, the directly deseasonalized aggregated series with its indirectly deseasonalized counterpart. The last-mentioned series is obtained by aggregating the seasonally adjusted disaggregates that compose the aggregated series. This procedure leads to adjusted disaggregates that verify Denton’s movement preservation principle relative to the originally deseasonalized disaggregates. First, we use as preliminary estimates the directly deseasonalized economic time series obtained with the X-13ARIMA-SEATS program applied to all the disaggregation levels. Second, we contemporaneously reconcile the aforementioned seasonally adjusted disaggregates with their seasonally adjusted aggregate, using Vector Autoregressive models. Then, we evaluate the finite sample performance of our solution via a Monte Carlo experiment that considers six Data Generating Processes that may occur in practice, when users apply seasonal adjustment techniques. Finally, we present an empirical application to the Mexican Global Economic Indicator and its components. The results allow us to conclude that the suggested technique is appropriate to indirectly deseasonalize economic time series, mainly because we impose the movement preservation condition on the preliminary estimates produced by a reliable seasonal adjustment procedure
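One ingredient of any contemporaneous reconciliation is forcing the adjusted disaggregates to add up, period by period, to the directly adjusted aggregate. The following is a deliberately simple proportional version of that step, for illustration only; it is not the paper's Combining Rule or its VAR-based procedure, and the function name and numbers are hypothetical.

```python
import numpy as np

def reconcile_proportional(disaggs, aggregate):
    """Scale each period's disaggregates so they add up to the directly
    adjusted aggregate, distributing the discrepancy proportionally.

    disaggs   : (T, k) array, one column per seasonally adjusted component.
    aggregate : (T,) directly seasonally adjusted aggregate series.
    """
    disaggs = np.asarray(disaggs, dtype=float)
    aggregate = np.asarray(aggregate, dtype=float)
    factors = aggregate / disaggs.sum(axis=1)    # per-period scaling factor
    return disaggs * factors[:, None]

# Example: two components whose sum drifts slightly away from the aggregate.
disaggs = np.array([[10.0, 20.0], [11.0, 21.0], [12.0, 22.0]])
aggregate = np.array([30.6, 31.7, 33.8])
reconciled = reconcile_proportional(disaggs, aggregate)
```

Unlike this naive scaling, the procedure described in the abstract also controls how much the reconciled series may deviate from the movements of the preliminary estimates, which is what Denton's principle formalizes.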
Abstract:
This paper presents a control design for the lateral-directional dynamics of a fixed-wing aircraft. The objective is to command the lateral-directional dynamics to the equilibrium point that corresponds to a coordinated turn. The proposed control is inspired by the total energy control system technique, with the objective of mixing the aileron and rudder inputs. The control law is developed directly from the nonlinear aircraft model. Numerical simulation results are presented to show the closed-loop system behavior
Abstract:
The stratified proportional hazards model represents a simple solution to take into account heterogeneity within the data while keeping the multiplicative effect of the predictors on the hazard function. Strata are typically defined a priori by resorting to the values of a categorical covariate. A general framework is proposed, which allows the stratification of a generic accelerated lifetime model, including, as a special case, the Weibull proportional hazards model. The stratification is determined a posteriori, taking into account that strata might be characterized by different baseline survivals, and also by different effects of the predictors. This is achieved by considering a Bayesian nonparametric mixture model and the posterior distribution it induces on the space of data partitions. An optimal stratification is then identified following a decision theoretic approach. In turn, stratum-specific inference is carried out. The performance of this method and its robustness to the presence of right-censored observations are investigated through an extensive simulation study. Further illustration is provided analyzing a data set from the University of Massachusetts AIDS Research Unit IMPACT Study
Abstract:
This work presents a study of the smoothness attained by the methods most frequently used to choose the smoothing parameter in the context of splines: Cross Validation, Generalized Cross Validation, and the corrected Akaike and Bayesian Information Criteria, implemented with Penalized Least Squares. It is concluded that the amount of smoothness strongly depends on the length of the series and on the type of underlying trend, while the presence of seasonality, even though statistically significant, is less relevant. The intrinsic variability of the series is not statistically significant, and its effect is taken into account only through the smoothing parameter
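To illustrate how one of these criteria operates, the following is a minimal sketch of Generalized Cross Validation for a penalized least squares smoother with a second-difference (Whittaker-type) penalty. The penalty, the grid of smoothing parameters, and the simulated data are assumptions for illustration; they are not the study's actual setup.

```python
import numpy as np

def smooth_gcv(y, lambdas=(0.1, 1.0, 10.0, 100.0, 1000.0)):
    """Penalized least squares smoother with a second-difference penalty;
    the smoothing parameter is chosen by Generalized Cross Validation:
    GCV(lmb) = n * RSS / (n - trace(H))**2, where H is the hat matrix."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)               # (n-2) x n second differences
    best = None
    for lmb in lambdas:
        H = np.linalg.inv(np.eye(n) + lmb * (D.T @ D))  # hat matrix of the fit
        fit = H @ y
        rss = float(np.sum((y - fit) ** 2))
        gcv = n * rss / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lmb, fit)
    return best[1], best[2]

# Noisy smooth trend: the GCV-selected fit should track the trend more
# closely than the raw observations do.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 120)
trend = np.sin(t)
y = trend + 0.3 * rng.standard_normal(t.size)
lmb, fit = smooth_gcv(y)
```

Larger smoothing parameters shrink trace(H) (the effective degrees of freedom), so GCV trades residual fit against model complexity without ever computing leave-one-out fits explicitly.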
Abstract:
Nowadays, dairy products, especially fermented products such as yogurt, fromage frais, sour cream and custard, are among the most studied foods in tribological analysis due to their semi-solid appearance and close relationship with attributes like smoothness, creaminess and astringency. In tribology, dairy products are used to provide information about the friction coefficient (CoF) generated between tongue, palate, and teeth through the construction of a Stribeck curve. This provides important information about the relationship between friction, food composition, and sensory attributes, and can be influenced by many factors, such as the type of surface, the tribometer, and whether saliva interaction is contemplated. This work reviews the most recent and relevant information on tribological studies, challenges, areas of opportunity, saliva interactions with dairy proteins, and their relation to the sensory attributes of dairy products
Resumen:
Las reformas de 1928 y 1934 a la Constitución mexicana trajeron una serie de cambios de forma y fondo a la Suprema Corte de Justicia de la Nación, que han sido analizadas desde la teoría constitucional, y a partir de las cuales se puede estudiar el periodo liberal de las mismas que muestran cómo, a pesar del transfondo político imperante, la Corte construyó y desarrolló su propio modelo judicial
Abstract:
The 1928 and 1934 reforms to the Mexican Constitution brought a series of changes of form and substance to the Supreme Court of Justice of the Nation. These changes have been analyzed from the standpoint of constitutional theory, and through them the liberal period of the Court can be studied, showing how, in spite of the prevailing political background, the Court constructed and developed its own judicial model
Resumen:
El nuevo modelo de control de la regularidad constitucional y el advenimiento del llamado "paradigma constitucionalista" demandan una buena cantidad de ajustes a nuestro sistema jurídico, tanto en el ámbito legislativo como en el jurisprudencial. Un mandato constitucional que condiciona estos cambios de manera preponderante es el principio pro persona. En este trabajo demostramos cómo la Suprema Corte de Justicia no ha sido precisamente consistente a la hora de conjugar este importante principio con los diferentes problemas que va resolviendo. A menos que pensemos que la jurisprudencia de la Corte es infalible, no encontramos ninguna razón que justifique su inaplicación a cargo de los jueces ordinarios mediante el control difuso. Tampoco podemos admitir que la Corte sea impermeable con relación al principio pro persona. En este trabajo, reflexionamos sobre estos problemas a propósito de un expediente de reciente resolución: la CT 299/2013
Abstract:
The new model of constitutional adjudication and the advent of the so-called "constitutionalist paradigm" demand quite a few adjustments in the Mexican legal system, both in the legislative field and in the judicial one. The "pro personae" principle must compel and inspire these changes. In this paper we demonstrate how the Supreme Court of Justice has not been consistent when expounding this important principle through judicial review. Unless we think that the Supreme Court is infallible, we find no reason that justifies this position. Nor can we admit that the Supreme Court is impermeable to the "pro personae" principle. In this paper, we reflect upon these issues by analyzing the recent decision in the C.T. 299/2013 (conflict in jurisdictional criteria).
Abstract:
This article aims to offer some reflections on the way in which the Mexican legal order regulates palliative care and on its connection with the debate about assisted death. The authors analyze the content of the General Health Law (following a reform published in January 2009), its Regulations on the Provision of Medical Care Services, and the other provisions applicable to palliative care, with the aim of encouraging public debate on the various forms of assistance available to people with an illness that does not respond to curative treatment.
Abstract:
This article analyzes the Mexican regulation on palliative care and its relationship with the public debate on assisted death or suicide. This paper focuses on the rights that people with incurable diseases have, given the current contents of the General Health Statute and other applicable rules. Its main purpose is to activate the public debate on these matters
Abstract:
The First Chamber of the Supreme Court of Justice of the Nation decided, by a majority of four votes, an amparo case that required determining whether certain articles of the Mexican Official Norm NOM-174-SSA1-1998, For the Comprehensive Management of Obesity, violated human rights. The judgment approved by the majority concludes that the restrictions are contrary to prescriptive or therapeutic freedom, which, in its view, is an essential part of the right to work. Justice Cossío Díaz voted against and issued a dissenting opinion in which he held, first, that prescriptive freedom is not an essential part of that right but rather serves as a guiding criterion for the medical profession. Second, he noted that the right to work is not absolute, since it admits certain limits, provided they are constitutionally valid. Therefore, to determine whether the requirements established by the Mexican Official Norm (NOM) are constitutionally valid, expert opinions should have been sought in order to justify their introduction into the NOM. Finally, he stated that the analysis should have balanced the right to work against the patient's right to health, since the latter is what the challenged NOM sought to protect.
Abstract:
The First Chamber of the Mexican Supreme Court of Justice decided, by a majority of four votes, a case in which it had to be determined whether certain articles of a Mexican Official Norm (NOM) on obesity violated human rights. The majority of the Chamber concluded that the restrictions infringed physicians' prescriptive or therapeutic freedom, and therefore their freedom to work. Justice Cossío Díaz voted against the judgment and wrote a separate opinion in which he holds, first, that prescriptive freedom works as a guideline for the medical profession and is not an essential element of the freedom to work. Second, he points out that the freedom to work is not an absolute right, for it has certain limits permitted by the Constitution. Consequently, expert opinions should have been sought in order to determine whether the NOM's requirements were in accordance with the Constitution. Finally, he considers that the judgment should have applied a balancing test between the freedom to work and the patient's right to health, since the latter is what the NOM intended to protect. (Gac Med Mex. 2013;149:686-90)
Abstract:
In its session of January 23, 2013, the First Chamber of the Supreme Court of Justice of the Nation decided the above-cited case by a majority of three votes. Here I set out the considerations behind my disagreement with the approval of the engrossed judgment and with the reasoning that supports that decision.
Abstract:
In its sessions of May 17, 18, 20, 24, 25, and 27, 2010, the Plenary of the Supreme Court of Justice upheld the validity of the amendment of the "Mexican Official Norm NOM-190-SSA1-1999, Provision of health services. Criteria for medical care in cases of family violence," renamed "Mexican Official Norm NOM-046-SSA2-2005, Family and sexual violence and violence against women. Criteria for prevention and care," published in the Official Gazette of the Federation on April 16, 2009. The norm was challenged by the Governor of the State of Jalisco, who argued that it violated articles 4, 5, 14, 16, 20, 21, 29, 31, section IV, 49, 73, 74, 89, section I, 123, 124, and 133 of the Federal Constitution. In that ruling, the Supreme Court of Justice upheld the constitutionality of the obligation of the members of the National Health System to offer and supply the so-called "morning-after" or "emergency" pill to women who are victims of rape. Under article 5 of the General Health Law, the National Health System includes both private and public hospitals, whether local or federal. This means that all hospitals are required to comply with the provisions of the challenged official norm. Given the importance of the Court's decision for the national medical sphere, this article analyzes the most relevant points of the ruling and its implications.
Abstract:
This article summarizes the Court's ruling on the constitutionality of the Official Norm NOM-046-SSA2-2005. Jalisco's Governor challenged the validity of the norm, arguing that it violated articles 4, 5, 14, 16, 20, 21, 29, 31-IV, 49, 73, 74, 89-I, 123, 124 and 133 of the Federal Constitution. The Supreme Court rejected the Governor's claim and determined that the members of the National Health System are obliged to offer and supply the "morning-after pill" to victims of rape. According to article 5 of the General Health Law, the National Health System includes private and public hospitals, whether local or federal. This means that all these health institutions are obliged to observe the provisions contained in the challenged Official Norm. Given the significance of the Court's ruling for the medical sphere, this article analyzes the most relevant issues of the decision and its implications.
Abstract:
Physicians play a very important role in society from several perspectives. Nevertheless, their activities cannot remain outside legal control, since in many cases the health or even the life of other people is at stake. This article therefore analyzes, on the basis of a judgment issued by the First Chamber of the Supreme Court of Justice of the Nation, the balance that must exist between physicians' right to work and people's right to the protection of their health, taking as a reference the analysis that the Court carried out when reviewing an amparo proceeding on the constitutionality of article 271 of the General Health Law, an analysis conducted in light of existing international human rights standards. The article also examines which authorities are competent to grant medical academic degrees, and how the aforementioned article of the General Health Law was compatible with other constitutional rights and with the work of physicians.
Abstract:
The role physicians play in society is very important from different perspectives. Nevertheless, their activities cannot remain outside the legal sphere and its guidelines, since those activities often put the health and life of patients at stake. We describe a ruling by Mexico's Supreme Court that strikes a balance between physicians' right to work and the safeguarding of patients' health. Following international guidelines and human rights treaties, the Supreme Court justices analyzed the constitutionality of article 271 of Mexico's General Health Law (Ley General de Salud). Other aspects of their analysis included which authorities are competent to grant medical degrees, and the way in which certain clauses of the General Health Law are compatible with physicians' daily work and other constitutional rights.
Abstract:
For an adequate treatment of the subject of this paper, it is worth noting from the outset (even though we develop the topic in detail later) that the Mexican system of constitutional justice has the peculiarity of being simultaneously describable as concentrated, insofar as only the organs of the Federal Judicial Power are competent to exercise constitutional review, and diffuse, because such review is exercised by the various jurisdictional organs that make up that Power (Supreme Court of Justice, circuit courts, district judges). Thus, an adequate description of what we may call "Mexican constitutional justice" requires analyzing different organs (Supreme Court of Justice, circuit courts, and district courts) and different procedures (amparo proceedings, constitutional controversies, and actions of unconstitutionality), which is why it is pertinent to introduce these distinctions from the outset.
Abstract:
Water regulation in Mexico rests on the Mexican Constitution and on the Mexican Supreme Court's interpretation of that text. Mexican lawyers, on the other hand, tend to ignore those interpretations and look to the text of the Constitution itself. This article argues against that approach and points to the importance of new ways of making decisions.
Abstract:
For various reasons, a government may want to induce commercial banks to grant credit to a particular sector. This article therefore analyzes a family of contracts that can produce such a result without bringing about perverse behavior by creditors or debtors, a situation that traditionally arises with development banking. The analysis considers asymmetric information on two fronts: (i) between the government and commercial banks (credit may be diverted to members outside the government's target group), and (ii) between commercial banks and debtors (the latter may use the credit for activities other than those agreed upon). Taking this into account, it is shown that the contract developed here is superior to other types of policies in that it generates lower interest rates and a greater willingness of banks to cooperate in achieving the government's objective.
Abstract:
For several reasons, governments may want to encourage commercial banks to offer credit to some specific groups. This paper analyzes a contract scheme that may help achieve this objective without inducing financial disintermediation or other adverse behavior. Two sources of information asymmetry are considered: between the government and the bank (credit might be diverted to borrowers not belonging to the targeted group); and, between the bank and the borrower (the latter may divert credit to purposes other than stated). Compared to other policies, the scheme analyzed here is superior since it brings about lower interest rates and higher cooperation from banks to achieve the government's objective
Abstract:
This article presents the doctrine of St. Thomas Aquinas, founded on that of Aristotle, concerning the authentic meaning of practical truth, conceived as different from speculative truth and characterized by its specific end: bringing into existence either a work external to man (poiesis) or a properly human action (praxis). To act rightly, one must direct action in accordance with a right appetite. The moral life is thus not the work of pure knowledge but necessarily requires the participation of the appetites, which, if shaped by prudence, make right action possible.
Abstract:
The doctrine of St. Thomas Aquinas, founded on that of Aristotle, is presented here. It concerns the authentic meaning of practical truth, conceived as different from speculative truth and characterized by its specific end, which consists in bringing into existence either a work external to man (poiesis) or a properly human action (praxis). In both practical domains, reason is directed toward movement enacted by the will. To act rightly, it is necessary to direct action in accordance with an upright appetite, which ultimately means to act prudently. In this way, the moral life is not the work of pure knowledge but necessarily requires the participation of the appetites, which, if molded by prudence, allow one to act rightly.
Abstract:
This paper reports an experiment designed to shed light on an empirical puzzle observed by Dufwenberg and Gneezy (Games and Economic Behavior 30: 163-182, 2000): in a lost wallet game, the size of the outside option forgone by the first mover does not affect the behavior of the second mover. Our conjecture was that the original protocol may not have made the size of the forgone outside option salient to second movers. Therefore, we changed two features of the Dufwenberg and Gneezy protocol: (i) instead of the strategy method we implemented a direct response method (sequential play) for the decision of the second mover; and (ii) we used paper money certificates that were passed between the subjects rather than having subjects write down numbers representing their decisions. We observe that our procedure yields qualitatively the same result as the Dufwenberg and Gneezy experiment, i.e., second movers do not respond to changes in the outside option of the first movers.
Abstract:
In the mid-1980s, Mexico undertook major trade reform, privatization and deregulation. This coincided with a rapid expansion in wages and employment that led to a rise in wage dispersion. This paper examines the role of industry- and occupation-specific effects in explaining the growing dispersion. We find that despite the magnitude and pace of the reforms, industry-specific effects explain little of the rising wage dispersion. In contrast, occupation-specific effects can explain almost half of the growing wage dispersion. Finally, we find that the economy became more skill-intensive and that this effect was larger for the traded sector, because this sector experienced much smaller low-skilled employment growth. We therefore suggest that competition from imports played an important role in the fall of the relative demand for less-skilled workers.
Abstract:
Technology and mobile devices have been successfully integrated into people's everyday activities. Educational institutions around the world are showing increasing interest in creating mobile learning (ML) environments, given the advantages of connectivity, situated learning, individualized learning, social interactivity, portability, affordability and, more broadly, ubiquity. Despite the fast development of ML environments, there is a lack of understanding of the factors that influence ML adoption. This paper proposes a framework for ML adoption that integrates a modified Unified Theory of Acceptance and Use of Technology (UTAUT) with constructs from the Expectation-Confirmation Theory (ECT). Since the goal of education is learning, this research includes individual attributes such as learning styles (LS) and experience to understand how they moderate ML adoption and actual use. For this reason, the framework brings together adoption theory for initial use and the constructs of continuance intention for actual and habitual use as an outcome of learning. The framework is divided into two stages: acceptance and actual use. The purpose of this paper is to test the first stage, ML acceptance, through the structural equation modeling statistical technique. The data were collected from students who are already experiencing ML. Findings demonstrate that the performance and effort expectancy constructs are significant predictors of ML acceptance and that LS and experience exert some influence as moderators of ML adoption. The practical implication for educational services is to incorporate the influence of LS when designing strategies for learning enhanced by mobile devices.
Abstract:
Mobile learning (ML) has been growing recently. The successful adoption of this technology to enhance learning environments could represent a major contribution to education. The purpose of this research is to investigate the effect of learning styles on ML adoption. The research will consist of four stages: conducting an exploratory study, developing a systematic literature review, developing the model, and testing the model. The results could serve as a guideline to encourage ML and to help mobile device manufacturers and developers create new mobile applications for education.
Abstract:
As mobile technology evolves, the possibilities for Mobile Learning (ML) are becoming increasingly attractive. However, the lack of perceived learning value and of institutional infrastructure is hindering ML attempts. The purpose of our study is to understand the use and adoption of mobile technologies by teachers in a business school. We developed a questionnaire based on current research about the use of technology in higher education and used it to interview 14 teachers. Participants provided insights about ML opportunities, such as availability, interactive environments, enhanced communication and inclusion in daily activities. Participants also recognized that current teaching practices should change in mobile environments in order to include relevant information, organize mobile materials, encourage reflection and create interactive activities with timely feedback. Further, they identified technological, institutional, pedagogical and individual obstacles that threaten ML practices.
Abstract:
The evolution of mobile technology and mobile applications has increased the possibilities for mobile learning (ML). However, the lack of perceived learning value and of institutional infrastructure is hindering ML attempts. The purpose of our study is to understand the opportunities and obstacles of mobile technologies as perceived by teachers in higher education. A questionnaire was developed based on current research on technology adoption in higher education and was used to interview 14 teachers. Participants were asked to identify different issues associated with using mobile technology in education. In response, participants provided insights about their perception of ML, such as opportunities to enhance communication with students, availability of resources and immediate feedback. Finally, they identified technological, institutional, pedagogical and individual obstacles that threaten ML practices.
Abstract:
Lawyers in New Spain were granted the privilege of wearing bobbin lace cuffs on their robes, a privilege otherwise reserved to high ecclesiastical authorities, and one still observed today in the College's ceremonial robed sessions. The robe is the garment proper to the legal profession; it is the professional dress of jurists. Alfonso X imposed the "garnacha" without lace cuffs as the professional garment of jurists at the Cortes of Jerez de la Frontera in April 1267. In Spain today, lace cuffs remain reserved to judges and to members of the governing boards of the bar associations. The privilege of using white bobbin lace was granted at the request of the IRCAM, which since its foundation had shared in the preeminences and prerogatives enjoyed by the Bar of Madrid. This confirms the conception of a professional elite that distinguished the profession in the eighteenth century. The granting of the privilege sought to end the confusion between the attire of lawyers and that of other professions; it thus had a practical purpose: to distinguish lawyers from the rest of those who wore robes.
Abstract:
Lawyers in New Spain were granted the right to adorn their robes with bobbin lace cuffs, a privilege otherwise reserved to high ecclesiastical authorities; this tradition has since been observed in the bar's ceremonial robed sessions. The robe is the garment distinctive of the legal profession, the professional dress of jurists. Alfonso X established the Spanish tabard ("garnacha") without lace cuffs as the jurists' professional garment at the Cortes of Jerez de la Frontera in April 1267. In Spain, such cuffs remain reserved today to judges and to members of the governing bodies of the bar associations. The privilege of wearing white lace cuffs was granted at the request of the Royal and Illustrious Bar of Mexico, which since its establishment had enjoyed the preeminences and prerogatives of the Bar of Madrid. These measures confirm the conception of a professional elite that distinguished the occupation in the eighteenth century. The grant was intended to put an end to the prevailing confusion between the attire of lawyers and that of other learned professions; it thus served a practical purpose: to set lawyers apart from the other robed professionals.
Abstract:
Lawyers of New Spain were granted the right to wear white lace cuffs on their robes, a privilege that was reserved to the highest ecclesiastical authorities and that is preserved today in the solemn sessions of the Bar. The robe is a garment proper to the profession of lawyer; it is the professional dress of jurists. Alfonso X imposed the "garnacha sin vuelillos" (a precursor of the robe as it is known today) as the official dress of jurists at the court of Jerez de la Frontera in April 1267. In Spain today, the lace cuffs remain reserved to judges and to members of the governing boards of the bar associations. The right to wear white lace cuffs resulted from a request by the IRCAM, which since its foundation had benefited from the preeminence and prerogatives enjoyed by the Bar of Madrid. These usages confirm the concept of a professional elite that distinguished the legal profession in the eighteenth century. The granting of this privilege was intended to end the confusion between lawyers' attire and that of other professions; it served a practical objective: to distinguish lawyers from the other professionals who wore robes.
Abstract:
Information workers and software developers are exposed to work fragmentation, an interleaving of activities and interruptions during their normal work day. Small-scale observational studies have shown that this can be detrimental to their work. In this paper, we perform a large-scale study of this phenomenon for the particular case of software developers performing software evolution tasks. Our study is based on several thousand interaction traces collected by Mylyn and the Eclipse Usage Data Collector. We observe that work fragmentation is correlated with lower observed productivity at both the macro level (for entire sessions) and the micro level (around markers of work fragmentation); further, longer activity switches seem to strengthen the effect, and different activities seem to be affected differently. These observations provide grounds for subsequent studies investigating the phenomenon of work fragmentation.
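The session-level correlation the abstract reports can be illustrated with a small sketch. The metrics below (activity switches per session as a fragmentation measure, edits per hour as a productivity proxy) and the data are purely hypothetical stand-ins, not the paper's actual measures or pipeline:

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One record per development session (illustrative values only):
# number of activity switches and edits per hour.
sessions = [
    {"switches": 2,  "edits_per_hour": 40},
    {"switches": 5,  "edits_per_hour": 31},
    {"switches": 9,  "edits_per_hour": 22},
    {"switches": 14, "edits_per_hour": 15},
]

r = pearson_r([s["switches"] for s in sessions],
              [s["edits_per_hour"] for s in sessions])
print(f"Pearson r = {r:.2f}")
```

Here more activity switches go with a lower edit rate, giving a strongly negative r, which is the qualitative macro-level pattern the study describes.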
Abstract:
Metal oxide nanoparticles are considered to be good alternatives as fungicides for plant disease control. To date, numerous metal oxide nanoparticles have been produced and evaluated as promising antifungal agents. Consequently, a detailed and critical review on the use of mono-, bi-, and tri-metal oxide nanoparticles for controlling phytopathogenic fungi is presented. Among the studied metal oxide nanoparticles, mono-metal oxide nanoparticles, particularly ZnO nanoparticles followed by CuO nanoparticles, are the most investigated for controlling phytopathogenic fungi. Limited studies have investigated the use of bi- and tri-metal oxide nanoparticles for this purpose; therefore, more studies on these nanoparticles are required. Most of the evaluations have been carried out under in vitro conditions, so more detailed studies under in vivo conditions are needed. Interestingly, biological synthesis has been established as a good alternative for producing metal oxide nanoparticles for controlling phytopathogenic fungi. Although there have been great advances in the use of metal oxide nanoparticles as novel antifungal agents for sustainable agriculture, there are still areas that require further improvement.
Abstract:
The use of metal nanoparticles is considered a good alternative to control phytopathogenic fungi in agriculture. To date, numerous metal nanoparticles (e.g., Ag, Cu, Se, Ni, Mg, and Fe) have been synthesized and used as potential antifungal agents. Therefore, this work presents a critical and detailed review of the use of these nanoparticles to control phytopathogenic fungi. Ag nanoparticles have been the most investigated due to their good antifungal activities, followed by Cu nanoparticles. Other metal nanoparticles, such as Se, Ni, Mg, Pd, and Fe, have also been investigated as antifungal agents, showing prominent results. Different synthesis methods have been used to produce these nanoparticles with different shapes and sizes, which have shown outstanding antifungal activities. This review shows the success of the use of metal nanoparticles to control phytopathogenic fungi in agriculture.
Abstract:
The sluggish kinetics of the oxygen reduction reaction (ORR) limits the massive use of polymer exchange membrane fuel cells (PEMFCs). Therefore, electrocatalysts are required to accelerate the ORR kinetics. Usually, Pt/C electrocatalysts are used for the ORR. Nevertheless, the principal disadvantages of using Pt are its high cost, low availability, and limited stability. For these reasons, research into more stable and cost-effective electrocatalysts with high electrocatalytic activity is important to achieve large-scale power generation. To date, there are significant advances and great opportunities in the design of novel electrocatalysts with high electrocatalytic activity and stability for the ORR through the connection of experimental and theoretical explorations. For these reasons, this review revises and analyzes in detail combined theoretical-experimental studies for the design of novel Pt-based metallic electrocatalysts for the ORR.
Abstract:
In this work, the growth behavior, structures, and energetic and magnetic properties of ConPdn and NinPdn (n = 1-10) clusters were investigated employing auxiliary density functional theory (ADFT). Initial geometries for successive optimization were extracted from Born-Oppenheimer molecular dynamics (BOMD) trajectories. It is demonstrated that as the cluster size increases, the Co and Ni atoms become shrouded by Pd atoms, leading to the initial formation of M@Pd (M = Co and Ni) core-shell structures. The spin multiplicities of the ConPdn and NinPdn (n = 1-10) clusters increase with cluster size. The CoPd clusters exhibit higher spin multiplicities and higher spin magnetic moments than their NiPd counterparts. This study reveals that the spin density distributions are located on the Co and Ni atoms in the respective clusters. As the cluster size increases, both systems tend to donate electrons more easily, and the binding energies per atom grow monotonically.
Abstract:
The design and manufacture of highly efficient nanocatalysts for the oxygen reduction reaction (ORR) is key to achieving the massive use of proton exchange membrane fuel cells. To date, Pt nanocatalysts are widely used for the ORR, but they have several disadvantages, such as high cost, limited activity and partial stability. Therefore, different strategies have been implemented to eliminate or reduce the use of Pt in nanocatalysts for the ORR. Among these, Pt-free metal nanocatalysts have received considerable attention due to their good catalytic activity and somewhat lower cost with respect to Pt. Consequently, there are now outstanding advances in the design of novel Pt-free metal nanocatalysts for the ORR. In this direction, combining experimental findings and theoretical insights is a low-cost methodology, in terms of both computational cost and laboratory resources, for the design of Pt-free metal nanocatalysts for the ORR in acid media. Therefore, coupled experimental and theoretical investigations are revised and discussed in detail in this review article.
Abstract:
Detecting and monitoring air-polluting gases such as carbon monoxide (CO), nitrogen oxides (NOx), and sulfur oxides (SOx) is critical, as these gases are toxic and harm the ecosystem and human health. It is therefore necessary to design high-performance gas sensors for toxic gas detection. In this sense, graphene-based materials are promising for use as toxic gas sensors. In addition to experimental investigations, first-principles methods have enabled graphene-based sensor design to progress by leaps and bounds. This review presents a detailed analysis of graphene-based toxic gas sensors studied with first-principles methods. The modifications made to graphene, such as decoration, defect engineering, and doping, to improve the detection of NOx, SOx, and CO are revised and analyzed. In general, graphene decorated with transition metals, defective graphene, and doped graphene show a higher sensitivity toward these toxic gases than pristine graphene. This review shows the relevance of first-principles studies for the design of novel and efficient toxic gas sensors. The theoretical results obtained to date can greatly help experimental groups design novel and efficient graphene-based toxic gas sensors.
Abstract:
The trends of the catalytic activity toward the oxygen reduction reaction (ORR) from Pd44 nanoclusters to M6@Pd30Pt8 (M = Co, Ni, and Cu) core-shell nanoclusters were investigated using auxiliary density functional theory. The adsorption energies of O and OH were computed as predictors of the catalytic activity toward the ORR, and the following tendency of the electrocatalytic activity was found: Pt44 = M6@Pd30Pt8 > M6@Pd38 > Pd44. In addition, the adsorption of O2 on the Ni6@Pd30Pt8 and Pt44 nanoclusters was investigated, revealing an elongation of the O-O bond length when O2 is adsorbed on these nanoclusters, which suggests that the O2 is activated. Finally, the stabilities of the M6@Pd38 and M6@Pd30Pt8 core-shell nanoclusters were analyzed both in vacuum and in an oxidative environment. From the calculated segregation energies for the bimetallic and trimetallic nanoclusters in vacuum, it is clearly observed that the M atoms prefer to be at the center of the M6@Pd38 and M6@Pd30Pt8 nanoclusters. Nevertheless, the segregation energies of M atoms for the M6@Pd38 nanoclusters in an oxidizing environment tend to decrease compared with their counterparts in vacuum, which suggests that in an oxidative environment the M atoms may tend to segregate to the surface of the M6@Pd38 nanoclusters.
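The two descriptors named in this abstract, adsorption energy and segregation energy, follow standard conventions in cluster electrocatalysis studies. As a sketch (these are the usual textbook definitions, not formulas quoted from the paper itself):

```latex
\begin{aligned}
E_{\mathrm{ads}}(X) &= E_{\mathrm{cluster}+X} - E_{\mathrm{cluster}} - E_{X}, \qquad X \in \{\mathrm{O},\,\mathrm{OH}\} \\
E_{\mathrm{seg}}    &= E_{\text{M on surface}} - E_{\text{M in core}}
\end{aligned}
```

With these conventions, a more negative adsorption energy means stronger binding of the ORR intermediate, and a positive segregation energy means the M atom prefers the core; a decrease of the segregation energy under an oxidizing environment is what signals the possible surface segregation described above.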
Abstract:
The electronic structure of pure Pd and Pd-based core-shell clusters was studied employing auxiliary density functional theory (ADFT). For this investigation, icosahedral clusters with 13 and 55 atoms and octahedral clusters with 19 and 44 atoms were employed to analyze the change in the properties of the Pd and M@Pd core-shell clusters. All properties calculated for the M@Pd clusters are directly compared with those of pure palladium clusters. Spin multiplicities, spin magnetic moments, spin densities, binding energies per atom, segregation energies, and average bond lengths were calculated to understand how they change with the size, composition, and shape of the M@Pd (M = Co, Ni, and Cu) core-shell clusters. The M1@Pd12 and M1@Pd18 (M = Co and Cu) clusters exhibit changes in spin multiplicity and spin magnetic moment with respect to the Pd13 and Pd19 clusters, respectively, whereas the Ni1@Pd12 and Ni1@Pd18 clusters retain the same properties as their pure Pd counterparts. The spin multiplicities and spin magnetic moments of the M6@Pd38 and M13@Pd42 (M = Co, Ni, and Cu) clusters differ greatly from those of their pure Pd counterparts. This study reveals that the Pd-Pd bond lengths are shorter in the M@Pd core-shell clusters than in the pure Pd clusters, and that the binding energy per atom of the M@Pd core-shell clusters is greater than that of the pure Pd clusters. The calculated segregation energies indicate that the 3d atoms prefer to be in the center of the core-shell systems
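The segregation energies reported above are conventionally defined as the total-energy difference between the configuration with the dopant atom at the surface and at the core; a minimal sketch under that common convention (the paper's exact sign convention is an assumption):

```latex
E_{\mathrm{seg}} = E_{\mathrm{M\ at\ surface}} - E_{\mathrm{M\ at\ core}}
```

so that $E_{\mathrm{seg}} > 0$ means the M atom prefers the cluster center, consistent with the vacuum trend reported here.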
Abstract:
In this chapter, the authors present how the use of artificial intelligence (AI) can help to identify and reduce new digital crimes involving hate messages. The appearance of the internet in our lives at the end of the last century brought a great technological advance, providing easier access to a huge volume of information and communication between people. The rise of communication-oriented networks has been such that true digital environments have been created, the so-called social networks, with millions of users all over the planet. This has meant, to a large extent, the modification of our personal relationships and, unfortunately, the appearance of new ways of sending hate messages. The work presented describes a digital tool built for the automatic detection of hate (and non-hate) messages, in Spanish and other non-Anglo-Saxon languages, using AI algorithms trained on Spanish-language data
Resumen:
En 1976 Nagel y Williams presentaron -en una reunión de la Aristotelian Society- dos célebres textos dirigidos a exhibir el desafío que el azar y la fortuna representan para la imputación kantiana de responsabilidad moral. Desde entonces a la fecha en la literatura han proliferado cientos de artículos centrados en analizar este dilema. Dicho debate, no obstante, rara vez es situado al interior del análisis de las implausibles y falsas premisas que dan lugar a él. En este trabajo reconstruyo las coordenadas centrales en las que esta problemática filosófica se origina. Posteriormente muestro que la imputación de responsabilidad ética a un agente no sólo no excluye, sino incluso presupone, lo que llamaré una capacidad "impura" de agencia donde la suerte ocupa un lugar central
Abstract:
In 1976, Nagel and Williams presented, at a meeting of the Aristotelian Society, two famous texts aimed at exposing the challenge that chance and fortune pose for the Kantian imputation of moral responsibility. Since then, hundreds of articles focused on analyzing this dilemma have proliferated in the literature. This debate, however, is rarely situated within an analysis of the implausible and false premises that give rise to it. In this paper I reconstruct the central coordinates in which this philosophical problem originates. I then show that the imputation of ethical responsibility to an agent not only does not exclude, but even presupposes, what I will call an "impure" capacity for agency in which luck occupies a central place
Resumen:
Alrededor de la categoría "populismo" suelen articularse todo tipo de ansiedades políticas. Jan-Werner Müller atina al afirmar que la frivolidad en el uso de este concepto ha terminado por volverlo sinónimo de todo aquello que nos desagrada en gobiernos que gozan de amplio apoyo social. Lo cierto es que “populismo” ha sido utilizado en el debate académico y político como un término valorativo antes que como una categoría analítica. En este texto me propongo mostrar que -de cara a desarrollar una teoría correcta sobre el populismo- debemos comenzar por evitar el lenguaje valorativo para en su lugar emprender una tarea analítica. Probaré que se trata de una tarea urgente, cuyo primer paso será diferenciar entre populismo y autoritarismo democrático
Abstract:
Around the category "populism" all kinds of political anxieties are usually articulated. Jan-Werner Müller is correct in stating that the frivolity in the use of this concept has ended up making it synonymous with everything we dislike in governments that enjoy broad social support. The truth is that "populism" has been used in academic and political debate as an evaluative term rather than as an analytical category. In this text I propose to show that, in order to develop a correct theory of populism, we must begin by avoiding evaluative language and instead undertake an analytical task. I will argue that this is an urgent task, whose first step is to differentiate between populism and democratic authoritarianism
Abstract:
The new Latin American constitutionalism (NLC) is the term that has been coined to refer to certain constitutional processes and constitutional reforms that have taken place relatively recently in Latin America. Constitutional theorists have not been very optimistic regarding the scope and nature of this new constitutionalism. I thoroughly agree with this critical skepticism as well as with the idea that this new phenomenon does not substantively change the organic element of the different constitutions in the region. However, I argue that it is a mistake to focus analysis on this characteristic. My intention is to show that the NLC should be evaluated in the context of its relationship with contemporary neo-constitutional theory
Résumé:
Le nouveau constitutionalisme latino-américain est la locution qui a été inventée pour renvoyer à certains processus et réformes constitutionnels ayant eu lieu dans une époque relativement récente en Amérique Latine. Les théoriciens des constitutions ne se sont pas montrés très optimistes quant à l’étendue et à la nature de ce nouveau constitutionalisme. Je suis tout à fait d’accord avec ce scepticisme critique, ainsi qu’avec l’idée selon laquelle ce nouveau phénomène ne change pas significativement l’élément organique des différentes constitutions en place dans la région. Cependant, je soutiens qu’il est erroné de concentrer l’analyse sur cette caractéristique. Mon intention est de montrer que le nouveau constitutionalisme latino-américain doit être examiné relativement à la théorie néo-constitutionnelle contemporaine
Resumen:
La obra de Marx ha suscitado una añeja polémica entre sus estudiosos. Algunos han mantenido que el lenguaje desarrollado en ella es estrictamente explicativo. Dicho lenguaje expresaría ante todo un saber científico expurgado de todo contenido moral (sobre la estructura del capital, las fuerzas que causan la dinámica social y las leyes que la rigen). En el otro extremo, en cambio, otros han argüido que en Marx hallamos más bien un lenguaje ético orientado a denunciar los crímenes y miserias de una determinada formación social con el fin de oponerle otra. En este artículo defiendo la idea de que en la obra de Marx hay elementos tanto para afirmar una cosa como la otra. Sin embargo, argumento que la actualidad del pensamiento marxista reside esencialmente en los elementos éticos y normativos que configuran la dimensión moral de su planteamiento
Abstract:
Marx's work has given rise to a long-standing controversy among his scholars. Some have maintained that the language developed throughout it is strictly explanatory. Such language would express above all a scientific knowledge expurgated of all moral content (about the structure of capital, the forces that cause social dynamics and the laws that govern it). At the other extreme, others have argued that in Marx we rather find an ethical language oriented towards denouncing the crimes and miseries of a given social formation in order to oppose to it another. In this article I defend the idea that in Marx's work there are elements to affirm one thing as well as the other. Nevertheless, I argue that the present relevance of Marxist thought resides essentially in the ethical and normative elements that configure the moral dimension of his approach
Resumen:
Los procesos democráticos de toma de decisiones (al igual que las restricciones constitucionales a la regla de mayoría) pueden ser evaluados por sus resultados, por su valor intrínseco o por una combinación de ambas cosas. Mostraré que analizar a fondo estas alternativas permite sacar a la luz las debilidades más serias en los modos usuales de justificación del constitucionalismo. La fundamentación teórica de la articulación entre democracia y constitucionalismo ha permanecido atrapada en una trampa que busco romper. Concluiré mostrando la necesidad de rebasar los argumentos epistémicos y contra-epistémicos sugiriendo pautas que hasta ahora creo han sido poco ponderadas en la literatura clásica sobre el tema
Abstract:
Democratic decision-making processes (as well as constitutional limits to majority rule) may be evaluated on the basis of their results, their intrinsic value, or a combination of both. I will show that an in-depth analysis of these alternatives uncovers serious weaknesses in the usual models of justification for constitutionalism. The theoretical basis for describing the relationship between democracy and constitutionalism has remained stuck in a trap that I seek to break. I conclude by showing the need to move beyond epistemic and counter-epistemic arguments, suggesting guidelines that, I believe, have so far received little consideration in the classical literature on the subject
Resumen:
Este ensayo reflexiona sobre la crisis de las instituciones ciudadanas del Estado y de la sociedad civil como consecuencia del proceso de globalización actual. Efecto de este proceso es que los Gobiernos locales se ven cada vez más obligados a orientar su política conforme a los criterios de flujos económicos globales. Con ello, los Estados ven desbordada su capacidad de gestión, con lo que tienden a sacrificar intereses de sectores hasta entonces protegidos por ellos. Este texto se dirige a reflexionar sobre los fenómenos de exclusión, violencia y subalternidad que dicha exclusión genera. Su interés es hacer una exploración crítica de tres categorías analíticas centrales: imperio, imperialismo y multitud, a partir de la importante obra publicada en el año 2000 por Hardt y Negri. Al final, se mostrará su importancia para desvelar diversos fenómenos derivados de esta condición mundial y la violencia que genera, así como la necesidad de analizar el pensamiento de Hardt y Negri a partir de ciertas coordenadas de reflexión latinas
Abstract:
This essay is a reflection on the crisis of the citizen institutions of the State and of civil society as a consequence of the current process of globalization. One effect of this process is that local governments find themselves increasingly obliged to orient their policies according to the criteria of global economic flows. With this, States see their management capacity overwhelmed and thus tend to sacrifice the interests of sectors they had previously protected. This text reflects on the phenomena of exclusion, violence, and subalternity that such exclusion generates. Its interest is to make a critical exploration of three central analytical categories: empire, imperialism, and multitude, starting from the important work published in 2000 by Hardt and Negri. Finally, it shows their importance for unveiling diverse phenomena derived from this worldwide condition and the violence it generates, as well as the need to analyze Hardt and Negri's thought from certain Latin American coordinates of reflection
Resumen:
Me propongo analizar críticamente la idea de abstinencia epistémica desarrollada por un importante grupo de teóricos liberales a partir de los años ochenta del siglo pasado. Para los propósitos del liberalismo político la propuesta de la abstinencia epistémica desempeña un papel crucial. Consiste en garantizar el consenso en torno a las reglas procesales y principios públicos de justicia, exigiendo que la pluralidad de intereses y concepciones sustantivas que coexisten en la sociedad se abstengan de realizar atribuciones de verdad sobre sus propias concepciones morales cuando éstas son debatidas en la esfera pública. Mi argumento es que esta estrategia fracasa toda vez que la abstinencia epistémica no resiste la aplicación de sus propias cláusulas a sí misma
Abstract:
The purpose of this paper is to discuss a thesis of Epistemic Abstinence that was developed by an important group of political theorists starting in the 1980s. The thesis is of central importance to political liberalism. It is meant to secure a consensus on procedural rules and public principles of justice by insisting that the many interests and fundamental conceptions that coexist in society abstain from making claims about the truth of their own moral precepts within the public sphere. I argue that this strategy breaks down because the thesis of Epistemic Abstinence cannot be applied to itself
Abstract:
This article offers an articulation of liberation philosophy, a Latin American form of political and philosophical thought that is largely not followed in European and Anglo-American political circles. Liberation philosophy has posed serious challenges to Jürgen Habermas's and Karl-Otto Apel's discourse ethics. Here I explain what these challenges consist of and argue that Apel's response to Latin American political thought shows that discourse ethics can maintain internal consistency only if it is subsumed under the program of liberation philosophy
Resumen:
El liberalismo, en esencia, consiste en relegar el pluralismo y trasladarlo a la esfera privada para asegurar el consenso en la esfera pública. De este modo, todas las cuestiones controvertidas (por antonomasia, la discusión en torno a la verdad) son eliminadas de la agenda para crear las condiciones de un consenso "racional". En consecuencia, el reino de la política se transforma en un terreno en el cual los individuos, despojados de sus pasiones y creencias más fundamentales, aceptan someterse a acuerdos que consideran (o se les imponen) como neutrales. Es así como niega el liberalismo la dimensión de lo político (esto es, de lo polemos, lo dinámico, lo conflictivo), con el fin de reconducirlo al ámbito de la política (la polis, el lugar de la reconciliación del conflicto). El propósito de este trabajo es analizar y discutir a fondo algunos de los principales argumentos que se han ofrecido con el fin de justificar esta estrategia liberal (básicamente en autores como Rorty o Rawls). Mi conclusión será mostrar cómo es que esta estrategia liberal dista mucho de no ser problemática
Abstract:
The essence of liberalism is the relegation of pluralism to the private sphere in order to ensure consensus in the public sphere. In this way, all controversial issues (most notably, the debate on truth) are removed from the agenda to create the conditions for a "rational" consensus. Accordingly, the realm of politics becomes an arena in which individuals, stripped of their most fundamental beliefs and passions, agree to submit to arrangements which they consider to be (or which are imposed on them as) neutral. Thus, liberalism denies the dimension of the political (i.e., the polemos, the dynamic, the conflictive) and redirects it instead to the realm of politics (the polis, the place of reconciliation of conflict). The aim of this paper is to analyze and discuss in depth some of the main arguments that have been offered to justify this liberal strategy (basically in authors such as Rorty and Rawls). My conclusion will show that this liberal strategy is far from unproblematic
Abstract:
We study the effects of a nongovernmental civic inclusion campaign on the democratic integration of demobilized insurgents. Democratic participation ideally offers insurgents a peaceful channel for political expression and addressing grievances. However, existing work suggests that former combatants' ideological socialization and experiences of violence fuel hard-line commitments that may be contrary to democratic political engagement, threatening the effectiveness of postwar electoral transitions. We use a field experiment with demobilized FARC combatants in Colombia to study how a civic inclusion campaign affects trust in political institutions, democratic political participation, and preferences for strategic moderation versus ideological rigidity. We find the campaign increased trust in democracy and support for political compromise. Effects are driven by the most educated ex-combatants moving from more hard-line positions to ones in line with their peers, and by ex-combatants with the most violent conflict experience similarly moderating their views
Abstract:
Self-interruptions account for a significant portion of task switching in information-centric work contexts. However, most of the research to date has focused on understanding, analyzing and designing for external interruptions. The causes of self-interruptions are not well understood. In this paper we present an analysis of 889 hours of observed task-switching behavior from 36 individuals across three high-technology information work organizations. Our analysis suggests that self-interruption is a function of organizational environment and individual differences, but also of the external interruptions experienced. We find that people in open office environments interrupt themselves at a higher rate. We also find that people are significantly more likely to interrupt themselves to return to solitary work associated with central working spheres, suggesting that self-interruption occurs largely as a function of prospective memory events. The research presented contributes substantially to our understanding of attention and multitasking in context
Abstract:
Law search is fundamental to legal reasoning, and its articulation is an important challenge and open problem in the ongoing efforts to investigate legal reasoning as a formal process. This article formulates a mathematical model that frames the behavioral and cognitive framework of law search as a sequential decision process. The model has two components: first, a model of the legal corpus as a search space and, second, a model of the search process (or search strategy) that is compatible with that environment. The search space has the structure of a "multi-network", an interleaved structure of distinct networks, developed in earlier work. In this article, we develop and formally describe three related models of the search process. We then implement these models on a subset of the corpus of U.S. Supreme Court opinions and assess their performance against two benchmark prediction tasks. The first is to predict the citations in a document from its semantic content. The second is to predict the search results generated by human users. For both benchmarks, all search models outperform a null model, with the learning-based model outperforming the other approaches. Our results indicate that, with additional work and refinement, machine law search may have the potential to achieve human or near-human levels of performance
Abstract:
This work focuses on historical volatility models in which the temporal and spatial dependencies inherent in the mean and variance are "simple". The empirical time series used are trade-by-trade common stock time series from the U.S. and Mexico, and from the Mexican ADRs traded in the U.S. Results from these three data sets provide information on the liquidity of the markets in these two countries
Abstract:
By using Monte Carlo simulation, one can obtain the price of the pure discount bond. In order to do this, the paths of the stochastic variables n and r must be simulated first. To properly sample from the tails of the Normal distribution, so that the expected value of the martingale n converges to one, several sampling procedures are applied that are tailored specifically to emphasize sampling from the tails of a distribution
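As an illustration of the simulation step described above, here is a minimal sketch of Monte Carlo pricing of a pure discount bond. The short-rate dynamics (a Vasicek model), all parameter values, and the function name are assumptions for illustration only; the paper's actual specification of the variables n and r, and its tail-emphasizing sampling schemes, are not reproduced here.

```python
import numpy as np

def mc_bond_price(r0=0.05, kappa=0.5, theta=0.05, sigma=0.01,
                  T=1.0, n_steps=100, n_paths=20000, seed=0):
    """Monte Carlo price of a pure discount bond maturing at T.

    Assumes Vasicek dynamics dr = kappa*(theta - r) dt + sigma dW
    (an illustrative choice, not the model used in the paper).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)  # accumulates the integral of r_t dt per path
    for _ in range(n_steps):
        integral += r * dt
        z = rng.standard_normal(n_paths)  # Euler step; tail-emphasizing
        r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * z  # schemes omitted
    # Bond price is the expected discount factor E[exp(-integral of r dt)]
    return np.exp(-integral).mean()

price = mc_bond_price()
```

With the mean-reverting rate starting at its long-run level of 5%, the one-year price lands close to exp(-0.05); variance-reduction or importance-sampling layers of the kind the abstract alludes to could be added on top of the plain sampler shown here.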
Abstract:
A moral right to health or health care, given reasonable resource constraints, implies a reasonable array of services, as determined by a fair deliberative process. Such a right can be embodied in a constitution where it becomes a legal right with similar entitlements. What is the role of the courts in deciding what these entitlements are? The threat of “judicialization” is that the courts may overreach their ability if they attempt to carry out this task; the promise of judicialization is that the courts can do better than health systems have done at determining such entitlements. We propose a middle ground that requires the health system to develop a fair, deliberative process for determining how to achieve the progressive realization of the same right to health or health care and that also requires the courts to develop the capacity to assess whether the deliberative process in the health system is fair
Abstract:
This article analyses the importance of training as a creator of human capital, which enables a company to obtain sustainable long-term competitive advantages that result in greater profitability. The study is based on the general theoretical framework of the resources and capabilities theory. The study not only analyses the impact of training on performance; it also attempts to analyse the nature of that relationship in greater depth. To this end, an attempt has been made to measure its explanatory capacity from two different perspectives: the universalistic approach and the contingent approach. At the outset, two hypotheses are formulated that attempt to quantify the relationship from a universalistic perspective; two further hypotheses then incorporate the potential moderating effect of strategy into the model, in order to verify whether this improves the explanatory power of the analysis
Abstract:
Purpose – The aim of this paper is to determine whether the effort invested by service companies in employee training has an impact on their economic performance. Design/methodology/approach – The study centres on a labor-intensive sector, where the perception of service quality depends on who renders the service. To overcome the habitual problems of cross-sectional studies, the time effect has been considered by measuring data over a period of nine years, treated as panel data with fixed effects. Findings – The models give clear empirical support to the hypothesis that training activities positively influence company performance. Research limitations/implications – The results contribute empirical evidence about a relationship that, hitherto, has not been satisfactorily demonstrated. However, there may be some limitations related to the use of a training indicator based on effort rather than on results obtained, to the low representation of what happens in smaller companies that lack structured training policies, and to the lack of differentiation between generic and more specific training. Practical implications – The results can help increase managers' awareness that training should be treated as an investment rather than as an expense. Originality/value – The main contributions can be summarized in three points: a training measure based on three dimensions has been used, which is presumed to be an improvement on the more frequent method of measuring this variable; a consistent methodology was applied that had not previously been used in the analysis of this relationship; and clear empirical evidence has been obtained concerning a relationship that is frequently asserted with theoretical arguments but needs more empirical support
Abstract:
In a simple public good economy, we propose a natural bargaining procedure, the equilibria of which converge to Lindahl allocations as the cost of bargaining vanishes. The procedure splits the decision over the allocation in a decision about personalized prices and a decision about output levels for the public good. Since this procedure does not assume price-taking behavior, it provides a strategic foundation for the personalized taxes inherent in the Lindahl solution to the public goods problem
Abstract:
Retail petroleum markets in Mexico are on the cusp of a historic deregulation. For decades, all 11,000 gasoline stations nationwide have carried the brand of the state-owned petroleum company Pemex and sold Pemex gasoline at federally regulated retail prices. This industry structure is changing, however, as part of Mexico's broader energy reforms aimed at increasing private investment. Since April 2016, independent companies can import, transport, store, distribute, and sell gasoline and diesel. In this paper, we provide an economic perspective on Mexico's nascent deregulation. Although in many ways the reforms are unprecedented, we argue that past experiences in other markets give important clues about what to expect, as well as about potential pitfalls. Turning Mexico's retail petroleum sector into a competitive market will not be easy, but the deregulation has enormous potential to increase efficiency and, eventually, to reduce prices
Abstract:
The financial crisis has brought the problems of regulatory failure and unbridled counterparty risk to the forefront in financial discussions. In the last decade, central counterparties have been created in order to face those insidious problems. In Mexico, both the stock and the derivatives markets have central counterparties, but the money market has not. This paper addresses the issue of creating a central counterparty for the Mexican money market. Recommendations that must be followed in the design and the management of risk of a central counterparty, given by international regulatory institutions, are presented in this study. Also, two different conceptual designs for a central counterparty, appropriate for the Mexican market, are proposed. Final conclusions support the creation of a new central counterparty for the Mexican money market
Abstract:
If two elections are held at the same day, why do some people choose to vote in one but to abstain in another? We argue that selective abstention is driven by the same factors that determine voter turnout. Our empirical analysis focuses on Sweden where the (aggregate) turnout gap between local and national elections has been about 2-3%. Rich administrative register data reveal that people from higher socio-economic backgrounds, immigrants, women, older individuals, and people who have been less geographically mobile are less likely to selectively abstain
Abstract:
This paper demonstrates that in procedural contexts of free proof, proof sentences of judicial decisions (i.e. sentences of the kind "it is proven that p") have normative illocutionary force. On the one hand, in that context, "it is proven that p" expresses a value judgment of the judge. On the other hand, it is shown that "it is proven that p" is, in that context, a practical reason aiming to justify an action of the decision-maker: the acceptance of the factual statement as a premise of the judicial decision
Resumen:
Algunas versiones del realismo jurídico pretenden compatibilizar la pretensión de que el derecho es un conjunto de normas con un fuerte compromiso con el empirismo. De conformidad con este último, el derecho no está constituido por entidades abstractas de ningún tipo sino por hechos empíricamente constatables. En vistas a llevar a cabo esta compatibilización, en varios trabajos Riccardo Guastini ha defendido una concepción de las proposiciones normativas, i.e. aserciones existenciales sobre normas jurídicas, como enunciados teóricos acerca del derecho vigente, necesariamente referentes a ciertos hechos. Se concibe así al derecho vigente como el conjunto de textos que son resultado de interpretaciones estables, consolidadas y dominantes que los jueces han llevado a cabo en sus decisiones en el ordenamiento de que se trate. Partiendo de esta versión del realismo jurídico, este trabajo procura sembrar algunas dudas. Primero, sobre este modo de concebir a las proposiciones normativas. Segundo, sobre el modo en que, en consecuencia, queda configurada la teoría del derecho. Tercero, y más en general, sobre la pretensión de compatibilizar la visión del derecho como conjunto de normas con la tesis empirista
Abstract:
Some versions of legal realism seek to reconcile the claim that law is a set of rules with a commitment to empiricism. According to the latter, law is not constituted by abstract entities of any kind, but by facts instead. Embracing this orientation, Riccardo Guastini has defended a conception of normative propositions, i.e. existential assertions about legal norms, as necessarily referring to certain facts. Specifically, law is conceived as a set of texts that are the result of stable, consolidated and dominant interpretations that judges have carried out in their decisions. Starting from this version of legal realism, this work raises some doubts. First, about this way of conceiving normative propositions. Second, about the way in which, as a consequence, legal theory is understood. Third, and more generally, about the claim to reconcile the view of law as a set of rules with the empiricist thesis
Abstract:
The paper applies a factor model to the study of risk sharing among U.S. states. The factor model makes it possible to disentangle movements in output and consumption due to national, regional, or state-specific business cycles from those due to measurement error. The results of the paper suggest that some findings of the previous literature which indicate a substantial amount of inter-state risk sharing may be due to the presence of measurement error in output. When measurement error is properly taken into account, the evidence points towards a lack of inter-state smoothing
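The decomposition described in this abstract can be written schematically as follows; the notation is an assumption for illustration, not the paper's own:

```latex
\Delta y_{it} = \lambda_i F_t + \gamma_i R_{r(i),t} + f_{it} + \varepsilon_{it}
```

where $F_t$ is the national business-cycle factor, $R_{r(i),t}$ the regional factor for state $i$'s region, $f_{it}$ the state-specific cycle, and $\varepsilon_{it}$ measurement error; an analogous equation holds for consumption growth, and the degree of inter-state risk sharing is then assessed from the loadings on the common factors rather than from raw output-consumption comovement contaminated by $\varepsilon_{it}$.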
Abstract:
Motivated by the dollarization debate in Mexico, we estimate an identified vector autoregression for the Mexican economy, using monthly data from 1976 to 1997, taking into account the changes in the monetary policy regime which occurred during this period. We find that (i) exogenous shocks to monetary policy have had no impact on output and prices; (ii) most of the shocks originated in the foreign sector; (iii) disturbances originating in the U.S. economy have been a more important source of fluctuations for Mexico than shocks to oil prices. We also study the endogenous response of domestic monetary policy by means of a counterfactual experiment. The results indicate that the response of monetary policy to foreign shocks played an important part in the 1994 crisis
Abstract:
The conference on "Global Monetary Integration" addressed a number of questions related to the adoption of the US dollar as legal tender in emerging-market economies. The goal of the conference was to foster the policy debate on dollarization, not to resolve it, and on that score it succeeded
Abstract:
Mexican manufacturing job loss induced by competition with China increases cocaine trafficking and violence, particularly in municipalities with transnational criminal organizations. When it becomes more lucrative to traffic drugs because changes in local labor markets lower the opportunity cost of criminal employment, criminal organizations plausibly fight to gain control. The evidence supports a Becker-style model in which the elasticity between legitimate and criminal employment is particularly high where criminal organizations lower illicit job search costs, where the drug trade implies higher pecuniary returns to violent crime, and where unemployment disproportionately affects low-skilled men
Abstract:
The paper investigates how the relative contribution of external factors to stock price movements varies with the degree of financial development. It is found that financial development makes stock markets more susceptible to external influences (both financial and macroeconomic). Interestingly, this effect is present even after having accounted for capital controls and international trade effects
Abstract:
We consider the spatial circular restricted three-body problem, concerning the motion of an infinitesimal body under the gravity of the Sun and the Earth. This can be described by a 3-degree-of-freedom Hamiltonian system. We fix an energy level close to that of the collinear libration point L1, located between the Sun and the Earth. Near L1 there exists a normally hyperbolic invariant manifold, diffeomorphic to a 3-sphere. For an orbit confined to this 3-sphere, the amplitude of the motion relative to the ecliptic (the plane of the orbits of the Sun and the Earth) can vary only slightly. We show that we can obtain new orbits whose amplitude of motion relative to the ecliptic changes significantly, by following orbits of the flow restricted to the 3-sphere alternately with homoclinic orbits that turn around the Earth. We provide an abstract theorem for the existence of such ‘diffusing’ orbits, and numerical evidence that the premises of the theorem are satisfied in the three-body problem considered here. We provide an explicit construction of diffusing orbits. The geometric mechanism underlying this construction is reminiscent of the Arnold diffusion problem for Hamiltonian systems. Our argument, however, does not involve transition chains of tori as in the classical example of Arnold. We exploit mostly the ‘outer dynamics’ along homoclinic orbits, and use very little information on the ‘inner dynamics’ restricted to the 3-sphere. As a possible application to astrodynamics, diffusing orbits as above can be used to design low-cost maneuvers to change the inclination of the orbit of a satellite near L1 from a nearly planar orbit to one tilted with respect to the ecliptic. We explore different energy levels, and estimate the largest orbital inclination that can be achieved through our construction
Abstract:
Rapidly exploring Random Trees (RRTs) are effective for a wide range of applications ranging from kinodynamic planning to motion planning under uncertainty. However, RRTs are not as efficient when exploring heterogeneous environments and do not adapt to the space. For example, in difficult areas an expensive RRT growth method might be appropriate, while in open areas inexpensive growth methods should be chosen. In this paper, we present a novel algorithm, Adaptive RRT, that adapts RRT growth to the current exploration area using a two level growth selection mechanism. At the first level, we select groups of expansion methods according to the visibility of the node being expanded. Second, we use a cost-sensitive learning approach to select a sampler from the group of expansion methods chosen. Also, we propose a novel definition of visibility for RRT nodes which can be computed in an online manner and used by Adaptive RRT to select an appropriate expansion method. We present the algorithm and experimental analysis on a broad range of problems showing not only its adaptability, but efficiency gains achieved by adapting exploration methods appropriately
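The two-level selection mechanism described in this abstract can be sketched in Python; the visibility threshold, group names, and expansion-method names below are illustrative assumptions, not the paper's actual components:

```python
class AdaptiveSelector:
    """Sketch of Adaptive RRT's two-level growth selection: level 1 picks a
    group of expansion methods from the node's visibility, level 2 picks a
    method within that group by a cost-sensitive score."""

    def __init__(self, groups, threshold=0.5):
        self.groups = groups          # e.g. {'open': [...], 'cluttered': [...]}
        self.threshold = threshold
        self.stats = {m: {'reward': 1.0, 'cost': 1.0}
                      for methods in groups.values() for m in methods}

    @staticmethod
    def visibility(node):
        # online visibility estimate: fraction of successful extensions
        return node['successes'] / max(1, node['attempts'])

    def select(self, node):
        # level 1: choose the group by the visibility of the node being expanded
        group = 'open' if self.visibility(node) >= self.threshold else 'cluttered'
        # level 2: expected reward per unit planning cost within the group
        return max(self.groups[group],
                   key=lambda m: self.stats[m]['reward'] / self.stats[m]['cost'])

    def update(self, method, reward, cost):
        # exponentially smoothed feedback after attempting an expansion
        s = self.stats[method]
        s['reward'] = 0.9 * s['reward'] + 0.1 * reward
        s['cost'] = 0.9 * s['cost'] + 0.1 * cost


sel = AdaptiveSelector({'open': ['straight_line'],
                        'cluttered': ['obstacle_based', 'medial_axis']})
print(sel.select({'successes': 9, 'attempts': 10}))   # 'straight_line'
print(sel.select({'successes': 1, 'attempts': 10}))   # a cluttered-space method
```

In a full planner, `update` would be called after each expansion attempt so that cheap methods that keep succeeding come to dominate their group.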
Abstract:
The Covid-19 pandemic has deepened existing gender inequalities. In particular, it has dealt a significant blow to women entrepreneurs, as it has magnified the pre-pandemic disadvantages women have faced in the economic, social, financial and regulatory ecosystems they operate in, particularly due to the nature and size of their businesses. The article outlines three main reasons that explain why women entrepreneurs have been disproportionately impacted during this health pandemic. It then explores how trade agreements can help women overcome the barriers that impede their entrepreneurial potential and help their businesses withstand the pandemic-inflicted market disruptions
Abstract:
Universal access to safe drinking water and sanitation facilities is an essential human right, recognised in the Sustainable Development Goals as crucial for preventing disease and improving human wellbeing. Comprehensive, high-resolution estimates are important to inform progress towards achieving this goal. We aimed to produce high resolution geospatial estimates of access to drinking water and sanitation facilities
Abstract:
This paper proposes a strategy to estimate the community structure for a network accounting for the empirically established fact that communities and links are formed based on homophily. It presents a maximum likelihood approach to rank community structures where the set of possible community structures depends on the set of salient characteristics and the probability of a link between two nodes varies according to the characteristics of the two nodes. This approach has good large sample properties, which lead to a practical algorithm for the estimation. To exemplify the approach it is applied to data collected from four village clusters in Ghana
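The ranking idea in this abstract can be illustrated with a minimal sketch, assuming the simplest homophily model: a link forms with probability p_in inside a community and p_out across communities. The probabilities, nodes, and candidate partitions are all illustrative, not the paper's specification:

```python
import math
from itertools import combinations

def log_likelihood(nodes, edges, partition, p_in=0.7, p_out=0.1):
    """Log-likelihood of the observed links under a simple homophily model:
    links are more probable within a community than across communities."""
    community = {v: k for k, block in enumerate(partition) for v in block}
    edge_set = {frozenset(e) for e in edges}
    ll = 0.0
    for u, v in combinations(nodes, 2):
        p = p_in if community[u] == community[v] else p_out
        ll += math.log(p if frozenset((u, v)) in edge_set else 1 - p)
    return ll

nodes = ['a', 'b', 'c', 'd']
edges = [('a', 'b'), ('c', 'd')]
candidates = [[{'a', 'b'}, {'c', 'd'}],   # keeps both links inside communities
              [{'a', 'c'}, {'b', 'd'}]]   # cuts both links
best = max(candidates, key=lambda P: log_likelihood(nodes, edges, P))
print(best)   # the homophilous split wins
```

The paper's approach additionally lets the link probabilities vary with the salient characteristics of each node pair; here that dependence is collapsed into the two constants for brevity.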
Abstract:
This paper examines the role of identity in the fragmentation of networks by incorporating the choice of commitment to identity characteristics into a non-cooperative network formation game. The Nash network features divisions based on identity, with layers within these divisions. Using the refinement of strictness I find structures where stars of highly committed players are linked together by less committed players
Abstract:
This paper analyzes the effect of the transfer of information by an informed strategic trader (the owner) to another strategic player (the buyer). I show that while the owner will never fully divulge his information, he may transfer a noisy signal of it to the buyer. With such a transfer, the owner loses some of his informational superiority yet increases his trading profit. I also show that if the transfer can be made to more than one buyer, then the owner’s profit is increasing in the number of buyers to whom the transfer is made
Abstract:
Much of what we know about the alignment of voters with parties comes from mass surveys of the electorate in the postwar period or from aggregate electoral data. Using individual elector-level panel data from nineteenth-century United Kingdom poll books, we reassess the development of a party centered electorate. We show that (a) the electorate was party-centered by the time of the extension of the franchise in 1867, (b) a decline in candidate-centered voting is largely attributable to changes in the behavior of the working class, and (c) the enfranchised working class aligned with the Liberal left. This early alignment of the working class with the left cannot entirely be explained by a decrease in vote buying. The evidence suggests instead that the alignment was based on the programmatic appeal of the Liberals. We argue that these facts can plausibly explain the subsequent development of the party system
Abstract:
This article offers an analytical critique of the position of Basu, Haas, and Moraitis, who, by extending the conventional linear system for the simultaneous determination of value, argue that in Marx's economic theory the intensification of work generates absolute, not relative, surplus value. This position is also contrasted with Marx's original theory to verify its incompatibility. As an alternative, seeking to rectify the role of labor intensification as a generator of relative surplus value, this work incorporates labor intensity into the Temporal Single System Interpretation (TSSI), showing its full compatibility with Marx's original theory
Abstract:
In today's changing global economic environment, public accountants play a significant role in decision-making within organizations, so it is imperative to analyze economic and financial perspectives in light of recent dynamics and the challenges facing the world economy, including global economic volatility and regulatory change
Abstract:
In recent years, sustainability has begun to appear on companies' agendas, with the aim of avoiding harm to nature and generating comprehensive change not only in environmental matters but also in social, economic, and cultural ones. Hence the importance of higher education institutions incorporating it into undergraduate curricula, including Accounting, so that these programs align with the United Nations Sustainable Development Goals
Abstract:
This article was meant to be about poetry and International Relations (IR). It ended up being about trans/feminist and cuir art and critique with love and care among people of color. This is what praxis does to academic thinking; it disrupts the methods as much as it troubles the aesthetics
Resumen:
Este artículo analiza las razones por las que el tribunal electoral confirma o revoca las multas que impone el IFE a los partidos políticos mexicanos, como resultado de la revisión a sus ingresos y gastos. Se confirman parcialmente las expectativas de la literatura sobre política judicial, la cual predice que los tribunales especializados, como el electoral en México, tienen más probabilidades de revocar las decisiones de los organismos especializados que revisan por razones estratégicas. Al analizar 1671 multas impugnadas entre 1997 y 2010, se concluye que aunque los magistrados confirman tres de cada cuatro multas, cuando revocan decisiones del IFE se trata de temas visibles como gastos de campaña o cuando las élites políticamente relevantes son las que impugnan
Abstract:
I analyze the main determinants of why the electoral tribunal upholds or overturns fines imposed by the IFE on Mexico's political parties, as revealed by audits of political spending. I find evidence that partially supports the hypotheses developed by the judicial politics literature, which states that specialized courts, such as the electoral tribunal, are more likely to overturn decisions of a specialized agency for strategic reasons. By analyzing 1671 fines challenged between 1996 and 2010, I conclude that although magistrates affirm three out of four fines, they overturn the IFE's decisions when there is a salient issue, such as campaign spending, or when relevant political elites challenge the fines imposed
Abstract:
Exploring the relationship between religion, evasion, and involvement in Latin America, we test the evasion hypothesis, which predicts that time devoted to the Church reduces time devoted to politics. We analyze the 24 countries surveyed by the 2010 AmericasBarometer, studying attendance at religious services and at church groups, the importance of religion, trust in churches, and the subnational presence of the Catholic Church using data from the 2005 Annuario Pontificio. We find evidence in favor of the evasion hypothesis among those who attend worship services and where the Catholic Church has a greater presence, while involvement increases among those who attend church groups
Abstract:
This paper describes the integration of an artificial vision system into a flexible production cell. The production cell consists of a material storage box with an artificial vision system (AVS) and a 5-DOF robot of type Scorbot ER 4. The camera system detects the geometry of the raw material; this information is sent to the robot, which starts moving to pick up the material for further processing. The Cartesian coordinates are processed so that the robot joints can be positioned correctly. The described system is part of the ongoing development of a smart factory for research and educational purposes
Abstract:
The proliferation of wireless sensor networks is one of the main hardware enablers of the Internet of Things. As sensor nodes are deployed in a wide variety of indoor and outdoor environments, they are generally battery-powered devices. In fact, power provisioning is one of the main challenges faced by engineers when deploying IoT-based applications. This paper develops a cross-layer architecture, integrating smart, power-aware protocols with a low-cost, high-efficiency power management module, as the basis of long-lasting, self-powered WSNs. The main physical components of the proposed architecture are a wireless node comprising a set of small solar cells responsible for harvesting energy and an ultracapacitor as the storage device. Energy consumption is reduced significantly by varying the sleep/wake duty cycle of the radio module. For environments with only a few hours of sunlight per day, we show the feasibility of ensuring long-lasting operation by adapting the duty-cycle scheme to the energy stored in the ultracapacitor. Our experiments prove the feasibility of long-endurance outdoor operation with a low-complexity power management unit. This is an important advance towards the development of novel IoT-based applications
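The energy-driven duty-cycle adaptation this abstract describes can be sketched with a simple energy balance; the capacitance, voltage limits, power figures, and duty-cycle range below are illustrative assumptions, not the paper's measured values:

```python
def stored_energy(c_farads, v_now, v_min):
    """Usable ultracapacitor energy between the current voltage and the
    minimum operating voltage: E = 1/2 * C * (V^2 - Vmin^2)."""
    return 0.5 * c_farads * (v_now ** 2 - v_min ** 2)

def duty_cycle(e_joules, horizon_s, p_active, p_sleep, d_min=0.01, d_max=0.5):
    """Largest wake fraction d the stored energy can sustain over the horizon,
    clamped to a feasible range. Solves the energy balance
    horizon * (d * p_active + (1 - d) * p_sleep) <= e_joules for d."""
    budget = e_joules / horizon_s                  # affordable average power
    d = (budget - p_sleep) / (p_active - p_sleep)
    return max(d_min, min(d_max, d))

# illustrative figures: 50 F ultracapacitor, 60 mW active radio, 50 uW sleep
e = stored_energy(c_farads=50.0, v_now=2.3, v_min=1.0)
d = duty_cycle(e, horizon_s=12 * 3600, p_active=60e-3, p_sleep=50e-6)
print(f"usable energy {e:.1f} J, duty cycle {d:.3f}")
```

Recomputing the duty cycle periodically from the measured capacitor voltage is what lets the node stretch its stored charge through the hours without sunlight.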
Abstract:
In this article we explore the relationship between 19 of the most common anomalies reported for the US market and the cross-section of Mexican stock returns. We find that 1-month stock returns in Mexico are robustly predicted only by 3 of the 19 anomalies: momentum, idiosyncratic volatility, and the lottery effect. Momentum has a positive relation with future 1-month returns, while idiosyncratic volatility and the lottery effect have a negative relation. For longer horizons of 3 and 6 months, only the 3 most important factors in the US market predict returns: size, book-to-market, and momentum
Abstract:
Tebuconazole (TBZ) nanoemulsions (NEs) were formulated using a low energy method. TBZ composition directly affected the drop size and surface tension of the NE. Water fraction and the organic-to-surfactant-ratio (RO/S) were evaluated in the range of 1-90 and 1-10 wt %, respectively. The study was carried out with an organic phase (OP) consisting of an acetone/glycerol mixture containing TBZ at a concentration of 5.4 wt % and Tween 80 (TW80) as a nonionic and Agnique BL1754 (AG54) as a mixture of nonionic and anionic surfactants. The process involved a large dilution of a bicontinuous microemulsion (ME) into an aqueous phase (AP). Pseudo-ternary phase diagrams of the OP//TW80//AP and OP//AG54//AP systems at T = 25 °C were determined to map ME regions; these were in the range of 0.49-0.90, 0.01-0.23, and 0.07-0.49 of OP, AP, and surfactant, respectively. Optical microscope images helped confirm ME formation and system viscosity was measured in the range of 25-147 cP. NEs with drop sizes about 9 nm and 250 nm were achieved with TW80 and AG54, respectively. An innovative low-energy method was used to develop nanopesticide TBZ formulations based on nanoemulsion (NE) technology. The surface tension of the studied systems can be lowered 50% more than that of pure water. This study's proposed low-energy NE formulations may prove useful in sustainable agriculture
Resumen:
Objetivos: Nuestro objetivo fue evaluar el rendimiento de un modelo electrocardiográfico basado en IA capaz de detectar ACOMI (Acute Coronary Occlusion Myocardial Infarction) en pacientes con SCA. Métodos: Este fue un estudio prospectivo, observacional y longitudinal, de un solo centro que incluyó a pacientes con diagnóstico inicial de SCA (tanto STEMI como NSTEMI). Para entrenar el modelo de deep learning en el reconocimiento de ACOMI, se realizó una digitalización manual de los ECG de los pacientes utilizando cámaras de teléfonos inteligentes de diversas calidades. Nos basamos en el uso de Redes Neuronales Convolucionales (CNN) como modelos de inteligencia artificial para la clasificación de los ejemplos de ECG. Los ECG fueron evaluados de forma independiente por dos cardiólogos expertos, quienes desconocían los resultados clínicos; a cada uno se le pidió determinar a) si el paciente presentaba un STEMI, según criterios universales, o b) si no se cumplían los criterios de STEMI, identificar cualquier otro hallazgo en el ECG que sugiriera ACOMI. ACOMI se definió por la presencia de cualquiera de los siguientes tres hallazgos en la angiografía coronaria: a) oclusión total trombótica, b) trombo grado TIMI 2 o superior + flujo grado TIMI 1 o menor, o c) la presencia de una lesión suboclusión (> 95% de estenosis angiográfica) con flujo grado TIMI < 3. Los pacientes se clasificaron en cuatro grupos: STEMI + ACOMI, NSTEMI + ACOMI, STEMI + no ACOMI y NSTEMI + no ACOMI
Abstract:
Objectives: We aimed to assess the performance of an artificial intelligence-electrocardiogram (AI-ECG)-based model capable of detecting acute coronary occlusion myocardial infarction (ACOMI) in the setting of patients with acute coronary syndrome (ACS). Methods: This was a prospective, observational, longitudinal, and single-center study including patients with the initial diagnosis of ACS (both ST-elevation acute myocardial infarction [STEMI] & non-ST-segment elevation myocardial infarction [NSTEMI]). To train the deep learning model in recognizing ACOMI, manual digitization of a patient's ECG was conducted using smartphone cameras of varying quality. We relied on the use of convolutional neural networks as the AI models for the classification of ECG examples. ECGs were also independently evaluated by two expert cardiologists blinded to clinical outcomes; each was asked to determine (a) whether the patient had a STEMI, based on universal criteria or (b) if STEMI criteria were not met, to identify any other ECG finding suggestive of ACOMI. ACOMI was defined by coronary angiography findings meeting any of the following three criteria: (a) total thrombotic occlusion, (b) TIMI thrombus grade 2 or higher + TIMI grade flow 1 or less, or (c) the presence of a subocclusive lesion (> 95% angiographic stenosis) with TIMI grade flow < 3. Patients were classified into four groups: STEMI + ACOMI, NSTEMI + ACOMI, STEMI + non-ACOMI, and NSTEMI + non-ACOMI
Resumen:
El artículo presenta un simulador de vuelo ejecutivo (SVE), como parte de un entorno de aprendizaje, diseñado para ser utilizado por dueños o administradores de parques o reservas cinegéticas, grupos conservacionistas o diseñadores de políticas ecológicas gubernamentales, con el objetivo de evaluar diversas estrategias y de proveer experiencia virtual en la planeación estratégica sustentable y rentable de dichos parques o reservas de turismo cinegético. El SVE está basado en un modelo de dinámica de sistemas que evalúa el riesgo de agotamiento de la población, sus beneficios económicos potenciales y la generación potencial de impuestos en un parque virtual. Se diseñó el SVE con el objetivo de evaluar los impactos de diversas políticas desde la libre cacería hasta políticas altamente restrictivas como cuotas de caza, esquemas de impuestos y precios. La estructura del modelo está basada en el fenómeno económico denominado “tragedia de los comunes”, el cual ocurre cuando los individuos, actuando independientemente unos de otros, explotan indiscriminadamente un recurso de propiedad común, buscando obtener beneficios de corto plazo, mientras lo agotan para su uso en el largo plazo. La utilización del SVE muestra que sí es posible la sustentabilidad y la rentabilidad en una reserva de turismo cinegético, aplicando combinaciones de estrategias o políticas racionales a nivel sistema
Abstract:
This paper presents a management flight simulator (MFS) as part of a learning environment, designed to be used by wildlife hunting park owners or managers, conservationists, and government environmental policy makers, with the aim of providing strategy assessment and virtual experience in the sustainable and profitable strategic planning of hunting parks or reserves. The MFS is based on a system dynamics model that assesses the population's risk of depletion, economic benefits, and tax collection in a virtual wildlife park. The MFS was designed to evaluate the impacts of different policies, from free shooting to highly restrictive shooting quotas, tax schemes, and price policies. The model structure is based on the "tragedy of the commons", an economic phenomenon occurring when individuals, acting independently of one another, overuse a common-property resource to obtain short-term benefits while depleting it for long-term use. The use of the MFS shows that sustainability and profitability are possible in a wildlife shooting reserve by applying a system-level combination of rational policies
Resumen:
Existen crecientes preocupaciones en México respecto a las emisiones de CO2, debido a la utilización de combustibles fósiles en la generación de electricidad. Recientemente se han autorizado varias leyes con la finalidad de incrementar la participación de combustibles no fósiles en la mezcla energética. A pesar de que se han establecido algunos objetivos, estos serán difíciles de lograr si las inversiones continúan siendo dirigidas principalmente a las tecnologías fósiles. Este artículo presenta un modelo de apoyo a la toma de decisiones, basado en el enfoque de dinámica de sistemas, como un método alternativo a las técnicas de modelaje tradicionales. El modelo es utilizado para identificar los requerimientos futuros de capacidad de generación, así como para evaluarlos en diversos escenarios simulados
Abstract:
There are increasing concerns in México regarding CO2 emissions due to fossil fuel based electricity generation. Recently, several laws have been passed with the objective of increasing the non-fossil share of the energy portfolio mix. Although several targets have been established, they will be hard to achieve if investments continue to be directed mainly to fossil fuel technologies. This article presents a system dynamics decision support model as an alternative to traditional modelling approaches. The model is used to identify future generation capacity requirements and to evaluate them in several simulated scenarios.
Keywords: non-fossil electricity generation capacity expansion, Mexico, scenario simulation model, system dynamics
Abstract:
A lack of effective understanding of the Resource Based View (RBV) in strategy courses, and the quick-feedback learning style of the new generation of Business Administration students, demand more than a traditional lecture teaching strategy. Based on two educator research questions ("How can my students achieve an effective understanding of the RBV concepts?" and "How can my students experience the immediate financial impacts of their strategic decisions?") and one student question ("How can I develop strategic resources in order to achieve the maximum cash flow?"), an Interactive Learning Environment (ILE) is proposed with the following learning objectives: understand the RBV concepts, identify relationships between strategic resources and financial performance, and iteratively experience the financial impacts of several resource development strategies. The proposed ILE is tested in a laboratory experiment conducted with graduate and undergraduate students to evaluate differences in key performance measures due to the investment strategy profiles of the two groups. The results suggest that graduate students were more aggressive, getting worse results at the beginning, but in the end they achieved better results by adopting a somewhat less aggressive strategy and assigning more resources to productivity versus capacity than undergraduate students did
Abstract:
We present the results of applying a planning model that allows the construction and evaluation of scenarios for the Mexican manufacturing sector, analyzing the impact of investment in human capital formation and technological development on its future productivity levels. The planning framework is based on evolutionary concepts that explain the behavior of open systems. The framework is operationalized through a model composed of a system of simultaneous equations whose parameters are estimated using statistical regression techniques. Scenarios are constructed and evaluated by setting assumptions and simulating the model out to the year 2000. The model regards investment in knowledge capital, comprising both human aspects (education and training) and technical aspects (technological development, research and development), as one of the fundamental elements influencing the productivity of any transformation system under competition, which is in turn one of the critical elements determining its competitiveness and market share. The construction and evaluation of technological scenarios for the Mexican manufacturing sector reveals the great importance of investment in knowledge capital for its development, providing guidelines for resource allocation policies in the corresponding areas
Resumen:
El artículo explora la crítica de Santiago Ramírez a la clasificación de la analogía de Cayetano. Ramírez argumenta que la división tradicional de la analogía no refleja completamente la complejidad de las nociones de santo Tomás de Aquino. En particular, se introduce la "analogía de atribución intrínseca" como una categoría adicional dentro de la analogía de atribución. A través de un análisis detallado, el texto examina cómo esta forma de analogía mantiene elementos de la proporcionalidad y la atribución extrínseca, pero se distingue por ser una atribución "según el ser" y "según la intención". Se destacan las implicaciones teológicas y filosóficas de esta distinción, especialmente en relación con la predicación del concepto de verdad respecto a Dios, las criaturas y los juicios humanos
Abstract:
The article explores Santiago Ramírez's critique of Cajetan's classification of analogy. Ramírez argues that the traditional division of analogy does not fully reflect the complexity of Thomas Aquinas' notions. Specifically, the "intrinsic attribution analogy" is introduced as an additional category within attribution analogy. Through a detailed analysis, the text examines how this form of analogy retains elements of proportionality and extrinsic attribution yet is distinguished by being an attribution "according to being" and "according to intention." The theological and philosophical implications of this distinction are highlighted, especially regarding the predication of the concept of truth in relation to God, creatures, and human judgments
Abstract:
For the problem of adjudicating conflicting claims, we propose the following method to extend a lower bound rule: (i) for each problem, assign the amounts recommended by the lower bound rule and revise the problem accordingly; (ii) assign the amounts recommended by the lower bound rule to the revised problem. The “recursive extension” of a lower bound rule is obtained by recursive application of this procedure. We show that if a lower bound rule satisfies positivity, then its recursive extension singles out a unique awards rule. We then study the relation between desirable properties satisfied by a lower bound rule and properties satisfied by its recursive extension
Abstract:
In economics the main efficiency criterion is that of Pareto-optimality. For problems of distributing a social endowment a central notion of fairness is no-envy (each agent should receive a bundle at least as good, according to her own preferences, as any other agent's bundle). For most economies there are multiple allocations satisfying these two properties. We provide a procedure, based on the distributional implications of these two properties, which selects a single allocation that is Pareto-optimal and satisfies no-envy in two-agent exchange economies. There is no straightforward generalization of our procedure to more than two agents
Abstract:
For the problem of adjudicating conflicting claims, we consider the requirement that each agent should receive at least 1/n his claim truncated at the amount to divide, where n is the number of claimants (Moreno-Ternero and Villar, 2004a). We identify two families of rules satisfying this bound. We then formulate the requirement that for each problem, the awards vector should be obtainable in two equivalent ways, (i) directly or (ii) in two steps, first assigning to each claimant his lower bound and then applying the rule to the appropriately revised problem. We show that there is only one rule satisfying this requirement. We name it the “ rule”, as it is obtained by a recursion. We then undertake a systematic investigation of the properties of the rule
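The recursion this abstract describes can be sketched directly: assign each claimant 1/n of his claim truncated at the amount to divide, revise the problem, and repeat. This is only a numerical illustration of the procedure, not the closed form of the rule characterized in the paper:

```python
def truncated_bound(claims, endowment):
    """Each claimant's lower bound: 1/n of the claim truncated at the
    amount to divide (the Moreno-Ternero and Villar bound)."""
    n = len(claims)
    return [min(c, endowment) / n for c in claims]

def recursive_rule(claims, endowment, tol=1e-12, max_rounds=10_000):
    """Assign the bounds, revise claims and endowment, and repeat."""
    claims = list(claims)
    awards = [0.0] * len(claims)
    for _ in range(max_rounds):
        if endowment <= tol:
            break
        bounds = truncated_bound(claims, endowment)
        total = sum(bounds)
        if total <= tol:          # claims exhausted before the endowment
            break
        for i, b in enumerate(bounds):
            awards[i] += b
            claims[i] -= b
        endowment -= total
    return awards

# two claimants with claims (100, 300) dividing 200
print(recursive_rule([100.0, 300.0], 200.0))   # [75.0, 125.0]
```

In this example the recursion terminates after two rounds: the first round awards (50, 100), leaving claims (50, 200) and an endowment of 50, and the second round splits the remainder equally.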
Abstract:
This article proposes specification tests for economic models defined through conditional moment restrictions in which the conditioning variables are estimated. There are two main motivations for this situation. First, the conditioning variables may not be directly observable, as in economic models where innovations or latent variables appear as explanatory variables. Second, the set of conditioning variables may be too large to derive powerful tests, so the original conditioning set is replaced by a constructed variable regarded as a good summary of it. We establish the asymptotic properties of the proposed tests, examine their finite sample behavior, and apply them to different econometric contexts. In some cases, the proposed approach leads to relevant tests that generalize well-known specification tests, such as Ramsey's RESET test
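Since the abstract relates the proposed tests to Ramsey's RESET test, a minimal sketch of that classical special case may help fix ideas: after an OLS fit, powers of the fitted values are added as regressors and their joint significance is tested. The data-generating processes below are illustrative assumptions:

```python
import numpy as np

def reset_fstat(y, X, max_power=3):
    """F statistic for adding yhat^2..yhat^max_power to y = X b + e."""
    n, _ = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    yhat = X @ b
    rss_r = np.sum((y - yhat) ** 2)                 # restricted RSS
    Z = np.column_stack([X] + [yhat ** p for p in range(2, max_power + 1)])
    bz = np.linalg.lstsq(Z, y, rcond=None)[0]
    rss_u = np.sum((y - Z @ bz) ** 2)               # unrestricted RSS
    q = max_power - 1                               # number of added terms
    return ((rss_r - rss_u) / q) / (rss_u / (n - Z.shape[1]))

rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
f_lin = reset_fstat(X @ [1.0, 2.0] + rng.normal(size=200), X)         # well specified
f_quad = reset_fstat(1 + 2 * x + 3 * x**2 + rng.normal(size=200), X)  # omitted x^2
print(f_lin, f_quad)   # the misspecified model yields a much larger F
```

The article's contribution is to generalize this type of functional-form check to conditional moment restrictions whose conditioning variables are themselves estimated.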
Abstract:
Despite their theoretical advantages, Integrated Conditional Moment (ICM) specification tests are not commonly employed in the econometrics practice. An important reason is that the employed test statistics are nonpivotal, and so critical values are not readily available. This article proposes an omnibus test in the spirit of the ICM tests of Bierens and Ploberger (1997, Econometrica 65, 1129–1151) where the test statistic is based on the minimized value of a quadratic function of the residuals of time series econometric models. The proposed test falls under the category of overidentification restriction tests started by Sargan (1958, Econometrica 26, 393–415). The corresponding projection interpretation leads us to propose a straightforward wild bootstrap procedure that requires only linear regressions to estimate the critical values irrespective of the model functional form. Hence, contrary to other existing ICM tests, the critical values are easily calculated while the test preserves the admissibility property of ICM tests
Abstract:
In econometrics, models stated as conditional moment restrictions are typically estimated by means of the generalized method of moments (GMM). The GMM estimation procedure can render inconsistent estimates since the number of arbitrarily chosen instruments is finite. In fact, consistency of the GMM estimators relies on additional assumptions that imply unclear restrictions on the data generating process. This article introduces a new, simple and consistent estimation procedure for these models that is directly based on the definition of the conditional moments. The main feature of our procedure is its simplicity, since its implementation does not require the selection of any user-chosen number, and statistical inference is straightforward since the proposed estimator is asymptotically normal. In addition, we suggest an asymptotically efficient estimator constructed by carrying out one Newton-Raphson step in the direction of the efficient GMM estimator
Abstract:
Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution free, even asymptotically. This makes the test unfeasible. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover power and optimality of Bierens' original test. Bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap Nonlinear Least Squares Estimator under local alternatives
Abstract:
In this paper we consider testing that an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect linear and nonlinear alternatives. Given that the asymptotic distributions of the considered test statistics depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments
Abstract:
In this paper we propose an iterative method for solving the inhomogeneous systems of linear equations associated with density fitting. The proposed method is based on a version of the conjugate gradient method that makes use of automatically built quasi-Newton preconditioners. The paper gives a detailed description of a parallel implementation of the new method. The computational performance of the new algorithms is analyzed by benchmark calculations on systems with up to about 35 000 auxiliary functions. Comparisons with the standard, direct approach show no significant differences in the computed solutions
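The solver described above follows the standard preconditioned conjugate gradient (PCG) skeleton. As a rough, hedged illustration of that skeleton — using a simple Jacobi (diagonal) preconditioner rather than the paper's automatically built quasi-Newton preconditioners, and a small random symmetric positive-definite matrix standing in for the density-fitting metric — a minimal sketch:

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    system A x = b; apply_Minv applies an approximate inverse of A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Random SPD system standing in for the density-fitting equations
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(A, b, lambda r: r / np.diag(A))   # simple Jacobi preconditioner
```

A better preconditioner (such as the quasi-Newton construction the paper proposes) changes only `apply_Minv`; the iteration itself is unchanged.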
Abstract:
The problem of robustification of interconnection and damping assignment passivity-based control for underactuated mechanical systems vis-à-vis matched, constant, and unknown disturbances is addressed in the paper. This is achieved by adding an outer-loop controller to the interconnection and damping assignment passivity-based control. Three designs are proposed, with the first one being a simple nonlinear PI, while the second and the third ones are nonlinear PIDs. While all controllers ensure stability of the desired equilibrium in spite of the presence of the disturbances, the inclusion of the derivative term allows us to inject further damping, enlarging the class of systems for which asymptotic stability is ensured. Numerical simulations of the Acrobot system and experimental results on the disk-on-disk system illustrate the performance of the proposed controllers
Abstract:
In this paper we present a dynamic model of marine vehicles in both body-fixed and inertial momentum coordinates using port-Hamiltonian framework. The dynamics in body-fixed coordinates have a particular structure of the mass matrix that allows the application of passivity-based control design developed for robust energy shaping stabilisation of mechanical systems described in terms of generalised coordinates. As an example of application, we follow this methodology to design a passivity-based controller with integral action for fully actuated vehicles in six degrees of freedom that tracks time-varying references and rejects disturbances. We illustrate the performance of this controller in a simulation example of an open-frame unmanned underwater vehicle subject to both constant and time-varying disturbances. We also describe a momentum transformation that allows an alternative model representation of marine craft dynamics that resembles general port-Hamiltonian mechanical systems with a coordinate dependent mass matrix
Abstract:
The problem of robustification of Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) for underactuated mechanical systems vis-à-vis matched, constant, unknown disturbances is addressed in the paper. This is achieved by adding an outer-loop controller to the IDA-PBC. Three designs are proposed, with the first one being a simple nonlinear PI, while the second and the third ones are nonlinear PIDs. While all controllers ensure stability of the desired equilibrium in spite of the presence of the disturbances, the inclusion of the derivative term allows us to inject further damping, enlarging the class of systems for which asymptotic stability is ensured
Abstract:
Control of underactuated mechanical systems via energy shaping is a well-established, robust design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations (PDEs). In this technical note a new, fully constructive, procedure to shape the energy for a class of mechanical systems that obviates the solution of PDEs is proposed. The control law consists of a first stage of partial feedback linearization followed by a simple proportional plus integral controller acting on two new passive outputs. The class of systems for which the procedure is applicable is identified by imposing some (directly verifiable) conditions on the system's inertia matrix and its potential energy function. It is shown that these conditions are satisfied by three benchmark examples
Abstract:
To extend the realm of application of the well-known controller design technique of interconnection and damping assignment passivity-based control (IDA-PBC) of mechanical systems, two modifications to the standard method are presented in this article. First, similarly to Batlle et al. (2009) and Gómez-Estern and van der Schaft (2004), it is proposed to avoid the splitting of the control action into energy-shaping and damping injection terms, but instead to carry them out simultaneously. Second, motivated by Chang (2014), we propose to consider the inclusion of dissipative forces, going beyond the gyroscopic ones used in standard IDA-PBC. The contribution of our work is the proof that the addition of these two elements provides a non-trivial extension to the basic IDA-PBC methodology. It is also shown that several new controllers for mechanical systems, designed invoking other (less systematic) procedures and not satisfying the conditions of standard IDA-PBC, actually belong to this new class of SIDA-PBC
Abstract:
To extend the realm of application of the well-known controller design technique of interconnection and damping assignment passivity-based control (IDA-PBC) of mechanical systems, two modifications to the standard method are presented in this article. First, similarly to [1], [13], it is proposed to avoid the splitting of the control action into energy-shaping and damping injection terms, but instead to carry them out simultaneously. Second, motivated by [2], we propose to consider the inclusion of dissipative forces, going beyond the gyroscopic ones used in standard IDA-PBC. It can be shown that new controllers for mechanical systems that do not satisfy the conditions of standard IDA-PBC actually belong to this new class of SIDA-PBC
Abstract:
As Mexico slouches from economic meltdown to recalcitrant recovery, several questions loom large in the minds of pundits and investors, Mexicans and foreigners alike: Will President Ernesto Zedillo maintain current economic policy, or will he succumb to political pressures and electoral cycles? Will the social fabric unravel, or will it withstand the brunt of 'adjustment fatigue'? And is the predicted demise of the PRI likely, or will the party display its traditional resilience?
Abstract:
Robots with bimanual morphology usually possess higher flexibility, dexterity, and efficiency than those only equipped with a single arm. The dual-arm structure has enabled robots to perform various intricate tasks that are difficult or even impossible to achieve by unimanual manipulation. In this article, we aim to achieve robust bimanual grasping for object transportation. In particular, given that stable contact is the key to the success of the transportation task, our focus lies on stabilizing the contact between the object and the robot end-effectors by employing a contact servoing strategy. To ensure that the contact is stable, the contact wrenches are required to evolve within the so-called friction cones at all times throughout the transportation task. To this end, we propose stabilizing the contact by leveraging a novel contact parameterization model. This parameterization expresses the contact stability manifold with a set of constraint-free exogenous parameters where the mapping is bijective. Notably, such parameterization can guarantee that the contact stability constraints are always satisfied. We also show that many commonly used contact models can be parameterized following a similar principle. Furthermore, to exploit the parameterized contact models in the control law, we devise a contact servoing strategy for the bimanual robotic system such that the force feedback signals from the force/torque sensors are incorporated into the control loop. The effectiveness of the proposed approach is well demonstrated with experiments on several representative bimanual transportation tasks
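The core idea above — expressing contact stability constraints through unconstrained exogenous parameters so they are satisfied by construction — can be illustrated with an elementary (and purely hypothetical, not the paper's) parameterization of a point-contact friction cone: `exp` keeps the normal force positive and `tanh` keeps the tangential magnitude strictly inside the cone, so no constraint ever needs to be checked.

```python
import math

MU = 0.5  # hypothetical friction coefficient

def contact_force(a, r, phi):
    """Map unconstrained parameters (a, r, phi) in R^3 to a 3-D point-contact
    force (fx, fy, fz) that always satisfies the friction-cone constraint
    ||(fx, fy)|| < MU * fz."""
    f_n = math.exp(a)                 # normal component: exp keeps it positive
    rho = MU * f_n * math.tanh(r)     # tangential magnitude: |tanh| < 1 keeps it in the cone
    return rho * math.cos(phi), rho * math.sin(phi), f_n

fx, fy, fz = contact_force(0.3, -1.2, 2.0)
```

A controller optimizing over `(a, r, phi)` can therefore ignore the cone constraint entirely; the paper's parameterization of the full contact stability manifold follows the same spirit but is richer than this sketch.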
Abstract:
Sick pay is a common provision in most labor contracts. This paper employs an experimental gift exchange environment to explore two related questions using both managers and undergraduates as subjects. First, do workers reciprocate generous sick pay with higher effort? Second, do firms benefit from offering sick pay? We find that workers do reciprocate generous sick pay with higher effort; however, firms benefit in terms of profits only if there is competition among firms for workers. Consequently, competition leads to a higher voluntary provision of sick pay relative to a monopsonistic labor market
Abstract:
A kinetic model for the Boltzmann equation is proposed and explored as a practical means to investigate the properties of a dilute granular gas. It is shown that all spatially homogeneous initial distributions approach a universal "homogeneous cooling solution" after a few collisions. The homogeneous cooling solution (HCS) is studied in some detail and the exact solution is compared with known results for the hard sphere Boltzmann equation. It is shown that all qualitative features of the HCS, including the nature of overpopulation at large velocities, are reproduced by the kinetic model. It is also shown that all the transport coefficients are in excellent agreement with those from the Boltzmann equation. Also, the model is specialized to one having a velocity independent collision frequency and the resulting HCS and transport coefficients are compared to known results for the Maxwell model. The potential of the model for the study of more complex spatially inhomogeneous states is discussed
Abstract:
Recent theoretical analyses of the two-time joint-probability density for electric-field dynamics in a strongly coupled plasma have included formal short-time expansions. Here we compare the short-time-expansion results for the associated generating function with molecular-dynamics-simulation results for the special case of fields at a neutral point in a one-component plasma with plasma parameter Γ = 10. The agreement is quite good for times ω_p t ≤ 2, although more general application of the short-time expansion requires some important qualifications
Abstract:
The dynamics of electric fields at a neutral or charged point in a one-component plasma is considered. The equilibrium joint probability density for electric-field values at two different times is defined, and several formally exact limits are described in some detail. The asymptotic short-time behavior for both neutral and charged-point cases is shown to be Gaussian with respect to the field differences, but with a half-width depending on their sum. In the strong-coupling limit, the joint probability density is dominated by weak fields (charged-point case), leading to a Gaussian distribution with time dependence entirely determined from the electric-field time-correlation function. The limit of large fields is shown to be determined by the time-dependent autocorrelation function for the density of ions around the field point; for the special case of fields at a neutral point, this result implies that the joint distribution at large fields is determined entirely by the dynamic structure factor. Finally, the full distribution (all field values and times) is studied in the weak-coupling limit
Abstract:
The equilibrium joint probability density for electric fields at two different times is considered for both neutral and charged points. The behavior of this distribution function is discussed in the Gaussian, short time, and high field limits. An approximate global description is proposed using an independent particle model as an extension of corresponding approximations for the single time field distribution
Abstract:
We develop a theory of media slant as a systematic filtering of political news that reduces multidimensional politics to the one-dimensional space perceived by voters. Economic and political choices are interdependent in our theory: expected electoral results influence economic choices, and economic choices in turn influence voting behaviour. In a two-candidate election, we show that media favouring the front-runner will focus on issues unlikely to deliver a surprise, while media favouring the underdog will gamble for resurrection. We characterize the socially optimal slant and show that it coincides with the one favoured by the underdog under a variety of circumstances. Balanced media, giving each issue equal coverage, may be worse for voters than partisan media
Abstract:
We model voting in juries as a game of incomplete information, allowing jurors to receive a continuum of signals. We characterize the unique symmetric equilibrium of the game, and give a condition under which no asymmetric equilibria exist under unanimity rule. We offer a condition under which unanimity rule exhibits a bias toward convicting the innocent, regardless of the size of the jury, and give an example showing that this bias can be reversed. We prove a "jury theorem" for our general model: As the size of the jury increases, the probability of a mistaken judgment goes to zero for every voting rule except unanimity rule. For unanimity rule, the probability of making a mistake is bounded strictly above zero if and only if there do not exist arbitrarily strong signals of innocence. Our results explain the asymptotic inefficiency of unanimity rule in finite models and establish the possibility of asymptotic efficiency, a property that could emerge only in a continuous model
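The jury theorem described above can be illustrated in the classical sincere-voting, binary-signal benchmark (a deliberate simplification of the paper's continuum-signal equilibrium model): under majority rule, the probability of a mistaken verdict shrinks as the jury grows.

```python
import random

def error_prob(n_jurors, p_correct=0.6, trials=20000):
    """Monte Carlo estimate of the probability that sincere majority voting
    reaches the wrong verdict, given conditionally independent signals each
    matching the true state with probability p_correct."""
    random.seed(0)  # fixed seed for reproducibility
    errors = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_jurors))
        if not correct_votes > n_jurors / 2:   # majority fails to back the true state
            errors += 1
    return errors / trials

e_small, e_large = error_prob(9), error_prob(25)   # larger jury errs less often
```

With signals only slightly informative (p_correct = 0.6), the 25-member jury already errs noticeably less often than the 9-member one, consistent with the error probability vanishing for non-unanimity rules; the paper's strategic-voting analysis of unanimity rule is not captured by this sincere-voting sketch.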
Abstract:
We study diffeomorphisms that have one-parameter families of continuous symmetries. For general maps, in contrast to the symplectic case, existence of a symmetry no longer implies existence of an invariant. Conversely, a map with an invariant need not have a symmetry. We show that when a symmetry flow has a global Poincaré section there are coordinates in which the map takes a reduced, skew-product form, and hence allows for reduction of dimensionality. We show that the reduction of a volume-preserving map is again volume preserving. Finally we sharpen the Noether theorem for symplectic maps. A number of illustrative examples are discussed and the method is compared with traditional reduction techniques
Abstract:
Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require O(10^5) model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only O(10^2) model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter-to-data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate-emulate-sample (CES) approach in a perfect-model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties
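Stage (a) of the CES pipeline, ensemble Kalman inversion, can be sketched in a toy setting: a small linear forward map stands in for the climate model's parameter-to-data map, and the ensemble is pulled toward synthetic noiseless data. This is a hedged illustration of the generic EKI update only — the GP emulation and MCMC sampling stages, and all details of the paper's implementation, are omitted.

```python
import numpy as np

def eki_update(thetas, G, y, gamma):
    """One deterministic ensemble Kalman inversion step.
    thetas: (J, p) parameter ensemble; G: forward map R^p -> R^d;
    y: observed data (d,); gamma: observation-noise covariance (d, d)."""
    gs = np.array([G(t) for t in thetas])          # forward model on each member
    t_anom = thetas - thetas.mean(axis=0)
    g_anom = gs - gs.mean(axis=0)
    J = len(thetas)
    C_tg = t_anom.T @ g_anom / J                   # parameter-output covariance (p, d)
    C_gg = g_anom.T @ g_anom / J                   # output covariance (d, d)
    K = np.linalg.solve(C_gg + gamma, C_tg.T)      # Kalman-type gain (d, p)
    return thetas + (y - gs) @ K                   # pull ensemble toward the data

# Toy linear "model" standing in for the climate model's parameter-to-data map
A = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.], [2., 1.]])
G = lambda theta: A @ theta
true_theta = np.array([2.0, -1.0])
y = G(true_theta)                                  # synthetic, noiseless data
gamma = 0.01 * np.eye(5)

rng = np.random.default_rng(1)
thetas = rng.standard_normal((40, 2))              # initial ensemble
for _ in range(10):
    thetas = eki_update(thetas, G, y, gamma)
```

Because EKI only needs ensembles of forward-model runs (no gradients), each of the O(10^2) climate-model evaluations in the calibration stage can double as a training point for the GP emulator.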
Abstract:
This article elaborates the concepts of techno-colonialism and sub-netizenship to explore the renewal of colonial processes through the digitalization of "democracy." Techno-colonialism is conceived as a frame - adopted consciously and unconsciously - that shapes capitalist social relations and people's political participation. Today, this frame appeals to the idealized netizen, a global, free, equal and networked subject that gains full membership to a political community. Meanwhile, sub-netizenship is the novel political subordination because of race, ethnicity, class, gender, language, temporality, and geography within a global matrix that crosses the analogue-digital dimensions of life. This techno-colonialism/sub-netizenship dynamic manifested in the experience of Marichuy as an indigenous independent precandidate for the Mexican presidential elections of 2018. In a highly unequal and diverse country, aspirants required a tablet or smartphone to collect citizen support via a monolinguistic app only accessible to Google or Facebook users. Our analysis reveals how some individuals are excluded and disenfranchised by digital innovation but still resist a legal system that seeks to homogenize them and render them into legible and marketable data
Abstract:
Despite ongoing interest in deploying information and communication technologies (ICTs) for sustainable development, their use in climate change adaptation remains understudied. Based on the integration of adaptation theory and the existing literature on the use of ICTs in development, we present an analytical model for conceptualizing the contribution of existing ICTs to adaptation, and a framework for evaluating ICT success. We apply the framework to four case studies of ICTs in use for early warning systems and managing extreme events in Latin American and Caribbean countries. We propose that existing ICTs can support adaptation through enabling access to critical information for decision-making, coordinating actors and building social capital. ICTs also allow actors to communicate and disseminate their decision experience, thus enhancing opportunities for collective learning and continual improvements in adaptation processes. In this way, ICTs can both communicate the current and potential impacts of climate change, as well as engage populations in the development of viable adaptation strategies
Abstract:
We examine the welfare properties of surplus maximization by embedding a perfectly discriminating monopoly in an otherwise standard Arrow-Debreu economy. Although we discover an inefficient equilibrium, we validate partial equilibrium intuition by showing: (i) that equilibria are efficient provided that the monopoly goods are costly, and (ii) that a natural monopoly can typically use personalized two-part tariffs in these equilibria. However, we find that Pareto optima are sometimes incompatible with surplus maximization, even when transfer payments are used. We provide insight into the source of this difficulty and give some instructive examples of economies where a second welfare theorem holds.
Abstract:
We ask when firms with increasing returns can cover their costs independently by charging two-part tariffs (TPTs), a condition we call independent viability. To answer, we develop notions of substitutability and complementarity that account for the total value of goods and use them to find the maximum extractable surplus. We then show that independent viability is a sufficient condition for existence of a general equilibrium in which regulated natural monopolies use TPTs. Independent viability also guarantees efficiency when the increasing returns arise solely from fixed costs. For arbitrary technologies, it ensures that a second welfare theorem holds
Abstract:
We report the findings from a study that explores candidate participation in a context where citizens can become candidates under both plurality and run-off voting systems. The study also considers the influence of entry costs and different platforms of potential candidates. While our findings align with the expected outcomes of the citizen-candidate model, there is a notable over-participation by candidates from less favorable electoral positions. These entry patterns are well captured by the quantal response equilibrium (QRE). This research adds to the existing body of knowledge about what motivates candidates to enter races under different voting systems and analyzes the behavior of candidates in extreme positions
Abstract:
We study theoretically and experimentally committee decision making with common interests. Committee members do not know which of two alternatives is optimal, but each member can acquire a private costly signal before casting a vote under either majority or unanimity rule. In the experiment, as predicted by Bayesian equilibrium, voters are more likely to acquire information under majority rule, and vote strategically under unanimity rule. As opposed to Bayesian equilibrium predictions, however, many committee members vote when uninformed. Moreover, uninformed voting is strongly associated with a lower propensity to acquire information. We show that an equilibrium model of subjective prior beliefs can account for both these phenomena, and provides a good overall fit to the observed patterns of behavior both in terms of rational ignorance and biases
Abstract:
We conduct a laboratory study of group-on-group ultimatum bargaining with restricted within-group interaction. In this context, we concentrate on the effect of different within-group voting procedures on the bargaining outcomes. Our experimental observations can be summarized in two propositions. First, individual responder behavior across treatments does not show statistically significant variation across voting rules, implying that group decisions may be viewed as aggregations of independent individual decisions. Second, we observe that proposer behavior significantly depends (in the manner predicted by a simple model) on the within-group decision rule in force among the responders and is generally different from the proposer behavior in one-on-one bargaining
Abstract:
This work reports the results of in vivo assays of an implant composed of the hydrogel Chitosan-g-Glycidyl Methacrylate-Xanthan [(CTS-g-GMA)-X] in Wistar rats. Degradation kinetics of the hydrogels was assessed by lysozyme assays. Wistar rats were subjected to laminectomy by cutting the spinal cord with a scalpel. After the surgical procedure, hydrogels were implanted in the injured zone (level T8). Somatosensory evoked potentials (SEPs) obtained by electric stimulation of peripheral nerves were registered in the corresponding central nervous system (CNS) areas. Rats implanted with the biomaterials showed a successful recovery compared with the non-implanted rats after 30 days. Lysozyme, derived from egg whites, was used for in vitro assays. This study serves as the basis for testing the biodegradability of the hydrogels (CTS-g-GMA)-X that is promoted by enzymatic hydrolysis. The hydrogels' hydrolysis was studied via lysozyme kinetics at two pH values, 5 and 7, under mechanical agitation at 37 °C. Results show that our materials' hydrolysis is slower than that of pure CTS, possibly due to the steric hindrance imposed by the GMA grafting. This hydrolysis helps degrade the biomaterial and at the same time it provides support for spinal cord recovery. Combination of these results may prove useful for the use of these hydrogels as scaffolds for cell proliferation and their application as implants in living organisms
Resumen:
This paper considers the problem of forecasting claims incurred but not reported. The forecast is used by insurance companies to calculate the reserve that must be set aside for claims pending payment, although the reserve calculation itself is not addressed here. Forecasting methods are reviewed, with emphasis on one that arises from a statistical model and produces forecasts with minimum mean squared error. Its use is illustrated with real data on automobile claims. The advantages of the method, compared with others, are a reduction of subjectivity in its use and the possibility of measuring the uncertainty associated with the forecasts
Abstract:
This research considers the problem of forecasting incurred but not reported claims. The forecast is used to calculate the reserve that insurance companies require to face claims pending payment, although the reserve calculation itself is not discussed in this paper. Forecasting methods are reviewed and emphasis is placed on one that emerges from a statistical model and provides minimum mean square error forecasts. Its use is illustrated with real automobile claim data. The advantages of this method, as compared with others, are the reduction of the subjectivity component when used, and the possibility of measuring the uncertainty associated with the forecasts
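A common benchmark for forecasting claims of this kind is the chain-ladder method, which projects a cumulative run-off triangle to ultimate using volume-weighted development factors. The sketch below uses made-up figures and is only a benchmark illustration — the paper's model-based forecast (which delivers minimum mean square error) is a different method.

```python
import numpy as np

# Cumulative claims run-off triangle: rows = accident years, columns =
# development years; NaN marks cells not yet observed. Figures are made up.
tri = np.array([
    [100., 160., 190., 200.],
    [110., 170., 205., np.nan],
    [120., 185., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

# Volume-weighted development factors from the observed part of the triangle
n_dev = tri.shape[1]
factors = []
for j in range(n_dev - 1):
    seen = ~np.isnan(tri[:, j + 1])
    factors.append(tri[seen, j + 1].sum() / tri[seen, j].sum())

# Project every accident year to ultimate by chaining the factors forward
completed = tri.copy()
for i in range(tri.shape[0]):
    for j in range(n_dev - 1):
        if np.isnan(completed[i, j + 1]):
            completed[i, j + 1] = completed[i, j] * factors[j]

# Outstanding amount per accident year: ultimate minus latest observed
latest = np.array([row[~np.isnan(row)][-1] for row in tri])
outstanding = completed[:, -1] - latest
```

The subjectivity the abstract mentions enters through choices like which factors to use and how to treat sparse cells; a statistical model replaces these choices with estimation and yields a measure of forecast uncertainty, which the purely mechanical projection above does not provide.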
Abstract:
Why does entrepreneurship training work? We argue that the feedback loop of the opportunity development process is a training element that can explain the effectiveness of entrepreneurship training. Building on action regulation theory, we model the feedback loop as a recursive cycle of changes in the business opportunity, goals, performance outcomes, and feedback. Furthermore, we hypothesize that error orientation and monitoring can strengthen or weaken the cycle, and that going through the feedback loop during training explains short- and long-term training outcomes. To test our hypotheses, we collected data before, during, and after an entrepreneurship training program. Results support our hypotheses, suggesting that the feedback loop of the opportunity development process is a concept that can explain why entrepreneurship training is effective
Abstract:
We examine how the possibility of a bank run affects the investment decisions made by a competitive bank. Cooper and Ross [1998. Bank runs: liquidity costs and investment distortions. Journal of Monetary Economics 41, 27–38] have shown that when the probability of a run is small, the bank will offer a contract that admits a bank-run equilibrium. We show that, in this case, the bank will choose to hold an amount of liquid reserves exactly equal to what withdrawal demand will be if a run does not occur; precautionary or “excess” liquidity will not be held. This result allows us to show that when the cost of liquidating investment early is high, an increase in the probability of a run will lead the bank to invest less. However, when liquidation costs are moderate, the level of investment is increasing in the probability of a run
Abstract:
This paper introduces an approach to the study of optimal government policy in economies characterized by a coordination problem and multiple equilibria. Such models are often criticized as not being useful for policy analysis because they fail to assign a unique prediction to each possible policy choice. We employ a selection mechanism that assigns, ex ante, a probability to each equilibrium indicating how likely it is to obtain. We show how such a mechanism can be derived as the natural result of an adaptive learning process. This approach leads to a well-defined optimal policy problem, and has important implications for the conduct of government policy. We illustrate these implications using a simple model of technology adoption under network externalities
Abstract:
We study optimal fiscal policy in an economy where (i) search frictions create a coordination problem and generate multiple, Pareto-ranked equilibria and (ii) the government finances the provision of a public good by taxing market activity. The government must choose the tax rate before it knows which equilibrium will obtain, and therefore an important part of the problem is determining how the policy will affect the equilibrium selection process. We show that when the equilibrium selection rule is based on the concept of risk dominance, higher tax rates make coordination on the Pareto-superior outcome less likely. As a result, taking equilibrium-selection effects into account leads to a lower optimal tax rate
Abstract:
We construct an endogenous growth model in which bank runs occur with positive probability in equilibrium. In this setting, a bank run has a permanent effect on the levels of the capital stock and of output. In addition, the possibility of a run changes the portfolio choices of depositors and of banks, and thereby affects the long-run growth rate. These facts imply that both the occurrence of a run and the mere possibility of runs in a given period have a large impact on all future periods. A bank run in our model is triggered by sunspots, and we consider two different equilibrium selection rules. In the first, a run occurs with a fixed, exogenous probability, while in the second the probability of a run is influenced by banks' portfolio choices. We show that when the choices of an individual bank affect the probability of a run on that bank, the economy both grows faster and experiences fewer runs
Abstract:
Social distancing policies have been widely used to curb the spread of infectious diseases such as COVID-19, but assessing their effectiveness is challenging. This study shows that widely-used methods to estimate the effects of such policies, like Two-way Fixed Effects and Difference-in-Differences, are highly sensitive to accounting, or failing to account, for the simultaneous adoption of policies and the presence of spillovers across geographies stemming from human movement. By estimating a series of nonparametric models on fine-grained mobility, epidemiological, and policy data from Mexico during the COVID-19 pandemic, this research shows that failing to consider confounders, interactions, and spillovers can change the magnitude and the sign of estimated policy effects, hampering the design of optimal public policies
Abstract:
Social media's capacity to quickly and inexpensively reach large audiences almost simultaneously has the potential to promote electoral accountability. Beyond increasing direct exposure to information, high saturation campaigns-which target substantial fractions of an electorate-may induce or amplify information diffusion, persuasion, or coordination between voters. Randomizing saturation across municipalities, we evaluate the electoral impact of non-partisan Facebook ads informing millions of Mexican citizens of municipal expenditure irregularities in 2018. The vote shares of incumbent parties that engaged in zero/negligible irregularities increased by 6-7 percentage points in directly-targeted electoral precincts. Both this direct effect and the indirect effect in untargeted precincts within treated municipalities were significantly greater where ads targeted 80%-rather than 20%-of the municipal electorate. The amplifying effects of high saturation campaigns are driven by citizens within more socially-connected municipalities, rather than responses by politicians or media outlets. These findings demonstrate how mass media can ignite social interactions to promote political accountability
Resumen:
En el contexto de implementación de medidas para mitigar el cambio climático, diversas tecnologías han surgido o han sido implementadas con el objeto de reducir los efectos del mismo, incluyéndose dentro de estas la captura y almacenamiento de dióxido de carbono (CO2). Si bien esta tecnología es usada para capturar CO2 en procesos de altas emisiones de GEI, resulta una tecnología idónea en una etapa de transición energética. Como todo proceso de esta naturaleza, su aplicación conlleva la actualización de diversos riesgos, como daño ambiental o afectaciones a la salud de los seres humanos, sin embargo por la amplia temporalidad que conlleva el almacenamiento de CO2, la regulación en relación a la responsabilidad de los agentes responsables resulta fundamental. Ante la necesidad de implementar este tipo de tecnologías, resulta fundamental su regulación, por lo que el presente artículo propone como base de estudio para el modelo de regulación de la CAC, los Fondos Internacionales de Indemnización de Daños Debidos a Contaminación por Hidrocarburos (los FIDAC o los IOPC, por el acrónimo en inglés), esquemas que han demostrado dar certeza y seguridad ante contingencias en proyectos u operaciones a largo plazo
Abstract:
In the context of implementing measures to mitigate climate change, various technologies have emerged or have been implemented to reduce its effects, including the capture and storage of carbon dioxide (CO2). Although this technology is used to capture CO2 in processes with high GHG emissions, it is an ideal technology in a stage of energy transition. Like any process of this nature, its application entails the materialization of various risks, such as environmental damage or effects on human health; however, given the extensive time horizon that CO2 storage entails, regulating the liability of the responsible agents is essential. Given the need to implement this type of technology, its regulation is fundamental, which is why this article proposes the International Oil Pollution Compensation Funds (the IOPC Funds) as a basis of study for a CCS regulation model, schemes that have proven to provide certainty and security in the face of contingencies in long-term projects or operations
Resumen:
Tras una breve aproximación respecto a ciertas disputas en el sector energético latinoamericano, el autor comenta sobre tres mecanismos contractuales relevantes que tanto inversionistas como Estados anfitriones han diseñado conjuntamente, bajo una perspectiva de prevención. Al hacerlo, estos jugadores vuelven a enfocar sus incentivos una vez que un cambio material de circunstancias emerge. Estos mecanismos son los siguientes: (i) las cláusulas de renegociación; (ii) las cláusulas de estabilización y (iii) los factores de estandarización económica
Abstract:
After a brief introduction to certain disputes arising in connection with the Latin American energy sector, the author comments on three relevant contractual mechanisms that investors and Host States have jointly designed from a prevention perspective. In so doing, such players aim to re-focus their incentives when a material change of circumstances emerges. These mechanisms are the following: (i) renegotiation clauses; (ii) stabilization clauses; and (iii) economic standardization factors
Abstract:
This article formulates some criticism on traditional Comparative Law and elaborates on the transition of the same to sustainable Comparative Law. The article further explains the need of incorporating cultural dialogues (multiculturalism and interculturalism) and the joint efforts of sciences (transdisciplinarity), as fundamental elements of the contemporary comparative method
Resumen:
El artículo tiene por objeto analizar las críticas y réplicas a las funciones del FMI y el BM, con especial referencia al campo del derecho internacional de los derechos humanos. Se argumenta que los Estados partes de tratados internacionales en materia de derechos humanos están obligados a ser congruentes en sus negociaciones y acuerdos con el FMI Y el BM, por lo que incurren en responsabilidad internacional, en caso de incumplir esta obligación. Se explora asimismo el deber del FMI y del BM de respetar las obligaciones internacionales sobre derechos humanos, por su carácter de costumbre internacional
Abstract:
This article analyses the criticisms and answers regarding the functions of the IMF and the WB, with special reference to the field of International Law of Human Rights. It is argued that the State Parties to international treaties on human rights are obliged to be coherent in their negotiations and agreements with both the IMF and the WB, as breaching said obligations amounts to their international liability. Likewise, the article explores the duty of the IMF and the WB to respect international obligations on human rights considering their character of International Customary Law.
Resumen:
El objeto del presente artículo es analizar un conjunto de perspectivas críticas que sobre el derecho internacional económico existen actualmente. El artículo aboga porque los instrumentos internacionales de la disciplina comentada tengan en consideración valores universales, principios y normas internacionalmente aceptados, tanto por la sociedad civil en general como por los estados, a través de diversas fuentes de derecho internacional. El artículo forma parte de la primera fase de una investigación del autor, en torno a la aplicación de dichos valores, principios y normas en ciertos instrumentos internacionales del Banco Mundial (BM), la Organización Mundial del Comercio (OMC), el Programa de las Naciones Unidas para el Desarrollo (PNUD) y la Organización para la Cooperación y el Desarrollo Económicos (OCDE)
Abstract:
The purpose of the present paper is to analyze a group of current perspectives regarding International Economic Law. The paper advocates for the consideration of universal values, principles and rules within the instruments of said legal discipline, as accepted by both the civil society and the States as per other sources of Public International Law. The paper is part of the first stage of an academic research of the author with respect to the application of those values, principles and rules to certain international instruments of the World Bank (WB), the World Trade Organization (WTO), the United Nations Development Programme (UNDP), and the Organization for Economic Cooperation and Development (OECD)
Abstract:
A dynamic multi-level factor model with possible stochastic time trends is proposed. In the model, long-range dependence and short memory dynamics are allowed in global and local common factors as well as model innovations. Estimation of global and local common factors is performed on the prewhitened series, for which the prewhitening parameter is estimated semiparametrically from the cross-sectional and local average of the observable series. Employing canonical correlation analysis and a sequential least-squares algorithm on the prewhitened series, the resulting multi-level factor estimates have centered asymptotic normal distributions under certain rate conditions depending on the bandwidth and cross-section size. Asymptotic results for common components are also established. The selection of the number of global and local factors is discussed. The methodology is shown to lead to good small-sample performance via Monte Carlo simulations. The method is then applied to the Nord Pool electricity market for the analysis of price comovements among different regions within the power grid. The global factor is identified to be the system price, and fractional cointegration relationships are found between local prices and the system price, motivating a long-run equilibrium relationship. Two forecasting exercises are then discussed
Abstract:
Equilibrium electricity spot prices and loads are often determined simultaneously in a day-ahead auction market for each hour of the subsequent day. Hence daily observations of hourly prices take the form of a periodic panel rather than a time series of hourly observations. We consider novel panel data approaches to analyse the time series and the cross-sectional dependence of hourly Nord Pool electricity spot prices and loads for the period 2000-2013. Hourly electricity prices and load data are characterized by strong serial long-range dependence in the time series dimension in addition to strong seasonal periodicity, and along the cross-sectional dimension, i.e. the hours of the day, there is a strong dependence which necessarily has to be accounted for in order to avoid spurious inference when focusing on the time series dependence alone. The long-range dependence is modelled in terms of a fractionally integrated panel data model and it is shown that both prices and loads consist of common factors with long memory and with loadings that vary considerably during the day. Due to the competitiveness of the Nordic power market the aggregate supply curve approximates well the marginal costs of the underlying production technology and because the demand is more volatile than the supply, equilibrium prices and loads are argued to identify the periodic power supply curve. The supply elasticities are estimated from fractionally co-integrated relations and range between 0.5 and 1.17, with the largest elasticities estimated during morning and evening peak hours
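Both of the abstracts above rest on fractional integration. As a minimal illustration (a generic device, not the authors' estimators), the fractional difference operator (1 - L)^d can be applied to a series through its binomial expansion:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference operator (1 - L)^d to a series x,
    using the binomial weights w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    # y_t = sum_{k=0}^{t} w_k x_{t-k}
    return np.array([w[:t + 1][::-1] @ np.asarray(x[:t + 1], dtype=float)
                     for t in range(n)])
```

For d = 1 the weights collapse to the ordinary first difference; non-integer d between 0 and 1 yields the slowly decaying weights that generate long memory.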
Abstract:
Drawing upon signaling theory, charismatic leadership tactics (CLTs) have been identified as a trainable set of skills. Although organizations rely on technology-mediated communication, the effects of CLTs have not been examined in a virtual context. Preregistered experiments were conducted in face-to-face (Study 1; n = 121) and virtual settings (Study 2; n = 128) in the United States. In Study 3, we conducted virtual replications in Austria (n = 134), France (n = 137), India (n = 128), and Mexico (n = 124). Combined with past experiments, the meta-analytic effect of CLTs on performance (Cohen's d = 0.52 in-person, k = 4; Cohen's d = 0.21 overall, k = 10) and engagement in an extra-role task (Cohen's d = 0.19 overall; k = 6) indicate large to moderate effects. Yet, for performance in a virtual context Cohen's d ranged from −0.25 to 0.17 (Cohen's d = 0.01 overall; k = 6). Study 4 (n = 129) provided mixed support for signaling theory in a virtual context, linking CLTs to some positive evaluations. We conclude with guidance for future research on charismatic leadership and signaling theory
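The effect sizes reported above are Cohen's d values; for two independent groups, the pooled-standard-deviation version is computed as follows (a generic textbook formula, not tied to the study's data):

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp
```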
Abstract:
We assess relative performance of three recently proposed instrument selection methods via a Monte Carlo study that investigates the finite sample behavior of the post-selection estimator of a simple linear IV model. Our results suggest that no one method dominates
Abstract:
In the normal linear simultaneous equations model, we demonstrate a close relationship between two recently proposed methods of instrument selection by presenting a fundamental relationship between the two sets of canonical correlations upon which the methods are based
Abstract:
This article introduces a data-driven Box-Pierce test for serial correlation. The proposed test is very attractive compared to the existing ones. In particular, implementation of this test is extremely simple for two reasons: first, the researcher does not need to specify the order of the autocorrelation tested, since the test automatically chooses this number; second, its asymptotic null distribution is chi-square with one degree of freedom, so there is no need of using a bootstrap procedure to estimate the critical values. In addition, the test is robust to the presence of conditional heteroskedasticity of unknown form. Finally, the proposed test presents higher power in simulations than the existing ones for models commonly employed in empirical finance
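The data-driven order selection is the paper's contribution and is not reproduced here; the classic Box-Pierce Q statistic on which it builds, however, is simple to sketch (generic textbook version, variable names are ours):

```python
import numpy as np

def box_pierce(x, p):
    """Box-Pierce statistic Q = n * sum_{k=1}^{p} rho_k^2, asymptotically
    chi-square with p degrees of freedom under i.i.d. data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    # sample autocorrelations at lags 1..p
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, p + 1)])
    return n * np.sum(rho ** 2)

rng = np.random.default_rng(0)
q_noise = box_pierce(rng.standard_normal(500), 5)             # small for white noise
q_walk = box_pierce(np.cumsum(rng.standard_normal(500)), 5)   # huge for a random walk
```

Choosing p by hand is exactly the burden the automatic test removes.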
Abstract:
This article introduces an automatic test for the correct specification of a vector autoregression (VAR) model. The proposed test statistic is a Portmanteau statistic with an automatic selection of the order of the residual serial correlation tested. The test presents several attractive characteristics: simplicity, robustness, and high power in finite samples. The test is simple to implement since the researcher does not need to specify the order of the autocorrelation tested and the proposed critical values are simple to approximate, without resorting to bootstrap procedures. In addition, the test is robust to the presence of conditional heteroscedasticity of unknown form and accounts for estimation uncertainty without requiring the computation of large-dimensional inverses of near-to-singularity covariance matrices. The basic methodology is extended to general nonlinear multivariate time series models. Simulations show that the proposed test presents higher power than the existing ones for models commonly employed in empirical macroeconomics and empirical finance. Finally, the test is applied to the classical bivariate VAR model for GNP (gross national product) and unemployment of Blanchard and Quah (1989) and Evans (1989). Online supplementary material includes proofs and additional details
Abstract:
Multi-server queueing systems with Poisson arrivals and Erlangian service times are among the most applicable of what are considered "easy" systems in queueing theory. By selecting the proper order, Erlangian service times can be used to approximate reasonably well many general types of service times which have a unimodal distribution and a coefficient of variation less than or equal to 1. In view of their practical importance, it may be surprising that the existing literature on these systems is quite sparse. The probable reason is that, while it is indeed possible to represent these systems through a Markov process, serious difficulties arise because of (1) the very large number of system states that may be present with increasing Erlang order and/or number of servers, and (2) the complex state transition probabilities that one has to consider. Using a standard numerical approach, solutions of the balance equations describing systems with even a modest Erlang order and number of servers require extensive computational effort and become impractical for larger systems. In this paper we illustrate these difficulties and present the equally likely combinations (ELC) heuristic which provides excellent approximations to typical equilibrium behavior measures of interest for a wide range of stationary multiserver systems with Poisson arrivals and Erlangian service. As system size grows, ELC computational times can be more than 1000 times faster than those for the exact approach. We also illustrate this heuristic's ability to estimate accurately system response under transient and/or dynamic conditions
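As a point of reference for the exact and heuristic analyses discussed above, the equilibrium behavior of a small M/E_k/c system can also be approximated by brute-force simulation. The sketch below is a minimal FCFS event loop (parameter names are ours; this is not the ELC heuristic itself):

```python
import heapq
import random

def simulate_mek_c(lam, mu, k, c, n_customers, seed=0):
    """FCFS M/E_k/c queue: Poisson arrivals at rate lam, Erlang-k service with
    mean 1/mu (sum of k exponentials of rate k*mu), c identical servers.
    Returns the observed mean waiting time in queue."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * c            # times at which each server next becomes idle
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(lam)                                 # next arrival
        service = sum(rng.expovariate(k * mu) for _ in range(k))  # Erlang-k draw
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)   # FCFS: take the earliest-free server
        total_wait += start - t
        heapq.heappush(free_at, start + service)
    return total_wait / n_customers
```

For k = 1, c = 1 this reduces to M/M/1, whose exact mean queueing delay at lam = 0.5, mu = 1 is 1; increasing the Erlang order reduces service-time variability and hence the mean wait, consistent with the coefficient-of-variation remark above.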
Resumen:
Las decisiones de otorgamiento de crédito son cruciales en la administración de riesgos. Las instituciones financieras han desarrollado y usado modelos de credit scoring para estandarizar y automatizar las decisiones de crédito, sin embargo, no es común encontrar metodologías para aplicarlos a clientes sin referencias crediticias, es decir clientes que carecen de información en los burós nacionales de crédito. En este trabajo se presenta una metodología general para construir un modelo sencillo de credit scoring enfocado justamente a esa población, la cual ha venido tomando una mayor importancia en el sector crediticio latinoamericano. Se usa la información sociodemográfica proveniente de las solicitudes de crédito de una pequeña institución bancaria mexicana para ejemplificar la metodología
Abstract:
Credit granting decisions are crucial for risk management. Financial institutions have developed and used credit scoring models to automate and standardize credit granting; however, in the literature it is not common to find a methodology that can be applied to clients without previous credit experience, in other words, those who lack information in the national credit bureaus. In this paper a basic methodology to build a scorecard model focused precisely on that population is presented, a segment that has been gaining importance in the Latin American credit sector. We use sociodemographic information from the credit applications of a small Mexican bank to illustrate the methodology
Résumé:
Les décisions de crédit sont cruciales pour la gestion des risques. Les institutions financières ont développé et utilisé des modèles de notation de crédit dans le but de standardiser et automatiser les décisions de crédit, cependant, ce n'est pas fréquent de trouver des méthodologies à appliquer aux clients sans références de crédit, à savoir les clients qui n'ont pas d'informations dans les bureaux de crédit nationaux. Cet article présente une méthode générale pour construire un modèle simple de notation de crédit portant précisément sur cette population, laquelle a gagné de plus en plus d'importance dans le secteur du crédit en Amérique latine. On utilise les renseignements relatifs aux caractéristiques sociodémographiques des demandes de crédit auprès d'une petite banque mexicaine pour illustrer la méthodologie
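As a hedged illustration of what such a scorecard pipeline can look like (entirely synthetic data and invented variables, not the paper's model or the bank's data), a logistic regression fitted on sociodemographic features can be rescaled into points:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
# hypothetical applicants: age in years and log-income, both standardized
age = rng.uniform(18, 70, n)
income = rng.lognormal(9.0, 0.5, n)
X = np.column_stack([np.ones(n), (age - 40) / 15, (np.log(income) - 9.0) / 0.5])
true_beta = np.array([-1.0, 0.8, 1.2])      # assumed: older / higher income -> better
p_good = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p_good).astype(float)   # 1 = good payer

# fit logistic regression by plain gradient ascent on the mean log-likelihood
beta = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - p) / n

# classic scorecard scaling: 600 points at 1:1 odds, 20 points to double the odds
factor = 20 / np.log(2)
score = 600 + factor * (X @ beta)
```

The scaling step is what turns a statistical model into the points-based scorecard used by credit officers; population without bureau history would simply restrict the candidate features to application data, as the paper does.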
Abstract:
The configuration of medieval and modern politics, in terms of its structural axes of "sacredness" and "secularity", cannot be understood without considering the reading that the various intellectuals, philosophers and writers of Europe between the thirteenth and seventeenth centuries made of the classical authors. In the present research we cover the study of four Latin writers, Cicero, Seneca, Livy and Tacitus, as reference models that, through a "reconstructive" reading, underpinned the formation of the principal political models of modern Europe. Through the dialogue that notable representatives of scholastic theology, Machiavellian realism, political Jesuitism and Baroque absolutism establish with these classical authors, molds take shape that would constitute the forms of government of late-medieval, Renaissance and Baroque princes and monarchs. The core of our analysis is precisely the dichotomy between the sacredness of the ruling prince, which would become the principal motif of medieval and ecclesiastical politics, and the desacralization and semi-desacralization of the Machiavellian (in the first case) and absolutist Counter-Reformation (in the second) ways of governing. We employ the method of reception aesthetics to trace the classical-modern dialogue, showing to what extent the political writers of medieval and modern Europe "color", "concretize" and "fill in" their own "horizons of expectations", with different ideological slants, so as to configure political models suited to the historical period under study
Resumen:
Si la virtus de la Antigüedad era la fuerza, la valentía y el coraje, y en la modernidad la diligencia, el trabajo, el mérito y el esfuerzo, actualmente se ha instalado la commoditas, una suerte de pseudovirtud nihilista que reivindica la ociosidad, el igualitarismo radical y el entretenimiento. La sociedad contemporánea posmoderna ha superado tanto la metafísica filosófica grecorromana como la teología cristiana, tanto el racionalismo y el empirismo ilustrado como el idealismo romántico decadentista y el positivismo materialista, y ha entrado en una suerte de fin de la historia de nihilismo lúdico autocomplaciente y autosatisfecho. Concluimos que la commoditas está para quedarse y reconfigurará la naturaleza del ser humano en función de una nueva mentalidad que empalidece a la tradición hasta prácticamente anularla
Abstract:
If the virtus of Antiquity was strength, bravery and courage, and in Modernity diligence, work, merit and effort, today the commoditas has taken hold: a kind of nihilistic pseudo-virtue that vindicates idleness, radical egalitarianism and entertainment. Contemporary postmodern society has surpassed both Greco-Roman philosophical metaphysics and Christian theology, both rationalism and enlightened empiricism, as well as romantic-decadent idealism and materialist positivism, and has entered a sort of end of history of self-indulgent and self-satisfied playful nihilism. We conclude that commoditas is here to stay and will reconfigure the nature of the human being according to a new mentality that makes tradition pale to the point of practically annulling it
Resumen:
Pestes y pandemias están de actualidad por la crisis de Covid-19. Aunque nuestro mundo no está acostumbrado a las epidemias, han sido muy frecuentes a lo largo de la historia de la humanidad. En el presente estudio se ofrece un análisis de distintas pestes que asolaron el mundo grecorromano, con las explicaciones que les dieron los autores clásicos y las crisis sociopolíticas que supusieron, muy similares a la actual
Abstract:
Plagues and pandemics are in the news because of the Covid-19 crisis. Although our world is not accustomed to epidemics, they have been very frequent throughout human history. This paper offers an analysis of the different plagues that devastated the Greco-Roman world, together with the explanations given by classical authors and the socio-political crises they entailed, very similar to the present one
Abstract:
The notion of relative importance of criteria is central in multicriteria decision aid. In this work we define the concept of comparative coalition structure, as an approach for formally discussing the notion of relative importance of criteria. We also present a multicriteria decision aid method that does not require the assignment of weights to the criteria
Abstract:
This study identifies characteristics that positively affect entrepreneurial intention. To do so, the study compares personality traits with work values. Socio-demographic and educational characteristics act as control variables. The sample comprises 1210 public university students. Hierarchical regression analysis serves to test the hypotheses. Results show that personality traits affect entrepreneurial intention more than work values do
Resumen:
En la presente investigación se trata de analizar si las Universidades, como organismos que podrían actuar como incubadoras de ideas de negocio, realmente están cumpliendo ese papel y están incentivando la actitud emprendedora entre sus estudiantes, a través de la organización de sus estudios y las medidas específicas que acometen. Las hipótesis planteadas acerca de la mayor formación en su titulación, genérica y específica, son contrastadas utilizando una muestra de 668 estudiantes de una universidad madrileña. Las conclusiones obtenidas invitan a la reflexión, en tanto que los estudiantes de más reciente incorporación a la Universidad presentan tasas de actitud emprendedora superiores a las de sus compañeros más veteranos
Abstract:
The objective of the present research is to analyze whether universities, as organisms that might act as incubators of business ideas, are really fulfilling this role and stimulating an entrepreneurial attitude among their students, through the structure of their degree programs and the specific measures they undertake. The hypotheses raised about greater training within their degrees, both generic and specific, are tested using a sample of 668 students from a university in Madrid. The conclusions invite reflection, since the students who have most recently entered the university show higher rates of entrepreneurial attitude than their more veteran colleagues
Abstract:
In this paper we propose an instrument for collecting sensitive data that allows for each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures
Abstract:
In this article, we introduce the partition task problem class along with a complexity measure to evaluate its instances and a performance measure to quantify the ability of a system to solve them. We explore, via simulations, some potential applications of these concepts and present some results as examples that highlight their usefulness in policy design scenarios, where the optimal number of elements in a partition or the optimal size of the elements in a partition must be determined
Abstract:
In a negative representation, a set of elements (the positive representation) is depicted by its complement set. That is, the elements in the positive representation are not explicitly stored, while those in the negative representation are. The concept, feasibility, and properties of negative representations are explored in the paper; in particular, their potential to address privacy concerns. It is shown that a positive representation consisting of n l-bit strings can be represented negatively using only O(l·n) strings, through the use of an additional symbol. It is also shown that membership queries for the positive representation can be processed against the negative representation in time no worse than linear in its size, while reconstructing the original positive set from its negative representation is an NP-hard problem. The paper introduces algorithms for constructing negative representations as well as operations for updating and maintaining them
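A toy version of the membership mechanics (our own minimal sketch over 3-bit strings, with '*' playing the role of the additional don't-care symbol) looks like this:

```python
def matches(pattern, s):
    """A negative record matches s when every specified position agrees ('*' is a wildcard)."""
    return all(p == '*' or p == c for p, c in zip(pattern, s))

def in_positive_set(s, ndb):
    """s belongs to the positive set iff it matches no record of the negative database."""
    return not any(matches(pattern, s) for pattern in ndb)

# negative database covering every 3-bit string except the positive set {'000', '111'}
ndb = ['01*', '10*', '001', '110']
```

Each query scans the negative database once, so membership is linear in the size of the negative representation, as stated in the abstract.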
Abstract:
This paper proposes a strategy for administering a survey that is mindful of sensitive data and individual privacy. The survey seeks to estimate the population proportion of a sensitive variable and does not depend on anonymity, cryptography, or legal guarantees for its privacy preserving properties. Our technique presents interviewees with a question and t possible answers, and asks participants to eliminate one of the t-1 alternatives at random. We introduce a specific setup that requires just a single coin as randomizing device, and that limits the amount of information each respondent is exposed to by presenting to her/him only a subset of the question's alternatives. Finally we conduct a simulation study to provide evidence of the robustness of the suggested procedure against response and nonresponse bias
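The paper's elimination design is not reproduced here; a closely related, classical single-coin mechanism is Warner's randomized response, which likewise estimates a sensitive proportion without relying on anonymity (textbook estimator, variable names are ours):

```python
import random

def warner_estimate(population, p=0.7, seed=1):
    """Warner's randomized response: with probability p a respondent truthfully
    answers 'Am I in the sensitive group?', otherwise the reversed question.
    Returns the unbiased estimate of the true sensitive proportion."""
    rng = random.Random(seed)
    yes = 0
    for sensitive in population:
        truthful = rng.random() < p      # the coin is the only randomizing device
        yes += sensitive if truthful else (not sensitive)
    lam = yes / len(population)          # observed 'yes' proportion
    return (lam - (1 - p)) / (2 * p - 1)
```

Because the observed 'yes' rate mixes truthful and reversed answers, no individual response reveals the respondent's status, yet the population proportion is recoverable by inverting the mixture.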
Abstract:
In this paper we present a method for hiding a list of data by mixing it with a large amount of superfluous items. The technique uses a device known as a negative database, which stores the complement of a set rather than the set itself, to include an arbitrary number of garbage entries efficiently. The resulting structure effectively hides the data, without encrypting it, and obfuscates the number of data items hidden; it prevents arbitrary data lookups, while supporting simple membership queries; and can be manipulated to reflect relational algebra operations on the original data
Abstract:
A set DB of data elements can be represented in terms of its complement set, known as a negative database. That is, all of the elements not in DB are represented, and DB itself is not explicitly stored. This method of representing data has certain properties that are relevant for privacy enhancing applications. The paper reviews the negative database (NDB) representation scheme for storing a negative image compactly, and proposes using a collection of NDBs to represent a single DB, that is, one NDB is assigned for each record in DB. This method has the advantage of producing negative databases that are hard to reverse in practice, i.e., from which it is hard to obtain DB. This result is obtained by adapting a technique for generating hard-to-solve 3-SAT formulas. Finally we suggest potential avenues of application
Abstract:
The benefits of negative detection for obscuring information are explored in the context of Artificial Immune Systems (AIS). AIS based on string matching have the potential for an extra security feature in which the "normal" profile of a system is hidden from its possible hijackers. Even if the model of normal behavior falls into the wrong hands, reconstructing the set of valid or "normal" strings is an NP-hard problem. The data-hiding aspects of negative detection are explored in the context of an application to negative databases. Previous work is reviewed describing possible representations and reversibility properties for privacy-enhancing negative databases. New algorithms are presented which allow on-line creation, updates and clean-up of negative databases, some experimental results illustrate the impact of these operations on the size of the negative database. Finally some future challenges are discussed
Abstract:
In anomaly detection, the normal behavior of a process is characterized by a model, and deviations from the model are called anomalies. In behavior-based approaches to anomaly detection, the model of normal behavior is constructed from an observed sample of normally occurring patterns. Models of normal behavior can represent either the set of allowed patterns (positive detection) or the set of anomalous patterns (negative detection). A formal framework is given for analyzing the tradeoffs between positive and negative detection schemes in terms of the number of detectors needed to maximize coverage. For realistically sized problems, the universe of possible patterns is too large to represent exactly (in either the positive or negative scheme). Partial matching rules generalize the set of allowable (or unallowable) patterns, and the choice of matching rule affects the tradeoff between positive and negative detection. A new match rule is introduced, called r-chunks, and the generalizations induced by different partial matching rules are characterized in terms of the crossover closure. Permutations of the representation can be used to achieve more precise discrimination between normal and anomalous patterns. Quantitative results are given for the recognition ability of contiguous-bits matching together with permutations
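The r-chunks rule described above lends itself to a compact sketch. A detector is a pair (position, window): it matches a string when the length-r window at that position equals the detector's chunk. The data, string length, and helper names below are invented for the example; this is only one plain reading of the rule, not the paper's full framework (the crossover closure and permutation masks are not shown).

```python
# Illustrative sketch of the r-chunks partial matching rule on binary
# strings. A detector (i, w) matches s when s[i:i+r] == w; negative
# detection flags a string as anomalous if ANY detector matches it.
from itertools import product

def rchunk_matches(detector, s):
    i, w = detector
    return s[i:i + len(w)] == w

def generate_detectors(self_set, length, r, alphabet="01"):
    """Keep every (position, chunk) pair that does NOT occur in any
    self (normal) string."""
    self_chunks = {(i, s[i:i + r]) for s in self_set
                   for i in range(length - r + 1)}
    all_chunks = {(i, "".join(c)) for i in range(length - r + 1)
                  for c in product(alphabet, repeat=r)}
    return all_chunks - self_chunks

self_set = {"0011", "0111"}                  # observed normal patterns
detectors = generate_detectors(self_set, length=4, r=2)

def is_anomalous(s):
    return any(rchunk_matches(d, s) for d in detectors)

assert not is_anomalous("0011")   # self strings trigger no detector
assert is_anomalous("1100")       # a non-self string is detected
```

Because detectors only see chunks, any non-self string whose chunks all occur in self strings would go undetected; that generalization is what the paper characterizes via the crossover closure.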
Abstract:
Dementia is characterized by a progressive deterioration in cognitive functions and behavioral problems. Due to its importance, in the domain of Internet of Things (IoT), where physical objects are connected to the internet, a myriad of systems have been proposed to support people with dementia, their caregivers, and medical experts. However, the vast and increasing number of research efforts has led to a complex state of the art, which is in need of a methodological analysis and a characterization of its key aspects. Based on the PRISMA guidelines, this article presents a systematic review aimed at investigating the state of the art of the IoT in dementia regardless of the dementia category and/or its cause. Articles published within the period of January 2017 to November 2022 were searched in well-known scientific databases. The searches retrieved a total of 2733 records, which were narrowed down to 104 relevant studies by applying inclusion, exclusion, and quality criteria. A set of 13 research questions at the intersection of IoT and dementia were posed, which guided the analysis of the selected studies. The systematic review contributes (i) an in-depth methodological analysis of recent and relevant IoT systems in the domain of dementia; (ii) a taxonomy that identifies, characterizes, and categorizes key aspects of IoT research focused on dementia; and (iii) a series of future work directions to advance the field of IoT in the dementia domain
Abstract:
Background and objective: In day centers, people with dementia are assigned to specific groups to receive care according to the progression of the disease. This article presents the design and evaluation of a dashboard aimed at facilitating the comprehension of the progression of people with dementia to support decision-making of healthcare professionals (HCPs) when determining patient-group assignment. Materials and method: A participatory design methodology was followed to build the dashboard. The grounded theory methodology was utilized to identify requirements. A total of 8 HCPs participated in the design and evaluation of a low-fidelity prototype. The perceived usefulness and perceived ease of use of the high-fidelity prototype was evaluated by 15 HCPs (from several day centers) and 38 psychology students utilizing a questionnaire based on the technology acceptance model. Results: HCPs perceived the dashboard as extremely likely to be useful (Mdn = 6.5 out of 7) and quite likely to be usable (Mdn = 6 out of 7). Psychology students perceived the dashboard as quite likely to be useful and usable (both with Mdn= 6)
Resumen:
The increase in penalties for privacy violations motivates the definition of a methodology for evaluating the utility of information and the privacy preservation of data to be published. Through the development of a case study, a framework for measuring privacy preservation is provided. Problems in measuring data utility are presented and related to privacy preservation in published data. Machine learning models are developed to determine the risk of sensitive-attribute prediction and as a means of verifying data utility. The findings motivate the need to adapt the measurement of privacy preservation to current requirements and to sophisticated means of attack such as machine learning
Abstract:
The growing penalties for privacy violations motivate the definition of a methodology for evaluating the usefulness of information and the privacy preservation of published data. We develop a case study and provide a framework for measuring privacy preservation. Problems in measuring the usefulness of the data are exposed and related to privacy-preserving data publishing. Machine learning models are developed to determine the risk of predicting sensitive attributes and as a means of verifying the usefulness of the data. The findings motivate the need to adapt privacy measures to current requirements and to sophisticated attacks such as machine learning
Abstract:
This paper introduces an extension of the existing two-port 'Dynamic Energy Router' (DER) applied to a three-port system in the context of an electric vehicle integrating a hydrogen fuel cell and a supercapacitor. The control strategy implemented is feedback linearization, which manages the nonlinear dynamics of the system and the efficient distribution of energy among the ports. By incorporating voltage control mechanisms in one port, this method addresses the challenges of DER energy decay and enhances the overall performance of the device. Simulations are performed in MATLAB/Simulink to test the DER under different load variations, allowing the system to distribute the energy while considering each device's restrictions and the defined energy management policy. This work contributes by extending the two-port DER to a three-port DER applied in a hydrogen fuel cell vehicle using a feedback linearization control technique, providing a versatile solution for a multi-port system where each port is capable of generating, storing, and consuming energy
Abstract:
Scholars argue that electoral management bodies staffed by autonomous, non-partisan experts are best for producing credible and fair elections. We inspect the voting record of Mexico's Instituto Federal Electoral (IFE), an ostensibly independent bureaucratic agency regarded as extremely successful in organizing clean elections in a political system marred by fraud. We discover that the putative non-partisan experts of "autonomous" IFE behave as "party watchdogs" that represent the interests of their political party sponsors. To validate this party influence hypothesis, we examine roll-call votes cast by members of IFE's Council-General from 1996 to 2006. Aside from shedding light on IFE's failure to achieve democratic compliance in 2006, our analysis suggests that election arbiters that embrace partisan strife are quite capable of organizing free, fair, and credible elections in new democracies
Resumen:
This work proposes a methodology for constructing climate change scenarios at the local scale. Multivariate time series models are used to obtain restricted forecasts, advancing the literature on statistical downscaling methods in several respects. The approach achieves: (i) a better representation of climate at the local scale; (ii) avoidance of possible spurious relationships between large-scale and small-scale variables; (iii) an appropriate representation of the variability of the series in the climate change scenarios; and (iv) assessment of the compatibility and combination of the information from climate variables with that derived from climate models. The proposed methodology is useful for integrating scenarios on the evolution of the small-scale factors that influence local climate. Thus, by choosing different evolutions representing, for example, different public policies on land use or pollutant control, the methodology offers a way to evaluate the desirability of such policies in terms of their effects in amplifying or attenuating the impacts of climate change
Abstract:
This paper proposes a new methodology for generating climate change scenarios at the local scale based on multivariate time series models and restricted forecasting techniques. This methodology offers considerable advantages over the current statistical downscaling techniques such as: (i) it provides a better representation of climate at the local scale; (ii) it avoids the occurrence of spurious relationships between the large and local scale variables; (iii) it offers a more appropriate representation of variability in the downscaled scenarios; and (iv) it allows for compatibility assessment and combination of the information contained in both observed and simulated climate variables. Furthermore, this methodology is useful for integrating scenarios of local scale factors that affect local climate. As such, the convenience of different public policies regarding, for example, land use change or atmospheric pollution control can be evaluated in terms of their effects for amplifying or reducing climate change impacts
Abstract:
The urge for higher resolution climate change scenarios has been widely recognized, particularly for conducting impact assessment studies. Statistical downscaling methods have shown to be very convenient for this task, mainly because of their lower computational requirements in comparison with nested limited-area regional models or very high resolution Atmosphere–Ocean General Circulation Models. Nevertheless, although some of the limitations of statistical downscaling methods are widely known and have been discussed in the literature, in this paper it is argued that the current approach for statistical downscaling does not guard against misspecified statistical models and that the occurrence of spurious results is likely if the assumptions of the underlying probabilistic model are not satisfied. In this case, the physics included in climate change scenarios obtained by general circulation models could be replaced by spatial patterns and magnitudes produced by statistically inadequate models. Illustrative examples are provided for monthly temperature for a region encompassing Mexico and part of the United States. It is found that the assumptions of the probabilistic models do not hold for about 70% of the gridpoints, parameter instability and temporal dependence being the most common problems. As our examples reveal, automated statistical downscaling "black-box" models are to be considered as highly prone to produce misleading results. It is shown that the Probabilistic Reduction approach can be incorporated as a complete and internally consistent framework for securing the statistical adequacy of the downscaling models and for guiding the respecification process, in a way that prevents the lack of empirical validity that affects current methods
Abstract:
We design a laboratory experiment to study behavior in a multidivisional organization. The organization faces a trade-off between coordinating its decisions across the divisions and meeting division-specific needs that are known only to the division managers, who can communicate their private information through cheap talk. While the results show close to optimal communication, we also find systematic deviations from optimal behavior in how the communicated information is used. Specifically, subjects' decisions show worse than predicted adaptation to the needs of the divisions in decentralized organizations and worse than predicted coordination in centralized organizations. We show that the observed deviations disappear when uncertainty about the divisions' local needs is removed and discuss the possible underlying mechanisms
Abstract:
We design a laboratory experiment in which an interested third party endowed with private information sends a public message to two conflicting players, who then make their choices. We find that third-party communication is not strategic. Nevertheless, a hawkish message by a third party makes hawkish behavior more likely while a dovish message makes it less likely. Moreover, how subjects respond to the message is largely unaffected by the third party’s incentives. We argue that our results are consistent with a focal point interpretation in the spirit of Schelling
Abstract:
Forward induction (FI) thinking is a theoretical concept in the Nash refinement literature which suggests that earlier moves by a player may communicate his future intentions to other players in the game. Whether and how much players use FI in the laboratory is still an open question. We designed an experiment in which detailed reports were elicited from participants playing a battle of the sexes game with an outside option. Many of the reports show an excellent understanding of FI, and such reports are associated more strongly with FI-like behavior than reports consistent with first mover advantage and other reasoning processes. We find that a small fraction of subjects understands FI but lacks confidence in others. We also explore individual differences in behavior. Our results suggest that FI is relevant for explaining behavior in games
Abstract:
This paper examines methodology for performing Bayesian inference sequentially on a sequence of posteriors on spaces of different dimensions. For this, we use sequential Monte Carlo samplers, introducing the innovation of using deterministic transformations to move particles effectively between target distributions with different dimensions. This approach, combined with adaptive methods, yields an extremely flexible and general algorithm for Bayesian model comparison that is suitable for use in applications where the acceptance rate in reversible jump Markov chain Monte Carlo is low. We use this approach on model comparison for mixture models, and for inferring coalescent trees sequentially, as data arrives
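A minimal fixed-dimension SMC sampler with tempering conveys the reweight / resample / move structure the abstract builds on; the paper's contribution, moving particles between spaces of different dimensions via deterministic transformations, is not attempted here. All densities and tuning values below are invented for the illustration.

```python
# Toy tempered SMC sampler: particles flow from N(0, 1) to N(3, 0.5^2)
# through a sequence of intermediate (tempered) targets.
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):            # initial distribution: N(0, 1), up to a constant
    return -0.5 * x**2

def log_target(x):           # final target: N(3, 0.5^2), up to a constant
    return -0.5 * ((x - 3.0) / 0.5) ** 2

N = 2000
betas = np.linspace(0.0, 1.0, 21)        # tempering schedule
x = rng.standard_normal(N)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # reweight: incremental importance weight for the temperature change
    logw = (b - b_prev) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max()); w /= w.sum()
    # resample (multinomial), giving equally weighted particles
    x = x[rng.choice(N, size=N, p=w)]
    # move: one random-walk Metropolis step on the current tempered density
    logp = lambda y: (1 - b) * log_prior(y) + b * log_target(y)
    prop = x + 0.5 * rng.standard_normal(N)
    accept = np.log(rng.random(N)) < logp(prop) - logp(x)
    x = np.where(accept, prop, x)

# particles now approximate the final target; mean should be close to 3
```

In the transdimensional setting of the paper, the move step would additionally apply a deterministic map into the new parameter space before reweighting, which this sketch omits.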
Resumen:
This article reviews the Norwegian welfare model, whose aim has been the protection and maintenance of the country's middle class. To understand it, the concept of the middle class and its relationship with public policy is analyzed. In the Scandinavian countries, the role played by the State in the economy is crucial for the creation of the middle class, which has enabled a successful socio-economic model that is difficult to apply in other countries where the concept of the State differs substantially
Abstract:
This article reviews the Norwegian welfare model, the object of which has been the protection and maintenance of the middle class in the country. In order to understand it, the concept of middle class and its relationship with public policies is analyzed. In Scandinavian countries, the role of the State in the economy is crucial for the generation of the middle class, which has allowed a successful socio-economic model that is difficult to apply in other countries where the concept of the State differs substantially
Abstract:
Using new census-type data and a dynamic structural model, we study the effect of credit supply on investment by manufacturing firms during the Greek depression. Real factors (profitability, uncertainty, and taxes) account for only a fraction of the substantial drop in investment observed in the data. The reduction in credit supply has significant real effects, explaining 11–32% of the investment slump. We also find that exporting firms, which reduce investment and deleverage despite their improved profitability during the crisis, face a contraction in credit supply similar to that of non-exporters, suggesting that the credit-supply shock has a significant common component
Resumen:
The main objective of this work is to document the broad institution-level heterogeneity that exists within countries, as well as to investigate which institutional factors are the most relevant for multinational franchisors
Abstract:
The purpose of this paper is to document the extensive heterogeneity in institutions within countries and investigate which institutional factors are the most relevant for international brands
Resumo:
The main objective of this research is to account for the great heterogeneity that exists across countries, as well as to investigate which institutional factors are the most important for multinational franchises
Abstract:
Introduction: Mathematical models and field data suggest that human mobility is an important driver for Dengue virus transmission. Nonetheless, little is known on this matter due to the lack of instruments for precise mobility quantification and study design difficulties. Materials and methods: We carried out a cohort-nested, case-control study with 126 individuals (42 cases, 42 intradomestic controls, and 42 population controls) with the goal of describing human mobility patterns of recently Dengue virus-infected subjects and comparing them with those of non-infected subjects living in an urban endemic locality. Mobility was quantified using a GPS data logger registering waypoints at 60-second intervals for a minimum of 15 natural days. Results: Although absolute displacement was highly biased towards the intradomestic and peridomestic areas, occasional displacements exceeding a 100-km radius from the center of the studied locality were recorded for all three study groups, and individual displacements were recorded across six states of central Mexico. Additionally, cases had a larger number of visits out of the municipality's administrative limits when compared to intradomestic controls (cases: 10.4 versus intradomestic controls: 2.9, p = 0.0282). We were able to identify extradomestic places, within and out of the locality, that were independently visited by apparently non-related infected subjects, consistent with houses, working, and leisure places. Conclusions: Results of this study show that human mobility in a small urban setting exceeded the limits considered by the local health authority and differed between recently infected and non-infected subjects living in the same household. These observations provide important insights about the role that human mobility may have in Dengue virus transmission and persistence across endemic geographic areas that need to be taken into account when planning preventive and control ...
Resumen:
A study was conducted with 38 university students of a Physics I course in order to investigate the mathematical knowledge they have learned about the transformation of functions, after having completed Differential Calculus. The research is framed within Duval's Theory of Semiotic Representations. A diagnostic assessment was carried out concerning the transformations Af(Bx+C)+D applied to the functions x^2 and sin x. The article contributes by showing the possible causes of the difficulties presented by the students in light of the theoretical framework, in addition to proposing strategies that contribute to improving their learning
Abstract:
A study was carried out with 38 university students enrolled in the Physics I course in order to investigate what they had learned about the transformation of functions as part of their mathematical knowledge, after having completed a course on Differential Calculus. The research is framed by Duval's Theory of Semiotic Representations. A diagnostic assessment test was carried out concerning transformations of the form Af(Bx+C)+D applied to the functions x^2 and sin x when the parameters included in them are varied one by one. The article contributes by showing possible reasons to explain students' difficulties in the light of the theoretical framework, and by proposing strategies that may contribute to improving students' learning
Resumo:
A study was conducted with 38 university students of a Physics I course in order to investigate the mathematical knowledge they had learned about the transformation of functions after completing the Differential Calculus course. The research is framed within Duval's Theory of Semiotic Representations. A diagnostic assessment was carried out concerning the transformations Af(Bx+C)+D applied to the functions x^2 and sin x. The article contributes by showing the possible causes of the difficulties presented by the students in light of the theoretical framework, in addition to proposing strategies that contribute to improving their learning
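The family of transformations Af(Bx+C)+D examined in the study above can be demonstrated in a short snippet; the parameter values are arbitrary choices for the example.

```python
# Sketch of the transformation A*f(B*x + C) + D applied to f(x) = x**2.

def transform(f, A=1.0, B=1.0, C=0.0, D=0.0):
    """Return the function x -> A*f(B*x + C) + D."""
    return lambda x: A * f(B * x + C) + D

f = lambda x: x ** 2
g = transform(f, A=2.0, B=1.0, C=-3.0, D=1.0)   # g(x) = 2*(x - 3)**2 + 1

assert g(3.0) == 1.0    # vertex translated right by 3 and up by 1
assert g(4.0) == 3.0    # vertical stretch by factor 2: 2*1 + 1
```

Varying A, B, C, D one at a time, as in the diagnostic assessment, isolates the vertical/horizontal stretches (A, B) from the translations (C, D).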
Abstract:
Purpose. This paper studies the determinants of the debt maturity of Mexican-listed companies by analysing the effects on the extensive (issuing or liquidating debt) and the intensive (debt maturity renegotiation) margins. Design/methodology/approach. This study, using a Tobit model for panel data and measuring maturity as a time variable, shows that size, liquidity and leverage, among other firm characteristics, as well as the market interest rate, explain debt maturity. Additionally, the study employs the McDonald and Moffitt decomposition to determine whether the explanatory variables of maturity have a more significant effect on the decision to issue or liquidate debt or on debt maturity renegotiations. Findings. The results obtained highlight that the market interest rate negatively affects debt maturity. On the other hand, variables like size, liquidity, collateral and leverage demonstrate a positive relationship with the dependent variable. In addition, the extensive margin has a higher impact on corporate debt than the intensive margin, suggesting that firms prefer to liquidate or issue new debt rather than renegotiate preexisting contracts. Research limitations/implications. The main limitation of this study is the use of an unbalanced panel. The lack of data limits the application of specific methodologies suggested by the literature as a way to test the robustness of the estimates. Originality/value. First of all, this study adds empirical evidence of debt maturity decisions by publicly traded firms in a middle-income country such as Mexico to the existing literature on maturity choice. Second, the study treats debt maturity as a time-censored, limited variable. Finally, the authors have used the McDonald and Moffitt (1980) methodology to decompose the effect of each independent variable into extensive and intensive margins
Resumen:
The objective of this research is to identify the determinants of debt maturity for the Mexican companies listed on the BMV, using an alternative definition of this dependent variable. In particular, maturity is defined as "time to contract expiration", considering the weighted average of time to maturity, an original contribution of this work. Panel data and Heckman selection models are used, since the use of longitudinal data in an unbalanced panel can present selection problems in the form of attrition. The results suggest that attrition bias is significant, and that average debt maturity is determined by variables such as size and leverage, among other firm characteristics, as well as the market interest rate. The main limitation is the missing data in the information sources used, which yields a short and unbalanced panel. It is concluded that using this maturity measurement method gives better results for analyzing debt maturity than the metrics traditional in the literature
Abstract:
This research aims to identify the determinants of debt maturity for Mexican companies listed on the BMV, using an alternative definition of this dependent variable. Maturity is defined as "time to contract expiration", considering the weighted average of the time to expiration, which contributes to the originality of this work. Panel data models and Heckman selection models are used, since the use of longitudinal data in an unbalanced panel can present selection problems due to attrition. The results suggest that the attrition bias is significant and that the average maturity of the debt is determined by firm characteristics such as size and leverage, among others, and the interest rate of the Mexican market. As a limitation, and due to the omissions of data in the information sources used for the analysis, a short and unbalanced panel is used. It is concluded that, by using this alternative maturity measurement method, better results are obtained for analyzing the maturity of the debt, compared to the traditional metrics in the literature
Abstract:
This work is concerned with a reaction-diffusion system that has been proposed as a model to describe acid-mediated cancer invasion. More precisely, we consider the properties of travelling waves that can be supported by such a system, and show that a rich variety of wave propagation dynamics, both fast and slow, is compatible with the model. In particular, asymptotic formulae for admissible wave profiles and bounds on their wave speeds are provided
Abstract:
A numerical study of model-based methods for derivative-free optimization is presented. These methods typically include a geometry phase whose goal is to ensure the adequacy of the interpolation set. The paper studies the performance of an algorithm that dispenses with the geometry phase altogether (and therefore does not attempt to control the position of the interpolation set). Data are presented describing the evolution of the condition number of the interpolation matrix and the accuracy of the gradient estimate. The experiments are performed on smooth unconstrained optimization problems with dimensions ranging between 2 and 15
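The link between the geometry of the interpolation set and the condition number of the interpolation matrix can be illustrated generically (this is not the paper's algorithm, and all sample points are invented). Here a linear model m(x) = c + g·(x − x0) is fit through n + 1 sample points.

```python
# How poor interpolation-set geometry inflates the condition number of the
# linear interpolation system, degrading the accuracy of the gradient
# estimate g.
import numpy as np

def linear_interp_matrix(points):
    """Rows [1, (x - x0)] of the linear interpolation system."""
    x0 = points[0]
    return np.hstack([np.ones((len(points), 1)), points - x0])

good = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # well poised
bad = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1e-6]])   # nearly collinear

kappa_good = np.linalg.cond(linear_interp_matrix(good))
kappa_bad = np.linalg.cond(linear_interp_matrix(bad))
# kappa_bad is many orders of magnitude larger than kappa_good
```

A geometry phase would replace one of the nearly collinear points to restore a well-poised set; the paper's experiments track what happens to these condition numbers when that phase is skipped.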
Abstract:
Hospitals are convenient settings for the deployment of context-aware applications. The information needs of hospital workers are highly dependent on contextual variables, such as location, role and activity. While some of these parameters can be easily determined, others, such as activity, are much more complex to estimate. This paper describes an approach to estimate the activity being performed by hospital workers. The approach is based on information gathered from a workplace study conducted in a hospital, in which 196 h of detailed observation of hospital workers was recorded. Contextual information, such as the location of hospital workers, artifacts being used, the people with whom they collaborate and the time of the day, is used to train a back-propagation neural network to estimate hospital workers' activities. The activities estimated include clinical case assessment, patient care, preparation, information management, coordination, and classes and certification. The results indicate that the user activity can be correctly estimated 75% of the time (on average), which is good enough for several applications. We discuss how these results can be used in the design of activity-aware applications, arguing that recent advances in pervasive and networking technologies hold great promise for the deployment of such applications
Abstract:
Hospitals are convenient settings for deployment of ubiquitous computing technology. Not only are they technology-rich environments, but their workers experience a high level of mobility resulting in information infrastructures with artifacts distributed throughout the premises. Hospital information systems (HISs) that provide access to electronic patient records are a step in the direction of providing accurate and timely information to hospital staff in support of adequate decision-making. This has motivated the introduction of mobile computing technology in hospitals based on designs which respond to their particular conditions and demands. Among those conditions is the fact that worker mobility does not exclude the need for having shared information artifacts at particular locations. In this paper, we extend a handheld-based mobile HIS with ubiquitous computing technology and describe how public displays are integrated with handhelds and with the services offered by these devices. Public displays become aware of the presence of physicians and nurses in their vicinity and adapt to provide users with personalized, relevant information. An agent-based architecture allows the integration of proactive components that offer information relevant to the case at hand, either from medical guidelines or previous similar cases
Abstract:
We consider the motion of a planar rigid body in a potential two-dimensional flow with a circulation and subject to a certain nonholonomic constraint. This model can be related to the design of underwater vehicles. The equations of motion admit a reduction to a 2-dimensional nonlinear system, which is integrated explicitly. We show that the reduced system comprises both asymptotic and periodic dynamics separated by a critical value of the energy, and give a complete classification of types of the motion. Then we describe the whole variety of the trajectories of the body on the plane
Abstract:
In the Black–Scholes–Merton model, as well as in more general stochastic models in finance, the price of an American option solves a parabolic variational inequality. When the variational inequality is discretized, one obtains a linear complementarity problem (LCP) that must be solved at each time step. This paper presents an algorithm for the solution of these types of LCPs that is significantly faster than the methods currently used in practice. The new algorithm is a two-phase method that combines the active-set identification properties of the projected successive over relaxation (SOR) iteration with the second-order acceleration of a (recursive) reduced-space phase. We show how to design the algorithm so that it exploits the structure of the LCPs arising in these financial applications and present numerical results that show the effectiveness of our approach
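The projected SOR iteration that forms the first phase of such methods is short enough to sketch. This is the classical PSOR on a toy LCP with the tridiagonal flavor of an implicit finite-difference step, not the paper's two-phase algorithm; matrix and data are invented for the example.

```python
# Projected SOR for the LCP:  z >= 0,  M z + q >= 0,  z^T (M z + q) = 0.
import numpy as np

def psor(M, q, omega=1.2, tol=1e-10, max_iter=10_000):
    n = len(q)
    z = np.zeros(n)
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            r = q[i] + M[i] @ z               # residual with latest values
            z[i] = max(0.0, z[i] - omega * r / M[i, i])   # project onto z >= 0
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z

# Tridiagonal, symmetric positive definite M (finite-difference flavor).
n = 5
M = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.array([1.0, -1.0, -2.0, 0.5, 1.0])
z = psor(M, q)

w = M @ z + q
assert np.all(z >= -1e-12) and np.all(w >= -1e-6)   # feasibility
assert abs(z @ w) < 1e-6                            # complementarity
```

In the American-option setting, z is the excess of the option value over the payoff; the indices where z stays at zero identify the exercise region, which is the active-set information the paper's second phase exploits.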
Abstract:
We develop a model of the politics of state capacity building undertaken by incumbent parties that have a comparative advantage in clientelism rather than in public goods provision. The model predicts that, when challenged by opponents, clientelistic incumbents have the incentive to prevent investments in state capacity. We provide empirical support for the model's implications by studying policy decisions by the Institutional Revolutionary Party that affected local state capacity across Mexican municipalities and over time. Our difference-in-differences and instrumental variable identification strategies exploit a national shock that threatened the Mexican government's hegemony in the early 1960s
Abstract:
We document how informal employment in Mexico is countercyclical, lags the cycle and is negatively correlated with formal employment. This contributes to explaining why total employment in Mexico displays low cyclicality and variability over the business cycle when compared to Canada, a developed economy with a much smaller share of informal employment. To account for these empirical findings, we build a business cycle model of a small, open economy that incorporates formal and informal labor markets and calibrate it to Mexico. The model performs well in terms of matching conditional and unconditional moments in the data. It also sheds light on the channels through which informal economic activity may affect business cycles. Introducing informal employment into a standard model amplifies the effects of productivity shocks. This is linked to productivity shocks being imperfectly propagated from the formal to the informal sector. It also shows how imperfect measurement of informal economic activity in national accounts can translate into stronger variability in aggregate economic activity
Abstract:
At the beginning of 2003, the debate that occurred within the United Nations Security Council about the Iraqi war was one of the few international events that drew so much attention in Mexican public opinion. This can be explained by the seriousness of the United States' violation of international law, but also by the difficult bilateral relations between Mexico, a non-permanent member of the Security Council, and the United States at the heart of the Iraqi crisis. Given the polarization of the debate between France and the United States at the United Nations, Mexico decided to side with France for three major reasons: the necessity to counterbalance American power, cultural affinities with France, and also an ingenuous attitude toward the specific interests defended by France
Abstract:
Fernández-Durán and Gregorio-Domínguez (2014) defined a family of probability distributions for a vector of circular random variables by considering multiple nonnegative trigonometric sums. These distributions are highly flexible and can present numerous modes and skewness. Several operations on these multivariate distributions were translated into operations on the vector of parameters; for instance, marginalization involves calculating the eigenvectors and eigenvalues of a matrix, and independence among subsets of the vector of circular variables translates to a Kronecker product of the corresponding subsets of the vector of parameters. Furthermore, it was demonstrated that the family of multivariate circular distributions based on nonnegative trigonometric sums is closed under marginalization and conditioning, that is, the marginal and conditional densities of any order are also members of the family. The derivation of marginal and conditional densities from the joint multivariate density is important when applying this model in practice to real datasets. A goodness-of-fit test based on the characteristic function and an alternative parameter estimation algorithm for high-dimensional circular data were presented and applied to a real dataset on the daily times of occurrence of maxima and minima of prices in financial markets
Abstract:
The parameter space of nonnegative trigonometric sums (NNTS) models for circular data is the surface of a hypersphere; thus, constructing regression models for a circular-dependent variable using NNTS models can be carried out by fitting great (small) circles on the parameter hypersphere, which can identify different regions (rotations) along the great (small) circle. We propose regression models for circular- (angular-) dependent random variables in which the original circular random variable, which is assumed to be distributed (marginally) as an NNTS model, is transformed into a linear random variable such that common methods for linear regression can be applied. The usefulness of NNTS models with skewness and multimodality is shown in examples with simulated and real data
Abstract:
The sum of independent circular uniformly distributed random variables is also circular uniformly distributed. In this study, it is shown that a family of circular distributions based on nonnegative trigonometric sums (NNTS) is also closed under summation. Given the flexibility of NNTS circular distributions to model multimodality and skewness, these are good candidates for use as alternative models for testing circular uniformity, detecting different deviations from the null hypothesis of circular uniformity. The circular uniform distribution is a member of the NNTS family, but in the NNTS parameter space, it corresponds to a point on the boundary of the parameter space, implying that the regularity conditions are not satisfied when the parameters are estimated by using the maximum likelihood method. Two NNTS tests for circular uniformity were developed by considering the standardised maximum likelihood estimator and the generalised likelihood ratio. Given the nonregularity condition, the critical values of the proposed NNTS circular uniformity tests were obtained via simulation and interpolated for any sample size by fitting regression models. The validity of the proposed NNTS circular uniformity tests was evaluated by generating NNTS models close to the circular uniformity null hypothesis
Abstract:
The probability integral transform of a continuous random variable X with distribution function F_X is a uniformly distributed random variable U = F_X(X). We define the angular probability integral transform (APIT) as Θ_U = 2πU = 2πF_X(X), which corresponds to a uniformly distributed angle on the unit circle. For circular (angular) random variables, the sum modulo 2π of absolutely continuous independent circular uniform random variables is a circular uniform random variable; that is, the circular uniform distribution is closed under summation modulo 2π, and it is a stable continuous distribution on the unit circle. If we consider the sum (difference) of the APITs of two random variables, X1 and X2, then testing for the circular uniformity of their sum (difference) modulo 2π is equivalent to testing the independence of the original variables. In this study, we used a flexible family of nonnegative trigonometric sums (NNTS) circular distributions, which includes the circular uniform distribution as a member, to evaluate the power of the proposed independence test by generating samples from NNTS alternative distributions that can be close to the circular uniform null distribution
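The transform itself is easy to illustrate. The sketch below applies the APIT to two independent standard normal variables (the normal choice is illustrative, not from the paper) and checks that the sum modulo 2π of the two resulting angles behaves like a circular uniform variable, as measured by a near-zero mean resultant length:

```python
import math
import random

def apit(x, mu=0.0, sigma=1.0):
    """Angular probability integral transform for a N(mu, sigma^2) variable:
    Theta = 2*pi*F_X(x), with F_X the normal CDF written via erf."""
    u = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return 2.0 * math.pi * u

random.seed(1)
n = 20000
# Independent inputs -> APIT angles -> sum modulo 2*pi should be circular uniform.
angles = [(apit(random.gauss(0, 1)) + apit(random.gauss(0, 1))) % (2 * math.pi)
          for _ in range(n)]
# Mean resultant length near 0 indicates circular uniformity.
c = sum(math.cos(t) for t in angles) / n
s = sum(math.sin(t) for t in angles) / n
r = math.hypot(c, s)
```

Under dependence between the original variables, the summed angles would concentrate and r would move away from zero, which is the basis of the independence test the abstract describes.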
Abstract:
Recent technological advances have enabled the easy collection of consumer behavior data in real time. Typically, these data contain the time at which a consumer engages in a particular activity such as entering a store, buying a product, or making a call. The occurrence time of certain events must be analyzed as circular random variables, with 24:00 corresponding to 0:00. To effectively implement a marketing strategy (pricing, promotion, or product design), consumers should be segmented into homogeneous groups. This paper proposes a methodology based on circular statistical models from which we construct a clustering algorithm based on the use patterns of consumers. In particular, we model temporal patterns as circular distributions based on nonnegative trigonometric sums (NNTSs). Consumers are clustered into homogeneous groups based on their vectors of parameter estimates by using a spherical k-means clustering algorithm. For this purpose, we define the parameter space of NNTS models as a hypersphere. The methodology is applied to three real datasets comprising the times at which individuals send short message service (SMS) messages and start voice calls, and the check-in times of users of the mobile application Foursquare
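The clustering step can be sketched in a few lines. This is a generic spherical k-means on toy unit vectors (the data and the simple deterministic initialization are illustrative; the paper clusters estimated NNTS parameter vectors on the hypersphere):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def spherical_kmeans(points, init, iters=20):
    """k-means on the unit hypersphere: assign by cosine similarity,
    update each centroid as the normalized mean of its members."""
    pts = [normalize(p) for p in points]
    cents = [normalize(c) for c in init]
    k = len(cents)
    labels = [0] * len(pts)
    for _ in range(iters):
        labels = [max(range(k),
                      key=lambda c: sum(a * b for a, b in zip(p, cents[c])))
                  for p in pts]
        for c in range(k):
            members = [p for p, l in zip(pts, labels) if l == c]
            if members:
                cents[c] = normalize([sum(xs) for xs in zip(*members)])
    return labels

# Two artificial groups of parameter vectors near (1,0,0) and (0,1,0).
pts = [[1, 0.1, 0], [0.9, 0, 0.1], [1, 0, 0.05],
       [0.1, 1, 0], [0, 0.9, 0.1], [0.05, 1, 0]]
labels = spherical_kmeans(pts, init=[pts[0], pts[3]])
```

Because NNTS parameter vectors live on a hypersphere, cosine similarity rather than Euclidean distance is the natural dissimilarity, which is why spherical k-means fits this setting.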
Abstract:
Fernández-Durán [Circular distributions based on nonnegative trigonometric sums. Biometrics. 2004;60:499–503] developed a new family of circular distributions based on non-negative trigonometric sums that is suitable for modelling data sets that present skewness and/or multimodality. In this paper, a Bayesian approach to deriving estimates of the unknown parameters of this family of distributions is presented. Because the parameter space is the surface of a hypersphere and the dimension of the hypersphere is an unknown parameter of the distribution, the Bayesian inference must be based on transdimensional Markov Chain Monte Carlo (MCMC) algorithms to obtain samples from the high-dimensional posterior distribution. The MCMC algorithm explores the parameter space by moving along great circles on the surface of the hypersphere. The methodology is illustrated with real and simulated data sets
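A single great-circle move of the kind the sampler uses can be sketched as follows. Given a current point on the unit hypersphere, a random tangent direction is drawn and the point is rotated along the corresponding great circle by a fixed angle; the step size, dimension, and loop are illustrative, and the transdimensional moves and the posterior evaluation of the actual MCMC algorithm are not shown:

```python
import math
import random

def great_circle_step(x, eps, rng):
    """Move from unit vector x along a random great circle by angle eps.
    The proposal stays exactly on the unit hypersphere."""
    d = [rng.gauss(0, 1) for _ in x]
    proj = sum(a * b for a, b in zip(d, x))
    t = [a - proj * b for a, b in zip(d, x)]      # component tangent to the sphere at x
    tn = math.sqrt(sum(a * a for a in t))
    t = [a / tn for a in t]
    return [a * math.cos(eps) + b * math.sin(eps) for a, b in zip(x, t)]

rng = random.Random(7)
x = [1.0, 0.0, 0.0, 0.0]
for _ in range(100):
    x = great_circle_step(x, 0.3, rng)
```

The virtue of this proposal is that it respects the geometry of the parameter space: no projection back onto the hypersphere is ever needed.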
Abstract:
The statistical analysis of circular, multivariate circular, and spherical data is very important in different areas, such as paleomagnetism, astronomy and biology. The use of nonnegative trigonometric sums allows for the construction of flexible probability models for these types of data to model datasets with skewness and multiple modes. The R package CircNNTSR includes functions to plot, fit by maximum likelihood, and simulate models based on nonnegative trigonometric sums for circular, multivariate circular, and spherical data. For maximum likelihood estimation of the models for the three different types of data an efficient Newton-like algorithm on a hypersphere is used. Examples of applications of the functions provided in the CircNNTSR package to actual and simulated datasets are presented and it is shown how the package can be used to test for uniformity, homogeneity, and independence using likelihood ratio tests
Resumen:
El objetivo de este artículo es probar la hipótesis de la curva U invertida de Kuznets en la relación entre el consumo de agua per cápita para uso agrícola y ganadero, que representa en promedio 75% del consumo total, y el PIB per cápita para los municipios en la cuenca Lerma-Chapala. En el contexto de cambio climático, la relación entre consumo de agua y PIB es muy importante, pues la variabilidad en la disponibilidad del agua ha aumentado, forzando a los usuarios y gobiernos a considerar estrategias para su uso eficiente, en donde se incluyan los posibles impactos económicos y ambientales. Al llevar a cabo el análisis a nivel de una cuenca hidrográfica es necesario considerar los efectos espaciales entre municipios vecinos a través de la aplicación de modelos autorregresivos espaciales. Al incluir errores correlacionados espacialmente en los modelos de regresión, no se rechaza la hipótesis de la curva U invertida de Kuznets. Por tanto, cualquier estrategia de mitigación del cambio climático relacionada con el uso eficiente del agua debe ser evaluada en sus costos y beneficios en el PIB municipal en relación con la curva de Kuznets estimada en este artículo
Abstract:
The main objective of this article is to test the Kuznets inverted U-curve hypothesis in the relationship between per capita water consumption for agricultural and livestock use, which represents on average 75% of total consumption, and per capita GDP in the municipalities of the Lerma-Chapala hydrographic basin. In the context of climate change, it is essential to understand the relationship between water consumption and GDP: the increased variability in the availability of water has forced governments and users to implement strategies for the efficient use of water resources, and thus they must consider not only likely environmental problems but also economic impacts. Using data at the municipal level in a hydrographic basin, we consider the spatial effects among the different municipalities; these effects are modeled using spatial autoregressive models. The Kuznets inverted U-curve hypothesis is not rejected when allowing for spatially correlated errors. Thus, any strategy for mitigating climate change by making efficient use of water resources must be evaluated in terms of its costs and benefits for the GDP of the municipality in relation to the fitted Kuznets curve presented in this article
Abstract:
The generational cohort theory states that groups of individuals who experienced the same social, economic, political, and cultural events during early adulthood (17–23 years) would share similar values throughout their lives. Moreover, they would act similarly when making decisions in different aspects of life, particularly when making decisions as consumers. Thus, these groups define market segments, which is relevant in the design of marketing strategies. In Mexico, marketing researchers commonly use U.S. generational cohorts to define market segments, despite sufficient evidence that the generational cohorts in the two countries are not identical because of differences in national historic events. This paper proposes a methodology based on change-point analysis and ordinal logistic regressions to obtain a new classification of generational cohorts for Mexican urban consumers, using data from a 2010 nationwide survey on the values of individuals across age groups
Abstract:
Fernández-Durán and Gregorio-Domínguez, Seasonal Mortality for Fractional Ages in Life Insurance, Scandinavian Actuarial Journal. A uniform distribution of deaths between integral ages is a widely used assumption for estimating future lifetimes; however, this assumption does not necessarily reflect the true distribution of deaths throughout the year. We propose the use of a seasonal mortality assumption for estimating the distribution of future lifetimes between integral ages: this assumption accounts for the number of deaths that occur in given months of the year, including the excess mortality that is observed in winter months. The impact of this seasonal mortality assumption on short-term life insurance premium calculations is then examined by applying the proposed assumption to Mexican mortality data
Abstract:
A family of distributions for a random pair of angles that determine a point on the surface of a three-dimensional unit sphere (three-dimensional directions) is proposed. It is based on the use of nonnegative double trigonometric (Fourier) sums (series). Using this family of distributions, data that possess rotational symmetry, asymmetry or one or more modes can be modeled. In addition, the joint trigonometric moments are expressed in terms of the model parameters. An efficient Newton-like optimization algorithm on manifolds is developed to obtain the maximum likelihood estimates of the parameters. The proposed family is applied to two real data sets studied previously in the literature. The first data set is related to the measurements of magnetic remanence in samples of Precambrian volcanics in Australia and the second to the arrival directions of low mu showers of cosmic rays
Abstract:
Fernández-Durán, J. J. (2004): “Circular distributions based on nonnegative trigonometric sums,” Biometrics, 60, 499–503, developed a family of univariate circular distributions based on nonnegative trigonometric sums. In this work, we extend this family of distributions to the multivariate case by using multiple nonnegative trigonometric sums to model the joint distribution of a vector of angular random variables. Practical examples of vectors of angular random variables include the wind direction at different monitoring stations, the directions taken by an animal on different occasions, the times at which a person performs different daily activities, and the dihedral angles of a protein molecule. We apply the proposed new family of multivariate distributions to three real data-sets: two for the study of protein structure and one for genomics. The first is related to the study of a bivariate vector of dihedral angles in proteins. In the second real data-set, we compare the fit of the proposed multivariate model with the bivariate generalized von Mises model of [Shieh, G. S., S. Zheng, R. A. Johnson, Y.-F. Chang, K. Shimizu, C.-C. Wang, and S.-L. Tang (2011): “Modeling and comparing the organization of circular genomes,” Bioinformatics, 27(7), 912–918.] in a problem related to orthologous genes in pairs of circular genomes. The third real data-set consists of observed values of three dihedral angles in γ-turns in a protein and serves as an example of trivariate angular data. In addition, a simulation algorithm is presented to generate realizations from the proposed multivariate angular distribution
Abstract:
The Bass Forecasting Diffusion Model is one of the most used models to forecast the sales of a new product. It is based on the idea that the probability of an initial sale is a function of the number of previous buyers. Almost all products exhibit seasonality in their sales patterns, and these seasonal effects can be influential in forecasting the weekly/monthly/quarterly sales of a new product, which can also be relevant to making different decisions concerning production and advertising. The objective of this paper is to estimate these seasonal effects using a new family of distributions for circular random variables based on nonnegative trigonometric sums and to use this family of circular distributions to define a seasonal Bass model. Additionally, comparisons in terms of one-step-ahead forecasts between the Bass model and the proposed seasonal Bass model for products such as iPods, DVD players, and the Wii Play video game are included
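The base model being extended has a well-known closed form: the fraction of eventual adopters by time t is F(t) = (1 − e^{−(p+q)t}) / (1 + (q/p)e^{−(p+q)t}), where p is the coefficient of innovation and q the coefficient of imitation. The sketch below evaluates only this standard Bass curve with illustrative coefficients; the paper's seasonal version, which modulates adoption with an NNTS circular density, is not reproduced here:

```python
import math

def bass_cdf(t, p, q):
    """Standard Bass model: fraction of eventual adopters by time t."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

p, q = 0.03, 0.38  # illustrative innovation/imitation coefficients
adoption = [bass_cdf(t, p, q) for t in range(0, 21)]
```

Cumulative sales are then m·F(t) for a market potential m, and it is this smooth curve that the seasonal model reshapes within each year.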
Abstract:
In medical and epidemiological studies, the importance of detecting seasonal patterns in the occurrence of diseases makes testing for seasonality highly relevant. There are different parametric and nonparametric tests for seasonality. One of the most widely used parametric tests in the medical literature is the Edwards test. The Edwards test considers a parametric alternative that is a sinusoidal curve with one peak and one trough. The Cave and Freedman test is an extension of the Edwards test that is also frequently applied and considers a sinusoidal curve with two peaks and two troughs as the alternative hypothesis. Common nonparametric tests include the Kuiper, Hewitt, and David and Newell tests. Fernández-Durán (2004) developed a family of univariate circular distributions based on non-negative trigonometric (Fourier) sums (series) (NNTS) that can account for an arbitrary number of peaks and troughs. In this article, this family of distributions is used to construct a likelihood ratio test for seasonality considering parametric alternative hypotheses that are NNTS distributions
Resumen:
El capital social se puede definir como la capacidad de personas o grupos de obtener beneficios por medio del uso de redes sociales (Robinson, Siles y Schmid, y Flores y Rello, 2003). En el presente artículo se utilizan los datos de la segunda ronda de entrevistas del Panel de Hogares de Escasos Recursos en la delegación Álvaro Obregón, México, Distrito Federal (PAO), para ajustar modelos de regresión con variables instrumentales con el objetivo de identificar variables asociadas al capital social en redes sociales que sean significativas para explicar el porcentaje del ingreso que los hogares de PAO suelen gastar en comida. Además, se presenta un análisis factorial para identificar las variables medidas en PAO que están relacionadas con las dimensiones de confianza, redes sociales y aceptación de normas sociales que constituyen el concepto de capital social
Abstract:
Social capital can be defined as the capacity of individuals or groups to obtain benefits by participating in social networks (Robinson, Siles and Schmid, and Flores and Rello, in Atria and Siles, eds., 2003). In this paper, we use data from the second round of the Panel of Low Income Households in the Delegación Álvaro Obregón, México, D.F. (PAO) to fit regression models with instrumental variables in order to identify significant proxy variables related to social capital in social networks that explain the percentage of income that households in PAO spend on food. We also present the results of a factor analysis to identify the variables in PAO that are related to the dimensions of trust, social networks and accepted norms that are the main elements of the definition of social capital
Abstract:
In Fernández-Durán (2004), a new family of circular distributions based on nonnegative trigonometric sums (NNTS models) is developed. Because the parameter space of this family is the surface of a hypersphere, an efficient Newton-like algorithm on manifolds is developed in order to obtain the maximum likelihood estimates of the parameters
Abstract:
Johnson and Wehrly (1978, Journal of the American Statistical Association 73, 602-606) and Wehrly and Johnson (1980, Biometrika 67, 255-256) show one way to construct the joint distribution of a circular and a linear random variable, or the joint distribution of a pair of circular random variables, from their marginal distributions and the density of a circular random variable, which in this article is referred to as the joining circular density. To construct flexible models, it is necessary that the joining circular density be able to present multimodality and/or skewness in order to model different dependence patterns. Fernández-Durán (2004, Biometrics 60, 499-503) constructed circular distributions based on nonnegative trigonometric sums that can present multimodality and/or skewness. Furthermore, they can be conveniently used as models for circular-linear or circular-circular joint distributions. In the current work, joint distributions for circular-linear and circular-circular data constructed from circular distributions based on nonnegative trigonometric sums are presented and applied to two data sets, one for circular-linear data related to the air pollution patterns in Mexico City and the other for circular-circular data related to the pairs of dihedral angles between consecutive amino acids in a protein
Abstract:
In many practical situations it is common to have information about the values of the mean and range of a certain population. In these cases it is important to specify the population distribution from the given values of its mean and range. The article by de Alba, Fernández-Durán and Gregorio-Domínguez (2004) includes a bibliography of articles dealing with the case of making inferences about the mean and standard deviation of a normal population in terms of the observed sample mean and range. In this paper, the maximum entropy principle is used to specify the population distribution given its mean and range. This problem has the particular difficulty that it is necessary to determine the unknown support of the maximum entropy distribution given its length, which is equal to the specified value of the range
Resumen:
México es un país donde ocurren distintos fenómenos naturales, como inundaciones, huracanes y terremotos, que pueden convertirse en desastres que requieren grandes sumas de dinero para mitigar su efecto económico en la población afectada. Generalmente, estas grandes sumas de dinero suelen ser aportadas por el gobierno federal y/o local. El objetivo del presente trabajo es el desarrollo de una metodología actuarial para el cálculo de bonos catastróficos para desastres naturales en México, de manera que, en caso de que el evento catastrófico ocurra durante la vigencia del bono, el gobierno cuente con fondos adicionales y, en caso de que no ocurra, los inversionistas que hayan comprado el bono obtengan tasas de interés superiores a la tasa libre de riesgo del mercado. Los bonos tienen la particularidad, a diferencia de bonos similares emitidos en otros países, que en caso de que ocurra el evento catastrófico el inversionista no pierde el total de su inversión sino sólo una parte o se le difiere su capital total o parte de éste a una fecha posterior a la de la finalización del contrato del bono catastrófico, esto con el fin de hacerlos más atractivos para el inversionista
Abstract:
Floods, hurricanes and earthquakes occur every year in Mexico. These natural phenomena can be considered catastrophes if they produce large economic damages in the affected areas. In these cases a huge amount of money is required to provide relief to the catastrophe victims and areas. Usually, in Mexico it is the local and/or federal governments that are responsible for providing these funds. The main objective of this article is to develop an actuarial methodology for the pricing of CAT bonds in Mexico in order to allow the government to have additional funds to provide relief to the affected victims and areas in case the catastrophic event occurs during the CAT bond period. If the catastrophic event does not occur during the CAT bond period, then the CAT bondholders get a higher interest rate than the (risk-free) reference interest rate in the market. To make the CAT bond more attractive to investors, the CAT bonds considered in this work have the additional characteristic that the CAT bondholders do not necessarily lose all their initial investment if the catastrophic event occurs. Instead, a percentage of the CAT bond principal is lost, or their initial investment is paid on a date after the end of the CAT bond period
Abstract:
A new family of distributions for circular random variables is proposed. It is based on nonnegative trigonometric sums and can be used to model data sets that present skewness and/or multimodality. In this family of distributions, the trigonometric moments are easily expressed in terms of the parameters of the distribution. The proposed family is applied to two data sets, one related to the directions taken by ants and the other to the directions taken by turtles, to compare its goodness of fit with that of common distributions used in the literature
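The density underlying this family can be sketched directly. Following the published NNTS construction, the density is f(θ) = (1/2π)|Σ_k c_k e^{ikθ}|² with complex coefficients normalized so that Σ_k |c_k|² = 1, which makes the density integrate to one; the particular coefficient values below are illustrative:

```python
import cmath
import math

def nnts_density(theta, c):
    """NNTS density f(theta) = (1/(2*pi)) |sum_k c_k e^{i k theta}|^2,
    with the coefficients satisfying sum_k |c_k|^2 = 1."""
    s = sum(ck * cmath.exp(1j * k * theta) for k, ck in enumerate(c))
    return abs(s) ** 2 / (2 * math.pi)

# Project an arbitrary complex coefficient vector onto the parameter hypersphere.
raw = [1.0 + 0.0j, 0.6 - 0.3j, 0.2 + 0.4j]   # illustrative coefficients
norm = math.sqrt(sum(abs(ck) ** 2 for ck in raw))
c = [ck / norm for ck in raw]

# Riemann-sum check that the density integrates to 1 over [0, 2*pi).
m = 1024
total = sum(nnts_density(2 * math.pi * i / m, c) for i in range(m)) * (2 * math.pi / m)
```

Being a squared trigonometric polynomial, the density is automatically nonnegative, and adding terms (larger k) buys extra modes and skewness, which is what makes the family flexible.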
Abstract:
Many random variables occurring in nature are circular random variables, i.e., their probability density functions have period 2π and their support is the unit circle. The support of a linear random variable is a subset of the real line. When one is interested in the relation between a circular random variable and a linear random variable, it is necessary to construct their joint distribution. The support of the joint distribution of a circular and a linear random variable is a cylinder. In this paper, we use copulas and circular distributions based on non-negative trigonometric sums to construct the joint distribution of a circular and a linear random variable. As an application of the proposed methodology, the hourly quantile curves of ground-level ozone concentration for a monitoring station in Mexico City are analyzed. In this case the circular random variable has a uniform distribution if we have the same number of observations in each hour of the day, and the linear random variable is the ground-level ozone concentration
Abstract:
Consider a portfolio of personal motor insurance policies in which, for each policyholder in the portfolio, we want to assign a credibility factor at the end of each policy period that reflects the claim experience of the policyholder compared with the claim experience of the entire portfolio. In this paper we present the calculation of credibility factors based on the concept of relative entropy between the claim size distribution of the entire portfolio and the claim size distribution of the policyholder
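The divergence ingredient of this construction is standard and easy to sketch. The code below computes the relative entropy (Kullback-Leibler divergence) between two discrete claim-size distributions; the bucket probabilities are illustrative, and the paper's specific mapping from this divergence to a credibility factor is not reproduced here:

```python
import math

def relative_entropy(p, q):
    """KL divergence D(P||Q) between discrete claim-size distributions,
    with the convention 0 * log(0/q) = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

portfolio = [0.70, 0.20, 0.10]     # illustrative claim-size buckets
policyholder = [0.50, 0.30, 0.20]  # one policyholder's experience
d = relative_entropy(policyholder, portfolio)
```

The divergence is zero exactly when the policyholder's claim experience matches the portfolio's and grows as the two distributions diverge, which is the property a credibility factor needs to reflect.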
Abstract:
By using data of the elections for the Chamber of Deputies of 1997 and 2000 in Mexico, we fit spatial autologistic models with temporal effects to test the significance of spatial and temporal effects on those elections. The binary variable of interest is the one that indicates a win of the National Action Party (PAN) or the alliance that it formed. By spatial effect, we refer to the fact that neighbouring constituencies present dependence in their electoral results. The temporal effect refers to the existence of dependence, for the same constituency, of the result of the election with the result of the previous election. The model that we used to test the significance of spatial and temporal effects is the spatial autologistic model with temporal effects, for which estimation is complex and requires simulation techniques. By defining an urban constituency as one that contains at least one population center of 200,000 inhabitants or more, among our principal results, we find that, for the Mexican election of 2000, the spatial effect is significant only when neighbouring constituencies are both urban. For the election of 1997, the spatial effect is significant independent of the type of neighbouring constituencies. The temporal effect is significant in both elections
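The conditional structure of such a model can be sketched in one function. In an autologistic model with a temporal term, the win probability of a constituency given the rest is a logistic function of a spatial neighbor-sum term and the constituency's previous result; the coefficients and the exact form of the linear predictor below are illustrative, not the paper's fitted model:

```python
import math

def autologistic_prob(alpha, beta, gamma, neighbor_wins, prev_win):
    """Conditional win probability for one constituency given its neighbours'
    current outcomes (spatial term) and its own previous result (temporal term)."""
    eta = alpha + beta * neighbor_wins + gamma * prev_win
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative coefficients: positive spatial and temporal dependence.
p_isolated = autologistic_prob(-1.0, 0.4, 0.8, neighbor_wins=0, prev_win=0)
p_supported = autologistic_prob(-1.0, 0.4, 0.8, neighbor_wins=3, prev_win=1)
```

Because these conditionals do not factor into a tractable joint likelihood, estimation relies on simulation techniques, as the abstract notes.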
Resumen:
Cuando se otorga un crédito a plazo fijo a una persona para comprar un bien de consumo duradero existe la posibilidad de que ésta no pueda hacer frente a los pagos del crédito debido a que pierda su empleo de manera involuntaria. Esta situación representa un problema tanto para el deudor como para el acreedor. El acreedor puede incurrir en una pérdida operativa y el deudor ser despojado del bien. En este trabajo se establece una metodología para el cálculo de la prima de un seguro de desempleo cuyos beneficios, en caso de desempleo involuntario del deudor, son el pago de un número máximo (predeterminado en el contrato del seguro) de las mensualidades de su crédito durante el periodo de desempleo. El seguro propuesto tiene una vigencia igual a la del crédito y sólo el primer desempleo durante la vigencia del crédito es considerado para el pago de beneficios. El costo del seguro se obtiene estimando las tasas de transición de una cadena de Markov en tiempo continuo con dos estados (empleado y desempleado). Estas tasas de transición se modelan como funciones de covariables, como el género, el estado civil, la edad y la escolaridad. Para la estimación de las tasas de transición se utilizan bases de datos de la Encuesta Nacional de Empleo Urbano (ENEU) (INEGI, 1998). Al considerar todas las posibles trayectorias de empleo-desempleo en la duración del crédito, así como las probabilidades de cada una de ellas, obtenemos la distribución de probabilidades de la cantidad mensual requerida por el seguro definida como la cantidad mensual que el asegurado debería pagar para cubrir las mensualidades de su crédito durante el primer periodo de desempleo si ocurriese la trayectoria de empleo-desempleo considerada. A partir de esta distribución de probabilidades es posible calcular distintas medidas, como la desviación estándar o cuantificar el riesgo del contrato del seguro
Abstract:
When a person makes an installment purchase of a durable good, it is possible that he/she will be unable to make the periodic payments because he/she loses his/her employment involuntarily. This is an issue both for the borrower and for the creditor. The borrower can be deprived of the good and the creditor can incur an operational loss. In this paper we develop a methodology to calculate the net premium of an insurance against involuntary unemployment for installment purchases of durable goods. The benefits of this insurance are a predetermined number of installments during the unemployment spell. The insurance has the same starting date and duration as the installment purchase, and only the first involuntary unemployment spell is considered for benefits. The cost of the insurance is obtained by using a two-state (employed-unemployed) continuous-time Markov chain. The transition rates of the chain are modelled as functions of the covariates gender, marital status, age and educational level. By using these covariates it is possible to identify different risk groups. The transition rates are estimated by using data from the National Urban Employment Survey (Encuesta Nacional de Empleo Urbano, ENEU; INEGI, 1998) in Mexico. By considering all the possible employed-unemployed trajectories during the installment purchase period and the probability of each of these trajectories, it is possible to obtain the probability distribution of the required amount for the insurance, which is defined as the monthly amount that the borrower would have to pay in order to cover the payments of the credit during the first unemployment spell if the considered trajectory occurs. From this probability distribution it is possible to calculate different measures, such as the standard deviation or quantiles, to set safety limits on the cost of the insurance and to give insight into the riskiness of the insurance contract
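The two-state chain at the core of this methodology has closed-form transition probabilities. With rate lam for employed to unemployed and mu for unemployed to employed, the probability of remaining employed over a horizon t is mu/(lam+mu) + (lam/(lam+mu))e^{-(lam+mu)t}; the numeric rates below are illustrative, not estimates from the ENEU data:

```python
import math

def two_state_probs(lam, mu, t):
    """Transition probability matrix of a two-state continuous-time Markov chain
    (state 0 = employed, state 1 = unemployed), rates lam: 0->1 and mu: 1->0."""
    s = lam + mu
    e = math.exp(-s * t)
    p00 = mu / s + (lam / s) * e   # employed stays employed over horizon t
    p11 = lam / s + (mu / s) * e   # unemployed stays unemployed over horizon t
    return [[p00, 1.0 - p00], [1.0 - p11, p11]]

P = two_state_probs(0.02, 0.30, 12.0)  # hypothetical monthly rates, one-year horizon
```

Chaining such matrices over the months of the credit gives the probability of each employed-unemployed trajectory, which is exactly the ingredient the premium calculation aggregates.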
Abstract:
A procedure based on the combination of a Bayesian changepoint model and ordinary least squares is used to identify and quantify regions where a radar signal has been attenuated (i.e. diminished) as a consequence of intervening weather. A graphical polar display is introduced that illustrates the location and importance of the attenuation
Abstract:
Democracy does not generate the same expectations among Mexicans. The conception they have of it varies according to their belief systems, in which their level of information defines the terms in which Mexicans think of democracy as a form of government. This essay seeks to explain the differences in the conceptions of democracy across the country's regions. These differences cast doubt on arguments claiming that there is a democratic deficit in Mexicans' values. Their conception of democracy is a reflection of the environment in which the individual lives and, to a large extent, a mirror of the characteristics of the region they inhabit
Abstract:
We study a general scenario where confidential information is distributed among a group of agents who wish to share it in such a way that the data becomes common knowledge among them, but an eavesdropper intercepting their communications would be unable to obtain any of said data. The information is modeled as a deck of cards dealt among the agents, so that after the information is exchanged, all of the communicating agents must know the entire deal, but the eavesdropper must remain ignorant about who holds each card. This scenario was previously set up in Fernández-Duque and Goranko (2014) as the secure aggregation of distributed information problem and provided with weakly safe protocols, where given any card c, the eavesdropper does not know with certainty which agent holds c. Here we present a perfectly safe protocol, which does not alter the eavesdropper's perceived probability that any given agent holds c. In our protocol, one of the communicating agents holds a larger portion of the cards than the rest, but we show how, for infinitely many values of a, the number of cards may be chosen so that each of the m agents holds more than a cards and fewer than 4m²a
Abstract:
We consider the generic problem of Secure Aggregation of Distributed Information (SADI), where several agents acting as a team have information distributed amongst them, modelled by means of a publicly known deck of cards distributed amongst the agents, so that each of them knows only her cards. The agents have to exchange and aggregate the information about how the cards are distributed amongst them by means of public announcements over insecure communication channels, intercepted by an adversary "eavesdropper", in such a way that the adversary does not learn who holds any of the cards. We present a combinatorial construction of protocols that provides a direct solution of a class of SADI problems, and develop a technique of iterated reduction of SADI problems to smaller ones which are eventually solvable directly. We show that our methods provide a solution to a large class of SADI problems, including all SADI problems with sufficiently large size and sufficiently balanced card distributions
Abstract:
This article uses possible-world semantics to model the changes that may occur in an agent's knowledge as she loses information. This builds on previous work in which the agent may forget the truth-value of an atomic proposition, to a more general case where she may forget the truth-value of a propositional formula. The generalization poses some challenges, since in order to forget whether a complex proposition π is the case, the agent must also lose information about the propositional atoms that appear in it, and there is no unambiguous way to go about this. We resolve this situation by considering expressions of the form [‡π]ϕ, which quantify over all possible (but 'minimal') ways of forgetting whether π. Propositional atoms are modified non-deterministically, although uniformly, in all possible worlds. We then represent this within action model logic in order to give a sound and complete axiomatization for a logic with knowledge and forgetting. Finally, some variants are discussed, such as when an agent forgets π (rather than forgets whether π) and when the modification of atomic facts is done non-uniformly throughout the model
Abstract:
Dynamic topological logic (DTL) is a polymodal logic designed for reasoning about dynamic topological systems. These are pairs (X,f), where X is a topological space and f : X → X is continuous. DTL uses a language L which combines the topological S4 modality with temporal operators from linear temporal logic. Recently, we gave a sound and complete axiomatization DTL* for an extension of the logic to the language L*, where the S4 modality is allowed to act on finite sets of formulas and is interpreted as a tangled closure operator. No complete axiomatization is known in the language L, although one proof system, which we shall call KM, was conjectured to be complete by Kremer and Mints. In this article, we show that given any language L' such that L ⊆ L' ⊆ L*, the set of valid formulas of L' is not finitely axiomatizable. It follows, in particular, that KM is incomplete
Abstract:
Provability logics are modal or polymodal systems designed for modeling the behavior of Gödel's provability predicate and its natural extensions. If Λ is any ordinal, the Gödel-Löb calculus GLPΛ contains one modality [λ] for each λ < Λ, representing provability predicates of increasing strength. GLPω has no non-trivial Kripke frames, but it is sound and complete for its topological semantics, as was shown by Icard for the variable-free fragment and more recently by Beklemishev and Gabelaia for the full logic. In this paper we generalize Beklemishev and Gabelaia's result to GLPΛ for countable Λ. We also introduce provability ambiances, which are topological models where valuations of formulas are restricted. With this we show completeness of GLPΛ for the class of provability ambiances based on Icard polytopologies
Abstract:
This article studies the transfinite propositional provability logics GLPΛ and their corresponding algebras. These logics have for each ordinal ξ < Λ a modality 〈ξ〉. We will focus on the closed fragment of GLPΛ (i.e. where no propositional variables occur) and worms therein. Worms are iterated consistency expressions of the form 〈ξn〉…〈ξ1〉⊤. Beklemishev has defined well-orderings <ξ on worms whose modalities are all at least ξ and presented a calculus to compute the respective order-types. In the current article, we present a generalization of the original <ξ orderings and provide a calculus for the corresponding generalized order-types oξ. Our calculus is based on so-called hyperations, which are transfinite iterations of normal functions. Finally, we give two different characterizations of those sequences of ordinals which are of the form 〈oξ(A)〉ξ∈On for some worm A. One of these characterizations is in terms of a second kind of transfinite iteration called cohyperation
Abstract:
Ordinal functions may be iterated transfinitely in a natural way by taking pointwise limits at limit stages. However, this has disadvantages, especially when working in the class of normal functions, as pointwise limits do not preserve normality. To this end we present an alternative method to assign to each normal function f a family of normal functions Hyp[f] = (f^ξ)ξ∈On, called its hyperation, in such a way that f^0 = id, f^1 = f and f^(α+β) = f^α ∘ f^β for all α, β. Hyperations are a refinement of the Veblen hierarchy of f. Moreover, if f is normal and has a well-behaved left-inverse g called a left adjoint, then g can be assigned a cohyperation coH[g] = (g^ξ)ξ∈On, which is a family of initial functions such that g^ξ is a left adjoint to f^ξ for all ξ
Abstract:
For any ordinal Λ, we can define a polymodal logic GLP(Λ), with a modality [ξ] for each ξ < Λ. These represent provability predicates of increasing strength. Although GLP(Λ) has no Kripke models, Ignatiev showed that indeed one can construct a Kripke model of the variable-free fragment with natural number modalities, denoted GLP(ω)(0). Later, Icard defined a topological model for GLP(ω)(0) which is very closely related to Ignatiev's. In this paper we show how to extend these constructions for arbitrary Λ. More generally, for each Θ, Λ we build a Kripke model J(Λ)(Θ) and a corresponding topological model, and show that GLP(Λ)(Θ) is sound for both of these structures, as well as complete, provided Θ is large enough
Abstract:
We show that given a finite, transitive and reflexive Kripke model 〈 W, ≼, [ ⋅ ] 〉 and w∈W , the property of being simulated by w (i.e., lying on the image of a literal preserving relation satisfying the ‘forth’ condition of bisimulation) is modally undefinable within the class of S4 Kripke models. Note the contrast to the fact that lying in the image of w under a bisimulation is definable in the standard modal language even over the class of K4 models, a fairly standard result for which we also provide a proof. We then propose a minor extension of the language adding a sequent operator ♮ (‘tangle’) which can be interpreted over Kripke models as well as over topological spaces. Over finite Kripke models it indicates the existence of clusters satisfying a specified set of formulas, very similar to an operator introduced by Dawar and Otto. In the extended language L+=L□♮ , being simulated by a point on a finite transitive Kripke model becomes definable, both over the class of (arbitrary) Kripke models and over the class of topological S4 models. As a consequence of this we obtain the result that any class of finite, transitive models over finitely many propositional variables which is closed under simulability is also definable in L +, as well as Boolean combinations of these classes. From this it follows that the μ-calculus interpreted over any such class of models is decidable
Abstract:
Young citizens vote at relatively low rates, which contributes to political parties de-prioritizing youth preferences. We analyze the effects of low-cost online interventions in encouraging young Moroccans to cast an informed vote in the 2021 elections. These interventions aim to reduce participation costs by providing information about the registration process and by highlighting the election's stakes and the distance between respondents' preferences and party platforms. Contrary to preregistered expectations, the interventions did not increase average turnout, yet exploratory analysis shows that the interventions designed to increase benefits did increase the turnout intention of uncertain baseline voters. Moreover, information about parties' platforms increased support for the party closest to the respondents' preferences, leading to better-informed voting. Results are consistent with motivated reasoning, which is surprising in a context with weak party institutionalization
Abstract:
In this paper, we introduce a novel integral transform termed the two-sided Quaternion Hyperbolic Windowed Fourier Transform (ts-QHWFT), designed for the analysis of two-dimensional quaternion-valued signals defined in an open rectangle of the plane equipped with a hyperbolic measure. We investigate the basic properties of the ts-QHWFT, includi
