Faculty publications

Abstract:

In recent years, much attention has been paid to the role of classical special functions of a real or complex variable in mathematical physics, especially in boundary value problems (BVPs). In the present paper, we propose a higher-dimensional analogue of the generalized Bessel polynomials within Clifford analysis via a special set of monogenic polynomials. We give the definition and derive a number of important properties of the generalized monogenic Bessel polynomials (GMBPs), which are defined by a generating exponential function and are shown to satisfy an analogue of Rodrigues' formula. As a consequence, we establish an expansion of particular monogenic functions in terms of GMBPs and show that the underlying basic series is everywhere effective. We further prove a second-order homogeneous differential equation for these polynomials.

Abstract:

We consider twisted graphs, that is, topological graphs that are weakly isomorphic to subgraphs of the complete twisted graph T_n. We determine the exact minimum number of crossings of edges among the set of twisted graphs with n vertices and m edges; state a version of the crossing lemma for twisted graphs and conclude that the mid-range crossing constant for twisted graphs is 1/6. Let e_k(n) be the maximum number of edges over all twisted graphs with n vertices and local crossing number at most k. We give lower and upper bounds for e_k(n) and settle its exact value for k ∈ {0, 1, 2, 3, 6, 10}. We conjecture that for every t ≥ 1, e_{t(t−1)/2}(n) = (t+1)n − (t+2)(t+1)/2 for n ≥ t + 1.

Abstract:

This study reviews the literature on diversity published over the last 50 years. Research findings from this period reveal that it is impossible to assume a pure and simple relationship between diversity and performance without considering a series of variables that affect this relationship. In this study, emphasis has been placed on the analysis of results arrived at through empirical investigation of the relation between the most studied dimensions of diversity and performance. The results presented are part of a more extensive research project.

Abstract:

In order to analyze the influence of substituent groups, both electron-donating and electron-attracting, and of the number of π-electrons on the corrosion-inhibiting properties of organic molecules, a theoretical quantum chemical study in vacuo and in the presence of water, using the Polarizable Continuum Model (PCM), was carried out for four different molecules bearing a similar chemical framework: 2-mercaptoimidazole (2MI), 2-mercaptobenzimidazole (2MBI), 2-mercapto-5-methylbenzimidazole (2M5MBI), and 2-mercapto-5-nitrobenzimidazole (2M5NBI). From an electrochemical study conducted previously in our group (R. Álvarez-Bustamante, G. Negrón-Silva, M. Abreu-Quijano, H. Herrera-Hernández, M. Romero-Romo, A. Cuán, M. Palomar-Pardavé, Electrochim. Acta, 54 (2009) 539), it was found that the corrosion inhibition efficiency, IE, order followed by the molecules tested was 2MI > 2MBI > 2M5MBI > 2M5NBI; thus, 2MI turned out to be the best inhibitor. This fact strongly suggests that, contrary to a hitherto generally held notion, an efficient corrosion-inhibiting molecule need neither be large nor possess an extensive number of π-electrons. In this theoretical study, a correlation was found between EHOMO, hardness (η), electron charge transfer (ΔN), electrophilicity (W), back-donation (ΔEBack-donation) and the inhibition efficiency, IE. The EHOMO values and the standard Gibbs free energies estimated for all the molecules (based on the calculated equilibrium constants) were negative, indicating that the complete chemical processes in which the inhibitors are involved occur spontaneously.

Abstract:

The run sum chart is an effective two-sided chart that can be used to monitor for process changes. It is known to be more sensitive than the Shewhart chart with runs rules, and its performance improves as the number of regions increases. However, as the number of regions increases, the resulting chart has more parameters to be defined and its design becomes more involved. In this article, we introduce a one-parameter run sum chart. This chart accumulates scores equal to the subgroup means and signals when the cumulative sum exceeds a limit value. A fast initial response feature is proposed and its run length distribution function is found by a set of recursive relations. We compare this chart with other charts suggested in the literature and find it competitive with the CUSUM, the FIR CUSUM, and the combined Shewhart FIR CUSUM schemes.
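
A minimal sketch of the accumulate-and-signal logic just described, in Python; the standardization, the restart convention (the sum restarts when a mean falls on the other side of the center line), the limit h, and the sample data are illustrative assumptions rather than the paper's design values.

```python
import numpy as np

def run_sum_chart(means, mu0, sigma0, n, h=5.0):
    """Two-sided cumulative-score chart sketch.

    Accumulates standardized subgroup means; the running sum restarts
    whenever a mean falls on the opposite side of the center line, and
    the chart signals when |sum| reaches the limit h (h is illustrative,
    not a recommended design value).
    """
    s, signals = 0.0, []
    for i, xbar in enumerate(means):
        z = (xbar - mu0) / (sigma0 / np.sqrt(n))  # standardized subgroup mean
        s = z if s * z < 0 else s + z             # restart on side change
        if abs(s) >= h:
            signals.append(i)
            s = 0.0                               # restart after a signal
    return signals

# toy use: 20 in-control subgroups of size 4, then a 1-sigma mean shift
rng = np.random.default_rng(1)
data = np.r_[rng.normal(0, 1, (20, 4)).mean(1), rng.normal(1, 1, (20, 4)).mean(1)]
print(run_sum_chart(data, mu0=0.0, sigma0=1.0, n=4))
```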

Abstract:

Processes with a low fraction of nonconforming units are known as high-yield processes. These processes produce a small number of nonconforming parts per million. Traditional methods for monitoring the fraction of nonconforming units, such as the binomial and geometric control charts with probability limits, are not effective. In order to properly monitor these processes, we propose new two-sided geometric-based control charts. In this article we show how to design, analyze, and evaluate their performance. We conclude that these new charts outperform other geometric charts suggested in the literature.
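
As an illustration of probability limits for a geometric-type chart, the sketch below computes two-sided limits for the number of units inspected until a nonconforming item is found; the values of p0 and alpha are hypothetical design choices, and this is only the textbook construction, not the authors' proposed charts.

```python
from scipy.stats import geom

# Two-sided probability limits for a geometric count: the number of
# units inspected until the first nonconforming item (support 1, 2, ...).
p0 = 50e-6      # in-control fraction nonconforming: 50 ppm (hypothetical)
alpha = 0.002   # overall false-alarm rate (hypothetical)

lcl = geom.ppf(alpha / 2, p0)      # unusually small counts: p increased
ucl = geom.ppf(1 - alpha / 2, p0)  # unusually large counts: p decreased
print(lcl, ucl)
```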

Abstract:

To improve the performance of control charts, the conditional decision procedure (CDP) incorporates a number of previous observations into the chart's decision rule. It is expected that charts with this runs rule are more sensitive to shifts in the process parameter. To signal an out-of-control condition more quickly, some charts use a headstart feature. They are referred to as charts with fast initial response (FIR). The CDP chart can also be used with FIR. In this article we analyze and compare the performance of geometric CDP charts with and without FIR. To do this, we model the CDP charts with a Markov chain and find closed-form ARL expressions. We find the conditional decision procedure useful when the fraction p of nonconforming units deteriorates. However, the CDP chart is not very effective for signaling decreases in p.
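
The ARL computation from a Markov chain model follows the standard absorbing-chain formula: if Q is the transition matrix among the transient (non-signal) states, the expected run lengths solve (I − Q)m = 1. A minimal sketch with a hypothetical two-state Q:

```python
import numpy as np

def arl_from_markov_chain(Q, start=0):
    """ARL of a chart modeled as an absorbing Markov chain.

    Q holds transition probabilities among the transient (non-signal)
    states; absorption corresponds to a signal. The expected times to
    absorption solve (I - Q) m = 1, and the ARL is the entry for the
    starting state.
    """
    n = Q.shape[0]
    m = np.linalg.solve(np.eye(n) - Q, np.ones(n))
    return m[start]

# illustrative 2-state chain (hypothetical transition probabilities)
Q = np.array([[0.90, 0.05],
              [0.20, 0.70]])
print(arl_from_markov_chain(Q))
```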

Abstract:

A fast initial response (FIR) feature for the run sum R chart is proposed and its ARL performance is estimated by a Markov chain representation. It is shown that this chart is more sensitive than several R charts with runs rules proposed by different authors. We conclude that the run sum R chart is simple to use and a very effective tool for monitoring increases and decreases in process dispersion.

Abstract:

The standard S chart signals an out-of-control condition when one point exceeds a control limit. It can be augmented with runs rules to improve its performance in detecting assignable causes. A commonly used rule signals when k consecutive points exceed a control limit. This rule can be used alone or to supplement the standard chart. In this article we derive ARL expressions for charts with the k-of-k runs rule. We show how to design S charts with this runs rule, compare their ARL performance, and make a control chart recommendation when it is important to monitor for both increases and decreases in process dispersion.
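
For orientation, when successive points are independent and each exceeds the limit with probability p, the ARL of the k-of-k rule used alone is the textbook expected waiting time for a run of k successes,

    ARL = (1 − p^k) / (p^k (1 − p)),

which reduces to the familiar Shewhart value 1/p when k = 1. The paper's derivations also cover the case where the rule supplements the standard chart.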

Abstract:

We analyze the performance of traditional R charts and introduce modifications for monitoring both increases and decreases in the process dispersion. We show that the use of equal tail probability limits and the use of some runs rules do not represent a significant improvement for monitoring increases and decreases in the variability of the process. We propose to use the R chart with a central line equal to the median of the distribution of the range. We also suggest supplementing this chart with a runs rule that signals when nine consecutive points lie on the same side of the median line. We find that such modifications lead to R charts with improved performance for monitoring the process dispersion.

Abstract:

To increase the performance for detecting small shifts, control charts are used with runs rules. The Western Electric Handbook (1956) suggests runs rules to be used with the Shewhart X chart. In this article, we review the performance of two sets of runs rules. The rules of one set signal if k successive points fall beyond a limit. The rules of the other set signal if k out of k+1 consecutive points fall beyond a different limit. We suggest runs rules from these sets. They are intended to be used with a modified Shewhart X chart, a chart with 3.3σ limits. We find that for small shifts all suggested charts perform better than the Shewhart X chart. For large shifts they have comparable performance.

Abstract:

Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. In such cases the use of traditional control charts may not be effective. In this article, we estimate the performance of control charts derived under the assumption of normality (normal charts) but used with a much broader range of distributions. We consider monitoring the dispersion of processes that follow the exponential power family of distributions (a family of distributions which includes the normal as a special case). We have found that if a normal CUSUM chart is used with several of these processes, the rate of false alarms might be quite different from the rate that results when a normal process is monitored. A normal chart might also not be sensitive enough to detect changes in such processes. CUSUM charts suitable for monitoring this family of processes are derived to show how much sensitivity is recovered when the correct chart is used.

Abstract:

In this paper we analyze several control charts suitable for monitoring process dispersion when subgrouping is not possible or not desirable. We compare the performances of a moving range chart, a cumulative sum (CUSUM) chart based on moving ranges, a CUSUM chart based on an approximate normalizing transformation, a self-starting CUSUM chart, a change-point CUSUM chart, and an exponentially weighted moving average chart based on the subgroup variance. The average run length performances of these charts are also estimated and compared.

Abstract:

We consider several control charts for monitoring normal processes for changes in dispersion. We present comparisons of the average run length performances of these charts. We demonstrate that a CUSUM chart based on the likelihood ratio test for the change point problem for normal variances has an ARL performance that is superior to other procedures. Graphs are given to aid in designing this control chart

Abstract:

It is a common practice to monitor the fraction p of non-conforming units to detect whether the quality of a process improves or deteriorates. Users commonly assume that the number of non-conforming units in a subgroup is approximately normal, since large subgroup sizes are considered. If p is small, this approximation might fail even for large subgroup sizes. If, in addition, both upper and lower limits are used, the performance of the chart in terms of fast detection may be poor. This means that the chart might not quickly detect the presence of special causes. In this paper the performance of several charts for monitoring increases and decreases in p is analyzed based on their Run Length (RL) distribution. It is shown that replacing the lower control limit by a simple runs rule can result in an increase in the overall chart performance. The concept of RL unbiased performance is introduced. It is found that many commonly used p charts and other charts proposed in the literature have RL biased performance. For this reason, new control limits that yield an exact (or nearly) RL unbiased chart are proposed.

Abstract:

When monitoring a process it is important to quickly detect increases and decreases in its variability. In addition to preventing any increase in the variability of the process and any deterioration in the quality of the output, it is also important to search for special causes that may result in a smaller process dispersion. Considering this, users should always try to monitor for both increases and decreases in the variability. The process variability is commonly monitored by means of a Shewhart range chart. For small subgroup sizes this control chart has a lower control limit equal to zero. To help monitor for both increases and decreases in variability, Shewhart charts with probability limits or runs rules can be used. CUSUM and EWMA charts based on the range or a function of the subgroup variance can also be used. In this paper a CUSUM chart based on the subgroup range is proposed. Its performance is compared with that of other charts proposed in the literature. It is found that for small subgroup sizes, it has an excellent performance and it thus represents a powerful alternative to currently utilized strategies.
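
A minimal sketch of a two-sided CUSUM on subgroup ranges; the reference values and decision limits below are illustrative assumptions, not the paper's recommended design.

```python
import numpy as np

def range_cusum(ranges, k_plus, k_minus, h_plus, h_minus):
    """Two-sided CUSUM on subgroup ranges (illustrative sketch).

    S+ accumulates evidence of increased dispersion, S- of decreased
    dispersion; both restart at zero after a signal. The reference
    values k and limits h are design parameters supplied by the user.
    """
    sp = sm = 0.0
    signals = []
    for i, r in enumerate(ranges):
        sp = max(0.0, sp + r - k_plus)   # sensitive to increases
        sm = max(0.0, sm + k_minus - r)  # sensitive to decreases
        if sp > h_plus or sm > h_minus:
            signals.append((i, sp, sm))
            sp = sm = 0.0
    return signals

# toy use: subgroup ranges for 30 samples of size 5 from N(0, 1)
rng = np.random.default_rng(0)
x = rng.normal(0, 1, (30, 5))
r = x.max(axis=1) - x.min(axis=1)
print(range_cusum(r, k_plus=2.5, k_minus=2.1, h_plus=6.0, h_minus=5.0))
```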

Abstract:

Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. The use of control charts in such cases may not be effective if the rate of false alarms is high or if the control chart is not sensitive in detecting changes in the process. In this paper, the ARL performance of a CUSUM chart for dispersion is analyzed under a variety of non-normal process models. We will consider processes that follow the exponential power family of distributions, a symmetric class of distributions which includes the normal distribution as a special case

Resumen:

In this paper, we study Mexican firms' long-term financing choice between syndicated loans and bond issuance. We restrict our attention to firms that issue securities in capital markets. Our findings suggest that firm size, collateral quality, and credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not constant across their range of values. In particular, we find evidence that firms' credit quality has a non-monotonic effect on the choice.

Abstract:

We study Mexican firms' long-term financing choice between syndicated loans and public debt issuance. We restrict our attention to those firms that issue securities in capital markets. Our findings suggest that the firm's size, the quality of its collateral, and its credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not constant across their range of values. In particular, we find evidence that the credit quality of firms has a non-monotonic effect on the choice.

Resumen:

Purpose. This article examines the factors that affect the probability that a firm goes public, using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014.

Abstract:

Purpose. The purpose of this paper is to examine the factors that affect the likelihood of being public using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014

Resumen:

This article analyzes housing demand in Mexico through spending on housing services and the user cost of residential capital of each representative household by income percentile. The permanent income hypothesis is considered as a function of the sociodemographic characteristics and the education level of the household head. We also obtain the elasticities with respect to income, wealth, age of the household head, household size, and number of employed members, as well as the semi-elasticity of the user cost of residential capital. Housing expenditure is inelastic, although it is more sensitive to current income than to permanent income. We also show that there is a regressive structure in this market, and a sensitivity analysis is performed in order to measure the impact on housing expenditure of certain variations in the long-run user cost of residential capital.

Abstract:

This article analyzes the demand for housing in Mexico through the approach of spending on housing services and the user cost of owner-occupied housing of each representative household by income percentile. The permanent income hypothesis, as a function of the socio-demographic characteristics and the degree of education of the household head, is included in the model. We obtain the elasticities with respect to income, wealth, age of the household head, household size, and number of employed members, as well as the semi-elasticity of the user cost of residential capital. Expenditure on housing is inelastic, although it is more sensitive to current income than to permanent income. We show that this market has a regressive structure; therefore, a sensitivity analysis is performed in order to measure the impact on housing expenditure of certain variations in the long-run user cost of owner-occupied housing.

Abstract:

This paper proposes a theoretical model that offers a rationale for the formation of lender syndicates. We argue that the ex-ante process of information acquisition may affect the strategies used to create syndicates. For large loans, the restrictions on lending impose a natural reason for syndication. We study medium-sized loans instead, where there is some room for competition since each financial institution has the ability to take the loan in full by itself. In this case, syndication would be the optimal choice only if their screening costs are similar. Otherwise, lenders would be compelled to compete, since a lower screening cost can create a comparative advantage in interest rates

Abstract:

We consider a model of bargaining by concessions where agents can terminate negotiations by accepting the settlement of an arbitrator. The impact of pragmatic arbitrators—who enforce concessions that precede their appointment—is compared with that of arbitrators who act on principle—ignoring prior concessions. We show that while the impact of arbitration always depends on how costly that intervention is relative to direct negotiation, the range of scenarios for which it has an impact, and the precise effect of such impact, does change depending on the behavior—pragmatic or on principle—of the arbitrator. Moreover, the requirement of mutual consent to appoint the arbitrator matters only when he is pragmatic. Efficiency and equilibrium are not aligned, since agents sometimes reach negotiated agreements when an arbitrated settlement is more efficient, and vice versa. Which system of arbitration has the best performance depends on the arbitration and negotiation costs, and each can be optimal for plausible environments.

Abstract:

This paper analyzes a War of Attrition where players enjoy private information about their outside opportunities. The main message is that uncertainty about the possibility that the opponent opts out increases the equilibrium probability of concession

Abstract:

In many modern production systems the human operator is faced with problems when it comes to interacting with the production system using the control system. One reason for this is that control systems are mainly designed with respect to production, without taking into account how an operator is supposed to interact with them. This article presents a control system whose goal is to increase flexibility and reusability of production equipment and program modules. Apart from this, the control system is also suitable for human operator interaction. To make it easier for an operator to interact with the control system, the operator activities vis-a-vis the control system have been divided into so-called control levels. One of the six predefined control levels is described in more detail to illustrate how production can be manipulated with the help of a control system at this level. The communication with the control system is accomplished with the help of a graphical tool that interacts with a relational database. The tool has been implemented in Java to make it platform-independent. Some examples of the tool and how it can be used are provided.

Abstract:

Modern control systems often exhibit problems when switching between automatic and manual system control. One reason for this is the structure of the control system, which is usually not designed for this type of action. This article presents a method for splitting the control system into different control levels. By switching between these control levels, the operator can increase or decrease the number of manual control activities he wishes to perform while still enjoying the support of the control system. The structural advantages of the control levels are demonstrated for two types of operator activity: (1) control flow tracing and (2) control flow alteration. These two types of operator activity can be used in such situations as locating an error, introducing a new machine, changing the ordering of products, or optimizing the production flow.

Abstract:

Partly due to the introduction of computers and intelligent machines in modern manufacturing, the role of the operator has changed with time. More and more of the work tasks have been automated, reducing the need for human interaction. One reason for this is the decrease in the relative cost of computers and machinery compared to the cost of having operators. Even though this statement may be true in industrialized countries, it is not evident that it is valid in developing countries. However, a goal that is valid for both industrialized and developing countries is to obtain balanced automation systems. A balanced automation system is characterized by "the correct mix of automated activities and the human activities". The way of reaching this goal, however, might differ depending on the place of the manufacturing installation. Aspects such as time, money, safety, flexibility and quality govern the steps to take in order to reach a balanced automation system. In this paper, six steps of automation are defined that identify areas of work activities in a modern manufacturing system that might be performed by either an automatic system or a human. By combining these steps of automation in what are called levels of automation, a mix of automatic and manual activities is obtained. Through the analysis of these levels of automation, with respect to machine costs and product quality, it is demonstrated what the lowest possible automation level should be when striving for balanced automation systems in developing countries. The bottom line of the discussion is that product supervision should not be left to human operators solely, but rather be performed automatically by the system.

Abstract:

In the matching with contracts literature, three well-known conditions (from stronger to weaker)–substitutes, unilateral substitutes (US), and bilateral substitutes (BS)–have proven to be critical. This paper aims to deepen our understanding of them by separately axiomatizing the gap between BS and the other two. We first introduce a new “doctor separability” condition (DS) and show that BS, DS, and irrelevance of rejected contracts (IRC) are equivalent to US and IRC. Due to Hatfield and Kojima (2010) and Aygün and Sönmez (2012), we know that US, “Pareto separability” (PS), and IRC are the same as substitutes and IRC. This, along with our result, implies that BS, DS, PS, and IRC are equivalent to substitutes and IRC. All of these results are given without IRC whenever hospitals have preferences

Abstract:

The paper analyzes the role of labor market segmentation and relative wage rigidity in the transmission process of disinflation policies in an open economy facing imperfect capital markets. Wages are flexible in the nontradables sector, and based on efficiency factors in the tradables sector. With perfect labor mobility, a permanent reduction in the devaluation rate leads in the long run to a real appreciation, a lower ratio of output of tradables to nontradables, an increase in real wages measured in terms of tradables, and a fall in the product wage in the nontradables sector. Under imperfect labor mobility, unemployment temporarily rises.

Abstract:

In this paper we study the adjacency matrix of some infinite graphs, which we call the shift operator on the Lp space of the graph. In particular, we establish norm estimates, we find the norm for some cases, we decide the triviality of the kernel of some infinite trees, and we find the eigenvalues of certain infinite graphs obtained by attaching an infinite tail to some finite graphs

Resumen:

Mexico is a country immersed in an accelerated process of population aging. Older adults are exposed to multiple risks, among them family violence, generated not only by their partners but also by other family members. The objective of this article was to analyze the characteristics and main factors associated with family violence against older adult women (excluding intimate partner violence), by age group (60-69, or 70 years and older) and subtype of violence (emotional, economic, physical, and sexual). A secondary analysis of the 2016 National Survey on the Dynamics of Household Relationships (ENDIREH) was carried out. The final sample consisted of 18,416 women aged 60 or older, representing a total of 7,043,622 women in that age group. A descriptive analysis was carried out, and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence. Women of more advanced age had a higher prevalence of family violence.

Abstract:

Mexico is a country immersed in a process of accelerated population aging. Older people are exposed to multiple risks, including domestic violence, not only from their partners but also from other family members. The objective of this article is to analyze the characteristics and main factors associated with family violence against older adult women in 2016 (excluding intimate partner violence), by age group (60-69 and 70 years old or more) and subtype of violence (emotional, economic, physical and sexual). A secondary analysis of the 2016 National Survey on the Dynamics of Household Relationships (ENDIREH, by its Spanish acronym) was carried out. The final sample consisted of 18,416 women aged 60 or more, which represented a total of 7,043,622 women in that age group. A descriptive analysis was carried out, and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence; women of more advanced age had a higher prevalence of family violence.

Abstract:

The fact that shocks in early life can have long-term consequences is well established in the literature. This paper examines the effects of extreme precipitation on cognitive and health outcomes and shows that impacts can be detected as early as 2 years of age. Our analyses indicate that negative conditions (i.e., extreme precipitation) experienced during the early stages of life affect children's physical, cognitive and behavioral development measured between 2 and 6 years of age. Affected children exhibit lower cognitive development (measured through language, working and long-term memory and visual-spatial thinking) on the order of 0.15 to 0.19 SDs. Lower height and weight impacts are also identified. Changes in food consumption and diet composition appear to be key drivers behind these impacts. Partial evidence of mitigation from the delivery of government programs is found, suggesting that if not addressed promptly and with targeted policies, cognitive functioning delays may not be easily recovered.

Abstract:

A number of studies document gender differentials in agricultural productivity. However, they are limited to region- and crop-specific estimates of the mean gender gap. This article improves on previous work in three ways. First, data representative at the national level and for a wide variety of crops are exploited. Second, decomposition methods traditionally used in the analysis of wage gender gaps are employed. Third, heterogeneous effects by women's marital status and along the productivity distribution are analyzed. Drawing on data from the 2011–2012 Ethiopian Rural Socioeconomic Survey, we find an overall 23.4 percentage point productivity differential in favor of men, of which 13.5 percentage points (57%) remain unexplained after accounting for gender differences in land manager characteristics, land attributes, and access to resources. The magnitude of the unexplained fraction is large relative to prior estimates in the literature. A more detailed analysis suggests that differences in the returns to extension services, land certification, land extension, and product diversification may contribute to the unexplained fraction. Moreover, the productivity gap is mostly driven by non-married female managers, particularly divorced women; married female managers do not display a disadvantage. Finally, overall and unexplained gender differentials are more pronounced at mid-levels of productivity.

Abstract:

Public transportation is a basic everyday activity. Costs imposed by violence might have far-reaching consequences. We conduct a survey and exploit the discontinuity in the hours of operation of a program that reserves subway cars exclusively for women in Mexico City. The program seems to be successful at reducing sexual harassment toward women by 2.9 percentage points. However, it produces unintended consequences by increasing nonsexual aggression incidents (e.g., insults, shoving) among men by 15.3 percentage points. Both sexual and nonsexual violence seem to be costly; however, our results do not imply that costs of the program outweigh its benefits

Abstract:

We measure the effect of a large nationwide tax reform on sugar-added drinks and caloric-dense food introduced in Mexico in 2014. Using scanner data containing weekly purchases of 47,973 barcodes by 8,130 households and an RD design, we find that calories purchased from taxed drinks and taxed food decreased respectively by 2.7% and 3%. However, this was compensated by increases from untaxed categories, such that total calories purchased did not change. We find increases in cholesterol (12.6%), sodium (5.8%), saturated fat (3.1%), carbohydrates (2%), and proteins (3.8%).

Abstract:

The frequency and intensity of extreme temperature events are likely to increase with climate change. Using a detailed dataset containing information on the universe of loans extended by commercial banks to private firms in Mexico, we examine the relationship between extreme temperatures and credit performance. We find that unusually hot days increase delinquency rates, primarily affecting the agricultural sector, but also non-agricultural industries that rely heavily on local demand. Our results are consistent with general equilibrium effects originated in agriculture that expand to other sectors in agricultural regions. Additionally, following a temperature shock, affected firms face increased challenges in accessing credit, pay higher interest rates, and provide more collateral, indicating a tightening of credit during financial distress

Abstract:

In this work we construct a numerical scheme based on finite differences to approximate the free boundary of an American call option. Points of the free boundary are calculated by approximating the solution of the Black-Scholes partial differential equation with finite differences on domains that are parallelograms for each time step. Numerical results are reported

Abstract:

We present higher-order quadrature rules with end corrections for general Newton–Cotes quadrature rules. The construction is based on the Euler–Maclaurin formula for the trapezoidal rule. We present examples with six well-known Newton–Cotes quadrature rules. We analyze modified end-corrected quadrature rules, which consist of a simple modification of the Newton–Cotes quadratures with end corrections. Numerical tests and stability estimates show the superiority of the corrected rules based on the trapezoidal and the midpoint rules.
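
The underlying idea is visible in the first Euler–Maclaurin correction: subtracting h²/12 · (f′(b) − f′(a)) from the trapezoidal sum raises its order from O(h²) to O(h⁴). A minimal sketch of that single correction (not the higher-order rules of the paper):

```python
import numpy as np

def corrected_trapezoid(f, df, a, b, n):
    """Trapezoidal rule plus the first Euler-Maclaurin end correction.

    By Euler-Maclaurin, T_h = I + (h**2/12)*(f'(b) - f'(a)) + O(h**4),
    so subtracting the derivative term yields an O(h**4) rule; higher
    orders use higher odd derivatives at the endpoints.
    """
    x, h = np.linspace(a, b, n + 1, retstep=True)
    t = h * (np.sum(f(x)) - 0.5 * (f(x[0]) + f(x[-1])))
    return t - h**2 / 12.0 * (df(b) - df(a))

# example: integral of exp over [0, 1]; exact value is e - 1
err = corrected_trapezoid(np.exp, np.exp, 0.0, 1.0, 16) - (np.e - 1.0)
print(err)  # much smaller than the plain trapezoidal error
```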

Abstract:

The constructions of the quadratures are based on the method of central corrections described in [4]. The quadratures consist of the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. Integrals of the above type appear in scattering calculations; we test the performance of the quadrature rules with an example of this kind

Abstract:

In this report we construct corrected trapezoidal quadrature rules up to order 40 to evaluate 2-dimensional integrals of the form ∫_D v(x, y) log(√(x² + y²)) dx dy, where the domain D is a square containing the point of singularity (0, 0) and v is a C∞ function of compact support contained in D. The procedure we use is a modification of the method constructed in [1]. These quadratures are particularly useful in acoustic scattering calculations with large wave numbers. We describe how to extend the procedure to calculate other 2-dimensional integrals with different singularities.

Abstract:

In this paper we construct an algorithm to approximate the solution of the initial value problem y′(t) = f(t, y) with y(t₀) = y₀. The method is implicit and combines the classical Simpson's rule with Simpson's 3/8 rule to yield an unconditionally A-stable method of order 4.
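
For orientation, the classical Simpson rule by itself can be used as an implicit two-step method, y_{n+2} = y_n + (h/3)(f_n + 4f_{n+1} + f_{n+2}); the sketch below solves one such step with a nonlinear solver. This is only the classical building block, not the A-stable combination with the 3/8 rule constructed in the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def simpson_step(f, t, y0, y1, h):
    """One implicit step of the classical Simpson two-step rule:
    y2 = y0 + (h/3) * (f(t, y0) + 4*f(t+h, y1) + f(t+2h, y2)).
    """
    g = lambda y2: y0 + h / 3.0 * (f(t, y0) + 4.0 * f(t + h, y1)
                                   + f(t + 2.0 * h, y2)) - y2
    return fsolve(g, y1)[0]  # solve the implicit equation, start at y1

# toy problem y' = -y, y(0) = 1; the second starting value is exact
f = lambda t, y: -y
h = 0.1
y2 = simpson_step(f, 0.0, 1.0, np.exp(-h), h)
print(y2, np.exp(-2 * h))  # numerical step vs. exact solution
```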

Abstract:

In this paper, we propose an anisotropic adaptive refinement algorithm based on the finite element methods for the numerical solution of partial differential equations. In 2-D, for a given triangular grid and finite element approximating space V, we obtain information on location and direction of refinement by estimating the reduction of the error if a single degree of freedom is added to V. For our model problem the algorithm fits highly stretched triangles along an interior layer, reducing the number of degrees of freedom that a standard h-type isotropic refinement algorithm would use

Abstract:

In this paper we construct an algorithm that generates a sequence of continuous functions that approximate a given real-valued function f of two variables that has jump discontinuities along a closed curve. The algorithm generates a sequence of triangulations of the domain of f. The triangulations include triangles with high aspect ratio along the curve where f has jumps. The sequence of functions generated by the algorithm is obtained by interpolating f on the triangulations using continuous piecewise polynomial functions. The approximation error of this algorithm is O(1/N²) when the triangulation contains N triangles and when the error is measured in the L¹ norm. Algorithms that adaptively generate triangulations by local regular refinement produce approximation errors of size O(1/N), even if higher-order polynomial interpolation is used.

Abstract:

We construct correction coefficients for high-order trapezoidal quadrature rules to evaluate three-dimensional singular integrals of the form J(v) = ∫_D v(x, y, z)/√(x² + y² + z²) dx dy dz, where the domain D is a cube containing the point of singularity (0, 0, 0) and v is a C∞ function defined on ℝ³. The procedure employed here is a generalization to 3-D of the method of central corrections for logarithmic singularities [1] in one dimension, and [2] in two dimensions. As in one and two dimensions, the correction coefficients for high-order trapezoidal rules for J(v) are independent of the number of sampling points used to discretize the cube D. When v is compactly supported in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules provide an efficient, stable and accurate way of approximating J(v). We demonstrate the performance of these quadratures of orders up to 17 for highly oscillatory functions v. These types of integrals appear in scattering calculations in 3-D.

Abstract:

We present a high-order, fast, iterative solver for the direct scattering calculation for the Helmholtz equation in two dimensions. Our algorithm solves the scattering problem formulated as the Lippmann-Schwinger integral equation for compactly supported, smoothly vanishing scatterers. There are two main components to this algorithm. First, the integral equation is discretized with quadratures based on high-order corrected trapezoidal rules for the logarithmic singularity present in the kernel of the integral equation. Second, on the uniform mesh required for the trapezoidal rule we rewrite the discretized integral operator as a composition of two linear operators: a discrete convolution followed by a diagonal multiplication; therefore, the application of these operators to an arbitrary vector, required by an iterative method for the solution of the discretized linear system, will cost O(N² log N) for an N-by-N mesh, with the help of the FFT. We demonstrate the performance of the algorithm for scatterers of complex structure and at large wave numbers. For the numerical implementation, GMRES iterations are used, and corrected trapezoidal rules up to order 20 are tested.
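
The structural point, a discrete convolution composed with a diagonal multiplication, each application costing O(N² log N) via the FFT, can be sketched as follows; the random kernel below is a placeholder for the corrected singular kernel of the actual solver.

```python
import numpy as np

def apply_operator(u, khat, q):
    """Apply u -> u + K(q*u) on an N-by-N grid in O(N^2 log N).

    khat is the 2-D FFT of the convolution kernel and q the scatterer
    contrast; the FFT turns the convolution into a pointwise product.
    Only the structure is shown: quadrature corrections are omitted.
    """
    return u + np.fft.ifft2(khat * np.fft.fft2(q * u))

N = 64
rng = np.random.default_rng(0)
khat = np.fft.fft2(rng.standard_normal((N, N)))     # placeholder kernel
xs = np.linspace(-3, 3, N)
q = np.exp(-xs[:, None] ** 2 - xs[None, :] ** 2)    # smooth compact bump
u = rng.standard_normal((N, N)) + 0j
print(apply_operator(u, khat, q).shape)             # one matrix-free apply
```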

Abstract:

In this report, we construct correction coefficients to obtain high-order trapezoidal quadrature rules to evaluate two-dimensional integrals with a logarithmic singularity of the form J(v) = ∫_D v(x, y) ln(√(x² + y²)) dx dy, where the domain D is a square containing the point of singularity (0, 0) and v is a C∞ function defined on the whole plane ℝ². The procedure we use is a generalization to 2-D of the method of central corrections for logarithmic singularities described in [1]. As in 1-D, the correction coefficients are independent of the number of sampling points used to discretize the square D. When v has compact support contained in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules give an efficient, stable, and accurate way of approximating J(v). We provide the correction coefficients to obtain corrected trapezoidal quadrature rules up to order 20.
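
Structurally, the rule is the trapezoidal sum with the singular node skipped plus a small weighted stencil of v values near the origin. The sketch below shows only that structure; the single weight w is a placeholder, since the real correction coefficients are the tabulated values the report provides.

```python
import numpy as np

def corrected_rule_2d(v, h, n, w=0.0):
    """Punctured 2-D trapezoidal sum plus a local correction at (0, 0).

    For v compactly supported inside the square, the trapezoidal rule is
    h^2 times the sum over all grid nodes; the singular node (where the
    logarithm blows up) is skipped, and a weighted value of v at the
    origin is added back. The weight w stands in for the tabulated
    correction coefficients (placeholder, not a real value).
    """
    xs = h * np.arange(-n, n + 1)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    R = np.hypot(X, Y)
    logR = np.log(R, out=np.zeros_like(R), where=R > 0)  # 0 at the origin
    return h * h * ((v(X, Y) * logR).sum() + w * v(0.0, 0.0))

# smooth bump, effectively supported inside the square [-1, 1]^2
v = lambda x, y: np.exp(-25.0 * (x**2 + y**2))
print(corrected_rule_2d(v, h=0.02, n=50))
```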

Abstract:

This paper addresses the problem of the optimal design of batch plants with imprecise demands in product amounts. The design of such plants necessarily involves the way that equipment may be utilized, which means that plant scheduling and production must form an integral part of the design problem. This work relies on a previous study, which proposed an alternative treatment of the imprecision (demands) by introducing fuzzy concepts, embedded in a multi-objective Genetic Algorithm (GA) that simultaneously takes into account maximization of the net present value (NPV) and two other performance criteria, i.e. the production delay/advance and a flexibility criterion. The results showed that an additional interpretation step might be necessary to help the managers choose among the non-dominated solutions provided by the GA. The analytic hierarchy process (AHP) is a strategy commonly used in Operations Research for the solution of this kind of multicriteria decision problem, allowing managers' subjective judgments to be captured. The major aim of this study is thus to propose software integrating AHP theory for the analysis of the GA Pareto-optimal solutions, as an alternative decision-support tool for the solution of the batch plant design problem.
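
In its textbook form, the AHP step amounts to extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency; a minimal sketch with a hypothetical three-criterion matrix (NPV vs. delay/advance vs. flexibility):

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise comparison matrix (textbook AHP).

    The weights are the principal right eigenvector of A normalized to
    sum to one; lambda_max feeds the consistency index
    CI = (lambda_max - n) / (n - 1).
    """
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

# hypothetical comparisons: NPV vs. delay/advance vs. flexibility
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w, lam = ahp_priorities(A)
print(w, (lam - 3) / 2)  # criterion weights and consistency index
```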

Abstract:

A workflow is a set of steps or tasks that model the execution of a process, e.g., protein annotation, invoice generation and composition of astronomical images. Workflow applications commonly require large computational resources. Hence, distributed computing approaches (such as Grid and Cloud computing) emerge as a feasible solution to execute them. Two important factors for executing workflows in distributed computing platforms are (1) workflow scheduling and (2) resource allocation. As a consequence, there is a myriad of workflow scheduling algorithms that map workflow tasks to distributed resources subject to task dependencies, time and budget constraints. In this paper, we present a taxonomy of workflow scheduling algorithms, which categorizes the algorithms into (1) best-effort algorithms (including heuristics, metaheuristics, and approximation algorithms) and (2) quality-of-service algorithms (including budget-constrained, deadline-constrained and algorithms simultaneously constrained by deadline and budget). In addition, a workflow engine simulator was developed to quantitatively compare the performance of scheduling algorithms

Resumen:

Nowadays, sustainability-related risks and opportunities arise from an entity's dependence on the resources it needs in order to operate, but also from the impact it has on those resources, and this can have several accounting impacts on the determination of financial information. Thus, both the information contained in an entity's financial statements and the sustainability disclosures related to that financial information are essential data for a user assessing the entity's value. These and other aspects are addressed in a simple and accessible way in the book Impactos contables de acuerdo con las NIF para el cierre de estados financieros 2022, which is useful for the preparation and presentation of financial information for fiscal year 2022. The book is therefore essential for independent accountants and accountants at companies of any size; it is also aimed at, and of great help to, accounting firm staff, teachers, and students at both the undergraduate and graduate levels and, of course, preparers of financial information, business owners, and the general public.

Abstract:

We study the behavior of a decision maker who prefers alternative x to alternative y in menu A if the utility of x exceeds that of y by at least a threshold associated with y and A. Hence the decision maker's preferences are given by menu-dependent interval orders. In every menu, her choice set comprises the undominated alternatives according to this preference. We axiomatize this broad model when thresholds are monotone, i.e., at least as large in larger menus. We also obtain novel characterizations in two special cases that have appeared in the literature: the maximization of a fixed interval order, where the thresholds depend on the alternative and not on the menu, and the maximization of monotone semiorders, where the thresholds are independent of the alternatives but monotonic in menus.

Abstract:

Given a scattered space X = ⟨X, τ⟩ and an ordinal λ, we define a topology τ+λ in such a way that τ+0 = τ and, when X is an ordinal with the initial segment topology, the resulting sequence {τ+λ}_{λ ∈ Ord} coincides with the family of topologies {I_λ}_{λ ∈ Ord} used by Icard, Joosten, and the second author to provide semantics for polymodal provability logics. We prove that, given any scattered space X of large enough rank and any ordinal λ > 0, GL is strongly complete for τ+λ. The special case where X = ω^ω + 1 and λ = 1 yields a strengthening of a theorem of Abashidze and Blass.

Abstract:

We introduce verification logic, a variant of Artemov's logic of proofs with new terms of the form ⌜φ⌝ satisfying the axiom schema φ → ⌜φ⌝:φ. The intention is for ⌜φ⌝ to denote a proof of φ in Peano arithmetic, whenever such a proof exists. By a suitable restriction of the domain of ⌜·⌝, we obtain the verification logic VS5, which realizes the axioms of Lewis' system S5. Our main result is that VS5 is sound and complete for its arithmetical interpretation.

Abstract:

This note examines evidence of non-fundamentalness in the rate of variation of annual per capita capital stock for OECD countries in the period 1955–2020. Leeper et al. (2013) proposed a theoretical model in which, due to agents performing fiscal foresight, this economic series could exhibit a non-fundamental behavior (in particular, a non-invertible moving average component), which has important implications for modeling and forecasting. Using the methodology proposed in Velasco and Lobato (2018), which delivers consistent estimators of the autoregressive and moving average parameters without imposing fundamentalness assumptions, we empirically examine whether the capital data are better represented with an invertible or a non-invertible moving average model. We find strong evidence in favor of the non-invertible representation since for the countries that present significant innovation asymmetry, the selected model is predominantly non-invertible
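
To see why second-order methods cannot settle the question, note that for an MA(1) process x_t = ε_t + θ ε_{t−1} the first autocorrelation ρ₁ = θ/(1 + θ²) is unchanged by the substitution θ → 1/θ, so the invertible and non-invertible representations share all autocovariances; distinguishing them requires the higher-order, non-Gaussian information exploited by the Velasco and Lobato (2018) estimators.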

Abstract:

Shelf life experiments have as an outcome a matrix of zeroes and ones that represents the acceptance or non-acceptance by customers when presented with samples of the product under evaluation in a random fashion within a designed experiment. This kind of response is called a Bernoulli response due to the dichotomous nature (0, 1) of its values. It is not rare to find inconsistent sequences of responses, that is, when a customer rejects a less aged sample and does not reject an older sample; we find a zero before a one. This is due to the human factor present in the experiment. In the presence of this kind of inconsistency, some conventions have been adopted in the literature in order to estimate the shelf life distribution using methods and software from the reliability field, which require numerical responses. In this work we propose a method that does not require coding the original responses into numerical values. We use a more reliable coding by using the Bernoulli response directly and a Bayesian approach. The resulting method is based on solid Bayesian theory and proven computer programs. We show by means of an example and simulation studies that the new methodology clearly beats the methodology proposed by Hough. We also provide the R software necessary for the implementation.
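
A toy illustration of the idea of modeling the Bernoulli responses directly: a logistic model for the rejection probability in log-time, a grid approximation to the posterior, and the 50% shelf life as a posterior summary. The model, the flat prior on the grid, and the data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# accept/reject data: storage times (days) and 1 = customer rejects
t = np.array([7.0, 7.0, 14.0, 14.0, 28.0, 28.0, 56.0, 56.0])
y = np.array([0, 0, 0, 1, 1, 0, 1, 1])

# logistic model P(reject | t) = 1 / (1 + exp(-(a + b*log t)));
# flat prior over a coarse (a, b) grid, with b >= 0 (rejection grows with age)
a = np.linspace(-15.0, 5.0, 200)[:, None]
b = np.linspace(0.0, 6.0, 200)[None, :]
eta = a[..., None] + b[..., None] * np.log(t)          # grid x data
p = 1.0 / (1.0 + np.exp(-eta))
loglik = (y * np.log(p) + (1 - y) * np.log1p(-p)).sum(-1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# posterior mean of the time at which the rejection probability is 50%,
# i.e. t50 = exp(-a/b) (undefined at b = 0, hence the nan guard)
t50 = np.exp(-a / np.where(b == 0.0, np.nan, b))
print(np.nansum(post * t50))
```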

Abstract:

Definitive Screening Designs (DSDs) are a class of experimental designs that offer the possibility to estimate linear, quadratic and interaction effects with relatively little experimental effort. The linear or main effects are completely independent of two-factor interactions and quadratic effects. The two-factor interactions are not completely confounded with other two-factor interactions, and quadratic effects are estimable. The number of experimental runs is twice the number of factors of interest plus one. Several approaches have been proposed to analyze the results of these experimental plans; some of them take into account the structure of the design, others do not. The first author of this paper proposed a Bayesian sequential procedure that takes into account the structure of the design; this procedure considers normal and non-normal responses. The creators of the DSD originally performed a forward stepwise regression programmed in JMP, also used the minimization of a bias-corrected version of Akaike's information criterion, and later proposed a frequentist procedure that considers the structure of the DSD. Both the frequentist and Bayesian procedures, when the number of experimental runs is twice the number of factors of interest plus one, use as an initial step fitting a model with only main effects and then check the significance of these effects to proceed. In this paper we present a modification of the Bayesian procedure that incorporates Bayesian factor identification, an approach that computes, for each factor, the posterior probability that it is active, including the possibility that it is present in linear, quadratic or two-factor interaction terms. This is a more comprehensive approach than just testing the significance of an effect.

Resumen:

Statistical package with a graphical interface for the Bayesian sequential analysis of definitive screening designs.

Abstract:

Definitive Screening Designs are a class of experimental designs that, under factor sparsity, have the potential to estimate linear, quadratic and interaction effects with little experimental effort. BAYESDEF is a package that performs a five-step strategy to analyze this kind of experiment, making use of tools coming from the Bayesian approach. It also includes the least absolute shrinkage and selection operator (lasso) as a check (Aguirre VM (2016)).

Abstract:

With the advent of widespread computing and the availability of open source programs to perform many different programming tasks, nowadays there is a trend in Statistics to program tailor-made applications for non-statistical customers in various areas. This is an alternative to having a large statistical package with many functions, many of which are never used. In this article, we present CONS, an R package dedicated to Consonance Analysis. Consonance Analysis is a useful numerical and graphical exploratory approach for evaluating the consistency of the measurements and the panel of people involved in sensory evaluation. It makes use of several univariate and multivariate techniques, either graphical or analytical, particularly Principal Components Analysis. The package is implemented with a graphical user interface in order to make it user-friendly.

Abstract:

Definitive screening designs (DSDs) are a class of experimental designs that allow the estimation of linear, quadratic, and interaction effects with little experimental effort if there is effect sparsity. The number of experimental runs is twice the number of factors of interest plus one. Many industrial experiments involve nonnormal responses. Generalized linear models (GLMs) are a useful alternative for analyzing this kind of data. The analysis of GLMs is based on asymptotic theory, something very debatable, for example, in the case of the DSD with only 13 experimental runs. So far, analysis of DSDs has considered a normal response. In this work, we show a five-step strategy that makes use of tools coming from the Bayesian approach to analyze this kind of experiment when the response is nonnormal. We consider the case of binomial, gamma, and Poisson responses without having to resort to asymptotic approximations. We use posterior odds that effects are active and posterior probability intervals for the effects, and use them to evaluate the significance of the effects. We also combine the results of the Bayesian procedure with the lasso estimation procedure to enhance the scope of the method.

Abstract:

It is not uncommon to deal with very small experiments in practice. For example, if the experiment is conducted on the production process, it is likely that only a very few experimental runs will be allowed. If testing involves the destruction of expensive experimental units, we might only have very small fractions as experimental plans. In this paper, we will consider the analysis of very small factorial experiments with only four or eight experimental runs. In addition, the methods presented here could be easily applied to larger experiments. A Daniel plot of the effects to judge significance may be useless for this type of situation. Instead, we will use different tools based on the Bayesian approach to judge significance. The first tool consists of the computation of the posterior probability that each effect is significant. The second tool is referred to in Bayesian analysis as the posterior distribution for each effect. Combining these tools with the Daniel plot gives us more elements to judge the significance of an effect. Because, in practice, the response may not necessarily be normally distributed, we will extend our approach to the generalized linear model setup. By simulation, we will show that, in the case of discrete responses and very small experiments, not only may the usual large-sample approach for modeling generalized linear models produce very biased and variable estimators, but also that the Bayesian approach provides very sensible results.
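
A rough stand-in for the first tool, shown for orientation only: BIC-weighted model averaging over all subsets of effect columns, which returns for each effect the posterior probability that it is active. This generic approximation is not the authors' exact computation.

```python
import itertools
import numpy as np

def effect_activity_probs(X, y, prior_active=0.25):
    """Posterior probability that each effect (column of X) is active,
    via BIC-weighted averaging over all submodels (intercept always in).
    A generic approximation, not the exact Bayesian computation above.
    """
    n, k = X.shape
    logw, subsets = [], []
    for mask in itertools.product([0, 1], repeat=k):
        idx = [j for j in range(k) if mask[j]]
        if len(idx) > n - 2:          # keep residual degrees of freedom
            continue
        M = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
        beta, *_ = np.linalg.lstsq(M, y, rcond=None)
        rss = float(np.sum((y - M @ beta) ** 2))
        bic = n * np.log(rss / n) + M.shape[1] * np.log(n)
        logprior = (len(idx) * np.log(prior_active)
                    + (k - len(idx)) * np.log(1 - prior_active))
        logw.append(-0.5 * bic + logprior)
        subsets.append(idx)
    w = np.exp(np.array(logw) - max(logw))
    w /= w.sum()
    probs = np.zeros(k)
    for wi, idx in zip(w, subsets):
        probs[idx] += wi
    return probs

# 8-run full factorial in three factors; only the first effect is active
X = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
rng = np.random.default_rng(2)
y = 3.0 * X[:, 0] + rng.normal(0, 0.5, 8)
print(effect_activity_probs(X, y))
```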

Abstract:

Inference for quantile regression parameters presents two problems. First, it is computationally costly because estimation requires optimizing a non-differentiable objective function, which is a formidable numerical task, especially with many observations and regressors. Second, it is controversial because standard asymptotic inference requires the choice of smoothing parameters, and different choices may lead to different conclusions. Bootstrap methods solve the latter problem at the price of enlarging the former. We give a theoretical justification for a new inference method consisting of the construction of asymptotic pivots based on a small number of bootstrap replications. The procedure still avoids smoothing and reduces the computational cost of usual bootstrap methods. We show its usefulness in drawing inferences on linear or non-linear functions of the parameters of quantile regression models.
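
For orientation, a generic small-B bootstrap of a median-regression slope using statsmodels' QuantReg; the pivotal construction the paper justifies is more delicate, so this sketch only illustrates the computational setting (few replications, no smoothing).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, B = 200, 25                          # small number of replications
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.standard_t(3, n)
X = sm.add_constant(x)

slope = sm.QuantReg(y, X).fit(q=0.5).params[1]   # median regression
boot = []
for _ in range(B):
    i = rng.integers(0, n, n)                    # resample pairs
    boot.append(sm.QuantReg(y[i], X[i]).fit(q=0.5).params[1])
print(slope, np.percentile(boot, [2.5, 97.5]))   # naive percentile band
```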

Abstract:

Repeatability and reproducibility (R&R) studies can be used to pinpoint the parts of a measurement system that might need improvement. Simulation shows that in many R&R studies there is practically no difference between using five or ten parts.

Abstract:

The existing methods for analyzing unreplicated fractional factorial experiments that do not contemplate the possibility of outliers in the data have a poor performance for detecting the active effects when that contingency becomes a reality. There are some methods to detect active effects under this experimental setup that consider outliers. We propose a new procedure based on robust regression methods to estimate the effects that allows for outliers. We perform a simulation study to compare its behavior relative to existing methods and find that the new method has a very competitive or even better power. The relative power improves as the contamination and size of outliers increase when the number of active effects is up to four

Abstract:

The paper presents the asymptotic theory of the efficient method of moments when the model of interest is not correctly specified. The paper assumes a sequence of independent and identically distributed observations and a global misspecification. It is found that the limiting distribution of the estimator is still asymptotically normal, but it suffers a strong impact in the covariance matrix. A consistent estimator of this covariance matrix is provided. The large sample distribution on the estimated moment function is also obtained. These results are used to discuss the situation when the moment conditions hold but the model is misspecified. It also is shown that the overidentifying restrictions test has asymptotic power one whenever the limit moment function is different from zero. It is also proved that the bootstrap distributions converge almost surely to the previously mentioned distributions and hence they could be used as an alternative to draw inferences under misspecification. Interestingly, it is also shown that bootstrap can be reliably applied even if the number of bootstrap replications is very small

Abstract:

It is well known that outliers or faulty observations affect the analysis of unreplicated factorial experiments. This work proposes a method that combines the rank transformation of the observations, the Daniel plot and a formal statistical testing procedure to assess the significance of the effects. It is shown, by means of previous theoretical results cited in the literature, examples and a Monte Carlo study, that the approach is helpful in the presence of outlying observations. The simulation study includes an ample set of alternative procedures that have been published in the literature to detect significant effects in unreplicated experiments. The Monte Carlo study also gives evidence that using the rank transformation as proposed provides two advantages: it keeps control of the experimentwise error rate and it improves the relative power to detect active factors in the presence of outlying observations.

Abstract:

Most inferential results are based on the assumption that the user has a "random" sample; by this it is usually understood that the observations are a realization from a set of independent identically distributed random variables. However, most of the time this is not true, mainly for two reasons: first, the data are not obtained by means of a probabilistic sampling scheme from the population; the data are just gathered as they become available or, in the best of cases, using some kind of control variables and quota sampling. Second, even if a probabilistic scheme is used, the sample design is complex in the sense that it is not simple random sampling with replacement, but instead some sort of stratification or clustering or a combination of both is required. For an excellent discussion of the kind of considerations that should be made in the first situation, see Hahn and Meeker (1993) and a related comment in Aguirre (1994). For the second problem there is a book on the topic by Skinner et al. (1989). In this paper we consider the problem of evaluating the effect of sampling complexity on Pearson's Chi-square and other alternative tests for goodness of fit for proportions. Work on this problem can be found in Shuster and Downing (1976), Rao and Scott (1974), Fellegi (1980), Holt et al. (1980), Rao and Scott (1981), and Thomas and Rao (1987). Out of this work come several adjustments to Pearson's test, namely: Wald-type tests, average eigenvalue correction and Satterthwaite-type correction. There is a more recent and general resampling approach given in Sitter (1992), but it was not pursued in this study.

Abstract:

Sometimes data analysis using the usual parametric techniques produces misleading results due to violations of the underlying assumptions, such as outliers or non-constant variances. In particular, this could happen in unreplicated factorial or fractional factorial experiments. To help in this situation, alternative analyses have been proposed. For example, Box and Meyer give a Bayesian analysis allowing for possibly faulty observations in unreplicated factorials, and the well-known Box-Cox transformation can be used when there is a change in dispersion. This paper presents an analysis based on the rank transformation that deals with the above problems. The analysis is simple to use and can be implemented with a general purpose statistical computer package. The procedure is illustrated with examples from the literature. A theoretical justification is outlined at the end of the paper

Abstract:

The article considers the problem of choosing between two (possibly) nonlinear models that have been fitted to the same data using M-estimation methods. An asymptotically normally distributed test statistic is proposed, and its behavior is examined by means of a Monte Carlo study. We found that the presence of a competitive model either in the null or the alternative hypothesis affects the distributional properties of the tests, and that when the data contain outlying observations the new procedure has significantly higher power than the rest of the tests

Abstract:

Fuller (1976), Anderson (1971), and Hannan (1970) introduce infinite moving average models as the limit in the quadratic mean of a sequence of partial sums, and Fuller (1976) shows that if the assumption of independence of the addends is made then the limit almost surely holds. This note shows that without the assumption of independence, the limit holds with probability one. Moreover, the proofs given here are easier to teach
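In the standard setup behind this note, the infinite moving average is defined as

\[
X_t = \sum_{j=0}^{\infty} \psi_j e_{t-j} = \lim_{n\to\infty}\sum_{j=0}^{n} \psi_j e_{t-j}, \qquad \sum_{j=0}^{\infty}\psi_j^2 < \infty,
\]

with the limit taken in quadratic mean; the note's contribution is that the partial sums also converge with probability one without assuming the \(e_t\) independent.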

Abstract:

A test for the problem of choosing between several nonnested nonlinear regression models simultaneously is presented. The test does not require an explicit specification of a parametric family of distributions for the error term and has a closed form

Abstract:

The asymptotic distribution of the generalized Cox test for choosing between two multivariate, nonlinear regression models in implicit form is derived. The data are assumed to be generated by a model that need not be either the null or the non-null model. As the data-generating model is not subjected to a Pitman drift, the analysis is global, not local, and provides a fairly complete qualitative description of the power characteristics of the generalized Cox test. Some investigations of these characteristics are included. A new test statistic is introduced that does not require an explicit specification of the error distribution of the null model. The idea is to replace an analytical computation of the expectation of the Cox difference with a bootstrap estimate. The null distribution of this new test is derived

Abstract:

A great deal of research has investigated how various aspects of ethnic identity influence consumer behavior, yet this literature is fragmented. The objective of this article was to present an integrative theoretical model of how individuals are motivated to think and act in a manner consistent with their salient ethnic identities. The model emerges from a review of social science and consumer research about US Hispanics, but researchers could apply it in its general form and/or adapt it to other populations. Our model extends Oyserman's (Journal of Consumer Psychology, 19, 250) identity-based motivation (IBM) model by differentiating between two types of antecedents of ethnic identity salience: longitudinal cultural processes and situational activation by contextual cues, each with different implications for the availability and accessibility of ethnic cultural knowledge. We provide new insights by introducing three ethnic identity motives that are unique to ethnic (nonmajority) cultural groups: belonging, distinctiveness, and defense. These three motives are in constant tension with one another and guide longitudinal processes like acculturation, and ultimately influence consumers' procedural readiness and action readiness. Our integrative framework organizes and offers insights into the current body of Hispanic consumer research, and highlights gaps in the literature that present opportunities for future research

Abstract:

In many Solvency and Basel loss data, there are thresholds or deductibles that affect the analysis. On the other hand, the Birnbaum-Saunders model has received great attention during the last two decades and it can be used as a loss distribution. In this paper, we propose a solution to the problem of deductibles using a truncated version of the Birnbaum-Saunders distribution. The probability density function, cumulative distribution function, and moments of this distribution are obtained. In addition, properties regularly used in the insurance industry, such as multiplication by a constant (inflation effect) and reciprocal transformation, are discussed. Furthermore, a study of the behavior of the risk rate and of risk measures is carried out. Moreover, estimation aspects are also considered in this work. Finally, an application based on real loss data from a commercial bank is conducted
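For reference, left truncation at a deductible \(d\) takes the generic form below, with \(f\) and \(F\) the Birnbaum-Saunders density and distribution function (the cdf is shown in its usual parameterization, which may differ in notation from the paper):

\[
f_d(x) = \frac{f(x)}{1 - F(d)}, \quad x > d, \qquad
F(x) = \Phi\!\left[\frac{1}{\alpha}\left(\sqrt{\frac{x}{\beta}} - \sqrt{\frac{\beta}{x}}\right)\right], \quad x > 0.
\]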

Abstract:

This paper proposes two new estimators for determining the number of factors (r) in static approximate factor models. We exploit the well-known fact that the r largest eigenvalues of the variance matrix of N response variables grow unboundedly as N increases, while the other eigenvalues remain bounded. The new estimators are obtained simply by maximizing the ratio of two adjacent eigenvalues. Our simulation results provide promising evidence for the two estimators
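A minimal sketch of the eigenvalue-ratio idea follows (illustrative only: the function and variable names are not from the paper, and the paper's two estimators include specific refinements):

```python
import numpy as np

def eigenvalue_ratio_estimator(X, rmax):
    """Pick the number of factors r as the maximizer of the ratio of
    adjacent eigenvalues of the sample covariance of X (T x N)."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / X.shape[0]                  # sample covariance (N x N)
    eig = np.sort(np.linalg.eigvalsh(S))[::-1]  # eigenvalues, descending
    ratios = eig[:rmax] / eig[1:rmax + 1]       # lambda_k / lambda_{k+1}
    return int(np.argmax(ratios)) + 1           # k runs from 1 to rmax

# usage: r_hat = eigenvalue_ratio_estimator(data_matrix, rmax=10)
```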

Abstract:

We study a modification of the Luce rule for stochastic choice which admits the possibility of zero probabilities. In any given menu, the decision maker uses the Luce rule on a consideration set, potentially a strict subset of the menu. Without imposing any structure on how the consideration sets are formed, we characterize the resulting behavior using a single axiom. Our result offers insight into special cases where consideration sets are formed under various restrictions
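In the notation commonly used for such models (the symbols \(u\) and \(C(\cdot)\) are illustrative, not necessarily the paper's), the modified rule assigns

\[
\rho(a, A) =
\begin{cases}
\dfrac{u(a)}{\sum_{b \in C(A)} u(b)}, & a \in C(A),\\[6pt]
0, & a \in A \setminus C(A),
\end{cases}
\]

so alternatives outside the consideration set \(C(A) \subseteq A\) are chosen with probability zero.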

Abstract:

Purpose – This paper summarizes the findings of a research project aimed at benchmarking the environmental sustainability practices of the top 500 Mexican companies. Design/methodology/approach – The paper surveyed the firms with regard to various aspects of their adoption of environmental sustainability practices, including who or what prompted adoption, future adoption plans, decision-making responsibility, and internal/external challenges. The survey also explored how the adoption of environmental sustainability practices relates to the competitiveness of these firms. Findings – The results suggest that Mexican companies are very active in the various areas of business where environmental sustainability is relevant. Not surprisingly, however, the Mexican companies are seen to be at an early stage of development along the sustainability “learning curve”. Research limitations/implications – The sample consisted of 103 self-selected firms representing the six primary business sectors in the Mexican economy. Because the manufacturing sector is significantly overrepresented in the sample and because of its importance in addressing issues of environmental sustainability, when appropriate, specific results for this sector are reported and contrasted to the overall sample. Practical implications – The vast majority of these firms see adopting environmental sustainability practices as being profitable and think this will be even more important in the future. Originality/value – Improving the environmental performance of business firms through the adoption of sustainability practices is compatible with competitiveness and improved financial performance. In Mexico, one might expect that the same would be true, but only anecdotal evidence was heretofore available

Abstract:

We study the consumption-portfolio allocation problem in continuous time when asset prices follow Lévy processes and the investor is concerned about potential model misspecification. We derive optimal consumption and portfolio policies that are robust to uncertainty about the hard-to-estimate drift rate, jump intensity and jump size parameters. We also provide a semi-closed form formula for the detection-error probability and compare various portfolio holding strategies, including robust and non-robust policies. Our quantitative analysis shows that ignoring uncertainty leads to significant wealth loss for the investor

Abstract:

We exploit the manifold increase in homicides in 2008–11 in Mexico resulting from its war on organized drug traffickers to estimate the effect of drug-related homicides on housing prices. We use an unusually rich data set that provides national coverage of housing prices and homicides and exploits within-municipality variations. We find that the impact of violence on housing prices is borne entirely by the poor sectors of the population. An increase in homicides equivalent to 1 standard deviation leads to a 3 percent decrease in the price of low-income housing

Abstract:

This paper examines foreign direct investment (FDI) in the Hungarian economy in the period of post-Communist transition since 1989. Hungary took a quite aggressive approach in welcoming foreign investment during this period and as a result had the highest per capita FDI in the region as of 2001. We discuss the impact of FDI in terms of strategic intent, i.e., market serving and resource seeking FDI. The effect of these two kinds of FDI is contrasted by examining the impact of resource seeking FDI in manufacturing sectors and market serving FDI in service industries. In the case of transition economies, we argue that due to the strategic intent, resource seeking FDI can imply a short-term impact on economic development whereas market serving FDI strategically implies a long-term presence with increased benefits for the economic development of a transition economy. Our focus is that of market serving FDI in the Hungarian banking sector, which has brought improved service and products to multinational and Hungarian firms. This has been accompanied by the introduction of innovative financial products to the Hungarian consumer, in particular consumer credit including mortgage financing. However, the latter remains an underserved segment with much growth potential. For public policy in Hungary and other transition economies, we conclude that policymakers should consider the strategic intent of FDI in order to maximize its benefits in their economies

Abstract:

We propose a general framework for extracting rotation invariant features from images for the tasks of image analysis and classification. Our framework is inspired by the form of the Zernike set of orthogonal functions. It provides a way to use a set of one-dimensional functions to form an orthogonal set over the unit disk by non-linearly scaling their domain and then associating with them an exponential term. When the images are projected into the subspace created with the proposed framework, the rotations in the image affect only the exponential term, while the values of the orthogonal functions serve as rotation invariant features. We exemplify our framework using the Haar wavelet functions to extract features from several thousand images of symbols. We then use the features in an OCR experiment to demonstrate the robustness of the method
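A minimal numerical sketch of this kind of construction follows (assumed details: a pixel-grid discretization of the unit disk and a crude Haar-like radial profile; the paper's framework rescales the radial functions non-linearly to make them orthogonal over the disk):

```python
import numpy as np

def rotation_invariant_features(img, radial_fns, m_orders):
    """Project an image onto radial-profile x angular-exponential basis
    functions over the unit disk and keep the magnitudes, which are
    invariant to image rotation (only the exponential term picks up a phase)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # map pixel coordinates to the unit disk centered on the image
    xc = (x - (w - 1) / 2) / ((w - 1) / 2)
    yc = (y - (h - 1) / 2) / ((h - 1) / 2)
    r = np.hypot(xc, yc)
    theta = np.arctan2(yc, xc)
    inside = r <= 1.0
    feats = []
    for g in radial_fns:
        radial = g(r[inside])
        for m in m_orders:
            proj = np.sum(img[inside] * radial * np.exp(-1j * m * theta[inside]))
            feats.append(abs(proj))      # magnitude: rotation invariant
    return np.array(feats)

# usage with a crude two-piece Haar-like radial profile (illustrative):
# haar = lambda r: np.where(r < 0.5, 1.0, -1.0)
# f = rotation_invariant_features(image, [haar], m_orders=[0, 1, 2, 3])
```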

Abstract:

In this paper we explore the use of orthogonal functions as generators of representative, compact descriptors of image content. In Image Analysis and Pattern Recognition such descriptors are referred to as image features, and there are some useful properties they should possess, such as rotation invariance and the capacity to identify different instances of one class of images. We exemplify our algorithmic methodology using the family of Daubechies wavelets, since they form an orthogonal function set. We benchmark the quality of the image features generated by doing a comparative OCR experiment with three different sets of image features. Our algorithm can use a wide variety of orthogonal functions to generate rotation invariant features, thus providing the flexibility to identify sets of image features that are best suited for the recognition of different classes of images

Abstract:

The COVID-19 pandemic triggered a huge, sudden uptake in work from home, as individuals and organizations responded to contagion fears and government restrictions on commercial and social activities (Adams-Prassl et al. 2020; Bartik et al. 2020; Barrero et al. 2020; De Fraja et al. 2021). Over time, it has become evident that the big shift to work from home will endure after the pandemic ends (Barrero et al. 2021). No other episode in modern history involves such a pronounced and widespread shift in working arrangements in such a compressed time frame. The Industrial Revolution and the later shift away from factory jobs brought greater changes in skill requirements and business operations, but they unfolded over many decades. These facts prompt some questions: What explains the pandemic's role as catalyst for a lasting uptake in work from home (WFH)? When looking across countries and regions, have differences in pandemic severity and the stringency of government lockdowns had lasting effects on WFH levels? What does a large, lasting shift to remote work portend for workers? Finally, how might the big shift to remote work affect the pace of innovation and the fortunes of cities?

Abstract:

The pandemic triggered a large, lasting shift to work from home (WFH). To study this shift, we survey full-time workers who finished primary school in twenty-seven countries as of mid-2021 and early 2022. Our cross-country comparisons control for age, gender, education, and industry and treat the United States mean as the baseline. We find, first, that WFH averages 1.5 days per week in our sample, ranging widely across countries. Second, employers plan an average of 0.7 WFH days per week after the pandemic, but workers want 1.7 days. Third, employees value the option to WFH two to three days per week at 5 percent of pay, on average, with higher valuations for women, people with children, and those with longer commutes. Fourth, most employees were favorably surprised by their WFH productivity during the pandemic. Fifth, looking across individuals, employer plans for WFH levels after the pandemic rise strongly with WFH productivity surprises during the pandemic. Sixth, looking across countries, planned WFH levels rise with the cumulative stringency of government-mandated lockdowns during the pandemic. We draw on these results to explain the big shift to WFH and to consider some implications for workers, organizations, cities, and the pace of innovation

Abstract:

This paper presents evidence of large learning losses and partial recovery in Guanajuato, Mexico, during and after the school closures related to the COVID-19 pandemic. Learning losses were estimated using administrative data from enrollment records and by comparing the results of a census-based standardized test administered to approximately 20,000 5th and 6th graders in: (a) March 2020 (a few weeks before schools closed); (b) November 2021 (2 months after schools reopened); and (c) June 2023 (21 months after schools reopened and over three years after the pandemic started). On average, students performed 0.2 to 0.3 standard deviations lower in Spanish and math after schools reopened, equivalent to 0.66 to 0.87 years of schooling in Spanish and 0.87 to 1.05 years of schooling in math. By June 2023, students had made up for about 60% of the learning loss that built up during school closures but still scored 0.08–0.11 standard deviations below their pre-pandemic levels (equivalent to 0.23–0.36 years of schooling)

Abstract:

In this work, we propose a hybrid AI system consisting of a multi-agent system simulating students during an exam and a teacher monitoring them, as well as an evolutionary algorithm that finds classroom arrangements which minimize cheating incentives. The students answer the exam based on how much knowledge they have about its topic. In our simulation, they then enter a decision phase in which, for those questions they do not know the answer to, they will either cheat or answer by guessing. If a student gets caught cheating, his/her exam is cancelled. The purpose of this study is to examine how different monitoring behaviors on the part of the teacher affect the cheating behavior of students. The results show that an unbiased teacher, that is, a teacher who monitors every student with the same probability, produces minimal cheating incentives for students

Abstract:

When analyzing catastrophic risk, traditional measures for evaluating risk, such as the probable maximum loss (PML), value at risk (VaR), tail-VaR, and others, can become practically impossible to obtain analytically in certain types of insurance, such as earthquake, and certain types of reinsurance arrangements, especially non-proportional with reinstatements. Given the available information, it can be very difficult for an insurer to measure its risk exposure. The transfer of risk in this type of insurance is usually done through reinsurance schemes combining diverse types of contracts that can greatly reduce the extreme tail of the cedant’s loss distribution. This effect can be assessed mathematically. The PML is defined in terms of a very extreme quantile. Also, under standard operating conditions, insurers use several “layers” of non-proportional reinsurance that may or may not be combined with some type of proportional reinsurance. The resulting reinsurance structures will then be very complicated to analyze and to evaluate in terms of their mitigation or transfer effects analytically, so it may be necessary to use alternative approaches, such as Monte Carlo simulation methods. This is what we do in this paper in order to measure the effect of a complex reinsurance treaty on the risk profile of an insurance company. We compute the pure risk premium and PML, as well as a host of results: impact on the insured portfolio, risk transfer effect of reinsurance programs, proportion of times reinsurance is exhausted, percentage of years it was necessary to use the contractual reinstatements, etc. Since the estimators of quantiles are known to be biased, we explore the alternative of using an Extreme Value approach to complement the analysis
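A stripped-down Monte Carlo sketch of this kind of computation follows, for a single per-occurrence excess-of-loss layer with illustrative frequency and severity assumptions (reinstatements and multiple layers, which the paper handles, are omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_retained(n_years, lam, sev_mu, sev_sigma, retention, limit):
    """Compound-Poisson annual losses with lognormal severities; a
    per-occurrence layer of `limit` xs `retention` is ceded to reinsurance.
    Returns the cedant's retained annual losses."""
    retained = np.empty(n_years)
    for i in range(n_years):
        n_claims = rng.poisson(lam)
        sev = rng.lognormal(sev_mu, sev_sigma, n_claims)
        ceded = np.clip(sev - retention, 0.0, limit)   # layer recovery per claim
        retained[i] = sev.sum() - ceded.sum()
    return retained

losses = simulate_retained(100_000, lam=50, sev_mu=10.0, sev_sigma=1.5,
                           retention=200_000.0, limit=800_000.0)
pure_premium = losses.mean()                 # pure risk premium (retained)
pml_99_5 = np.quantile(losses, 0.995)        # PML as an extreme quantile
print(pure_premium, pml_99_5)
```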

Abstract:

Estimation of adequate reserves for outstanding claims is one of the main activities of actuaries in property/casualty insurance and a major topic in actuarial science. The need to estimate future claims has led to the development of many loss reserving techniques. There are two important problems that must be dealt with in the process of estimating reserves for outstanding claims: one is to determine an appropriate model for the claims process, and the other is to assess the degree of correlation among claim payments in different calendar and origin years. We approach both problems here. On the one hand we use a gamma distribution to model the claims process and, in addition, we allow the claims to be correlated. We follow a Bayesian approach for making inference with vague prior distributions. The methodology is illustrated with a real data set and compared with other standard methods

Abstract:

Consider a random sample X1, X2, ..., Xn from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is made from a Bayesian perspective by using a Markov Chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refer to the sugar content (°BRIX level) of orange juice produced in different countries
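The joint likelihood of the sample mean and range is what makes the posterior intractable. As an illustration of the same inferential idea, here is a minimal approximate-Bayesian-computation (ABC) rejection sketch, not the paper's MCMC algorithm; the priors and tolerances are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_mean_range(n, xbar, range_obs, n_draws=200_000, tol=0.05):
    """ABC rejection sampler for (mu, sigma) of a normal population when
    only the sample size, mean and range were recorded."""
    mu = rng.normal(xbar, 5.0, n_draws)         # vague prior centered at xbar
    sigma = rng.uniform(0.01, 10.0, n_draws)    # flat prior for sigma
    samples = rng.normal(mu[:, None], sigma[:, None], (n_draws, n))
    d_mean = np.abs(samples.mean(axis=1) - xbar)
    d_range = np.abs(np.ptp(samples, axis=1) - range_obs)
    keep = (d_mean < tol) & (d_range < 5 * tol)
    return mu[keep], sigma[keep]                # approximate posterior draws

mu_post, sigma_post = abc_mean_range(n=10, xbar=12.0, range_obs=4.0)
print(mu_post.mean(), sigma_post.mean())
```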

Abstract:

This paper is concerned with the situation that occurs in claims reserving when there are negative values in the development triangle of incremental claim amounts. Typically these negative values will be the result of salvage recoveries, payments from third parties, total or partial cancellation of outstanding claims due to initial overestimation of the loss or to a possible favorable jury decision in favor of the insurer, rejection by the insurer, or just plain errors. Some of the traditional methods of claims reserving, such as the chain-ladder technique, may produce estimates of the reserves even when there are negative values. However, many methods can break down in the presence of enough (in number and/or size) negative incremental claims if certain constraints are not met. Historically the chain-ladder method has been used as a gold standard (benchmark) because of its generalized use and ease of application. A method that improves on the gold standard is one that can handle situations where there are many negative incremental claims and/or some of these are large. This paper presents a Bayesian model to consider negative incremental values, based on a three-parameter log-normal distribution. The model presented here allows the actuary to provide point estimates and measures of dispersion, as well as the complete distribution for outstanding claims from which the reserves can be derived. It is concluded that the method has a clear advantage over other existing methods. A Markov chain Monte Carlo simulation is applied using the package WinBUGS
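In its usual parameterization, the three-parameter log-normal density underlying such a model is

\[
f(x \mid \mu, \sigma, \tau) = \frac{1}{(x-\tau)\,\sigma\sqrt{2\pi}}
\exp\!\left\{-\frac{\left[\log(x-\tau)-\mu\right]^2}{2\sigma^2}\right\}, \qquad x > \tau,
\]

so a negative threshold \(\tau\) extends the support to \((\tau, \infty)\) and accommodates negative incremental claims.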

Abstract:

The BMOM is particularly useful for obtaining post-data moments and densities for parameters and future observations when the form of the likelihood function is unknown and thus a traditional Bayesian approach cannot be used. Also, even when the form of the likelihood is assumed known, in time series problems it is sometimes difficult to formulate an appropriate prior density. Here, we show how the BMOM approach can be used in two nontraditional problems. The first one is conditional forecasting in regression and time series autoregressive models. Specifically, it is shown that when forecasting disaggregated data (say quarterly data) given aggregate constraints (say in terms of annual data) it is possible to apply a Bayesian approach to derive conditional forecasts in the multiple regression model. The types of constraints (conditioning) usually considered are that the sum, or the average, of the forecasts equals a given value. This kind of condition can be applied to forecasting quarterly values whose sum must be equal to a given annual value. Analogous results are obtained for AR(p) models. The second problem we analyse is the issue of aggregation and disaggregation of data in relation to predictive precision and modelling. Predictive densities are derived for future aggregate values by means of the BMOM based on a model for disaggregated data. They are then compared with those derived based on aggregated data
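The mechanics of the conditional-forecasting part can be summarized by the generic formulas for conditioning on a linear combination (this is the standard linear-conditioning result, shown to fix ideas, not the paper's full BMOM development): if the unconstrained forecasts have post-data mean \(m\) and covariance \(V\), imposing \(c'y = a\) gives

\[
m^{*} = m + V c\,(c' V c)^{-1}(a - c'm), \qquad
V^{*} = V - V c\,(c' V c)^{-1} c' V,
\]

e.g. with \(c\) a vector of ones so that quarterly forecasts add up to a known annual value.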

Resumen:

El problema de estimar el valor acumulado de una variable positiva y continua para la cual se ha observado una acumulación parcial, y generalmente con sólo un reducido número de observaciones (dos años), se puede llevar a cabo aprovechando la existencia de estacionalidad estable (de un periodo a otro). Por ejemplo, la cantidad por pronosticar puede ser el total de un periodo (año) y el cual debe hacerse en cuanto se obtiene información sólo para algunos subperiodos (meses) dados. Estas condiciones se presentan de manera natural en el pronóstico de las ventas estacionales de algunos productos ‘de temporada’, tales como juguetes; en el comportamiento de los inventarios de bienes cuya demanda varía estacionalmente, como los combustibles; o en algunos tipos de depósitos bancarios, entre otros. En este trabajo se analiza el problema en el contexto de muestreo por conglomerados. Se propone un estimador de razón para el total que se quiere pronosticar, bajo el supuesto de estacionalidad estable. Se presenta un estimador puntual y uno para la varianza del total. El método funciona bien cuando no es factible aplicar metodología estándar debido al reducido número de observaciones. Se incluyen algunos ejemplos reales, así como aplicaciones a datos publicados con anterioridad. Se hacen comparaciones con otros métodos

Abstract:

The problem of estimating the accumulated value of a positive and continuous variable for which some partially accumulated data have been observed, and usually with only a small number of observations (two years), can be approached by taking advantage of the existence of stable seasonality (from one period to another). For example, the quantity to be predicted may be the total for a period (year), and the prediction needs to be made as soon as partial information becomes available for given subperiods (months). These conditions appear in a natural way in the prediction of seasonal sales of style goods, such as toys; in the behavior of inventories of goods whose demand varies seasonally, such as fuels; or in banking deposits, among many other examples. In this paper, the problem is addressed within a cluster sampling framework. A ratio estimator is proposed for the total value to be forecasted under the assumption of stable seasonality. Estimators are obtained for both the point forecast and the variance. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Some real examples are included, as well as applications to some previously published data. Comparisons are made with other procedures
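Under stable seasonality, one simple version of such a ratio estimator is the following (schematic only; the paper works in a cluster-sampling framework and also derives a variance estimator):

\[
\hat{T} = \frac{\sum_{j=1}^{k} x_j}{\hat{p}_k}, \qquad
\hat{p}_k = \frac{\sum_{j=1}^{k} x'_j}{\sum_{j=1}^{M} x'_j},
\]

where \(x_1, \ldots, x_k\) are the subperiod values observed so far in the current period and the \(x'_j\) are the corresponding values of a complete previous period with \(M\) subperiods.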

Abstract:

We present a Bayesian solution to forecasting a time series when few observations are available. The quantity to predict is the accumulated value of a positive, continuous variable when partially accumulated data are observed. These conditions appear naturally in predicting sales of style goods and coupon redemption. A simple model describes the relation between partial and total values, assuming stable seasonality. Exact analytic results are obtained for point forecasts and the posterior predictive distribution. Noninformative priors allow automatic implementation. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Examples are provided

Resumen:

Este artículo evalúa algunos aspectos del Proyecto Piloto de Nutrición, Alimentación y Salud (PNAS). Describe brevemente el Proyecto y presenta las características de la población beneficiaria, luego profundiza en el problema de la pobreza y a partir de un índice se evalúa la selección de las comunidades beneficiadas por el Proyecto. Posteriormente se describe la metodología usada en el análisis costo-efectividad y se da el procedimiento para el cálculo de los cocientes del efecto que tuvo el PNAS específicamente en el gasto en alimentos. Por último, se presentan las conclusiones que, entre otros aspectos, arrojan que el efecto del PNAS en el gasto en alimentos de las familias indujo un incremento del gasto de 7.3% en la zona ixtlera y de 4.3% en la zona otomí-mazahua, con un costo de 29.9 nuevos pesos (de 1991) y de 40.9 para cada una de las zonas, respectivamente

Abstract:

An evaluation is made of some aspects of the Proyecto Piloto de Nutrición, Alimentación y Salud, a Pilot Program for Nutrition, Food and Health of the Mexican Government (PNAS). We give a brief description of the Project and the characteristics of the target population. We then describe and use the FGT Index to determine if the communities included in the Project were correctly chosen. We describe the method of cost-effectiveness analysis used in this article. The procedure for specifying cost-effectiveness ratios is presented next, and their application to measure the impact of PNAS on Food Expenditures is carried out. Finally we present empirical results showing, among other things, that PNAS increased Food Expenditures of the participating households by 7.3% in the Ixtlera Zone and by 4.3% in the Otomí Mazahua Zone, at a cost of N$29.9 (1991) and N$40.9 for each zone, respectively

Resumen:

Con frecuencia las instituciones financieras internacionales y los gobiernos locales se ven implicados en la implantación de programas de desarrollo. Existe amplia evidencia de que los mejores resultados se obtienen cuando la comunidad se compromete en la operación de los programas, es decir cuando existe participación comunitaria. La evidencia es principalmente cualitativa, pues no hay métodos para medir cuantitativamente esta participación. En este artículo se propone un procedimiento para generar un índice agregado de participación comunitaria. Está orientado de manera específica a medir el grado de participación comunitaria en la construcción de obras de beneficio colectivo. Para estimar los parámetros del modelo que se propone es necesario hacer algunos supuestos, debido a las limitaciones en la información. Se aplica el método a datos de comunidades que participaron en un proyecto piloto de nutrición-alimentación y salud que se llevó a cabo en México

Abstract:

There is ample evidence that the best results are obtained in development programs when the target community gets involved in their implementation and/or operation. The evidence is mostly qualitative, however, since there are no methods for measuring this participation quantitatively. In this paper we present a procedure for generating an aggregate index of community participation based on productivity. It is specifically aimed at measuring community participation in the construction of works for collective benefit. Because there are limitations on the information available, additional assumptions must be made to estimate parameters. The method is applied to data from communities in Mexico participating in a national nutrition, food and health program

Abstract:

A Bayesian approach is used to derive constrained and unconstrained forecasts in an autoregressive time series model. Both are obtained by formulating an AR(p) model in such a way that it is possible to compute numerically the predictive distribution for any number of forecasts. The types of constraints considered are that a linear combination of the forecasts equals a given value. This kind of restriction is applied to forecasting quarterly values whose sum must be equal to a given annual value. Constrained forecasts are generated by conditioning on the predictive distribution of unconstrained forecasts. The procedures are applied to the Quarterly GNP of Mexico, to a simulated series from an AR(4) process and to the Quarterly Unemployment Rate for the United States
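A small numerical sketch of the constrained-forecast step follows (generic normal conditioning with illustrative numbers; the paper derives the full predictive distribution for the AR(p) model):

```python
import numpy as np

def constrain_forecasts(m, V, c, a):
    """Condition unconstrained forecasts with mean m and covariance V on the
    linear restriction c @ y = a.  Standard normal-conditioning formulas."""
    m, c = np.asarray(m, float), np.asarray(c, float)
    Vc = V @ c
    s = c @ Vc                                  # scalar c'Vc
    m_star = m + Vc * (a - c @ m) / s
    V_star = V - np.outer(Vc, Vc) / s
    return m_star, V_star

# usage: four quarterly forecasts constrained to sum to a known annual value
m = np.array([10., 11., 12., 13.])
V = np.eye(4)
m_c, V_c = constrain_forecasts(m, V, c=np.ones(4), a=50.0)
print(m_c, m_c.sum())                           # constrained means sum to 50
```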

Abstract:

The problem of temporal disaggregation of time series is analyzed by means of Bayesian methods. The disaggregated values are obtained through a posterior distribution derived by using a diffuse prior on the parameters. Further analysis is carried out assuming alternative conjugate priors. The means of the different posterior distributions are shown to be equivalent to some sampling theory results. Bayesian prediction intervals are obtained. Forecasts for future disaggregated values are derived assuming a conjugate prior for the future aggregated value

Abstract:

A formulation of the problem of detecting outliers as an empirical Bayes problem is studied. In so doing we encounter a non-standard empirical Bayes problem for which the notion of average risk asymptotic optimality (a.r.a.o.) of procedures is defined. Some general theorems giving sufficient conditions for a.r.a.o. procedures are developed. These general results are then used in various formulations of the outlier problem for underlying normal distributions to give a.r.a.o. empirical Bayes procedures. Rates of convergence results are also given using the methods of Johns and Van Ryzin

Resumen:

El texto examina cuáles son las características y rasgos del habla tanto de las mujeres como de los hombres; hace una valoración a partir de algunas de sus causas y concluye con una invitación a hacernos conscientes de la forma de expresarnos

Abstract:

This article examines the distinctive characteristics and features of how both women and men speak. Based on this analysis, the author makes an assessment and then invites the reader to become aware of their manner of speaking

Resumen:

Inés Arredondo perteneció a la llamada Generación de Medio Siglo, particularmente, al grupo de intelectuales y artistas que fundaron e impulsaron las actividades culturales de la Casa del Lago durante los años sesenta. El artículo es una semblanza que da cuenta tanto de los hechos más importantes que marcaron la vida y la trayectoria intelectual de Inés Arredondo, como de los rasgos particulares (estéticos) que definen la obra narrativa de esta escritora excepcional

Abstract:

Inés Arredondo belonged to the so-called Mid-Century Generation, namely a group of intellectuals and artists that established and promoted Casa del Lago’s cultural activities in the Sixties. This article gives an account of the important events and intellectual journey that shaped the writer’s life, particularly the esthetic characteristics that define the narrative work of this exceptional writer

Abstract:

Informality is a structural trait in emerging economies affecting the behavior of labor markets, financial access and economy-wide productivity. This paper develops a simple general equilibrium closed economy model with nominal rigidities, labor and financial frictions to analyze the transmission of shocks and of monetary policy. In the model, the informal sector provides a flexible margin of adjustment to the labor market at the cost of lower productivity. In addition, only formal sector firms have access to financing, which is instrumental in their production process. In a quantitative version of the model calibrated to Mexican data, we find that informality: (i) dampens the impact of demand and financial shocks, as well as of technology shocks specific to the formal sector, on wages and inflation, but (ii) heightens the inflationary impact of aggregate technology shocks. The presence of an informal sector also increases the sacrifice ratio of monetary policy actions. From a Central Bank perspective, the results imply that informality mitigates inflation volatility for most types of shocks but makes monetary policy less effective

Resumen:

En el presente trabajo, estudiamos los espacios de Brown, que son conexos y no completamente de Hausdorff. Utilizando progresiones aritméticas, construimos una base BG para una topología TG de N, y mostramos que (N, TG), llamado el espacio de Golomb, es de Brown. También probamos que hay elementos de BG que son de Brown, mientras que otros están totalmente separados. Escribimos algunas consecuencias de este resultado. Por ejemplo, (N, TG) no es conexo en pequeño en ninguno de sus puntos. Esto generaliza un resultado probado por Kirch en 1969. También damos una prueba más simple de un resultado presentado por Szczuka en 2010

Abstract:

In the present paper we study Brown spaces, which are connected and not completely Hausdorff. Using arithmetic progressions, we construct a base BG for a topology TG on N, and show that (N, TG), called the Golomb space, is a Brown space. We also show that some elements of BG are Brown spaces, while others are totally separated. We write some consequences of this result. For example, the space (N, TG) is not connected "im kleinen" at any of its points. This generalizes a result proved by Kirch in 1969. We also present a simpler proof of a result given by Szczuka in 2010

Abstract:

The book synthesizes the experience gained in the Computers and Programming courses taught by the author over three years in the Computer Science degree program of the Facultad de Ciencias at the Universidad Autónoma de Barcelona

Resumen:

En años recientes se ha incrementado el interés en el desarrollo de nuevos materiales, en este caso compositos, ya que estos materiales más avanzados pueden realizar mejor su trabajo que los materiales convencionales (K. Morsi, A. Esawi, 2006). En el presente trabajo se analiza el efecto de la adición de nanotubos de carbono incorporando nanopartículas de plata para aumentar tanto sus propiedades eléctricas como mecánicas. Se realizaron aleaciones de aluminio con nanotubos de carbono utilizando molienda de baja energía, con una velocidad de 140 rpm y durante un periodo de 24 horas; partiendo de aluminio al 98%, se realizó una aleación con 0.35 de nanotubos de carbono. La molienda se realizó para obtener una buena homogenización, ya que la distribución afecta al comportamiento de las propiedades (Amirhossein Javadi, 2013), además de la reducción de partícula. Finalmente se incorporaron los nanotubos de carbono adicionando nanopartículas de plata obtenidas por reducción con borohidruro de sodio por medio de la punta ultrasónica. Las aleaciones obtenidas fueron caracterizadas por microscopía electrónica de barrido (MEB) y análisis de difracción de rayos X; se realizaron pruebas de dureza y, finalmente, pruebas de conductividad eléctrica

Abstract:

In recent years, interest in the development of new materials, in this case composites, has increased, since these more advanced materials can perform their work better than conventional materials (K. Morsi, A. Esawi, 2006). In the present work we analyze the effect of the addition of carbon nanotubes incorporating silver nanoparticles to increase both their electrical and mechanical properties. Aluminum alloys with carbon nanotubes were produced using low-energy milling at a speed of 140 rpm over a period of 24 hours; starting from 98% aluminum, an alloy was made with 0.35 of carbon nanotubes. The milling was performed to obtain good homogenization, since the distribution affects the behavior of the properties (Amirhossein Javadi, 2013), as well as particle size reduction. Finally, the carbon nanotubes were incorporated by adding silver nanoparticles obtained by reduction with sodium borohydride by means of an ultrasonic tip. The alloys obtained were characterized by scanning electron microscopy (SEM) and X-ray diffraction analysis; hardness tests and, finally, electrical conductivity tests were carried out

Abstract:

In this study, high temperature reactions of Fe–Cr alloys at 500 and 600 °C were investigated using an atmosphere of N2–O2 8 vol% with 220 vppm HCl, 360 vppm H2O and 200 vppm SO2; moreover, the following aggressive salts were placed in the inlet: KCl and ZnCl2. The salts were placed in the inlet to promote corrosion and increase the chemical reaction. These salts were applied to the alloys via discontinuous exposures. The corrosion products were characterized using thermo-gravimetric analysis, scanning electron microscopy and X-ray diffraction. The species identified in the corrosion products were: Cr2O3, Cr2O (Fe0.6Cr0.4)2O3, K2CrO4, (Cr, Fe)2O3, Fe–Cr, KCl, ZnCl2, FeOOH, σ-FeCrMo and Fe2O3. The presence of Mo, Al and Si was not significant and there was no evidence of chemical reaction of these elements. The most active elements were the Fe and Cr in the metal base. The Cr presence was beneficial against corrosion; this element decelerated the corrosion process due to the formation of protective oxide scales over the exposed surfaces at 500 °C, an effect even more notable at 600 °C, as observed in the thermo-gravimetric mass-loss measurements. The steel with the best performance was alloy Fe9Cr3AlSi3Mo, due to the effect of the protective oxides even in the presence of the aggressive salts

Abstract:

Cognitive appraisal theory predicts that emotions affect participation decisions around risky collective action. However, little existing research has attempted to parse out the mechanisms by which this process occurs. We build a global game of regime change and discuss the effects that fear may have on participation through pessimism about the state of the world, other players' willingness to participate, and risk aversion. We test the behavioral effects of fear in this game by conducting 32 sessions of an experiment in two labs where participants are randomly assigned to an emotion induction procedure. In some rounds of the game, potential mechanisms are shut down to identify their contribution to the overall effect of fear. Our results show that in this context, fear does not affect willingness to participate. This finding highlights the importance of context, including integral versus incidental emotions and the size of the stakes, in shaping the effect of emotions on behavior

Abstract:

Multilayer perceptron networks have been designed to solve supervised learning problems in which there is a set of known labeled training feature vectors. The resulting model allows us to infer adequate labels for unknown input vectors. Traditionally, the optimal model is the one that minimizes the error between the known labels and the labels inferred by the model. The training process results in the weights that achieve the most adequate labels. Training implies a search process which is usually guided by gradient descent on the error. In this work, we propose to replace the known labels by a set of labels induced by a validity index. The validity index represents a measure of the adequateness of the model relative only to intrinsic structures and relationships of the set of feature vectors and not to previously known labels. Since, in general, there is no guarantee of the differentiability of such an index, we resort to heuristic optimization techniques. Our proposal results in an unsupervised learning approach for multilayer perceptron networks that allows us to infer the best model relative to labels derived from such a validity index, which uncovers the hidden relationships of an unlabeled dataset

Abstract:

Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is a subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and to a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to that of a supervised one. This clearly exhibits the superiority of our method

Abstract:

One of the basic endeavors in Pattern Recognition, and particularly in Data Mining, is the process of determining which unlabeled objects in a set share interesting properties. This implies a singular process of classification usually denoted as "clustering", where the objects are grouped into k subsets (clusters) in accordance with an appropriate measure of likelihood. Clustering can be considered the most important unsupervised learning problem. The more traditional clustering methods are based on the minimization of a similarity criterion based on a metric or distance. This fact imposes important constraints on the geometry of the clusters found. Since each element in a cluster lies within a radial distance relative to a given center, the shape of the covering or hull of a cluster is hyper-spherical (convex), which sometimes does not encompass adequately the elements that belong to it. For this reason we propose to solve the clustering problem through the optimization of Shannon's entropy. The optimization of this criterion represents a hard combinatorial problem which disallows the use of traditional optimization techniques, and thus the use of a very efficient optimization technique is necessary. We consider that Genetic Algorithms are a good alternative. We show that our method allows us to obtain successful results for problems where the clusters have complex spatial arrangements. Such a method obtains clusters with non-convex hulls that adequately encompass their elements. We statistically show that our method displays the best performance that can be achieved under the assumption of normal distribution of the elements of the clusters. We also show that it is a good alternative when this assumption is not met
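A sketch of the kind of objective such a genetic algorithm searches over is given below (histogram-based entropy per cluster; the bin count and the per-feature decomposition are illustrative assumptions, not the papers' exact formulation):

```python
import numpy as np

def within_cluster_entropy(X, labels, bins=8):
    """Total Shannon entropy of a candidate clustering of X (n x d),
    estimated with one histogram per feature per cluster.  A genetic
    algorithm would evolve `labels` vectors to minimize this value."""
    total = 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        for j in range(X.shape[1]):
            counts, _ = np.histogram(members[:, j], bins=bins)
            p = counts[counts > 0] / counts.sum()
            total += -(p * np.log2(p)).sum()     # entropy of this histogram
    return total

# usage: fitness of a candidate assignment vector
# score = within_cluster_entropy(X, candidate_labels)
```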

Abstract:

This paper proposes a novel distributed controller that solves the leader-follower and the leaderless consensus problems in the task space for networks composed of robots that can be kinematically and dynamically different (heterogeneous). In the leader-follower scenario, the controller ensures that all the robots in the network asymptotically reach a given leader pose (position and orientation), provided that at least one follower robot has access to the leader pose. In the leaderless problem, the robots asymptotically reach an agreement pose. The proposed controller is robust to variable time-delays in the communication channel and does not rely on velocity measurements. The controller is dynamic: it cancels out the gravity effects and it incorporates a term proportional to the error between the robot position and the controller virtual position. The controller dynamics consist of a simple proportional scheme plus damping injection through a second-order (virtual) system. The proposed approach employs the singularity-free unit quaternions to represent the orientation of the end-effectors, and the network is represented by an undirected and connected interconnection graph. The application to the control of bilateral teleoperators is described as a special case of the leaderless consensus solution. The paper presents numerical simulations with a network composed of four 6-Degrees-of-Freedom (DoF) and one 7-DoF robot manipulators. Moreover, we also report some experiments with a 6-DoF industrial robot and two 3-DoF haptic devices

Abstract:

This article examines the effects of committee specialization and district characteristics on speech participation by topic and congressional forum. It argues that committee specialization should increase speech participation during legislative debates, while district characteristics should affect the likelihood of speech participation in non-lawmaking forums. To examine these expectations, we analyze over 100,000 speeches delivered in the Chilean Chamber of Deputies between 1990 and 2018. To carry out our topic classification task, we utilize the recently developed state-of-the-art multilingual Transformer model XLM-RoBERTa. Consistent with informational theories, we find that committee specialization is a significant predictor of speech participation in legislative debates. In addition, consistent with theories purporting that legislative speech serves as a vehicle for the electoral connection, we find that district characteristics have a significant effect on speech participation in non-lawmaking forums
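A sketch of how such a classifier is typically set up with the Hugging Face transformers library follows (the checkpoint name and the number of topic labels are illustrative assumptions, not the authors' released model; the classification head must first be fine-tuned on labeled speeches to produce meaningful predictions):

```python
# Topic classification of legislative speeches with XLM-RoBERTa (sketch)
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=20)   # e.g. 20 policy-topic labels

speech = "El proyecto de ley establece nuevas reglas para la pesca artesanal."
inputs = tokenizer(speech, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
topic_id = int(logits.argmax(-1))        # predicted topic index
```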

Abstract:

According to conventional wisdom, closed-list proportional representation (CLPR) electoral systems create incentives for legislators to favor the party line over their voters' positions. However, electoral incentives may induce party leaders to tolerate shirking by some legislators, even under CLPR. This study argues that in considering whose deviations from the party line should be tolerated, party leaders exploit differences in voters' relative electoral influence resulting from malapportionment. We expect defections in roll call votes to be more likely among legislators elected from overrepresented districts than among those from other districts. We empirically test this claim using data on Argentine legislators' voting records and a unique dataset of estimates of voters' and legislators' placements in a common ideological space. Our findings suggest that even under electoral rules known for promoting unified parties, we should expect strategic defections to please voters, which can be advantageous for the party's electoral fortunes

Abstract:

This article examines speech participation under different parliamentary rules: open forums dedicated to bill debates, and closed forums reserved for non-lawmaking speeches. It discusses how electoral incentives influence speechmaking by promoting divergent party norms within those forums. Our empirical analysis focuses on the Chilean Chamber of Deputies. The findings lend support to the view that, in forums dedicated to non-lawmaking speeches, participation is greater among more institutionally disadvantaged members (backbenchers, women, and members from more distant districts), while in those that are dedicated to lawmaking debates, participation is greater among more senior members and members of the opposition

Abstract:

We present a novel approach to disentangle the effects of ideology, partisanship, and constituency pressures on roll-call voting. First, we place voters and legislators on a common ideological space. Next, we use roll-call data to identify the partisan influence on legislators’ behavior. Finally, we use a structural equation model to account for these separate effects on legislative voting. We rely on public opinion data and a survey of Argentine legislators conducted in 2007–08. Our findings indicate that partisanship is the most important determinant of legislative voting, leaving little room for personal ideological position to affect legislators’ behavior

Abstract:

Legislators in presidential countries use a variety of mechanisms to advance their electoral careers and connect with relevant constituents. The most frequently studied activities are bill initiation, co-sponsoring, and legislative speeches. In this paper, the authors examine legislators' information requests (i.e. parliamentary questions) to the government, which have been studied in some parliamentary countries but remain largely unscrutinised in presidential countries. The authors focus on the case of Chile - where strong and cohesive national parties coexist with electoral incentives that emphasise the personal vote - to examine the links between party responsiveness and legislators' efforts to connect with their electoral constituencies. Making use of a new database of parliamentary questions and a comprehensive sample of geographical references, the authors examine how legislators use this mechanism to forge connections with voters, and find that targeted activities tend to increase as a function of electoral insecurity and progressive ambition

Abstract:

In her response, Alfaro fleshes out two main questions that arise from her book The Belief in Intuition. First, what is the relation between Henri Bergson and Max Scheler's personalist anthropology, on the one hand, and politics, on the other? What kinds of political order and civic education would sustain a dense moral psychology such as the one she claims follows from their writings? Second, and more specifically, what theory of authority corresponds to the philosophical anthropology that we learn from these authors? How can the "small-scale exemplarity" that Alfaro argues is articulated in their writings be an alternative to how we think today about charisma and emulation in the public sphere? In articulating these responses, Alfaro reflects on phenomena like uncertainty, education, and social media in the contemporary public sphere

Abstract:

Using insights from two of the major proponents of the hermeneutical approach, Paul Ricoeur and Hannah Arendt, who both recognized the ethicopolitical importance of narrative and acknowledged some of the dangers associated with it, I will flesh out the worry that "narrativity" in political theory has been overly attentive to storytelling and not heedful enough of story listening. More specifically, even if, as Ricoeur says, "narrative intelligence" is crucial for self-understanding, that does not mean, as he invites us to, that we should always seek to develop a "narrative identity" or become, as he says, "the narrator of our own life story." I offer that, perhaps inadvertently, such an injunction might turn out to be detrimental to the "art of listening." This, however, must also be cultivated if we want to do justice to our narrative character and expect narrative to have the political role that both Ricoeur and Arendt envisaged. Thus, although there certainly is a "redemptive power" in narrative, when the latter is understood primarily as the act of narration or as the telling of stories, there is a danger to it as well. Such a danger, I think, intensifies at a time like ours, when, as some scholars have noted, "communicative abundance" or the "ceaseless production of redundancy" in traditional and social media has often led to the impoverishment of the public conversation

Abstract:

In this paper, I take George Lakoff and Mark Johnson's thesis that metaphors shape our reality to approach the judicial imagery of the new criminal justice system in Mexico (in effect since 2016). Based on twenty-nine in-depth interviews with judges and other members of the judiciary, I study what I call the "dirty minds" metaphor, showing its presence in everyday judicial practice and analyzing both its cognitive basis as well as its effects on how criminal judges understand their job. I argue that such a metaphor, together with the "fear of contamination" it raises as a result, is misleading and goes to the detriment of the judicial virtues that should populate the new system. The conclusions I offer are relevant beyond the national context, inter alia, because they concern a far-reaching paradigm of judgment

Abstract:

Recent efforts to theorize the role of emotions in political life have stressed the importance of sympathy, and have often recurred to Adam Smith to articulate their claims. In the early twentieth-century, Max Scheler disputed the salutary character of sympathy, dismissing it as an ultimately perverse foundation for human association. Unlike later critics of sympathy as a political principle, Scheler rejected it for being ill equipped to salvage what, in his opinion, should be the proper basis of morality, namely, moral value. Even if Scheler's objections against Smith's project prove to be ultimately mistaken, he had important reasons to call into question its moral purchase in his own time. Where the most dangerous idol is not self-love but illusory self-knowledge, the virtue of self-command will not suffice. Where identification with others threatens the social bond more deeply than faction, “standing alone” in moral matters proves a more urgent task

Abstract:

Images of chemical molecules can be produced, manipulated, simulated and analyzed using sophisticated chemical software. However, in the process of publishing such images into scientific literature, all their chemical significance is lost. Although images of chemical molecules can be easily analyzed by the human expert, they cannot be fed back into chemical software and thus lose much of their potential use. We have developed a system that can automatically reconstruct the chemical information associated with the images of chemical molecules, thus rendering them computer readable. We have benchmarked our system against a commercially available product and have also tested it using chemical databases of several thousand images, with very encouraging results

Abstract:

We present an algorithm that automatically segments and classifies the brain structures in a set of magnetic resonance (MR) brain images using expert information contained in a small subset of the image set. The algorithm is intended to do the segmentation and classification tasks mimicking the way a human expert would reason. The algorithm uses a knowledge base taken from a small subset of semiautomatically classified images that is combined with a set of fuzzy indexes that capture the experience and expectation a human expert uses during recognition tasks. The fuzzy indexes are tissue specific and spatial specific, in order to consider the biological variations in the tissues and the acquisition inhomogeneities through the image set. The brain structures are segmented and classified one at a time. For each brain structure the algorithm needs one semiautomatically classified image and makes one pass through the image set. The algorithm uses low-level image processing techniques on a pixel basis for the segmentations, then validates or corrects the segmentations, and makes the final classification decision using higher level criteria measured by the set of fuzzy indexes. We use single-echo MR images because of their high volumetric resolution; but even though we are working with only one image per brain slice, we have multiple sources of information on each pixel: absolute and relative positions in the image, gray level value, statistics of the pixel and its three-dimensional neighborhood and relation to its counterpart pixels in adjacent images. We have validated our algorithm for ease of use and precision both with clinical experts and with measurable error indexes over a Brainweb simulated MR set

Abstract:

We present an attractive methodology for the compression of facial gestures that can be used to drive interaction in real-time applications. Using the eigenface method, we build compact representation spaces for a variety of facial gestures. These compact spaces are the so-called eigenspaces. We perform real-time tracking and segmentation of facial features from video images and then use the eigenspaces to find compact descriptors of the segmented features. We use the system for an avatar videoconference application, where we achieve real-time interactivity with very limited bandwidth requirements. The system can also be used as a hands-free man-machine interface
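
A minimal sketch of the eigenspace idea, using principal components of vectorized face-region images to obtain compact descriptors (dimensions, names, and data here are illustrative assumptions, not the authors' implementation):

import numpy as np

def build_eigenspace(images, k):
    # images: (n_samples, n_pixels) array of vectorized gesture images
    mean = images.mean(axis=0)
    # Principal components via SVD; the rows of vt span the eigenspace
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, basis):
    # Compact descriptor: k coefficients instead of n_pixels gray values
    return basis @ (image - mean)

def reconstruct(coeffs, mean, basis):
    return mean + basis.T @ coeffs

faces = np.random.rand(100, 32 * 32)      # stand-in for tracked 32x32 regions
mean, basis = build_eigenspace(faces, k=8)
code = project(faces[0], mean, basis)     # 8 floats, cheap to transmit
approx = reconstruct(code, mean, basis)   # drives the avatar at the far end

Sending only the k projection coefficients per frame, rather than pixels, is what keeps the bandwidth requirements low.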

Abstract:

We use interactive virtual environments for cognitive behavioral therapy. Working together with child therapists and psychologists, our computer graphics group developed five interactive simulators for the treatment of fears and behavior disorders. The simulators run in real time on P4 PCs with graphics accelerators, but also work online using streaming techniques and Web VR engines. The construction of the simulators starts with ideas and situations proposed by the psychologists; these ideas are then developed by graphic designers and finally implemented in 3D virtual worlds by our group. We present the methodology we follow to turn the psychologists' ideas, and then the graphic designers' sketches, into fully interactive simulators. Our methodology starts with a graphic modeler to build the geometry of the virtual worlds; the models are then exported to a dedicated OpenGL VR engine that can interface with any VR peripheral. Alternatively, the models can be exported to a Web VR engine. The simulators are cost-efficient since they require little more than the PC and the graphics card. We have found that both the therapists and the children who use the simulators find this technology very attractive

Abstract:

We study the motion of the negatively curved symmetric two- and three-center problem on the Poincaré upper semi-plane model for a surface of constant negative curvature k, which without loss of generality we assume to be k = -1. Using this model, we first derive the equations of motion for the 2- and 3-center problems. We prove that for the 2-center problem there exists a unique equilibrium point, and we study the dynamics around it. For the motion restricted to the invariant y-axis, we prove that it is a center, but for the general two-center problem it is unstable. For the 3-center problem, we show the nonexistence of equilibrium points. We study two particular integrable cases: first, when the motion of the free particle is restricted to the y-axis, and second, when all particles are along the same geodesic. We classify the singularities of the problem and introduce a local and a global regularization of all of them. We show some numerical simulations for each situation

Abstract:

We consider the curved 4-body problems on spheres and hyperbolic spheres. After obtaining a criterion for the existence of quadrilateral configurations on the equator of the sphere, we study two restricted 4-body problems, one in which two masses are negligible and another in which only one mass is negligible. In the former, we prove the existence of square-like relative equilibria, whereas in the latter we discuss the existence of kite-shaped relative equilibria

Abstract:

In this paper, we study hypercyclic composition operators on weighted Banach spaces of functions defined on discrete metric spaces. We show that the only such composition operators are those acting on the "little" spaces. We characterize the bounded composition operators on the little spaces, and provide various necessary conditions for hypercyclicity

Abstract:

We give a complete characterisation of the single and double arrow relations of the standard context K(Ln) of the lattice Ln of partitions of any positive integer n under the dominance order, thereby addressing an open question of Ganter, 2020/2022

Abstract:

Demand response (DR) programs and local markets (LM) are two suitable technologies to mitigate the high penetration of distributed energy resources (DER), which has been increasing rapidly worldwide, even during the current pandemic. The aim is to improve operation by incorporating such mechanisms into the energy resource management problem while mitigating present issues with Smart Grid (SG) technologies and optimization techniques. This paper presents an efficient intraday energy resource management scheme starting from the day-ahead time horizon, which considers load uncertainty and implements both DR programs and LM trading to reduce the operating costs of three load aggregators in an SG. A random perturbation was used to generate the intraday scenarios from the day-ahead time horizon. A recent evolutionary algorithm, HyDE-DF, is used to achieve optimization. Results show that the aggregators can manage consumption and generation resources, including DR and power balance compensation, through an implemented LM
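
The scenario-generation step lends itself to a small sketch: perturbing a day-ahead forecast with random noise to obtain intraday scenarios (the flat 100 MW profile and the 5% noise level are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(seed=0)
day_ahead_load = np.full(24, 100.0)   # hourly MW forecast (illustrative)

def intraday_scenarios(forecast, n_scenarios=50, noise=0.05):
    # Each scenario distorts the forecast with multiplicative Gaussian noise
    eps = rng.normal(0.0, noise, size=(n_scenarios, forecast.size))
    return forecast * (1.0 + eps)

scenarios = intraday_scenarios(day_ahead_load)
print(scenarios.mean(axis=0)[:4])     # the scenario mean tracks the forecast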

Abstract:

Demand response programs, energy storage systems, electric vehicles, and local electricity markets are appropriate solutions to offset the uncertainty associated with the high penetration of distributed energy resources. The aim is to enhance efficiency by adding such technologies to the energy resource management problem while also addressing current concerns using smart grid technologies and optimization methodologies. This paper presents an efficient intraday energy resource management scheme starting from the day-ahead time horizon, which considers the uncertainty associated with load consumption, renewable generation, electric vehicles, electricity market prices, and the existence of extreme events in a 13-bus distribution network with high integration of renewables and electric vehicles. A risk analysis is implemented through conditional value-at-risk to address these extreme events. In the intraday model, we assume that an extreme event will occur in order to analyze the outcome of the developed solution. We analyze the solution's impact starting from the day-ahead stage, considering different risk-aversion levels. Multiple metaheuristics optimize the day-ahead problem, and the best-performing algorithm is used for the intraday problem. Results show that HyDE gives the best day-ahead solution compared to the other algorithms, achieving a reduction of around 37% in the cost of the worst scenarios. For the intraday model, considering risk aversion also reduces the impact of the extreme scenarios
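
The conditional value-at-risk used for the risk analysis can be sketched over scenario costs as follows (equally likely scenarios and the 95% level are assumptions):

import numpy as np

def cvar(costs, alpha=0.95):
    # Average cost of the worst (1 - alpha) fraction of scenarios
    costs = np.sort(np.asarray(costs))
    tail_start = int(np.ceil(alpha * costs.size))
    return costs[tail_start:].mean()

scenario_costs = np.random.default_rng(1).gamma(2.0, 50.0, size=1000)
print(f"expected cost: {scenario_costs.mean():.1f}")
print(f"CVaR(95%): {cvar(scenario_costs):.1f}")   # mean of the worst 5%

A risk-averse objective then weights this tail term against the expected cost, which is how different risk-aversion levels enter the problem.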

Abstract:

The central configurations given by an equilateral triangle and a regular tetrahedron with equal masses at the vertices and a body at the barycenter have been widely studied in [9] and [14], due to the bifurcation phenomenon occurring when the central mass takes a determined value m*. We propose a variation of this problem: we set the central mass at the critical value m* and let a mass at a vertex be the bifurcation parameter. In both the 2D and 3D cases, we verify the existence of bifurcation; that is, for the same set of masses we determine two new central configurations. The computation of the bifurcations, as well as their pictures, has been performed considering homogeneous force laws with exponent a < −1

Abstract:

The leaderless and the leader-follower consensus are the most basic synchronization behaviors for multiagent systems. For networks of Euler-Lagrange (EL) agents, different controllers have been proposed to achieve consensus, requiring in all cases either the cancellation or the estimation of the gravity forces. In the first case, it has been shown that a simple proportional plus damping (P+d) scheme with exact gravity cancellation can achieve consensus; in the latter case, it is necessary to estimate not just the gravity forces but the parameters of the whole dynamics. This requires the computation of a complicated regressor matrix, which grows in complexity as the degrees of freedom of the EL agents increase. To simplify the controller implementation, we propose in this paper a simple P+d scheme with only adaptive gravity compensation. In particular, we report two adaptive controllers that solve both consensus problems by estimating only the gravitational term of the agents, hence without requiring the complete regressor matrix. The first controller is a simple P+d scheme that does not require the exchange of velocity information between the agents but requires centralized information. The second controller is a proportional-derivative plus damping (PD+d) scheme that is fully decentralized but requires the exchange of velocity information between the agents. Simulation results demonstrate the performance of the proposed controllers
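
A rough illustration of a P+d consensus law of the kind discussed, for three one-degree-of-freedom agents with pendulum-like gravity (the gains, graph, and dynamics are placeholders, and the exact gravity value stands in for the adaptive estimate; this is not the paper's controller):

import numpy as np

A = np.array([[0., 1., 0.],      # undirected communication graph
              [1., 0., 1.],
              [0., 1., 0.]])
kp, kd, dt = 4.0, 3.0, 1e-3
q = np.array([0.5, -0.2, 1.0])   # joint positions
qd = np.zeros(3)                 # joint velocities

def g_hat(q):
    # Placeholder for the adaptive gravity estimate (exact value used here)
    return 9.81 * np.sin(q)

for _ in range(20000):
    coupling = A @ q - A.sum(axis=1) * q   # sum_j a_ij (q_j - q_i)
    tau = kp * coupling - kd * qd + g_hat(q)
    qdd = tau - 9.81 * np.sin(q)           # unit-inertia pendulum dynamics
    qd += dt * qdd
    q += dt * qd

print(q)   # the three positions approach a common value (consensus)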

Abstract:

Medical image segmentation is one of the most productive research areas in medical image processing. The goal of most new image segmentation algorithms is to achieve higher segmentation accuracy than existing algorithms. But the issue of quantitative, reproducible validation of segmentation results, and the questions "What is segmentation accuracy?" and "What segmentation accuracy can a segmentation algorithm achieve?", remain wide open. The creation of a validation framework is relevant and necessary for consistent and realistic comparisons of existing, new and future segmentation algorithms. An important component of a reproducible and quantitative validation framework for segmentation algorithms is a composite index that measures segmentation performance at a variety of levels. In this paper we present a prototype composite index that includes the measurement of seven metrics on segmented image sets. We explain how the composite index is a more complete and robust representation of algorithmic performance than currently used indices that rate segmentation results using a single metric. Our proposed index can be read as an averaged global metric or as a series of algorithmic ratings that allow the user to compare how an algorithm performs under many categories
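
As a toy illustration of the composite idea, per-metric ratings can be aggregated into a single global score while remaining readable category by category (the seven metric names and equal weights below are placeholders, not the paper's metrics):

ratings = {                     # hypothetical per-metric ratings in [0, 1]
    "overlap": 0.91,
    "boundary_error": 0.84,
    "volume_error": 0.88,
    "sensitivity": 0.90,
    "specificity": 0.95,
    "robustness": 0.80,
    "repeatability": 0.86,
}
weights = {name: 1.0 for name in ratings}   # equal weights as a default

composite = sum(weights[n] * ratings[n] for n in ratings) / sum(weights.values())
print(f"composite index: {composite:.3f}")  # averaged global metric
for name, value in ratings.items():         # or a per-category reading
    print(f"  {name:15s} {value:.2f}")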

Abstract:

How is the size of the informal sector affected when the distribution of social expenditures across formal and informal workers changes? How is it affected when the tax rate changes along with the generosity of these transfers? In our search model, taxes are levied on formal-sector workers as a proportion of their wage. Transfers, in contrast, are lump-sum and are received by both formal and informal workers. This implies that high-wage formal workers subsidize low-wage formal workers as well as informal workers. We calibrate the model to Mexico and perform counterfactuals. We find that the size of the informal sector is quite inelastic to changes in taxes and transfers. This is due to the presence of search frictions and to the cross-subsidy in our model: for low-wage formal jobs, a tax increase is roughly offset by an increase in benefits, leaving the unemployed approximately indifferent. Our results are consistent with the empirical evidence on the recent introduction of the “Seguro Popular” healthcare program

Abstract:

We calibrate the cost of sovereign defaults using a continuous time model, where government default decisions may trigger a change in the regime of a stochastic TFP process. We calibrate the model to a sample of European countries from 2009 to 2012. By comparing the estimated drift in default relative to that in no-default, we find that TFP falls in the range of 3.70–5.88 %. The model is consistent with observed falls in GDP growth rates and subsequent recoveries and illustrates why fiscal multipliers are small during sovereign debt crises

Abstract:

Employment to population ratios differ markedly across Organization for Economic Cooperation and Development (OECD) countries, especially for people aged over 55 years. In addition, social security features differ markedly across the OECD, particularly with respect to features such as generosity, entitlement ages, and implicit taxes on social security benefits. This study postulates that differences in social security features explain many differences in employment to population ratios at older ages. This conjecture is assessed quantitatively with a life cycle general equilibrium model of retirement. At ages 60-64 years, the correlation between the simulations of this study's model and observed data is 0.67. Generosity and implicit taxes are key features to explain the cross-country variation, whereas entitlement age is not

Abstract:

The consequences of increases in the scale of tax and transfer programs are assessed in the context of a model with idiosyncratic productivity shocks and incomplete markets. The effects are contrasted with those obtained in a stand-in household model featuring no idiosyncratic shocks and complete markets. The main finding is that the impact on hours worked remains very large, but the welfare consequences are very different. The analysis also suggests that tax and transfer policies have large effects on average labor productivity via selection effects on employment

Abstract:

Atmospheric pollution components have negative effects on people's health and lives. Outdoor pollution has been studied extensively, but people spend a large portion of their time indoors. Our research focuses on indoor pollution forecasting using deep learning techniques coupled with the large processing capabilities of cloud computing. This paper also shares an open-source implementation of the code for modeling time series from different data sources. We believe that further research can build on the outcomes of our work
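
A minimal sketch of a deep-learning forecaster of the kind described, fitting a small LSTM to an hourly indoor-pollution series (the window length, layer sizes, and synthetic data are assumptions, not the paper's architecture):

import numpy as np
from tensorflow import keras

WINDOW = 24                                        # look back 24 hourly readings
t = np.linspace(0, 60, 2000)
series = np.sin(t) + 0.1 * np.random.randn(2000)   # stand-in for indoor PM2.5

# Build (samples, window, 1) -> next-value supervised pairs
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.predict(X[-1:], verbose=0))            # forecast for the next hour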

Abstract:

Sergio Verdugo's provocative Foreword challenges us to think about whether the concepts we inherited from classical constitutionalism are still useful for understanding our current reality. Verdugo rejects any attempt to defend what he calls "the conventional approach to constituent power." The objective of this article is to contradict Verdugo's assertions, which, it is argued here, are based on an incorrect notion of the people as a unified body, or as a social consensus. The article argues, instead, for the plausibility of defending the popular notion of constituent power by anchoring it in a historical and dynamic concept of democratic legitimacy. It concludes that, although legitimizing deviations from the established channels for political transformation entails risks, we must assume them for the sake of the emancipatory potential of constituent power

Resumen:

El libro La Corte Enrique Santiago Petracchi II que se reseña muestra la impronta de Enrique Santiago Petracchi durante su segundo período a cargo de la presidencia de la Corte Suprema de Justicia de la Nación Argentina, ocurrida entre los años 2003 y 2006. La Corte se estudia tanto desde el punto de vista interno, con sus reformas en busca de transparencia y legitimidad, como desde el punto de vista externo, contextual y de relación con el resto de los poderes del Estado y la sociedad civil. El recuento incluye una mención de los hitos jurisprudenciales de dicho período y explica cómo la figura de Petracchi es central para comprender este momento de la Corte

Abstract:

The book under review, La Corte Enrique Santiago Petracchi II, shows the imprint of Chief Justice Enrique Santiago Petracchi on the Argentine Supreme Court of Justice during his second term as president of the Court, between 2003 and 2006. The Court is studied both from the internal point of view, with its reforms in search of transparency and legitimacy, and from the external, contextual point of view, in its relationship with the other branches of the State and civil society. The account includes a mention of the leading cases of that period and explains how the figure of Petracchi is central to understanding this moment of the Court

Resumen:

El artículo plantea algunas inquietudes sobre el análisis que Gargarella hace de la sentencia de la Corte Interamericana de Derechos Humanos en el caso Gelman vs. Uruguay. Aceptando la idea principal de que las decisiones pueden gradarse según su legitimidad democrática, cuestiona que la Ley de Caducidad uruguaya haya sido legítima en un grado significativo y por tanto haya debido respetarse en su validez por la Corte Interamericana. Ello por cuanto la decisión no fue tomada por todas las personas posiblemente afectadas por la misma. Este principio normativo de inclusión es fundamental para la teoría de la democracia de Gargarella, sin embargo, no fue considerado en su análisis del caso. El artículo explora las consecuencias de tomarse en serio el principio de inclusión y los problemas prácticos que este apareja, en especial respecto de la constitución del demos. Finalmente, propone una alternativa de solución mediante la justicia constitucional/convencional, de acuerdo con un entendimiento procedimental de la misma basado en la participación, según lo pensó J. H. Ely

Abstract:

The article raises some questions about Gargarella's analysis of the Inter-American Court of Human Rights' ruling in Gelman v. Uruguay. It accepts the main idea that decisions can be graded according to their democratic legitimacy, but it questions whether the Uruguayan amnesty had a high degree of legitimacy. This is because the decision was not made by all the people possibly affected by it. This normative principle of inclusion is fundamental to Gargarella's theory of democracy; however, it was not considered in his analysis of the case. The article explores the consequences of taking the principle of inclusion seriously and the practical problems it involves, especially regarding the constitution of the demos. Finally, it proposes an alternative solution through a procedural understanding of judicial review based on participation, as proposed by J. H. Ely

Resumen:

Este artículo reflexionará críticamente sobre la jurisprudencia de la Suprema Corte de Justicia de la Nación mexicana en torno al principio constitucional de paridad de género. Para hacerlo, se apoyará en una reconstrucción teórica de tres diferentes fundamentos en los que se puede basar la paridad según se entienda la representación política de las mujeres y el principio de igualdad. Estas posturas se identificarán como “de la igualdad formal”, “de la defensa de las cuotas” y “de la paridad”, resaltando cómo en las sentencias mexicanas se mezclan los argumentos de las dos últimas posturas de forma desordenada e incoherente. Finalmente, el artículo propiciará una interpretación del principio de paridad de género como dinámico y por tanto abierto a aportaciones progresivas del sujeto político feminista, que evite los riesgos del esencialismo como el reforzar el sistema binario de sexos. Un principio que, en este momento histórico, requiere medidas de acción afirmativa correctivas, provisionales y estratégicas para alcanzar la igualdad sustantiva de género en la representación política

Abstract:

This article will critically reflect on the jurisprudence of the Mexican Supreme Court of Justice regarding the constitutional principle of gender parity. To do so, it will rely on a theoretical reconstruction of three different foundations on which parity can rest, depending on how the political representation of women and the principle of equality are understood. I will call these positions "of formal equality," "of the defense of quotas," and "of parity," highlighting how the arguments of the last two positions are mixed in the Mexican judgments in a disorderly and incoherent way. Finally, the article will promote an interpretation of the principle of gender parity as dynamic and therefore open to progressive contributions from the feminist political subject, one that avoids the risks of essentialism, such as reinforcing the binary system of sexes. At this historical moment, this principle requires corrective, provisional, and strategic affirmative action measures to achieve substantive gender equality in political representation

Abstract:

The article analyzes the possibility that we are facing a constitutional transformation beyond the constitutional text. Thus, within the framework of the so-called "Cuarta Transformación" (Fourth Transformation) announced for the country after the last elections, it attempts a diagnostic reflection on the role played by the Supreme Court of Justice

Resumen:

El presente trabajo tiene por objeto analizar el funcionamiento de la rigidez constitucional en México como garantía de la supremacía constitucional. Para ello comenzaré con un estudio sobre la idea de rigidez y la distinguiré del concepto de supremacía. Posteriormente utilizaré dichas categorías para analizar el sistema mexicano y cuestionar su eficacia, es decir, la adecuación entre el medio (rigidez) y el fin (supremacía). Por último haré un par de propuestas de modificación del mecanismo de reforma constitucional en vistas a hacerlo más democrático, más deliberativo y con ello más eficaz para garantizar la supremacía constitucional

Abstract:

This paper analyzes how constitutional rigidity works in Mexico and its consequences for constitutional supremacy. It starts with a conceptual distinction between rigidity and supremacy. Subsequently, those categories are used to analyze the Mexican system and to question the amendment process's capability to guarantee constitutional supremacy. Finally, the paper makes some proposals to amend the Mexican constitutional amendment process in order to make it more democratic, deliberative, and effective in guaranteeing constitutional supremacy

Resumen:

El constitucionalismo popular como corriente constitucional contemporánea plantea una revisión crítica a la historia del constitucionalismo norteamericano, reivindicando el papel del pueblo en la interpretación constitucional. A la vez, presenta un contenido normativo anti-elitista que trasciende sus orígenes y que pone en cuestión la idea de que los jueces deban tener la última palabra en las controversias sobre derechos fundamentales. Esta faceta tiene su correlato en los diseños institucionales propuestos, propios de un constitucionalismo débil y centrados en la participación popular democrática

Abstract:

Popular constitutionalism is a contemporary constitutional theory with a critical view of the U.S. constitutional narrative focused on judicial supremacy. Instead, popular constitutionalism regards the people as the main actor and defends an anti-elitist understanding of constitutional law. From the institutional perspective, popular constitutionalism proposes a weak model of constitutionalism and a strong participatory democracy

Resumen:

El trabajo tiene como objetivo analizar la sentencia dictada por la Primera Sala de la Suprema Corte de Justicia mexicana en el amparo en revisión 152/2013 en la que se declaró la inconstitucionalidad de la exclusión de los homosexuales del régimen matrimonial en el Estado de Oaxaca. Esta sentencia refleja un cambio importante en la forma de entender el interés legítimo tratándose de la impugnación de normas autoaplicativas (es decir, de normas que causan perjuicio sin que medie acto de aplicación), dando paso a la justiciabilidad de los mensajes estigmatizantes. En el caso, esta forma más amplia de entender el interés legítimo está basada en la percepción de que el derecho discrimina a través de los mensajes que transmite; situación que la Suprema Corte considera puede combatir a través de sus sentencias de amparo. Asimismo, se plantean algunos retos e inquietudes que suscita la sentencia a la luz del activismo judicial que puede conllevar

Abstract:

This paper focuses on the amparo en revisión 152/2013 issued by the First Chamber of the Supreme Court of Mexico. From now on, the Supreme Court is able to judge the stigmatizing messages of the law. Furthermore, the amparo en revisión 152/2013 develops a broader conception of discrimination and a more activist role for the Supreme Court. Finally, I express some thoughts about the issues that this judgment could pose for the Supreme Court

Abstract:

We elicit subjective probability distributions from business executives about their own-firm outcomes at a one-year look-ahead horizon. In terms of question design, our key innovation is to let survey respondents freely select support points and probabilities in five-point distributions over future sales growth, employment, and investment. In terms of data collection, we develop and field a new monthly panel survey, the Survey of Business Uncertainty (SBU). The SBU began in 2014 and now covers about 1,750 firms drawn from all 50 states, every major nonfarm industry, and a range of firm sizes. We find three key results. First, firm-level growth expectations are highly predictive of realized growth rates. Second, firm-level subjective uncertainty predicts the magnitudes of future forecast errors and future forecast revisions. Third, subjective uncertainty rises with the firm's absolute growth rate in the previous year and with the magnitude of recent revisions to its expected growth rate. We aggregate over firm-level forecast distributions to construct monthly indices of business expectations (first moment) and uncertainty (second moment) for the U.S. private sector
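
The two moments referred to above can be read directly off a respondent's five-point distribution; the support points and probabilities below are invented for illustration:

import numpy as np

growth = np.array([-10.0, -2.0, 3.0, 8.0, 15.0])  # support points (% sales growth)
prob = np.array([0.05, 0.15, 0.40, 0.30, 0.10])   # respondent's probabilities
assert np.isclose(prob.sum(), 1.0)

mean = growth @ prob                          # expectation (first moment)
sd = np.sqrt(((growth - mean) ** 2) @ prob)   # subjective uncertainty (second moment)
print(f"expected growth: {mean:.2f}%, subjective uncertainty: {sd:.2f}%")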

Abstract:

We consider several economic uncertainty indicators for the US and UK before and during the COVID-19 pandemic: implied stock market volatility, newspaper-based policy uncertainty, Twitter chatter about economic uncertainty, subjective uncertainty about business growth, forecaster disagreement about future GDP growth, and a model-based measure of macro uncertainty. Four results emerge. First, all indicators show huge uncertainty jumps in reaction to the pandemic and its economic fallout. Indeed, most indicators reach their highest values on record. Second, peak amplitudes differ greatly, from a 35% rise for the model-based measure of US economic uncertainty (relative to January 2020) to a 20-fold rise in forecaster disagreement about UK growth. Third, time paths also differ: implied volatility rose rapidly from late February, peaked in mid-March, and fell back by late March as stock prices began to recover. In contrast, broader measures of uncertainty peaked later and then plateaued as job losses mounted, highlighting differences between Wall Street and Main Street uncertainty measures. Fourth, in Cholesky-identified VAR models fit to monthly U.S. data, a COVID-size uncertainty shock foreshadows peak drops in industrial production of 12-19%

Abstract:

This paper proposes a tree-based incremental-learning model to estimate house prices using publicly available information on geography, city characteristics, transportation, and real estate for sale. Previous machine-learning models capture the marginal effects of property characteristics and location on prices using big datasets for training. In contrast, our scenario is constrained to small batches of data that become available on a daily basis; therefore, our model learns from daily city data, employing incremental learning to provide accurate price estimations each day. Our results show that property prices are highly influenced by the city's characteristics and its connectivity, and that incremental models adapt efficiently to the nature of the house-pricing estimation task
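
One way to realize the incremental setup is with a streaming tree learner; the sketch below uses the river library's Hoeffding tree regressor with invented features, as one possible stand-in for the paper's model:

from river import tree

model = tree.HoeffdingTreeRegressor()

# Stand-in for small daily batches of listings arriving over time
daily_batches = [
    [({"area_m2": 80, "dist_center_km": 5.0, "bus_lines": 3}, 120_000),
     ({"area_m2": 120, "dist_center_km": 2.0, "bus_lines": 7}, 260_000)],
    [({"area_m2": 60, "dist_center_km": 9.0, "bus_lines": 1}, 75_000)],
]

for batch in daily_batches:        # learn from each day's data as it arrives
    for x, price in batch:
        model.learn_one(x, price)

print(model.predict_one({"area_m2": 100, "dist_center_km": 4.0, "bus_lines": 5}))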

Abstract:

In journalism, innovation can be achieved by integrating various factors of change. This article reports the results of an investigation carried out in 2020; the study sample comprised journalists who participated as social researchers in a longitudinal study focused on citizens' perceptions of the Mexican electoral process in 2018. Journalistic innovation was promoted by the development of a novel methodology that combined existing journalistic resources with the use of qualitative social research methodologies. This combination provided depth to the journalistic coverage; optimized reporting, editing, design, and publication processes; improved access to more complete and diverse information in real time; and enhanced the capabilities of journalists. The journalists, in turn, transformed their way of thinking about and valuing the profession, reconceptualizing and re-evaluating the journalistic practices in which they were involved, resulting in an improvement in journalistic quality

Abstract:

Purpose: This research analyzes national identity representations held by Generation Z youth living in the United States-Mexico-Canada Agreement (USMCA) countries. In addition, it aims to identify the information on these issues that they are exposed to through social media. Methods: A qualitative approach carried out through in-depth interviews was selected for the study. The objective is to reconstruct social meaning and the social representation system. The constant comparative method was used for the information analysis, backed by the NVivo program. Findings: National identity perceptions of the adolescents interviewed are positive in terms of their own groups, very favorable regarding Canadians, and unfavorable vis-à-vis Americans. Furthermore, the interviewees agreed that social media have influenced their desire to travel or migrate and, for those considering migrating, have also provided advice as to which country they might go to. On another point, Mexicans are quite familiar with the Treaty; Americans are split between those who know something about it and those who have no information whatsoever, whereas Canadians know nothing about it. This points to a possible way to improve the information generated and spread by social media. Practical implications: The results could improve our understanding of how young people interpret the information circulating in social media and what representations are constructed about national identities. We believe this research can be replicated in other countries. Social implications: We might consider that the representations Generation Z has about the national identities of these three countries, and about what it means to migrate, could have an impact on the democratic life of each nation and, in turn, on the relationship among the three USMCA partners

Resumen:

Este artículo tiene como objetivo describir las etapas de transformación de la identidad social de los habitantes de la región de Los Altos de Jalisco, México, a partir de la Guerra Cristera y hasta la década de los años 90. El proceso se ha desarrollado en cuatro fases: Oposición (1926-1929), Ajuste (1930-1970), Reforzamiento (1970-1990) y Cambio (1990- ). Este análisis se realiza desde la teoría de la mediación social y presenta un avance de la investigación realizada para la tesis de doctorado Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, dirigida por Manuel Martín Serrano

Abstract:

This article aims to describe the stages of transformation of social identity in Los Altos de Jalisco, Mexico, from the Cristero War until the 1990s. The process developed in four phases: Opposition (1926-1929), Adjustment (1930-1970), Reinforcement (1970-1990) and Change (1990- ). The analysis is carried out from the theory of social mediation and presents an advance of the research conducted for the doctoral thesis Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, supervised by Manuel Martín Serrano

Abstract:

Criollo identities in Mexico have received little research attention. One of these identities took shape in Los Altos de Jalisco over four centuries and remained almost intact, even after the Cristero War that shook the region in 1926. Fundamentally, the Alteño identity rests on the community's perception of its Hispanic origin, as well as on a deeply rooted Catholicism reinforced by the Cristeros. The family, a sacred institution for the Alteños, guards and preserves both the Hispanic and the Catholic heritage. Marriage founds the family and gives support, order, and stability to the social organization. With regard to courtship, the concept of virginity is central to the conformation of identity: women must preserve their purity so that the Alteño family can continue to uphold its values

Resumen:

¿Cómo se ven a sí mismos y a los otros los habitantes de una región de México? ¿Cómo se articulan las percepciones sobre las creencias del grupo y sus acciones? En este artículo se presenta un modelo para conceptualizar cómo está estructurada la identidad social. A partir de un estudio de caso, la investigación toma como centro los elementos de la identidad, a los que se definen como unidades de significación contenidas en una creencia o enunciado. Posteriormente se detalla el proceso de análisis a través del cual es posible entender su articulación y organización. A través de dos Mundos y dos Dimensiones que conforman la identidad social así como tres ejes que la configuran, el Espacio, el Tiempo y la Relación, este documento analiza el caso de los habitantes de Los Altos, Jalisco, México, con la encomienda de dotar de una primera herramienta para entender quiénes y cómo son, a decir de sí mismos, los habitantes de esta región

Abstract:

How do the individuals from a particular Mexican region see themselves and the rest of the inhabitants of the area? How are the perceptions about the group's beliefs and their actions articulated? This article presents a model aimed at conceptualizing how social identity is structured. Based on a case study, the research focuses on the elements of identity, defined as units of signification within a belief or statement. The paper also details the process of analysis through which it is possible to understand their articulation and organization. By means of the two Worlds and two Dimensions that make up social identity, as well as the three axes that shape it (Space, Time and Relationship), the article analyzes the case of the inhabitants of Los Altos, Jalisco, Mexico, in order to provide a first tool for understanding who the inhabitants of this region are, and what they are like, according to themselves

Abstract:

We propose an algorithm for creating line graphs from binary images. The algorithm consists of a vectorizer followed by a line detector; it can handle a large variety of binary images and is tolerant to noise. The proposed algorithm can accurately extract higher-level geometry from the images, lending itself well to automatic image recognition tasks. Our algorithm revisits the technique of image polygonization, proposing a very robust variant based on subpixel resolution and the construction of directed paths along the center of the border pixels, where each pixel can correspond to multiple nodes along one path. The algorithm has been used in the areas of chemical structure and musical score recognition and is available for testing at www.docnition.com. Extensive testing of the algorithm against commercial and noncommercial methods has been conducted with favorable results

Resumen:

La relación entre la profesionalidad del periodismo y las normas éticas forman un vínculo importante para una actividad que se enfrenta a nuevos desafíos, que provienen tanto de la reconfiguración de las industrias mediáticas como de la crisis de identidad de la propia profesión. En este marco, se suma la revalorización de saber contar historias con contenidos informativos desde el empleo de herramientas y recursos literarios. Así, el periodismo narrativo potencia un periodismo comprometido, crítico y transformador que, en entornos transmedia, se enfrenta a nuevas preguntas: ¿qué implicaciones éticas tiene la participación activa de las audiencias, la generación de comunidades virtuales, el uso del lenguaje claro, la conjunción de diversos medios de comunicación y el manejo de datos e historias personales? En esta comunicación se presenta un análisis de esta problemática desde la Teoría de la Otredad, como esencia de una ética actual y solidaria que permite el entendimiento y aceptación del ser humano. Como línea central se describen los principales retos que se han detectado a partir de análisis de los principios del periodismo transmedia

Abstract:

The relationship between the professionalism of journalism and ethical standards is an important link for an activity facing new challenges, stemming both from the reconfiguration of the media industries and from the identity crisis of the profession itself. To this is added a renewed appreciation of knowing how to tell stories with informative content using literary tools and resources. Thus, narrative journalism strengthens a committed, critical and transformative journalism that, in transmedia environments, faces new questions: what are the ethical implications of the active participation of audiences, the creation of virtual communities, the use of plain language, the combination of different media, and the handling of personal data and stories? This paper presents an analysis of this problem from the perspective of the Theory of Otherness, as the essence of a current, solidarity-based ethics that allows the understanding and acceptance of the human being. As a central thread, it describes the main challenges detected from the analysis of the principles of transmedia journalism

Abstract:

Business cycles in emerging economies display very volatile consumption and a strongly countercyclical trade balance. We show that aggregate consumption in these economies is not more volatile than output once durables are accounted for. We then present and estimate a real business cycle model for a small open economy that accounts for this empirical observation. Our results show that the role of permanent shocks to aggregate productivity in explaining cyclical fluctuations in emerging economies is considerably lower than previously documented. Moreover, we find that financial frictions are crucial to explain some key business cycle properties of these economies

Abstract:

A function f from a domain in R3 to the quaternions is said to be inframonogenic if ∂f∂ = 0, where ∂ = ∂/∂x0 + (∂/∂x1)e1 + (∂/∂x2)e2. All inframonogenic functions are biharmonic. In the context of functions f = f0 + f1e1 + f2e2 taking values in the reduced quaternions, we show that the inframonogenic homogeneous polynomials of degree n form a subspace of dimension 6n + 3. We use the homogeneous polynomials to construct an explicit, computable orthogonal basis for the Hilbert space of square-integrable inframonogenic functions defined in the ball in R3

Abstract:

We decompose traditional betas into semibetas based on the signed covariation between the returns of individual stocks in an international market and the returns of three risk factors: local, global, and foreign exchange. Using high-frequency data, we empirically assess stock return co-movements with these three risk factors and find novel relationships between these factors and future returns. Our analysis shows that only semibetas derived from negative risk factor and stock return downturns command significant risk premia. Global downside risk is negatively priced in the international market and local downside risk is positively priced
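
The signed-covariation idea behind semibetas can be sketched as follows; the "downside" semibeta keeps only the periods in which both the stock and the factor fall (this decomposition follows a common convention in the semibeta literature and is not necessarily the paper's exact estimator):

import numpy as np

rng = np.random.default_rng(7)
f = rng.normal(0, 0.01, 500)              # factor returns (e.g., local market)
r = 0.8 * f + rng.normal(0, 0.01, 500)    # stock returns

def beta(r, f):
    return (r * f).sum() / (f ** 2).sum()

def semibeta_down(r, f):
    # Contribution of joint downside moves to the traditional beta
    return (np.minimum(r, 0) * np.minimum(f, 0)).sum() / (f ** 2).sum()

print(f"beta: {beta(r, f):.3f}, downside semibeta: {semibeta_down(r, f):.3f}")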

Abstract:

We use intraday data to compute weekly realized variance, skewness, and kurtosis for equity returns and study the realized moments' time-series and cross-sectional properties. We investigate if this week's realized moments are informative for the cross-section of next week's stock returns. We find a very strong negative relationship between realized skewness and next week's stock returns. A trading strategy that buys stocks in the lowest realized skewness decile and sells stocks in the highest realized skewness decile generates an average weekly return of 19 basis points with a t-statistic of 3.70. Our results on realized skewness are robust across a wide variety of implementations, sample periods, portfolio weightings, and firm characteristics, and are not captured by the Fama-French and Carhart factors. We find some evidence that the relationship between realized kurtosis and next week's stock returns is positive, but the evidence is not always robust and statistically significant. We do not find a strong relationship between realized volatility and next week's stock returns
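
For reference, weekly realized moments of this kind are commonly built from the n intraday returns r_1, ..., r_n observed within the week; a sketch under that standard convention (not necessarily the paper's exact estimator):

import numpy as np

def realized_moments(r):
    # r: intraday returns within one week
    n = r.size
    rv = np.sum(r ** 2)                              # realized variance
    rskew = np.sqrt(n) * np.sum(r ** 3) / rv ** 1.5  # realized skewness
    rkurt = n * np.sum(r ** 4) / rv ** 2             # realized kurtosis
    return rv, rskew, rkurt

r = np.random.default_rng(3).standard_t(df=5, size=390) * 1e-3  # stand-in returns
print(realized_moments(r))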

Resumen:

En este artículo se propone que el humor constituye una forma de comunicación intrapersonal particularmente apta para la (auto) educación filosófica que se encuentra en el corazón de la práctica de la filosofía. Se explican los resultados epistemológicos y éticos de un uso sistemático de la risa autorreferencial. Se defienden los beneficios de una cosmovisión basada en el reconocimiento del ridículo humano, Homo risibilis, comparándolo con otros enfoques de la condición humana

Abstract:

This article presents humor as enacting an intra-personal communication particularly apt for the philosophic (self-)education that lies at the heart of the practice of philosophy. It explains the epistemological and ethical outcomes of a systematic use of self-referential laughter. It argues for the benefits of a worldview predicated on acknowledging human ridiculousness, Homo risibilis, comparing it with other approaches to the human predicament

Abstract:

This paper examines the effects of noncontributory pension programs at the federal and state levels on Mexican households' saving patterns, using microdata from the Mexican Income and Expenditure Survey. We find that the federal program curtails saving among households whose oldest member is either 18-54 or 65-69 years old, possibly through anticipation effects, a decrease in the longevity risk faced by households, and a redistribution of income between households of different generations. Specifically, these households appear to be reallocating income away from saving into human capital investments, like education and health. Generally, state programs have no significant effect on household saving, nor does the combination of federal and state programs. Finally, with a few exceptions, noncontributory pensions have no significant impact on the saving of households with members 70 years of age or older (individuals eligible for those pensions), plausibly because they are in the dissaving stage of the lifecycle

Abstract:

This paper empirically investigates the determinants of the Internet and cellular phone penetration levels in a crosscountry setting. It offers a framework to explain differences in the use of information and communication technologies in terms of differences in the institutional environment and the resulting investment climate. Using three measures of the quality of the investment climate, Internet access is shown to depend strongly on the country’s institutional setting because fixed-line Internet investment is characterized by a high risk of state expropriation, given its considerable asset specificity. Mobile phone networks, on the other hand, are built on less site-specific, re-deployable modules, which make this technology less dependent on institutional characteristics. It is speculated that the existence of telecommunications technology that is less sensitive to the parameters of the institutional environment and, in particular, to poor investment protection provides an opportunity for better understanding of the constraints and prospects for economic development

Abstract:

We consider a restricted three-body problem on surfaces of constant curvature. As in the classical Newtonian case, the collision singularities occur when the position of the particle with infinitesimal mass coincides with the position of one of the primaries. We prove that the singularities due to collision can be regularized locally (each one separately) and globally (both at the same time) through the construction of Levi-Civita and Birkhoff type transformations, respectively. As an application, we study some general properties of the Hill's regions and we present some ejection-collision orbits for the symmetrical problem

Abstract:

We consider a symmetric restricted three-body problem on surfaces Mk2 of constant Gaussian curvature k ≠ 0, which can be reduced to the cases k = ±1. This problem consists in the analysis of the dynamics of an infinitesimal mass particle attracted by two primaries of identical masses describing elliptic relative equilibria of the two-body problem on Mk2, i.e., the primaries move on opposite sides of the same parallel of radius a. The Hamiltonian formulation of this problem is pointed out in intrinsic coordinates. The goal of this paper is to describe analytically important aspects of the global dynamics in both cases k = ±1 and to determine the main differences with the classical Newtonian circular restricted three-body problem. In this sense, we describe the number of equilibria and their linear stability depending on the bifurcation parameter, namely the radial parameter a. After that, we prove the existence of families of periodic orbits and KAM 2-tori related to these orbits

Abstract:

We classify and analyze the orbits of the Kepler problem on surfaces of constant curvature (both positive and negative, S2 and H2, respectively) as functions of the angular momentum and the energy. Hill's regions are characterized and the problem of time-collision is studied. We also regularize the problem in Cartesian and intrinsic coordinates, depending on the constant angular momentum, and we describe the orbits of the regularized vector field. The phase portraits both for S2 and H2 are pointed out

Abstract:

We consider a setup in which a principal must decide whether or not to legalize a socially undesirable activity. The law is enforced by a monitor who may be bribed to conceal evidence of the offense and who may also engage in extortionary practices. The principal may legalize the activity even if it is a very harmful one. The principal may also declare the activity illegal knowing that the monitor will abuse the law to extract bribes out of innocent people. Our model offers a novel rationale for legalizing possession and consumption of drugs while continuing to prosecute drug dealers

Abstract:

In this paper we initiate the study of the forward and backward shifts on the discrete generalized Hardy space of a tree and the discrete generalized little Hardy space of a tree. In particular, we investigate when these shifts are bounded, find the norm of the shifts if they are bounded, characterize the trees in which they are an isometry, compute the spectrum in some concrete examples, and completely determine when they are hypercyclic

Abstract:

We study a channel through which inflation can have effects on the real economy. Using job creation and destruction data from U.S. manufacturing establishments from 1973-1988, we show that both jobs created by new establishments and jobs destroyed by dying establishments are negatively correlated with inflation. These results are robust to controls for the real-business cycle and monetary policy. Over a longer time frame, data on business failures confirm our results obtained from job creation and destruction data. We discuss how interaction of inflation with financial-markets, nominal-wage rigidities, and imperfect competition could explain the empirical evidence

Abstract:

We study how discount window policy affects the frequency of banking crises, the level of investment, and the scope for indeterminacy of equilibrium. Previous work has shown that providing costless liquidity through a discount window has mixed effects in terms of these criteria: It prevents episodes of high liquidity demand from causing crises but can lead to indeterminacy of stationary equilibrium and to inefficiently low levels of investment. We show how offering discount window loans at an above-market interest rate can be unambiguously beneficial. Such a policy generates a unique stationary equilibrium. Banking crises occur with positive probability in this equilibrium and the level of investment is suboptimal, but a proper combination of discount window and monetary policies can make the welfare effects of these inefficiencies arbitrarily small. The near-optimal policies can be viewed as approximately implementing the Friedman rule

Abstract:

We investigate the dependence of the dynamic behavior of an endogenous growth model on the degree of returns to scale. We focus on a simple (but representative) growth model with publicly funded inventive activity. We show that constant returns to reproducible factors (the leading case in the endogenous growth literature) is a bifurcation point, and that it has the characteristics of a transcritical bifurcation. The bifurcation involves the boundary of the state space, making it difficult to formally verify this classification. For a special case, we provide a transformation that allows formal classification by existing methods. We discuss the new methods that would be needed for formal verification of transcriticality in a broader class of models

Abstract:

We evaluate the desirability of having an elastic currency generated by a lender of last resort that prints money and lends it to banks in distress. When banks cannot borrow, the economy has a unique equilibrium that is not Pareto optimal. The introduction of unlimited borrowing at a zero nominal interest rate generates a steady state equilibrium that is Pareto optimal. However, this policy is destabilizing in the sense that it also introduces a continuum of nonoptimal inflationary equilibria. We explore two alternate policies aimed at eliminating such monetary instability while preserving the steady-state benefits of an elastic currency. If the lender of last resort imposes an upper bound on borrowing that is low enough, no inflationary equilibria can arise. For some (but not all) economies, the unique equilibrium under this policy is Pareto optimal. If the lender of last resort instead charges a zero real interest rate, no inflationary equilibria can arise. The unique equilibrium in this case is always Pareto optimal

Abstract:

We consider the nature of the relationship between the real exchange rate and capital formation. We present a model of a small open economy that produces and consumes two goods, one tradable and one not. Domestic residents can borrow and lend abroad, and costly state verification (CSV) is a source of frictions in domestic credit markets. The real exchange rate matters for capital accumulation because it affects the potential for investors to provide internal finance, which mitigates the CSV problem. We demonstrate that the real exchange rate must monotonically approach its steady state level. However, capital accumulation need not be monotonic and real exchange rate appreciation can be associated with either a rising or a falling capital stock. The relationship between world financial market conditions and the real exchange rate is also investigated

Abstract:

In the Mexican elections, the quick count consists in selecting a random sample of polling stations to forecast the election results. Its main challenge is that the estimation is done with incomplete samples, where the missingness is not at random. We present one of the statistical models used in the quick count of the gubernatorial elections of 2021. The model is a negative binomial regression with a hierarchical structure. The prior distributions are thoroughly tested for consistency. We also present a fitting procedure with an adjustment for bias, capable of running in less than 5 minutes. The model yields probability intervals with approximately 95% coverage, even with certain patterns of biased samples observed in previous elections. Furthermore, the robustness of the negative binomial distribution translates into robustness in the model, which can fit both large and small candidates well, and provides an additional layer of protection against database errors
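
The likelihood at the core of such a model can be conveyed with a plain (non-hierarchical) negative binomial regression; the covariates and data below are invented stand-ins for polling-station counts:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
list_size = rng.integers(300, 750, n)               # registered voters per station
urban = rng.integers(0, 2, n)                       # a stratification covariate
mu = 0.4 * list_size * np.exp(0.3 * urban)          # expected votes for a candidate
votes = rng.negative_binomial(10, 10 / (10 + mu))   # overdispersed counts

X = sm.add_constant(np.column_stack([urban, np.log(list_size)]))
fit = sm.GLM(votes, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()
print(fit.params)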

Abstract:

Quick counts based on probabilistic samples are powerful methods for monitoring election processes. However, the complete designed samples are rarely collected in time to publish the results promptly. Hence, the results are announced using partial samples, which carry biases associated with the arrival pattern of the information. In this paper, we present a Bayesian hierarchical model to produce estimates for the Mexican gubernatorial elections. The model considers the polling stations poststratified by demographic, geographic, and other covariates. As a result, it provides a principled means of controlling for biases associated with such covariates. We compare methods through simulation exercises and apply our proposal in the July 2018 elections for governor in certain states. Our studies find the proposal to be more robust than the classical ratio estimator and other estimators that have been used for this purpose
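
The poststratification step can be illustrated with a toy estimator in which stratum-level means from the partial sample are reweighted by known stratum sizes (a sketch, not the paper's full Bayesian model):

import numpy as np

stratum_sizes = {"urban_A": 400, "rural_A": 250, "urban_B": 300}  # known station counts

observed = {                              # vote shares received so far
    "urban_A": [0.52, 0.49, 0.55],
    "rural_A": [0.41],                    # rural stations tend to arrive late
    "urban_B": [0.47, 0.50],
}

total = sum(stratum_sizes.values())
estimate = sum(stratum_sizes[s] / total * np.mean(v) for s, v in observed.items())
print(f"poststratified vote-share estimate: {estimate:.3f}")

Because each stratum is weighted by its known size rather than by how much of it has arrived, the estimate is protected against the biased arrival pattern.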

Abstract:

Despite the rapid change in cellular technologies, Mobile Network Operators (MNOs) keep a high percentage of their deployed infrastructure on Global System for Mobile communications (GSM) technologies. With about 3.5 billion subscribers, GSM remains the de facto standard for cellular communications. However, the security criteria envisioned 30 years ago, when the standard was designed, are no longer sufficient to ensure the security and privacy of users. Furthermore, even with the newest fourth generation (4G) cellular technologies starting to be deployed, these networks can never achieve strong security guarantees because MNOs keep backwards compatibility given the huge number of GSM subscribers. In this paper, we present and describe the tools and necessary steps to perform an active attack against a GSM-compatible network, exploiting the GSM protocol's lack of mutual authentication between subscribers and the network. The attack consists of a so-called man-in-the-middle attack implementation. Using Software Defined Radio (SDR), open-source libraries and open-source hardware, we set up a fake GSM base station to impersonate the network and thereby eavesdrop on any communications routed through it and extract information from the victims. Finally, we point out some implications of the protocol vulnerabilities and how they cannot be mitigated in the short term, since 4G deployments will take a long time to entirely replace the current GSM infrastructure

Abstract:

This study investigates whether incidental electoral successes of women contribute to sustained gender parity in Finland, a leader in gender equality. Utilizing election lotteries used to resolve ties in vote counts, we estimate the causal effect of female representation. Our findings indicate that women's electoral performance (measured by female seat and vote shares) within parties improves following a lottery win, potentially due to increased exposure. However, these gains are offset by negative spillovers on female candidates from other parties. One reason why this may occur is that voters and parties may perceive sufficient female representation has been achieved, leading to reduced support for female candidates in other parties. Consequently, marginal increases in female representation do not translate to overall gains in women elected. In high but uneven gender parity contexts, such increases may not be self-reinforcing, highlighting the complexity of achieving sustained gender equality in politics

Abstract:

It is shown in the paper that the problem of speed observation for mechanical systems that are partially linearisable via coordinate changes admits a very simple and robust (exponentially stable) solution with a Luenberger-like observer. This result should be contrasted with the very complicated observers based on immersion and invariance reported in the literature. A second contribution of the paper is to compare, via realistic simulations and highly detailed experiments, the performance of the proposed observer with well-known high-gain and sliding mode observers. In particular, it is shown that, due to their high sensitivity to noise (which is unavoidable in mechanical systems applications), the performance of the two latter designs is well below par
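
The flavor of a Luenberger-like speed observer can be conveyed with a linear example: a unit-mass double integrator with measured position and unmeasured speed (the gains and dynamics are illustrative; the paper's construction for partially linearisable mechanical systems is more general):

import numpy as np

dt = 1e-3
l1, l2 = 40.0, 400.0        # gains placing the error poles at s = -20 (double)
q, v = 0.0, 1.0             # true position and unmeasured speed
qh, vh = 0.0, 0.0           # observer states

for k in range(5000):
    u = -2.0 * np.sin(0.002 * k)       # known input force
    q, v = q + dt * v, v + dt * u      # true plant: q' = v, v' = u
    e = q - qh                         # measured output error
    # Observer: copy of the plant plus output-error injection
    qh, vh = qh + dt * (vh + l1 * e), vh + dt * (u + l2 * e)

print(f"speed estimation error: {v - vh:.2e}")   # decays exponentially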

Abstract:

We formulate a p-median facility location model with a queuing approximation to determine the optimal locations of a given number of dispensing sites (Points of Dispensing, PODs) from a predetermined set of possible locations, together with the optimal allocation of staff to the selected locations. Specific to an anthrax attack, dispensing operations should be completed within 48 hours to cover all exposed and possibly exposed people. A nonlinear integer programming model is developed; it formulates the problem of determining the optimal facility locations with appropriate facility deployment strategies, including the number of servers with different skills to be allocated to each open facility. The objective of the mathematical model is to minimize the average transportation and waiting times of individuals to receive the required service. The waiting time performance measures are approximated with a queuing formula, and these waiting times at PODs are incorporated into the p-median facility location model. A genetic algorithm is developed to solve this problem. Our computational results show that appropriate locations of these facilities can significantly decrease the average time for individuals to receive services. Consideration of demographics and allocation of the staff decreases waiting times at PODs and increases their throughput. When the number of PODs to open is high, the right staffing at each facility decreases the average waiting times significantly. The results presented in this paper can help public health decision makers make better planning and resource allocation decisions based on the demographic needs of the affected population
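
The queuing approximation embedded in the location model can be illustrated with the standard M/M/c waiting-time formula (Erlang C); whether the paper uses this exact form is not stated here, so treat it as a generic sketch:

from math import factorial

def mmc_wait(lam, mu, c):
    # lam: arrivals/hour at a POD; mu: service rate/hour per server; c: servers
    rho = lam / (c * mu)
    assert rho < 1, "need enough servers for stability"
    a = lam / mu
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    erlang_c = a**c / (factorial(c) * (1 - rho)) * p0   # P(an arrival waits)
    return erlang_c / (c * mu - lam)                    # mean wait in queue (hours)

# A POD facing 600 arrivals/hour with 12 staff, each serving 55 people/hour
print(f"{60 * mmc_wait(600, 55, 12):.1f} minutes average wait")

Embedding such a term in the objective is what lets the model trade off travel time against staffing-dependent waiting time at each open POD.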

Abstract:

Robust statistical data modelling under potential model mis-specification often requires leaving the parametric world for the nonparametric. In the latter, parameters are infinite-dimensional objects such as functions, probability distributions or infinite vectors. In the Bayesian nonparametric approach, prior distributions are designed for these parameters, which provide a handle to manage the complexity of nonparametric models in practice. However, most modern Bayesian nonparametric models often seem out of reach to practitioners, as inference algorithms need careful design to deal with the infinite number of parameters. The aim of this work is to facilitate the journey by providing computational tools for Bayesian nonparametric inference. The article describes a set of functions available in the R package BNPdensity to carry out density estimation with an infinite mixture model, including all types of censored data. The package provides access to a large class of such models based on normalised random measures, which represent a generalisation of the popular Dirichlet process mixture. One striking advantage of this generalisation is that it offers much more robust priors on the number of clusters than the Dirichlet. Another crucial advantage is the complete flexibility in specifying the prior for the scale and location parameters of the clusters, because conjugacy is not required. Inference is performed using a theoretically grounded approximate sampling methodology known as the Ferguson & Klass algorithm. The package also offers several goodness-of-fit diagnostics, such as QQ plots, as well as a cross-validation criterion, the conditional predictive ordinate. The proposed methodology is illustrated on a classical ecological risk assessment method called the species sensitivity distribution problem, showcasing the benefits of the Bayesian nonparametric framework

Abstract:

This paper focuses on model selection, specification and estimation of a global asset return model within an asset allocation and asset and liability management framework. The development starts from a single-currency capital market model with four state variables: stock index, short and long term interest rates and currency exchange rates. The model is then extended to the major currency areas, United States, United Kingdom, European Union and Japan, and to include a US economic model containing GDP, inflation, wages and government borrowing requirements affecting the US capital market variables. In addition, we develop variables representing emerging market stock and bond indices. In the largest extension we treat a four-currency capital markets model and US, UK, EU and Japan macroeconomic variables. The system models are estimated with seemingly unrelated regression estimation (SURE) and generalised autoregressive conditional heteroscedasticity (GARCH) techniques. Simulation, impulse response and forecasting performance are discussed in order to analyse the dynamics of the models developed
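
For reference, the univariate GARCH(1,1) specification underlying the variance technique named above can be written as follows (standard textbook form; the paper's multivariate SURE/GARCH system is more elaborate):

```latex
r_t = \mu + \varepsilon_t, \qquad
\varepsilon_t = \sigma_t z_t, \quad z_t \sim \mathcal{N}(0,1), \qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2
```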

Abstract:

Gender stereotypes, the assumptions concerning appropriate social roles for men and women, permeate the labor market. Analyzing information from over 2.5 million job advertisements on three different employment search websites in Mexico, and exploiting approximately 235,000 that are explicitly gender-targeted, we find evidence that advertisements seeking "communal" characteristics, stereotypically associated with women, specify lower salaries than those seeking "agentic" characteristics, stereotypically associated with men. Given the use of gender-targeted advertisements in Mexico, we use a random forest algorithm to predict whether non-targeted ads are in fact directed toward men or women, based on the language they use. We find that the non-targeted ads for which we predict gender show larger salary gaps (8–35 percent) than explicitly gender-targeted ads (0–13 percent). If women are segregated into occupations deemed appropriate for their gender, this pay gap between jobs requiring communal versus agentic characteristics translates into a gender pay gap in the labor market
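
A minimal sketch of the prediction step described above: fit a random forest on the explicitly gender-targeted ads, then label the non-targeted ones from their text. The TF-IDF features, the toy ads and all settings are assumptions for illustration; the authors' actual features and tuning are not reproduced here.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy training data: ads whose explicit targeting supplies the label.
targeted_text = ["secretaria con buena presentacion",
                 "chofer con licencia federal"]
targeted_gender = ["f", "m"]
non_targeted_text = ["se solicita recepcionista para oficina"]

clf = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(targeted_text, targeted_gender)
print(clf.predict(non_targeted_text))  # predicted implicit target gender
```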

Abstract:

This paper analyses the contract between an entrepreneur and an investor, using a non-zero-sum game in which the entrepreneur is interested in company survival and the investor in maximizing expected net present value. Theoretical results are given and the model's usefulness is exemplified using simulations. We observe that both the entrepreneur and the investor are better off under a contract which involves repayments and a share of the start-up company. We also observe that the entrepreneur will choose riskier actions as the repayments become harder to meet, up to a level where the company is no longer able to survive

Abstract:

We consider the problem of managing inventory and production capacity in a start-up manufacturing firm with the objective of maximising the probability of the firm surviving as well as the more common objective of maximising profit. Using Markov decision process models, we characterise and compare the form of optimal policies under the two objectives. This analysis shows the importance of coordination in the management of inventory and production capacity. The analysis also reveals that a start-up firm seeking to maximise its chance of survival will often choose to keep production capacity significantly below the profit-maximising level for a considerable time. This insight helps us to explain the seemingly cautious policies adopted by a real start-up manufacturing firm
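
As a toy illustration of the survival objective (not the authors' model), the sketch below runs finite-horizon value iteration where the terminal "reward" is simply survival, capital 0 is absorbing, and the two actions differ in risk; all numbers are invented.

```python
import numpy as np

# Toy survival MDP: capital level k in 0..K, k = 0 is absorbing (failure).
# Each period the firm picks an action; outcomes are (probability, capital
# change). "high" capacity is more profitable on average but riskier.
K, T = 20, 30
actions = {"low":  [(0.5, +1), (0.5, -1)],
           "high": [(0.5, +3), (0.5, -4)]}

V = np.zeros(K + 1)
V[1:] = 1.0  # being alive at the horizon counts as survival

for _ in range(T):
    newV = np.zeros_like(V)
    for k in range(1, K + 1):
        newV[k] = max(
            sum(p * V[min(K, max(0, k + d))] for p, d in outcomes)
            for outcomes in actions.values()
        )
    V = newV

print("survival probability by initial capital:", np.round(V[:6], 3))
```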

Abstract:

Start-up companies are considered an important factor in the success of a nation’s economy. We are interested in the decisions for long-term survival of these firms when they have considerable cash restrictions. In this paper we analyse several inventory control models to manage inventory purchasing and return policies. The Markov decision models are formulated for both established companies that look at maximising average profit and start-up companies that look at maximising their long-term survival probability. We contrast both objectives, and present properties of the policies and the survival probabilities. We find that start-up companies may need to be riskier if the return price is very low, but there is a period where a start-up firm becomes more cautious than an established company and there is a point, as it accumulates capital, where it starts behaving as an established firm. We compare the various models and give conditions under which their policies are equivalent

Abstract:

A recent cross-cultural study suggests employees may be classified, based on their scores on a measure of work ethic, into three profiles labeled as "live to work," "work to live," and "work as a necessary evil." The present study assesses whether these profiles were stable before and after an extended lockdown that forced employees to work from home for 2 years because of the COVID-19 pandemic. To assess our core research question, we conducted a longitudinal study with employees of a company in the financial sector, collecting data in two waves: February 2020 (n = 692) and June 2022 (n = 598). Tests of profile similarity indicated a robust structural and configural equivalence of the profiles before and after the lockdown. As expected, the prolonged pandemic-based lockdown had a significant effect on the proportion of individuals in each profile. Implications for leading and managing in a post-pandemic workforce are presented and discussed

Abstract:

This study examines the impact of a specific training intervention on both individual- and unit-level outcomes. We sought to examine the extent to which a training intervention incorporating key elements of error management training (1) positively impacted the sales-specific self-efficacy beliefs of trainees, and (2) positively impacted unit-level sales growth over time. Results of an 11-week longitudinal field experiment across 19 stores in a national bakery chain indicated that the sales self-efficacy of trainees significantly increased between the levels they had 2 weeks before the intervention started and 4 weeks after it was initiated. Results based on a repeated-measures ANOVA also indicated significantly higher sales performance in the intervention group compared with a non-intervention control group. We also sought to address the extent to which individual-level effects may be linked to the organizational level, providing evidence on the extent to which changes in individual self-efficacy were associated with unit-level sales performance. Results confirmed this multi-level effect, as evidenced by a moderate significant correlation between the average self-efficacy of the staff of each store and its sales performance across the weeks the intervention was in effect. The study contributes to the existing literature by providing direct evidence of the impact of an HRD intervention at multiple organizational levels

Abstract:

Despite the acceptance of work ethic as an important individual difference, little research has examined the extent to which work ethic may reflect shared environmental or socio-economic factors. This research addresses this concern by examining the influence of geographic proximity on the work ethic experienced by 254 employees from Mexico, working in 11 different cities in the Northern, Central and Southern regions of the country. Using a sequence of complementary analyses to assess the main source of variance on seven dimensions of work ethic, our results indicate that work ethic is most appropriately considered at the individual level

Abstract:

This paper explores the relationship between individual work values and unethical decision-making and actual behavior at work through two complementary studies. Specifically, we use a robust and comprehensive model of individual work values to predict unethical decision-making in a sample of working professionals and accounting students enrolled in ethics courses, and IT employees working in sales and customer service. Study 1 demonstrates that young professionals who rate power as a relatively important value (i.e., those reporting high levels of the self-enhancement value) are more likely to violate professional conduct guidelines despite receiving training regarding ethical professional principles. Study 2, which examines a group of employees from an IT firm, demonstrates that those rating power as an important value are more likely to engage in non-work-related computing (i.e., cyberloafing) even when they are aware of a monitoring software that tracks their computer usage and an explicit policy prohibiting the use of these computers for personal reasons

Abstract:

This panel study, conducted in a large Venezuelan organization, took advantage of a serendipitous opportunity to examine the organizational commitment profiles of employees before and after a series of dramatic, and unexpected, political events directed specifically at the organization. Two waves of organizational commitment data were collected, 6 months apart, from a sample of 152 employees. No evidence was found that employees' continuance commitment to the organization was altered by the events described here. Interestingly, however, both affective and normative commitment increased significantly during the period of the study. Further, employees' commitment profiles at Wave 2 were more differentiated than they were at Wave 1

Abstract:

This study, based in a manufacturing plant in Venezuela, examines the relationship between perceived task characteristics, psychological empowerment and commitment, using a questionnaire survey of 313 employees. The objective of the study was to assess the effects of an organizational intervention at the plant aimed at increasing productivity by providing performance feedback on key aspects of its daily operations. It was hypothesized that perceived characteristics of the task environment, such as task meaningfulness and task feedback, will enhance psychological empowerment, which in turn will have a positive impact on employee commitment. Test of a structural model revealed that the relationship of task meaningfulness and task feedback with affective commitment was partially mediated by the empowerment dimensions of perceived control and goal internalization. The results highlight the role of goal internalization as a key mediating mechanism between job characteristics and affective commitment. The study also validates a Spanish-language version of the psychological empowerment scale by Menon (2001)

Resumen:

A pesar de la extensa validación transcultural del modelo de compromiso organizacional de Meyer y Allen (1991), han surgido ciertas dudas respecto a la independencia de los componentes afectivo y normativo y, también, sobre la unidimensionalidad de este último. Este estudio analiza la estabilidad de la estructura del modelo y examina el comportamiento de la escala normativa, empleando 100 muestras, de 250 sujetos cada una, extraídas aleatoriamente de una base de datos de 4.689 empleados. Los resultados muestran cierta estabilidad del modelo, y apoyan parcialmente a la corriente que propone el desdoblamiento del componente normativo en dos subdimensiones: el deber moral y el sentimiento de deuda moral

Abstract:

Although there has been extensive cross-cultural validation of Meyer and Allen's (1991) model of organizational commitment, some doubts have emerged concerning both the independence of the affective and normative components, and the unidimensionality of the latter. This study focuses on analyzing the stability of the model's structure, and on examining the behaviour of the normative scale. For this purpose, we employed 100 samples of 250 subjects each, extracted randomly from a database of 4,689 employees. The results show a certain stability of the model, and partially support research work suggesting the unfolding of the normative component into two subdimensions: one related to a moral duty, and the other to a sense of indebtedness

Abstract:

In recent years there has been an increasing interest among researchers and practitioners to analyze what makes a firm attractive in the eyes of university students, and if individual differences such as personality traits have an impact on this general affect towards a particular organization. The main goal of the present research is to demonstrate that a recently conceptualized narrow trait of personality named dispositional resistance to change (RTC), that is, the inherent tendency of individuals to avoid and oppose changes (Oreg, 2003), can predict organizational attraction of university students to firms that are perceived as innovative or conservative. Three complementary studies were carried out using a total sample of 443 college students from Mexico. In addition to validating the hypotheses, our findings suggest that as the formation of the images of organizations in students’ minds is done through social cognitions, simple stimuli such as physical artifacts, when used in an isolated manner, do not have a significant impact on organizational attraction

Abstract:

The Work Values Scale EVAT (based on its initials in Spanish: Escala de Valores hacia el Trabajo) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals, reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian. Our results suggest that the original Spanish version of the EVAT scale and the new Portuguese and Italian versions are equivalent

Abstract:

The authors examined the validity of the Spanish-language version of the dispositional resistance to change (RTC) scale. First, the structural validity of the new questionnaire was evaluated using a nested sequence of confirmatory factor analyses. Second, the external validity of the questionnaire was assessed, using the four higher-order values of Schwartz's theory and the four dimensions of the RTC scale: routine seeking, emotional reaction, short-term focus and cognitive rigidity. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed both the construct structure and the external validity of the questionnaire

Abstract:

The authors examined the convergent validity of the four dimensions of the Resistance to Change (RTC) scale (routine seeking, emotional reaction, short-term focus and cognitive rigidity) with the four higher-order values of Schwartz's theory, using a nested sequence of confirmatory factor analyses. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed the external validity of the questionnaire

Resumen:

Este estudio analiza el impacto de la diversidad de valores entre los integrantes de los equipos sobre un conjunto de variables de proceso, así como sobre dos tareas con diferentes demandas de interacción social. En particular, se analiza el efecto de la diversidad de valores sobre el conflicto en la tarea y en las relaciones, la cohesión y la autoeficacia grupal. Utilizando un simulador de trabajo en equipo y una muestra de 22 equipos de entre cinco y siete individuos, se comprobó que la diversidad de valores en un equipo influye de forma directa sobre las variables de proceso y sobre la tarea que demanda baja interacción social; en la tarea de alta interacción social, la relación entre la diversidad de valores y el desempeño se ve mediada por las variables de proceso. Se proponen algunas acciones que permitirían poner en práctica los resultados de esta investigación en el contexto organizacional

Abstract:

This study investigates the impact of value diversity among team members on team process and performance criteria on two tasks of differing social interaction demands. Specifically, the criteria of interest included task conflict, relationship conflict, cohesion, and team efficacy, and task performance on two tasks demanding different levels of social interaction. Utilizing a teamwork simulator and a sample comprised of 22 teams of five to seven individuals, it was demonstrated that value diversity directly impacts both task performance and process criteria on the task demanding low social interaction. Meanwhile, in the task requiring high social interaction, value diversity related to task performance via the mediating effects of team processes. Some specific actions are proposed in order to apply the results of this research in the daily context of organizations

Abstract:

The Work Values Scale EVAT (based on its initials in Spanish) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz Theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals (Arciniega & González, 2006, 2005; González & Arciniega, 2005), reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian, using a new technique of measurement equivalence: confirmatory multidimensional scaling (CMDS). Our results suggest that CMDS is a serviceable technique for assessing measurement equivalence, but requires improvements to provide precise fit indices

Abstract:

We examine the impact of team member value and personality diversity on team processes and performance. The research is divided into two studies. First, we examine the impact of personality and value diversity on team performance, relationship and task conflict, cohesion, and team self-efficacy. Second, we evaluate the effect of team members’ values diversity on team performance in two different types of tasks, one cognitive, and the other complex. In general, our results suggest that higher levels of diversity with respect to values were associated with lower levels of team process variables. Also, as expected we found that the influence of team values diversity is higher on a cognitive task than on a complex one

Abstract:

Considering the propositions of Simon (1990, 1993) and Korsgaard and collaborators (1997) that an individual who assigns priority to values related to altruism tends to pay less attention to evaluating personal costs and benefits when processing social information, as well as the basic premise of job satisfaction, which establishes that this attitude is centered on a cognitive process of evaluating how specific conditions or outcomes in a job fulfill the needs and values of a person, we proposed that individuals who score higher on values associated with altruism will reveal higher scores on all specific facets of job satisfaction than those who score lower. A sample of 3,201 Mexican employees, living in 11 cities and working for 30 different companies belonging to the same holding, was used in this study. The results of the research clearly support the central hypothesis

Abstract:

Some reviews have shown how different attitudes and demographic and organizational variables generate organizational commitment. Few studies have reported how work values and organizational factors create organizational commitment. This investigation is an attempt to explore the influence that both sets of variables have on organizational commitment. Using the four higher-order values proposed by Schwartz (1992) to operationalize the construct of work values, we evaluated the influence of these work values on the development of organizational commitment, in comparison with four facets of work satisfaction and four organizational factors: empowerment, knowledge of organizational goals, and training and communication practices. A sample of 982 employees from eight companies of Northeastern Mexico was used in this study. Our findings suggest that work values occupy a less important place in the development of organizational commitment when compared to organizational factors, such as the perceived knowledge of the goals of the organization, or attitudes such as satisfaction with security and opportunities for development

Abstract:

In this paper we propose the use of new iterative methods to solve symmetric linear complementarity problems (SLCP) that arise in the computation of dry frictional contacts in Multi-Rigid-Body Dynamics. Specifically, we explore the two-stage iterative algorithm developed by Morales, Nocedal and Smelyanskiy [1]. The underlying idea of that method is to combine projected Gauss-Seidel iterations with subspace minimization steps. Gauss-Seidel iterations are aimed at obtaining a high-quality estimate of the active set. Subspace minimization steps focus on the accurate computation of the inactive components of the solution. Overall, the new method is able to compute fast and accurate solutions of severely ill-conditioned LCPs. We compare the performance of a modification of the iterative method of Morales et al. with Lemke's algorithm on robotic object grasping problems.
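
A minimal sketch of the projected Gauss-Seidel stage on which the two-stage method builds (the subspace-minimization stage is omitted); the small symmetric positive definite LCP below is an invented example.

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Solve the LCP  z >= 0,  Mz + q >= 0,  z'(Mz + q) = 0  approximately.

    Each sweep updates one component at a time and projects it onto the
    nonnegative orthant; in the two-stage scheme this supplies the
    estimate of the active set.
    """
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]  # residual excluding z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
q = np.array([-3.0, 2.0, -1.0])
z = projected_gauss_seidel(M, q)
print("z =", z, " w = Mz+q =", M @ z + q)  # z[i] * w[i] should be ~0
```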

Abstract:

This paper investigates corporate social (and environmental) responsibility (CSR) disclosure practices in Mexico. By analysing a sample of Mexican companies in 2010, it utilises a detailed manual content analysis and identifies corporate-governance-related determinants of CSR disclosure. The study shows a general association between the governance variables and both the content and the semantic properties of CSR information published by Mexican companies. Although an increased international influence on CSR disclosure is noted, the study reveals the symbolic role of CSR committees and the negative influence of foreign ownership on community disclosure, suggesting that improvements in business engagement with stakeholders are needed for CSR to be instrumental in business conduct

Abstract:

Effective policy-making requires that voters avoid electing malfeasant politicians. However, informing voters of incumbent malfeasance in corrupt contexts may not reduce incumbent support. As our simple learning model shows, electoral sanctioning is limited where voters already believed incumbents to be malfeasant, while information's effect on turnout is non-monotonic in the magnitude of reported malfeasance. We conducted a field experiment in Mexico that informed voters about malfeasant mayoral spending before municipal elections, to test whether these Bayesian predictions apply in a developing context where many voters are poorly informed. Consistent with voter learning, the intervention increased incumbent vote share where voters possessed unfavorable prior beliefs and when audit reports caused voters to favorably update their posterior beliefs about the incumbent's malfeasance. Furthermore, we find that low and, especially, high malfeasance revelations increased turnout, while less surprising information reduced turnout. These results suggest that improved governance requires greater transparency and citizen expectations

Abstract:

A Vector Autoregressive Model of the Mexican economy was employed to empirically identify the transmission channels of price formation. The structural changes affecting the behavior of the inflation rate during 1970-1987 motivated the analysis of the changing influences of the explanatory variables within three different subperiods, namely: 1970-1976, 1978-1982 and 1983-1987. A main finding is that, among the variables considered, public prices were the most important in explaining the variability of inflation, irrespective of the subperiod under study. Another finding is that inflationary inertia played a different role in each subperiod
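
A sketch of this kind of estimation using the VAR implementation in statsmodels; the three series are simulated placeholders (the paper's Mexican data are not reproduced), and the variance decomposition step mirrors the variability analysis mentioned above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-in data with some persistence (placeholder names).
rng = np.random.default_rng(0)
x = np.zeros((121, 3))
for t in range(1, 121):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal(3)
data = pd.DataFrame(x[1:], columns=["inflation", "public_prices", "money"])

res = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
fevd = res.fevd(8)                        # 8-step variance decomposition
print(fevd.decomp[0])  # share of each shock in inflation's forecast error
```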

Abstract:

Value stream mapping (VSM) is a valuable practice employed by industry experts to identify inefficiencies in the value chain, owing to its visual representation capability and general ease of use. Enabled by a shift towards digitalization and smart connected systems, this project investigates the possibilities of transitioning VSM from a manual to a digital process through the utilization of data generated from a Real-Time Location System (RTLS). This study focuses on merging RTLS and VSM such that their advantages are combined to form a more robust and effective VSM process. Two simulated experiments and an initial validation test were conducted to demonstrate the capability of the system to function in an industrial environment by replicating an actual production process. The two experiments represent the current-state conditions of the company at two different instants in time. The outputs from the tracking system are then modified and converted into inputs for VSM. A VSM application was modified and utilized to create a digital value stream map with relevant performance parameters. Finally, a stochastic simulation was carried out to compare and extrapolate the results to a 16-hour shift in order to measure, among other outputs, the utilization of the machines under the two RTLS scenarios

Abstract:

The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, the mechanism of degradation is not completely understood yet. PLA is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; in this research, the extrusion and injection molding processes were used to produce PLA samples for accelerated destructive testing. The methodology employed consisted of carrying out material testing under the guidelines of several ASTM standards; this research hypothesized that UV light, humidity, and temperature exposure have statistically different effects on the PLA degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for cases in which the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. A package in the R software that allows the user to perform a non-parametric multivariate analysis when necessary was used. This paper presents a study to determine whether there is a significant difference in the degradation rate after 2000 h of accelerated degradation of a biopolymer, using multivariate and non-parametric analyses of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymer

Abstract:

Dried red chile peppers [Capsicum annuum (L.)] are an important agricultural product grown throughout the Southwestern United States and are extensively used in food and for commercial applications. Given the high, broad demand for chile, attention to the methods of harvesting, storage, transport, and packaging is critical for profitability. Currently, chile should be stored no more than 24 to 36 hours at ambient temperatures from the time of harvest, due to the potential for natural fermentation to destroy the crop. The amount of usable versus spoiled chile under ambient conditions is determined by several variables, including the harvesting method (hand-picked or mechanized), the time of harvest relative to the optimal harvesting point (season), and weather variations (moisture). In this work, a stochastic simulation-based model is presented to forecast optimal harvesting scenarios, capable of supporting farmers and chile processors in better planning and managing planting and growth acceleration programs. The tool developed allows the economic feasibility of storage/stabilization systems, advanced mechanical harvesters, and other future advances to be analyzed in terms of the resulting increase in chile yield. We used the described simulation as an analysis tool to obtain the expected coverage and estimates of the mean and quantiles
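
A toy Monte Carlo version of the yield estimate described above: draw random harvest-to-processing delays and moisture levels, apply an assumed spoilage law, and report the mean and quantiles of the usable fraction. Every distribution and parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
delay_h = rng.uniform(6, 48, n)                        # hours before processing
moisture = rng.normal(0.18, 0.03, n).clip(0.05, 0.35)  # moisture fraction
spoil_rate = 0.01 + 0.08 * moisture                    # assumed per-hour rate
# Assumed spoilage law: fermentation losses start ~24 h after harvest.
usable = np.exp(-spoil_rate * np.maximum(0.0, delay_h - 24.0))

print("mean usable fraction:", round(usable.mean(), 3))
print("5% / 95% quantiles  :", np.quantile(usable, [0.05, 0.95]).round(3))
```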

Abstract:

While the degradation of polylactic acid (PLA) has been studied for several years, the mechanism of degradation is not completely understood. Through accelerated degradation testing, data can be extrapolated and modeled against parameters such as temperature, voltage, time, and humidity. Accelerated lifetime testing is used as an alternative to experimentation under normal conditions. The methodology to create this model consisted of fabricating a series of ASTM specimens using extrusion and injection molding. These specimens were subjected to accelerated degradation; tensile and flexural testing were conducted at different points in time. Nonparametric inference tests for multivariate data are presented. The results indicate that the effect of the independent variable or treatment effect (time) is highly significant. This research intends to provide a better understanding of biopolymer degradation. The findings indicate that the proposed statistical models can be used as a tool for characterizing the durability of the biopolymer as an engineering material. Having multiple models, one for each individual accelerating variable, allows deciding which parameter is critical in the characterization of the material

Abstract:

The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, the mechanism of degradation is not completely understood yet. It would be advantageous to predict and model PLA degradation rates in terms of performance. High strength and thermoplasticity allow PLA to be used to manufacture a great variety of products. This material is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; extrusion and injection molding were used to produce PLA samples for accelerated destructive testing in this research. The methodology employed consists of carrying out material testing under the guidelines of several ASTM standards; this research hypothesizes that UV light, humidity, and temperature exposure have statistically different effects on the degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for multivariate analysis in which the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. Ellis et al. created a package in the R software that allows the user to perform a non-parametric multivariate analysis when necessary. This paper presents a study to determine whether there is a significant difference in the degradation process of a biopolymer, using multivariate and nonparametric analyses of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymers

Abstract:

Formal models of animal sensorimotor behavior can provide effective methods for generating robotic intelligence. In this article we describe how schema-theoretic models of the praying mantis derived from behavioral and neuroscientific data can be implemented on a hexapod robot equipped with a real-time color vision system. This implementation incorporates a wide range of behaviors, including obstacle avoidance, prey acquisition, predator avoidance, mating, and chantlitaxia behaviors, that can provide guidance to neuroscientists, ethologists, and roboticists alike. The goals of this study are threefold: to provide an understanding of, and means by which, fielded robotic systems avoid competing with other agents that are more effective at their designated task; to permit them to be successful competitors within the ecological system, capable of displacing less efficient agents; and to make them ecologically sensitive, so that agent–environment dynamics are well modeled and as predictable as possible whenever new robotic technology is introduced.

Resumen:

Francisco Villa no es personaje protagónico de la obra literaria de Mauricio Magdaleno, sin embargo, a lo largo de toda su trayectoria este trató de reflexionar sobre la relevancia histórica e identitaria de aquel para México. Entonces, se propone y se analiza un amplio corpus de obras literarias y periodísticas del escritor para conocer su postura ante el villismo y su indiscutible líder. A partir de un enfoque esencialmente historiográfico y literario se trazan las confluencias familiares y estilísticas que atraviesan al autor. Debemos tener en cuenta que Magdaleno vivió su infancia en dos ciudades relevantes para el encumbramiento del militar, Zacatecas y Aguascalientes, e incluso existe un relato que narra el encuentro entre ambos. El análisis evidencia la presencia de Pancho Villa en los géneros literarios tradicionales y de tradición oral, así como las diferentes formas de apropiación que la literatura de la Revolución hizo de su figura para leer la historia actual

Abstract:

Francisco Villa is not a leading character in the literary work of Mauricio Magdaleno; however, throughout his career Magdaleno tried to reflect on the historical and identity relevance of Villa for Mexico. A wide corpus of the writer's literary and journalistic works is therefore proposed and analyzed in order to understand his position on Villismo and its indisputable leader. From an essentially historiographical and literary approach, the family and stylistic confluences that run through the author are traced. We must bear in mind that Magdaleno lived his childhood in two cities relevant to the rise of the military leader, Zacatecas and Aguascalientes, and there is even a story that narrates a meeting between the two. The analysis evidences the presence of Pancho Villa in traditional literary genres and in the oral tradition, as well as the different forms of appropriation that the literature of the Mexican Revolution made of his figure to read current history

Resumen:

El objetivo principal de este trabajo es entender la contribución del guion a la película Río Escondido en la creación de una obra colectiva, polifónica, de convergencia y alejada de una visión clásica de la autoría. Para ello, proponemos un análisis documental de la historia y otro propiamente del guion como vehículo semiótico que posibilita el camino de un lenguaje escrito a uno visual. Este último análisis se centra en la relevancia del discurso literario del guion que se omite en la película y que puede suponer una discordia entre el guionista y el director. Hallar estos espacios de quiebre posibilita un acercamiento a nuevas formas de ver el cine mexicano, alejadas de la industria y los clichés, y más cercanas al ámbito literario

Abstract:

The main aim of this work is to understand the contribution of the script to the film Río Escondido in the creation of a collective, polyphonic, convergent work, far removed from a classical view of authorship. To that end, we propose a documentary analysis of the story and an analysis of the script itself as a semiotic vehicle that enables the passage from a written language to a visual one. The latter analysis focuses on the relevance of the literary discourse of the script that is omitted in the film, which may suggest a disagreement between the screenwriter and the director. Finding these points of rupture allows an approach to new ways of seeing Mexican cinema, further from the industry and its clichés, and closer to the literary sphere

Resumo:

O objetivo principal deste trabalho é entender a contribuição do roteiro do filme Río Escondido na criação de uma obra coletiva, polifônica, de convergência e longe de uma visão clássica da autoria. Assim, propomos uma análise documental da história e outra propriamente do roteiro como veículo semiótico que possibilita o percurso de uma linguagem escrita a uma visual. Esta última análise foca na relevância do discurso literário do roteiro que é omitido no filme e que pode levar a um desentendimento entre o roteirista e o diretor. Encontrar esses espaços inovadores permite uma abordagem de novas formas de ver o cinema mexicano, longe da indústria e dos clichês, e mais perto do campo literário

Resumen:

El propósito de este artículo es analizar, desde el punto de vista de la historia, la historiografía y el análisis literario, la novela Serpa Pinto. Pueblos en la tormenta (1943), de Giuseppe Garretto. Esta novela, publicada por primera vez en castellano a pesar de la nacionalidad italiana de su autor, pasó desapercibida para la crítica. Revisamos tanto el contexto histórico de las referencias de la obra como de la publicación de la primera edición; situamos la obra como parte de la historiografía literaria del exilio de refugiados europeos en Latinoamérica; analizamos las características literarias de la novela, así como las estrategias de rememoración narrativa que emplea el autor

Abstract:

The purpose of this article is to analyze the novel Serpa Pinto. Pueblos en la tormenta (1943) by Giuseppe Garretto, from the point of view of history, historiography and literary analysis. Published for the first time in Spanish despite the Italian nationality of the author, this novel went unnoticed by critics. We aim to study the historical context of the work references and the publication of the first edition, in order to situate the work as part of the literary historiography of the exile of European refugees in Latin America. Moreover, we analyze the literary characteristics of the novel, as well as the narrative remembrance strategies used by the author

Resumen:

Alfonso Cravioto formó parte de las organizaciones intelectuales y políticas más destacadas de principios del siglo XX. A lo largo de su vida combinó la actividad política y diplomática con la creación literaria, poesía y ensayo, principalmente. En este artículo nos proponemos destacar su contribución a la educación en México, desde una perspectiva que enlaza su experiencia vital con la sensibilidad que lo caracterizó y le ganó el reconocimiento y afecto de sus contemporáneos, al tiempo que anticipó y participó en los primeros años de la Secretaría de Educación Pública

Abstract:

Alfonso Cravioto was a member of the most prominent intellectual and political organizations of the early 20th century. Throughout his life he combined political and diplomatic activity with literary creation, mainly poetry and essays. In this article we intend to highlight his contribution to education in Mexico, from a perspective that links his life experience with the sensitivity that characterized him and earned him the recognition and affection of his contemporaries, as he anticipated and participated in the first years of the Secretaría de Educación Pública

Resumen:

En la actualidad, una edición crítica completa no sólo debe ir acompañada de una crítica textual rigurosa, sino también de una genética que considere todos los testimonios. En este sentido, el presente trabajo analiza algunas de las problemáticas que se podrían producir si elaborásemos una edición crítica de todos los cuentos de Mauricio Magdaleno, dos de las más importantes serían las siguientes: primera, el hecho de que la mayor parte de los cuentos, tal y como los conocemos hoy, tuviera versiones previas publicadas en fuentes periódicas, lo cual nos obligaría a una investigación más exhaustiva y a una constitución del texto a partir de la colación de variantes de testimonios hemerográficos; segunda, la decisión de criterios que permitan superar la frontera genérica entre un relato costumbrista escrito a partir de una anécdota personal y un cuento susceptible de ser incluido en esta propuesta. Así, ambas problemáticas tienen como raíz común que la labor escritural más estable en la que se desempeñó Magdaleno fuera la periodística, ya que con ésta pudo compatibilizar otras actividades que, sobre todo, le servían para un propósito de sustento material. Una edición crítica de los cuentos de Mauricio Magdaleno otorgaría no sólo la seguridad de poder hacer una buena lectura del texto final, sino que también pondría de relieve el proceso creador del autor, lo cual ayudaría a superar la crítica anacrónica a la que se le ha sometido. Por último, y con base en el estado actual de la cuestión que se expone en este trabajo, se presenta una tabla con las versiones de los cuentos encontradas hasta ahora y otra con una propuesta sustentada de un posible índice de los cuentos que integrarían la edición crítica

Abstract:

Currently, a complete critical edition must be accompanied not only by rigorous textual criticism but also by a genetic edition that considers all the testimonies. In this sense, this work analyzes some of the problems that would arise if we were to prepare a critical edition of the entirety of Mauricio Magdaleno's short stories. Two of the most important are the following: first, the fact that most of the stories, as we know them today, had previous versions published in periodical sources, which would oblige us to undertake a more exhaustive investigation and to constitute the text by collating variants from hemerographic testimonies; second, the choice of criteria for overcoming the generic border between a costumbrista sketch written from a personal anecdote and a short story that could be included in this edition. Both problems share a common root: journalism was the most stable writing occupation Magdaleno held, one he combined with other activities that, above all, served as material support. A critical edition of Mauricio Magdaleno's short stories would not only guarantee a good reading of the final text, but would also highlight the author's creative process, helping to overcome the anachronistic criticism to which he has been subjected. Finally, and based on the current state of the question presented in this work, a table with the versions of the short stories found so far is provided, together with another containing a supported proposal for a possible index of the short stories that would make up the critical edition

Resumen:

El objetivo principal de este trabajo es aproximarse a una conceptualización de la poesía de argumento o versos de argumentar que se produce en el son jarocho, especialmente en la región de Los Tuxtlas (México). Además, se analizarán sus características poéticas y la forma en que se desarrolla dentro del ritual festivo. Para ello, el estudio se apoyará en el análisis de algunas de estas poesías contenidas en cuadernos de poetas, así como en testimonios orales de estos, ya recogido en libros o en entrevistas, es decir, se analizará la poesía en relación con su contexto. En el dialogismo y en la tópica de esta poesía se encuentran dos elementos fundamentales para entender las dinámicas de tradicionalización e innovación que se producen en estos rituales festivos músico-poéticos, a pesar del componente creativo —improvisado a veces— que depende de los verseros

Abstract:

The main objective of this work is to approach a conceptualization of the poetry of argument, or verses of arguing, that occurs in the son jarocho, especially in the region of Los Tuxtlas (Mexico). In addition, we will analyze its poetic characteristics and the way it develops within the festive ritual. For this, we will analyze some of these poems as contained in poets' notebooks, as well as the poets' oral testimonies, whether collected in books or in interviews; that is, we will analyze the poetry in relation to its context. In the dialogism and in the topics of this poetry we find two fundamental elements for understanding the dynamics of traditionalization and innovation that take place in these poetic-musical festive rituals, despite the creative component (sometimes improvised) that depends on the verseros

Resumo:

O objetivo principal deste trabalho é abordar uma conceituação da poesia de argumento ou versos de argumentação que se produz no son jarocho, especialmente na região de Los Tuxtlas. Além disso, serão analisadas as suas características poéticas e a forma como se desenvolve dentro do ritual festivo. Para tanto, será desenvolvida uma análise tanto de alguns desses poemas contidos em cadernos de poetas, quanto dos depoimentos orais destes, sejam eles coletados em livros ou entrevistas; ou seja, se analisará a poesia em relação ao seu contexto. Encontramos no dialogismo e na temática desta poesia dois elementos fundamentais para compreender a dinâmica de tradicionalização e inovação que se realiza nestes rituais poético-musicais festivos, apesar da componente criativa (por vezes improvisada) que depende dos verseros

Resumen:

La tradición del huapango arribeño que se lleva a cabo en la Sierra Gorda de Querétaro y Guanajuato, y en la Zona Media de San Luis Potosí, se revela en la celebración de rituales festivos de carácter músico-poético, tanto civiles como religiosos, en donde la glosa conjuga la copla y la décima. La tradición se rige bajo la influencia de una serie de normas de carácter consuetudinario y oral, entre las cuales destacan el «reglamento» y el «compromiso». El presente artículo indaga en torno a la naturaleza jurídica, lingüística y literaria de dicha normatividad; su interacción con otras normas de carácter externo a la fiesta; su influencia, tanto en la performance o ritual festivo -especialmente en la topada-, como en la creación poética. A partir de fuentes etnográficas (entrevistas y grabaciones de fiestas) y bibliográficas, el objetivo es dilucidar el papel que juega dicha normatividad en la conservación y transformación de la tradición

Abstract:

The tradition of the huapango arribeño, performed in the Sierra Gorda of Querétaro and Guanajuato and in the Zona Media of San Luis Potosí, is expressed in the celebration of festive rituals of a musical-poetic nature, both civil and religious, in which the glosa combines the copla and the décima. The tradition is governed by a series of norms of a customary and oral nature, among which the «reglamento» (rules) and the «compromiso» (commitment) stand out. This article investigates the legal, linguistic and literary nature of these norms; their interaction with other norms external to the festival; and their influence both on the performance or festive ritual, especially in the topada, and on poetic creation. Drawing on ethnographic sources (interviews and recordings of festivals) and bibliographic sources, the aim is to elucidate the role these norms play in the conservation and transformation of the tradition

Resumen:

Desde el punto de vista estético, los escritores Mauricio Magdaleno y Salvador Novo formaron parte de dos corrientes literarias extremas. Las trayectorias de ambos autores gozan de sorprendentes paralelismos tanto biográficos como en relación con el cultivo de diferentes disciplinas literarias. El debate que se suscitó en México en los años veinte y treinta en torno a la identidad y a la nacionalidad provocó un enfrentamiento feroz entre ambos, que, sin embargo, no impidió un progresivo acercamiento propiciado por la participación política. El artículo muestra el difícil equilibrio entre la toma de posicionamientos ideológicos, la responsabilidad generacional ante la construcción de un Estado moderno y la inquietud artística. Un poema inédito de Salvador Novo a Mauricio Magdaleno y una imagen de ambos velando el cuerpo del padre Ángel María Garibay K. demuestran la cercanía que se profesaron al final de sus vidas

Abstract:

From an aesthetic point of view, the writers Mauricio Magdaleno and Salvador Novo belonged to two opposed literary currents. The careers of both authors show striking parallels, both biographical and in their cultivation of different literary disciplines. The debate that arose in Mexico in the twenties and thirties around identity and nationality caused a fierce confrontation between them, which, however, did not prevent a progressive rapprochement favored by political participation. The article shows the difficult balance between taking ideological positions, the generational responsibility for the construction of a modern State, and artistic concerns. An unpublished poem from Salvador Novo to Mauricio Magdaleno and an image of both keeping vigil over the body of Father Ángel María Garibay K. demonstrate the closeness they professed at the end of their lives

Abstract:

This article examines the ability of recently developed statistical learning procedures, such as random forests or support vector machines, to forecast the first two moments of stock market daily returns. These tools have the advantage of flexibility in the considered nonlinear regression functions, even in the presence of many potential predictors. We consider two cases: where the agent's information set only includes the past of the return series, and where this set includes past values of relevant economic series, such as interest rates, commodity prices or exchange rates. Even though these procedures seem to be of little use for predicting returns, there is real potential for some of them, especially support vector machines, to improve on the standard GARCH(1,1) model's out-of-sample forecasting ability for squared returns. The researcher has to be cautious about the number of predictors employed and about the specific implementation of the procedures, since using many predictors and the default settings of standard computing packages leads to overfitted models and to larger standard errors
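
A condensed sketch of the squared-returns exercise: regress next-day squared returns on their own lags with a support vector machine and evaluate out of sample. The data are simulated, and the near-default hyperparameters are exactly the kind of setting the article warns can overfit; a serious comparison would also fit a GARCH(1,1) benchmark.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(1500)   # simulated daily returns
r2 = r**2
lags = 5

# Predict r2[t] from its previous `lags` values.
X = np.column_stack([r2[i:len(r2) - lags + i] for i in range(lags)])
y = r2[lags:]
split = 1000

model = SVR(C=1.0, epsilon=1e-5).fit(X[:split], y[:split])
print("out-of-sample MSE:",
      mean_squared_error(y[split:], model.predict(X[split:])))
```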

Abstract:

Participating in regular physical activity (PA) can help people maintain a healthy weight, and it reduces their risks of developing cardiovascular diseases and diabetes. Unfortunately, PA declines during early adolescence, particularly in minority populations. This paper explores design requirements for mobile PA-based games to motivate Hispanic teenagers to exercise. We found that some personality traits are significantly correlated to preference for specific motivational phrases and that personality affects game preference. Our qualitative analysis shows that different body weights affect beliefs about PA and games. Design requirements identified from this study include multi-player capabilities, socializing, appropriate challenge level, and variety

Abstract:

To achieve accurate tracking control of robot manipulators, many schemes have been proposed. Some common approaches are based on robust and adaptive control techniques, with velocity observers employed when necessary. Robust techniques have the advantage of requiring little prior information about the robot model parameters/structure or disturbances, while tracking can still be achieved, for instance, by using sliding mode control. By contrast, adaptive techniques guarantee trajectory tracking under the assumption that the robot model structure is perfectly known and is linear in the unknown parameters, and that joint velocities are available. In this letter, some experiments are carried out to find out whether combining a robust and an adaptive controller may increase the performance of the system, as long as the adaptive term can be treated as a perturbation by the robust controller. The results are compared with an adaptive robust control law, showing that the proposed combined scheme performs better than the separate algorithms working on their own, as well as the comparison law
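
A toy version of the combination discussed above on a single-link pendulum with an unknown gravity parameter: the sliding-mode term supplies robustness while an adaptive estimate of the parameter is updated online. The plant, gains and trajectory are illustrative assumptions, not the experimental setup of the letter.

```python
import numpy as np

# Toy plant: q'' = u - theta*sin(q), with theta unknown to the controller.
dt, T = 1e-3, 10.0
lam, K, k_r, gamma = 5.0, 10.0, 0.5, 20.0  # assumed gains
theta = 9.81                                # true parameter
q = v = 0.0
theta_hat = 0.0                             # adaptive estimate

for k in range(int(T / dt)):
    t = k * dt
    qd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    e, de = q - qd, v - vd
    s = de + lam * e                                # sliding variable
    # Adaptive feedforward + linear feedback + robust sliding-mode term
    u = ad - lam * de + theta_hat * np.sin(q) - K * s - k_r * np.sign(s)
    theta_hat += dt * (-gamma * s * np.sin(q))      # adaptation law
    q, v = q + dt * v, v + dt * (u - theta * np.sin(q))

print(f"tracking error: {abs(q - np.sin(T)):.1e}, theta_hat: {theta_hat:.2f}")
```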

Abstract:

To achieve accurate tracking control of robot manipulators, many schemes have been proposed. A common approach is based on adaptive control techniques, which guarantee trajectory tracking under the assumption that the robot model structure is perfectly known and linear in the unknown parameters, while joint velocities are available. Although tracking errors tend to zero, parameter errors do not, unless some persistent excitation condition is fulfilled. There are few works dealing with velocity observation in conjunction with adaptive laws. In this note, an adaptive control/observer scheme is proposed for position tracking of robot manipulators. It is shown that tracking and observation errors are ultimately bounded, with the characteristic that when a persistent excitation condition is satisfied, they, as well as the parameter errors, tend to zero. Simulation results are in good agreement with the developed theory

Abstract:

This study establishes that morbidity has a mediating effect between intimate partner violence against women and labor productivity, in terms of absenteeism and presenteeism. Partial least squares structural equation modeling (PLS-SEM) was used on a nationwide representative sample of 357 female owners of micro-firms in Peru. The resulting data reveal that morbidity is a mediating variable between intimate partner violence against women and absenteeism (β=0.213; p<.001), as well as between intimate partner violence against women and presenteeism (β=0.336; p<.001). This finding allows us to understand how such intimate partner violence against women negatively affects workplace productivity in the context of a micro-enterprise, a key element in many economies across the world

Abstract:

The purpose of this paper is to determine the prevalence of economic violence against women, specifically in formal sector micro-firms managed by women in Peru, a key Latin American emerging market. Additionally, the authors have identified the demographic characteristics of the micro-firms, financing and credit associated with women who suffer economic violence. Design/methodology/approach. In this study, a structured questionnaire was administered to a representative sample nationwide (357 female micro-entrepreneurs). Findings. The authors found that 22.2 percent of female micro-entrepreneurs have been affected by economic violence at some point in their lives, while at the same time 25 percent of respondents have been forced by their partner to obtain credit against their will. Lower education level, living with one’s partner, having children, business location in the home, lower income, not having access to credit, not applying credit to working capital needs, late payments and being forced to obtain credit against one’s will were all factors associated with economic violence. Furthermore, the results showed a significant correlation between suffering economic violence and being a victim of other types of violence (including psychological, physical or sexual); the highest correlation was with serious physical violence (r=0.523, p<0.01)

Abstract:

A standard technique for producing monogenic functions is to apply the adjoint quaternionic Fueter operator to harmonic functions. We will show that this technique does not give a complete system in L2 of a solid torus, where toroidal harmonics appear in a natural way. One reason is that this index-increasing operator fails to produce monogenic functions with zero index. Another reason is that the non-trivial topology of the torus requires taking into account a cohomology coefficient associated with monogenic functions, apparently not previously identified because it vanishes for simply connected domains. In this paper, we build a reverse-Appell basis of harmonic functions on the torus expressed in terms of classical toroidal harmonics. This means that the partial derivative of any element of the basis with respect to the axial variable is a constant multiple of another basis element with subindex increased by one. This special basis is used to construct respective bases in the real L2-Hilbert spaces of reduced quaternion and quaternion-valued monogenic functions on toroidal domains

Abstract:

The Fourier method approach to the Neumann problem for the Laplacian operator in the case of a solid torus contrasts in many respects with the much more straightforward situation of a ball in 3-space. Although the Dirichlet-to-Neumann map can be readily expressed in terms of series expansions with toroidal harmonics, we show that the resulting equations contain undetermined parameters which cannot be calculated algebraically. A method for rapidly computing numerical solutions of the Neumann problem is presented with numerical illustrations. The results for interior and exterior domains combine to provide a solution for the Neumann problem for the case of a shell between two tori

Abstract:

The paper addresses the issues raised by the simultaneity between the supply function and the domestic and foreign demand for exportables, analysing the microeconomic foundations of the simultaneous price and output decisions of a firm which operates in the exportables sector of an open economy facing a domestic and a foreign demand for its output. A specific characteristic of the model is that it allows for the possibility of price discrimination, which is suggested by the observed divergences in the behaviour of domestic and export prices. The framework developed is used to investigate the recent behaviour of prices and output in two industries of the German manufacturing sector

Abstract:

We introduce a new method for building models of CH, together with Π2 statements over H(ω2), by forcing. Unlike other forcing constructions in the literature, our construction adds new reals, although only ℵ1-many of them. Using this approach, we build a model in which a very strong form of the negation of Club Guessing at ω1 known as Measuring holds together with CH, thereby answering a well-known question of Moore. This construction can be described as a finite-support weak forcing iteration with side conditions consisting of suitable graphs of sets of models with markers. The CH-preservation is accomplished through the imposition of copying constraints on the information carried by the condition, as dictated by the edges in the graph

Abstract:

Measuring says that for every sequence (Cδ)δ<ω1 with each Cδ being a closed subset of δ there is a club C ⊆ ω1 such that for every δ ∈ C, a tail of C ∩ δ is either contained in or disjoint from Cδ. We answer a question of Justin Moore by building a forcing extension satisfying Measuring together with 2^ℵ0 > ℵ2. The construction works over any model of ZFC + CH and can be described as a finite support forcing iteration with systems of countable structures as side conditions and with symmetry constraints imposed on its initial segments. One interesting feature of this iteration is that it adds dominating functions f : ω1 −→ ω1 modulo countable at each stage
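In symbols, the principle just described reads as follows (notation as in the abstract):

\textbf{Measuring:} for every sequence \langle C_\delta : \delta < \omega_1 \rangle with each C_\delta \subseteq \delta closed in \delta, there is a club C \subseteq \omega_1 such that for every \delta \in C there is some \alpha < \delta with either (C \cap \delta) \setminus \alpha \subseteq C_\delta or ((C \cap \delta) \setminus \alpha) \cap C_\delta = \emptyset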

Abstract:

We separate various weak forms of Club Guessing at ω1 in the presence of 2^ℵ0 large, Martin's Axiom, and related forcing axioms. We also answer a question of Abraham and Cummings concerning the consistency of the failure of a certain polychromatic Ramsey statement together with the continuum large. All these models are generic extensions via finite support iterations with symmetric systems of structures as side conditions, possibly enhanced with ω-sequences of predicates, and in which the iterands are taken from a relatively small class of forcing notions. We also prove that the natural forcing for adding a large symmetric system of structures (the first member in all our iterations) adds ℵ1-many reals but preserves CH

Resumen:

La estructura de À la recherche du temps perdu se explica a partir de la experiencia psíquica del tiempo del protagonista de la novela, según la concepción proustiana de la temporalidad a la luz del pensamiento de San Agustín, Henri Bergson y Edmund Husserl

Abstract:

The structure of À la recherche du temps perdu is explained through the psychic experience of time of the protagonist of the novel, according to the Proustian conception of temporality in light of the thought of Saint Augustine, Henri Bergson and Edmund Husserl

Resumen:

La actividad onírica ocupa un lugar preponderante en la antropología filosófica de María Zambrano. Para la pensadora andaluza, soñar es una facultad cognitiva de la que depende, en último análisis, el desarrollo anímico y la salud emocional de la persona humana. Esta nota muestra la actualidad del pensamiento zambraniano sobre los sueños a la luz de algunas tesis de la neurociencia contemporánea

Abstract:

Oneiric activity occupies a preponderant place in the philosophical anthropology of María Zambrano. For the Andalusian thinker, dreaming is a cognitive faculty on which, in the final analysis, the psychic development and emotional health of the human person depend. This note shows the continuing relevance of Zambrano's thought on dreams in light of some theses of contemporary neuroscience

Resumen:

Utilizando como motivo conductor la idea de que la biblioteca de un académico refleja su ethos y su cosmovisión, este texto epidíctico celebra la trayectoria intelectual de Nora Pasternac, profesora del Departamento Académico de Lenguas del ITAM

Abstract:

Using as a driving motif the idea that an academic’s library reflects his ethos and worldview, this epideictic text celebrates the intellectual trajectory of Nora Pasternac, professor of the Academic Department of Languages at ITAM

Resumen:

En el texto se comentan algunos pasajes de tres novelas y un cuento de Ignacio Padilla a la luz de la Monadología, de G. W. Leibniz, y del Manuscrito encontrado en Zaragoza, de Jan Potocki, con el propósito de mostrar el uso de la construcción en abismo como procedimiento narrativo en la obra de este escritor mexicano

Abstract:

The text discusses some passages of three novels and a short story by Ignacio Padilla in light of Monadologie by G.W. Leibniz and the Manuscript Found in Saragossa by Jan Potocki, with the purpose of showing the use of mise en abyme as a narrative technique in the work of this Mexican writer

Resumen:

Utilizando como principio explicativo la noción de distancia fenomenológica, en el ensayo se exponen las formas de relación entre lengua fuente y lengua meta, y entre texto fuente y texto meta, en la teoría de la traducción de Walter Benjamin

Abstract:

Using the notion of phenomenological distance as an explanatory principle, we will explore in this article the relationship between source language and target language and between source text and target text in Walter Benjamin's translation theory

Resumen:

En la antropología filosófica de María Zambrano, dormir y despertar no son meros actos empíricos del vivir cotidiano, sino operaciones egológicas formales involucradas en el proceso de autoconstitución del sujeto. En este artículo se describen poniendo en diálogo el libro zambraniano Los sueños y el tiempo con la noción de ipseidad, tal como la entienden Paul Ricoeur, Edmund Husserl, Hans Blumenberg y Michel Henry

Abstract:

In María Zambrano’s philosophical anthropology, sleeping and awakening are not mere empirical acts of daily life, but formal egological operations involved in the subject’s self-constitution. In this article, we describe them by bringing Zambrano’s book Los sueños y el tiempo into dialogue with the notion of selfhood as understood by Paul Ricoeur, Edmund Husserl, Hans Blumenberg, and Michel Henry

Abstract:

The homotopy classification problem for complete intersections is settled when the complex dimension is larger than the total degree

Abstract:

A rigidity theorem is proved for principal Eschenburg spaces of positive sectional curvature. It is shown that for a very large class of such spaces the homotopy type determines the diffeomorphism type

Abstract:

We address the problem of parallelizability and stable parallelizability of a family of manifolds that are obtained as quotients of circle actions on complex Stiefel manifolds. We settle the question in all cases but one, and obtain in the remaining case a partial result

Abstract:

The cohomology algebra mod p of the complex projective Stiefel manifolds is determined for all primes p. When p = 2 we also determine the action of the Steenrod algebra and apply this to the problem of existence of trivial subbundles of multiples of the canonical line bundle over a lens space with 2-torsion, obtaining optimal results in many cases

Abstract:

The machinery of M. Kreck and S. Stolz is used to obtain a homeomorphism and diffeomorphism classification of a family of Eschenburg spaces. In contrast with the family of Wallach spaces studied by Kreck and Stolz we obtain abundant examples of homeomorphic but not diffeomorphic Eschenburg spaces. The problem of stable parallelizability of Eschenburg spaces is discussed in an appendix

Abstract:

In this paper, we introduce the notion of a linked domain and prove that a non-manipulable social choice function defined on such a domain must be dictatorial. This result not only generalizes the Gibbard-Satterthwaite Theorem but also demonstrates that the equivalence between dictatorship and non-manipulability is far more robust than suggested by that theorem. We provide an application of this result in a particular model of voting. We also provide a necessary condition for a domain to be dictatorial and use it to characterize dictatorial domains in the cases where the number of alternatives is three

Abstract:

We study entry and bidding patterns in sealed bid and open auctions. Using data from the U.S. Forest Service timber auctions, we document a set of systematic effects: sealed bid auctions attract more small bidders, shift the allocation toward these bidders, and can also generate higher revenue. A private value auction model with endogenous participation can account for these qualitative effects of auction format. We estimate the model’s parameters and show that it can explain the quantitative effects as well. We then use the model to assess bidder competitiveness, which has important consequences for auction design

Abstract:

The role of domestic courts in the application of international law is one of the most vividly debated issues in contemporary international legal doctrine. However, the methodology of interpretation of international norms used by these courts remains underexplored. In particular, the application of the Vienna rules of treaty interpretation by domestic courts has not been sufficiently assessed so far. Three case studies (from the US Supreme Court, the Mexican Supreme Court, and the European Court of Justice) show the diversity of approaches in this respect. In the light of these case studies, the article explores the inevitable tensions between two opposite, yet equally legitimate, normative expectations: the desirability of a common, predictable methodology versus the need for flexibility in adapting international norms to a plurality of domestic environments

Abstract:

Christensen, Baumann, Ruggles, and Sadtler (2006) proposed that organizations addressing social problems may use catalytic innovation as a strategy to create social change. These innovations aim to create scalable, sustainable, and systems-changing solutions. This empirical study examines: (a) whether catalytic innovation applies to Mexican social entrepreneurship; (b) whether those who adopt Christensen et al.’s (2006) strategy generate more social impact; and (c) whether they demonstrate economic success. We performed a survey of 219 Mexican social entrepreneurs and found that catalytic innovation does occur within social entrepreneurship, and that those social entrepreneurs who use catalytic innovations not only maximize their social impact but also maximize their profits, and that they do so with diminishing returns to scale

Résumé:

Christensen, Baumann, Ruggles et Sadtler (2006) proposent que les organisations qui s'occupent de problèmes sociaux peuvent utiliser l'innovation catalytique comme une stratégie visant à créer un changement social. Ces innovations cherchent à créer des solutions évolutives, durables, et qui changent le système. Cette étude empirique examine : (a) si l'innovation catalytique peut s'appliquer dans le domaine de l'entrepreneuriat social mexicain ; (b) si les entrepreneurs qui adoptent la stratégie de Christensen et al. (2006) génèrent un plus grand impact social ; et (c) s'ils connaissent un succès économique. Nous avons effectué un sondage auprès de 219 entrepreneurs sociaux mexicains et avons constaté que l'innovation catalytique se produit au sein des entreprenariats sociaux, et que les entrepreneurs sociaux qui utilisent des innovations catalytiques maximisent non seulement l'impact social, mais aussi leurs profits, et qu'ils le font avec des rendements d'échelle décroissants

Abstract:

We study pattern formation in a 2D reaction-diffusion (RD) subcellular model characterizing the effect of a spatial gradient of a plant hormone distribution on a family of G-proteins associated with root hair (RH) initiation in plant cells of Arabidopsis thaliana. The activation of these G-proteins, known as the Rho of Plants (ROPs), by the plant hormone auxin is known to promote certain protuberances on RH cells, which are crucial for both anchorage and the uptake of nutrients from the soil. Our mathematical model for the activation of ROPs by the auxin gradient is an extension of the model of Payne and Grierson [PLoS ONE, 4 (2009), e8337] and consists of a two-component Schnakenberg-type RD system with spatially heterogeneous coefficients on a 2D domain. The nonlinear kinetics in this RD system model the nonlinear interactions between the active and inactive forms of ROPs. By using a singular perturbation analysis to study 2D localized spatial patterns of active ROPs, it is shown that the spatial variations in the nonlinear reaction kinetics, due to the auxin gradient, lead to a slow spatial alignment of the localized regions of active ROPs along the longitudinal midline of the plant cell. Numerical bifurcation analysis together with time-dependent numerical simulations of the RD system are used to illustrate both 2D localized patterns in the model and the spatial alignment of localized structures
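As a rough guide to the structure involved, a generic two-component Schnakenberg-type system with a spatially heterogeneous coefficient takes the form below; the kinetics and the auxin-gradient coefficient \alpha(x) are illustrative stand-ins, not the paper's exact model:

u_t = \varepsilon^{2}\Delta u - u + \alpha(x)\,u^{2}v, \qquad \tau v_t = D\,\Delta v + \beta - \alpha(x)\,u^{2}v,

posed on a 2D domain with homogeneous Neumann boundary conditions, where u and v stand for the active and inactive ROP densities and the monotone factor \alpha(x) tilts the kinetics along the auxin gradient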

Abstract:

We aimed to make a theoretical contribution to the happy-productive worker thesis by expanding the study to cases where this thesis does not fit. We hypothesized and corroborated the existence of four relations between job satisfaction and innovative performance: (a) unhappy-unproductive, (b) unhappy-productive, (c) happy-unproductive, and (d) happy-productive. We also aimed to contribute to the happy-productive worker thesis by studying some conditions that influence and differentiate among the four patterns. Hypotheses were tested in a sample of 513 young employees representative of Spain. Cluster analysis and discriminant analysis were performed. We identified the four patterns. Almost 15% of the employees had a pattern largely ignored by previous studies (the unhappy-productive). As hypothesized, to promote well-being and performance among young employees, it is necessary to fulfill the psychological contract, encourage initiative, and promote job self-efficacy. We also confirmed that over-qualification characterizes the unhappy-productive pattern, but we failed to confirm that high job self-efficacy characterizes the happy-productive pattern. The results show the relevance of personal and organizational factors in studying the well-being-performance link in young employees

Abstract:

Conventional wisdom suggests that promising free information to an agent would crowd out costly information acquisition. We theoretically demonstrate that this intuition only holds as a knife-edge case in which priors are symmetric. Indeed, when priors are asymmetric, a promise of free information in the future induces agents to increase information acquisition. In the lab, we test whether such crowding out occurs for both symmetric and asymmetric priors. Our results are qualitatively in line with the predictions: When priors are asymmetric, the promise of future free information induces subjects to acquire more costly information

Abstract:

Region-of-Interest (ROI) tomography aims at reconstructing a region of interest C inside a body using only x-ray projections intersecting C, and it is useful to reduce overall radiation exposure when only a small specific region of a body needs to be examined. We consider x-ray acquisition from sources located on a smooth curve Γ in R3 verifying the classical Tuy condition. In this generic situation, the non-truncated cone-beam transform of smooth density functions f admits an explicit inverse Z, as originally shown by Grangeat. However, Z cannot directly reconstruct f from ROI-truncated projections. To deal with the ROI tomography problem, we introduce a novel reconstruction approach. For densities f in L∞(B), where B is a bounded ball in R3, our method iterates an operator U combining ROI-truncated projections, inversion by the operator Z, and appropriate regularization operators. Assuming only knowledge of projections corresponding to a spherical ROI C ⊂ B, given ɛ > 0, we prove that if C is sufficiently large our iterative reconstruction algorithm converges at exponential speed to an ɛ-accurate approximation of f in L∞. The accuracy depends on the regularity of f quantified by its Sobolev norm in W^5(B). Our result guarantees the existence of a critical ROI radius ensuring the convergence of our ROI reconstruction algorithm to an ɛ-accurate approximation of f. We have numerically verified these theoretical results using simulated acquisition of ROI-truncated cone-beam projection data for multiple acquisition geometries. Numerical experiments indicate that the critical ROI radius is fairly small with respect to the support region B
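Schematically, and with the abstract's notation (the constants and the precise estimate below are simplifications, not the paper's statement), the scheme is a fixed-point iteration

f_{k+1} = U f_k, \qquad \| f_k - f \|_{L^\infty(B)} \le C \rho^{k}\, \| f \|_{W^{5}(B)} \quad \text{for some } \rho \in (0,1),

so that f_k is an ɛ-accurate approximation of f as soon as C\rho^{k}\|f\|_{W^{5}(B)} \le ɛ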

Resumen:

El Tratado de Libre Comercio entre la UE y México (tlcuem) entró en vigor en el año 2000, constituyéndose en uno de los acuerdos más importantes del comercio transatlántico. El objetivo de este trabajo es analizar los resultados del acuerdo en materia de comercio entre los países socios al cabo de una década, e identificar los principales determinantes económicos. Se estima un modelo de gravedad para una muestra de 60 países durante el periodo 1994-2011. Los resultados indican que dicho tratado ha sido relevante en la intensificación de las relaciones comerciales entre ambos socios

Abstract:

The Free Trade Agreement between the European Union and Mexico (eumfta) entered into force in 2000, becoming one of the most important transatlantic trade agreements. The goal of this research is to analyze the results of this agreement in terms of trade between the partner countries a decade on, and to identify its main economic determinants. A gravity model is estimated for a sample of 60 countries over the period 1994-2011. The results indicate that the agreement has given rise to an increase in the bilateral trade flows between these two commercial partners
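For reference, a typical gravity specification of the kind estimated in such studies takes the form below; the exact covariates and fixed effects used in the paper may differ:

\ln X_{ijt} = \beta_0 + \beta_1 \ln \mathrm{GDP}_{it} + \beta_2 \ln \mathrm{GDP}_{jt} + \beta_3 \ln \mathrm{dist}_{ij} + \beta_4\, \mathrm{FTA}_{ijt} + \varepsilon_{ijt},

where X_{ijt} denotes bilateral trade flows from country i to country j in year t and FTA_{ijt} indicates whether a trade agreement (here, the EU-Mexico agreement) is in force between the pair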

Abstract:

We study an at-scale natural experiment in which debit cards were given to cash transfer recipients who already had a bank account. Using administrative account data and household surveys, we find that beneficiaries accumulated a savings stock equal to 2% of annual income after two years with the card. The increase in formal savings represents an increase in overall savings, financed by a reduction in current consumption. There are two mechanisms. First, debit cards reduce transaction costs of accessing money. Second, they reduce monitoring costs, which led beneficiaries to check their account balances frequently and build trust in the bank

Abstract:

Transaction costs are a significant barrier to the take-up and use of formal financial services. Account opening fees and minimum balance requirements prevent the poor from opening bank accounts (Dupas and Robinson 2013), and small subsidies can lead to large increases in take-up (Cole, Sampson, and Zia 2011). Indirect transaction costs, such as travel time, are also a barrier: the distance to the nearest bank or mobile money agent is a key predictor of take-up of savings accounts (Dupas et al. forthcoming) and mobile money (Jack and Suri 2014). In turn, increased access to financial services can reduce poverty and increase welfare (Burgess and Pande 2005; Suri and Jack 2016). Digital financial services, such as ATMs, debit cards, mobile money, and digital credit, have the potential to reduce transaction costs. However, existing studies rarely measure indirect transaction costs. We provide evidence on how a specific technology, a debit card, lowers indirect transaction costs by reducing travel distance and foregone activities. We study a natural experiment in which debit cards tied to existing savings accounts were rolled out geographically over time to beneficiaries of the Mexican cash transfer program Oportunidades. Prior to receiving debit cards, beneficiaries received transfers directly into a savings account every two months. After receiving cards, beneficiaries continue to receive their benefits in the savings account, but can access their transfers and savings at any bank’s ATM. They can also check their balances at any bank’s ATM or use the card to make purchases at point of sale terminals. We find that debit cards reduce the median road distance to access the account from 4.8 to 1.3 kilometers (km). As a result, the proportion of beneficiaries who walk to withdraw the transfer payments increases by 59 percent. Furthermore, prior to receiving debit cards, 84 percent of beneficiari

Abstract:

We study a natural experiment in which debit cards are rolled out to beneficiaries of a cash transfer program, who already received transfers directly deposited into a savings account. Using administrative account data and household surveys, we find that before receiving debit cards, few beneficiaries used the accounts to make more than one withdrawal per period, or to save. With cards, beneficiaries increase their number of withdrawals and check their balances frequently; the number of checks decreases over time as their reported trust in the bank and savings increase. Their overall savings rate increases by 3–4 percent of household income

Abstract:

Two career-concerned experts sequentially give advice to a Bayesian decision maker (D). We find that secrecy dominates transparency, yielding superior decisions for D. Secrecy empowers the expert moving late to be pivotal more often. Further, (i) only secrecy enables the second expert to partially communicate her information and its high precision to D and swing the decision away from the first expert's recommendation; (ii) if experts have high average precision, then the second expert is effective only under secrecy. These results are obtained when experts only recommend decisions. If they also report the quality of advice, a fully revealing equilibrium may exist

Abstract:

In recent years, more and more countries have included different kinds of gender considerations in their trade agreements. Yet many countries have still not signed their very first agreement with a gender equality-related provision. Though most of the agreements negotiated by countries in the Asia-Pacific region have not explicitly accommodated gender concerns, a limited number of trade agreements signed by countries in the region have presented a distinct approach: the nature of provisions, drafting style, location in the agreements, and topic coverage of such provisions contrast with the gender-mainstreaming approach employed by the Americas or other regions. This chapter provides a comprehensive account and assessment of gender-related provisions included in the existing trade agreements negotiated by countries in the Asia-Pacific, explains the extent to which gender concerns are mainstreamed in these agreements, and summarizes the factors that impede such mainstreaming efforts in the region

Abstract:

The most common provisions we find in almost all multilateral, regional and bilateral trade agreements are the exception clauses that allow countries to protect public morals, humans, animals or plant health and life and conserve exhaustible natural resources. If countries can allow trade-restrictive measures that aim to protect these non-economic interests, is it possible to negotiate a specific exception to justify measures that are aimed at protecting women's economic interests as well? Is the removal of barriers that impede women's participation in trade any less important than the conservation of exhaustible natural resources such as sea turtles or dolphins? In that context, this article prepares a case for the inclusion of a specific exception that can allow countries to leverage women's economic empowerment through international trade agreements. This is done after carrying out an objective assessment of whether a respondent could seek protection under the existing public morality exception to justify a measure that is taken to protect women's economic interests

Abstract:

Mexico has by far the world's highest death rate linked to obesity and other chronic diseases. As a response to the growing pandemic of obesity, Mexico has adopted a new compulsory front-of-pack labeling regulation for pre-packaged foods and nonalcoholic beverages. This article provides an assessment of the regulation's consistency with international trade law and the arguments that might be invoked by either side in a hypothetical trade dispute on this matter

Abstract:

In the past few months, we have witnessed the 'worst deal' in the history of the USA become the 'best deal' in the history of the USA. The negotiation leading to the United States-Mexico-Canada Agreement (USMCA) appeared as an 'asymmetrical exchange' scenario that could have led to an unbalanced outcome for Mexico. However, Mexico stood firm on its positions and negotiated a modernized version of the North American Free Trade Agreement. Mexico faced various challenges during this renegotiation, not only because it was required to negotiate with two developed countries but also due to the high level of ambition and demands raised by the new US administration. This paper provides an account of these impediments. More importantly, it analyzes the strategies that Mexico used to overcome the resource constraints it faced amidst the unpredictable political dilemma in the US and at home. In this manner, this paper seeks to provide a blueprint of strategies that other developing countries could employ to overcome their negotiation capacity constraints, especially when they are dealing with developed countries and in uncertain political environments

Abstract:

Health pandemics affect women and men differently, and they can make the existing gender inequalities much worse. COVID-19 is one such pandemic, which can have substantial gendered implications both during and in the post-pandemic world. Its economic and social consequences could deepen the existing gender inequalities and roll back the limited gains made in respect of women empowerment in the past few decades. The impending global recession, multiple trade restrictions, economic lockdown, and social distancing measures can expose vulnerabilities in social, political, and economic systems, which, in turn, could have a profound impact on women’s participation in trade and commerce. The article outlines five main reasons that explain why this health pandemic has put women employees, entrepreneurs, and consumers at the frontline of the struggle. It then explores how free trade agreements can contribute in repairing the harm in the post-pandemic world. In doing so, the author sheds light on various ways in which the existing trade agreements embrace gender equality considerations and how they can be better prepared to help minimize the pandemic-inflicted economic loss to women

Abstract:

The World Trade Organization (WTO) Dispute Settlement System (DSS) is in peril. The Appellate Body (AB) is being held 'hostage' by the very architect and the most frequent user of the WTO DSS, the United States of America. This will bring the whole DSS to a standstill, as the inability of the AB to review appeals will have a kill-off effect on the binding value of Panel rulings. If the most celebrated DSS collapses, the members would not be able to enforce their WTO rights. WTO-inconsistent practices and violations would increase and remain unchallenged. Rights without remedies would soon lose their charm, and we might witness a higher and faster drift away from multilateral trade regulation. This is a grave situation. This piece is an academic attempt to analyse and defuse the key points of criticism against the AB. A comprehensive assessment of the reasons behind this criticism could be a starting point to resolve this gridlock. The first part of this Article investigates the reasons and motivations of the US behind these actions, as we cannot address the problems without understanding them in a comprehensive manner. The second part looks at this issue from a systemic angle as it seeks to address the debate on whether the WTO resembles common or civil law, as most of the criticism directed towards judicial activism and overreach is 'much ado about nothing'. The concluding part of this piece briefly looks at the proposals already made by scholars to resolve this deadlock, and it leaves the readers with a fresh proposal to deliberate upon

Abstract:

In recent years, we have witnessed a sharp increase in the number of free trade agreements (FTAs) with gender-related provisions. The key champions of this evolution include Canada, Chile, New Zealand, Australia and Uruguay. These countries have proposed a new paradigm, i.e. a paradigm where FTAs are considered vehicles to achieving the economic empowerment of women. This trend is spreading like wildfire to other parts of the world. More and more countries are expressing their interest in ensuring that their FTAs are gender-responsive and not simply gender-neutral or gender-blind in nature. The momentum is on, and we can expect many more agreements in the future to include stand-alone chapters or exclusive provisions on gender issues. This article is an attempt to tap into this ongoing momentum, as it puts forward a newly designed self-evaluation maturity framework to measure the gender-responsiveness of trade agreements. The proposed framework is to help policy-makers and negotiators to: (1) measure the gender-responsiveness of trade agreements; (2) identify areas where agreements need critical improvements; and (3) receive recommendations to improve the gender-fabric of trade agreements that they are negotiating or have already negotiated. This is the first academic intervention presenting this type of gender-responsiveness model for trade agreements

Abstract:

Purpose - The World Trade Organisation grants rights to its members, and the WTO Dispute Settlement Understanding (DSU) provides a rule-oriented consultative and judicial mechanism to protect these rights in cases of WTO-incompatible trade infringements. However, the DSU participation benefits come at a cost. These costs are acutely formidable for least developed countries (LDCs), which have small market size and trading stakes. No LDC has ever filed a WTO complaint, with the only exception of the India-Battery dispute filed by Bangladesh against India. This paper aims to look at the experience of how Bangladesh – so far the only LDC member that has filed a formal WTO complaint – persuaded India to withdraw anti-dumping duties India had imposed on the import of acid battery from Bangladesh. Design/methodology/approach - The investigation is grounded on practically informed findings gathered through the authors’ work experience and several semi-structured interviews and discussions which the authors have conducted with government representatives from Bangladesh, government and industry representatives from other developing countries, trade lawyers and officials based in Geneva and Brussels, and civil society organisations. Findings - The discussion provides a sound indication of the participation impediments that LDCs can face at the WTO DSU and the ways in which such challenges can be overcome with the help of resources available at the domestic level. It also exemplifies how domestic laws and practices can respond to international legal instruments and impact the performance of an LDC at an international adjudicatory forum. Originality/value - Except for one book chapter and a working paper, there is no literature available on this matter. This investigation is grounded on practically informed findings gathered with the help of original empirical research conducted by the authors

Abstract:

Mexico has employed special methodologies for price-determination and calculation of dumping margins against Chinese imports in almost all anti-dumping investigations. This chapter attempts to explain and analyze the NME-specific procedures employed by Mexican authorities in anti-dumping proceedings against China. It also clarifies the Mexican standpoint on the controversial issue of how the expiry of section 15(a)(ii) of China's Accession Protocol to the WTO impacts the surviving parts of Section 15 of the Protocol, and whether Mexico has changed its treatment of Chinese imports following the expiry of Section 15(a)(ii) post 12 December 2016

Abstract:

Multiple scholarly works have argued that developing country members of the World Trade Organization (WTO) should enhance their dispute settlement capacity to successfully and cost-effectively navigate the system of the WTO Dispute Settlement Understanding (DSU). It is one thing to be a part of WTO agreements and know the WTO rules, and another to know how to use and take advantage of those agreements and rules in practice. The present investigation seeks to conduct a detailed examination of the latter, with a specific focus on critically examining public private partnership (PPP) strategies that can enable developing countries to effectively utilize the provisions of the WTO DSU. To achieve this purpose, the article examines how Brazil, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries in Brazil may prompt other developing countries to determine their individual approach towards PPP for the handling of WTO disputes

Abstract:

The World Trade Organisation Dispute Settlement Understanding (WTO DSU) is a two-tier mechanism. The first tier is international adjudication and the second tier is domestic handling of trade disputes. Both tiers are interdependent and interconnected. A case that is poorly handled at the domestic level generally stands a relatively lower chance of success at the international level, and hence, the future of WTO litigation is partially predetermined by the manner in which it is handled at the domestic level. Moreover, most of the capacity-related challenges faced by developing countries at the WTO DSU are deeply rooted in the domestic context of these countries, and their solutions can best be found at the domestic level. The present empirical investigation seeks to explore a domestic solution to the capacity-related challenges faced mainly by developing countries, as it examines the model of public private partnership (PPP). In particular, the article examines how India, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries, along with an analysis of the challenges and potential limitations that such partnerships have faced in India, may prompt other developing countries to review or revise their individual approach towards the future handling of WTO disputes

Abstract:

With the advent of globalization and industrialization, the significance of the WTO DSU as an international institution of trade dispute governance has expanded tremendously, as a landmark achievement of the Uruguay Round negotiations. Exploring whether the 'pendulum' of the DSU is tilted towards developed economies, much to the disadvantage of the developing world, it becomes imperative to devise a strategy within the existing framework to rebalance this tilted pendulum. With the WTO recognized as an area of public international law that is expanding its routes into the sphere of private international law, the approach of public private partnership can be efficiently designed to help developing countries overcome their challenges in using the WTO DSU, among the other approaches suggested by various experts. This study aims at exploring ways in which this partnership can be devised and implemented in the context of developing countries, and at analyzing the limits developing countries face in implementing this strategy

Resumen:

Las respuestas más comunes al escalamiento percibido del crimen con violencia a través de la mayor parte de América Latina son el aumento del tamaño y los poderes de la policía local y -en la mayoría de los casos- incrementar la participación de las fuerzas armadas para confrontar tanto al crimen común como al organizado. En México el debate se ha visto agudizado por la extensa violencia vinculada a los conflictos entre organizaciones de narcotráfico y entre éstas y las fuerzas de seguridad del gobierno, en las cuales el ejército y la marina han desempeñado papeles importantes. Con base en la World Values Survey y datos del Barómetro de las Américas, examinamos tendencias de la confianza pública en la policía, el sistema judicial y las fuerzas armadas en México entre 1990 y 2010. Aquí preguntamos: ¿Está difundido y generalizado a través de la muestra el apoyo público para emplear a los militares como policías? ¿O existen patrones de apoyo y oposición respecto a la opinión pública? Nuestros hallazgos principales fueron: 1) que las fuerzas armadas clasificaron en primer lugar en relación con la confianza, mientras que la confianza en otras instituciones mexicanas tuvo una tendencia negativa entre 2008 y 2010, además la confianza en los militares aumentó ligeramente; 2) los encuestados respondieron que los militares respetan los derechos humanos más que el promedio y sustancialmente más que la policía o el gobierno en general; 3) el apoyo público para los militares en la lucha contra el crimen es fuerte y está distribuido de manera equitativa a través del espectro ideológico y de los grupos sociodemográficos, y 4) los patrones de apoyo surgen con mayor claridad respecto a percepciones, actitudes y juicios de desempeño. A modo de conclusión consideramos algunas de las implicaciones políticas y de política de nuestros hallazgos.

Abstract:

Typical responses to the perceived escalation of violent crime throughout most of Latin America are to increase the size and powers of the regular police and -in most cases- to expand the involvement by the armed forces to confront both common and organized crime. Participation by the armed forces in domestic policing, in turn, has sparked debates in several countries about the serious risks incurred, especially with respect to human rights violations. In Mexico the debate is sharpened by the extensive violence linked to conflicts among drug-trafficking organizations and between these and the government's security forces, in which the Army and Navy have played leading roles. Using World Values Survey and Americas Barometer data, we examine trends in public confidence in the police, justice system, and armed forces in Mexico over 1990-2010. Using Vanderbilt University's 2010 LAPOP survey we compare levels of trust in various social, political, and government actors, locating Mexico in the broader Latin America context. Here we ask: Is public support for using the military as police widespread and generalized across the sample? Or are there patterns of support and opposition with respect to public opinion? Our main findings are that: 1) the armed forces rank at the top regarding trust, and -while trust in other Mexican institutions tended to decline in 2008-2010- trust in the military increased slightly; 2) respondents indicate that the military respects human rights more than the average and substantially more than the police or government generally; 3) public support for the military in fighting crime is strong and distributed evenly across the ideological spectrum and across socio-demographic groups, and 4) patterns of support emerge more clearly with respect to perceptions, attitudes, and performance judgments. By way of conclusion we consider some of the political and policy implications of our findings

Abstract:

We study a class of boundedly rational choice functions which operate as follows. The decision maker uses two criteria in two stages to make a choice. First, she shortlists the top two alternatives, i.e. two finalists, according to one criterion. Next, she chooses the winner in this binary shortlist using the second criterion. The criteria are linear orders that rank the alternatives. Only the winner is observable. We study the behavior exhibited by this choice procedure and provide an axiomatic characterization of it. We leave as an open question the characterization of a generalization to larger shortlists
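The two-stage procedure described above is easy to state operationally; here is a minimal sketch, with criteria represented as rankings (best first) and all names illustrative:

def choose(menu, criterion1, criterion2):
    """Two-stage shortlisting: shortlist the top two alternatives of
    `menu` according to criterion1, then pick the criterion2-best finalist.
    Each criterion is a list ranking all alternatives, best first."""
    shortlist = sorted(menu, key=criterion1.index)[:2]
    return min(shortlist, key=criterion2.index)

# Example: from {a, b, c}, criterion1 shortlists c and a; criterion2 picks a.
print(choose(["a", "b", "c"],
             criterion1=["c", "a", "b"],
             criterion2=["a", "b", "c"]))  # -> "a"

As the abstract notes, only the winner is observable, so the analyst must infer both rankings from choice data alone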

Abstract:

In this study, we analyzed students' understanding of a complex calculus graphing problem. Students were asked to sketch the graph of a function, given its analytic properties (first and second derivatives, limits, and continuity) on specific intervals of the domain. The triad of schema development in the context of APOS theory was utilized to study students' responses. Two dimensions of understanding emerged, one involving properties and the other involving intervals. A student's coordination of the two dimensions is referred to as that student's overall calculus graphing schema. Additionally, a number of conceptual problems were consistently demonstrated by students throughout the study, and these difficulties are discussed in some detail

Abstract:

Array-based comparative genomic hybridization (aCGH) is a high-resolution, high-throughput technique for studying the genetic basis of cancer. The resulting data consist of log fluorescence ratios as a function of the genomic DNA location and provide a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimating the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. We propose a hierarchical Bayesian random segmentation approach for modeling aCGH data that uses information across arrays from a common population to yield segments of shared copy number changes. These changes characterize the underlying population and allow us to compare different population aCGH profiles to assess which regions of the genome have differential alterations. Our method, which we term Bayesian detection of shared aberrations in aCGH (BDSAScgh), is based on a unified Bayesian hierarchical model that allows us to obtain probabilities of alteration states as well as probabilities of differential alterations that correspond to local false discovery rates for both single and multiple groups. We evaluate the operating characteristics of our method via simulations and an application using a lung cancer aCGH data set. This article has supplementary material online
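As a rough schematic of the kind of hierarchical formulation described (illustrative only; the authors' model has additional levels and priors), one can think of

y_{si} \mid z_{si} = j \;\sim\; N(\mu_j, \sigma_j^2), \qquad j \in \{\text{loss}, \text{neutral}, \text{gain}\},

where y_{si} is the log fluorescence ratio of sample s at genomic location i, the hidden state sequences z_{s\cdot} share a common random segmentation across samples, and the posterior probabilities P(z_{si} = j \mid \text{data}) yield the alteration probabilities and local false discovery rates mentioned above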

Abstract:

The quadratic and linear cash flow dispersion measures M² and Ñ are two immunization risk measures designed to build immunized bond portfolios. This paper generalizes these two measures by showing that any dispersion measure is an immunization risk measure and, therefore, it sets up a tool to be used in empirical testing. Each new measure is derived from a different set of shocks (changes on the term structure of interest rates) and depends on the corresponding subset of worst shocks. Consequently, a criterion for choosing appropriate immunization risk measures is to take those developed from the most reasonable sets of shocks and the associated subset of worst shocks and then select those that work best empirically. Adopting this approach, this paper then explores both numerical examples and a short empirical study on the Spanish Bond Market in the mid-1990s to show that measures between linear and quadratic are the most appropriate, and amongst them, the linear measure has the best properties. This confirms previous studies on US and Canadian markets that maturity-constrained-duration-matched portfolios also have good empirical behavior
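For concreteness, the classical quadratic measure is usually written as follows, with D the portfolio duration and w_t the present-value weight of the cash flow C_t at time t; the paper's generalized measures replace the square with other dispersion functions, the linear counterpart being Ñ = \sum_t w_t |t - D|:

M^2 = \sum_{t} w_t (t - D)^2, \qquad w_t = \frac{C_t (1+r)^{-t}}{\sum_s C_s (1+r)^{-s}}, \qquad D = \sum_t w_t\, t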

Abstract:

This paper presents a condition equivalent to the existence of a Riskless Shadow Asset that guarantees a minimum return when the asset prices are convex functions of interest rates or other state variables. We apply this lemma to immunize default-free and option-free coupon bonds and reach three main conclusions. First, we give a solution to an old puzzle: why do simple duration matching portfolios work well in empirical studies of immunization even though they are derived in a model inconsistent with equilibrium and shifts on the term structure of interest rates are not parallel, as assumed? Second, we establish a clear distinction between the concepts of immunized and maxmin portfolios. Third, we develop a framework that includes the main results of this literature as special cases. Next, we present a new strategy of immunization that consists in matching duration and minimizing a new linear dispersion measure of immunization risk

Abstract:

Given modal logics L1 and L2, their lexicographic product L1 x L2 is a new logic whose frames are the Cartesian products of an L1-frame and an L2-frame, but with the new accessibility relations reminiscent of a lexicographic ordering. This article considers the lexicographic products of several modal logics with linear temporal logic (LTL) based on "next" and "always in the future". We provide axiomatizations for logics of the form L x LTL and define cover-simple classes of frames; we then prove that, under fairly general conditions, our axiomatizations are sound and complete whenever the class of L-frames is cover-simple. Finally, we prove completeness for several concrete logics of the form L x LTL
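To illustrate the lexicographic flavor of the accessibility relations, the standard lexicographic combination of two relations is shown below (the article's own definition may differ in detail); given an L1-frame (W, R_1) and an L2-frame (V, R_2), the product carries a relation of the form

(w, v)\, R\, (w', v') \iff w\, R_1\, w' \ \text{or} \ (w = w' \ \text{and} \ v\, R_2\, v'),

so that moves in the first coordinate dominate, exactly as in a lexicographic ordering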

Abstract:

Classic institutionalism claims that even authoritarian and non-democratic regimes would prefer institutions where all members could make advantageous transactions. Thus, structural reform geared towards preventing and combating corruption should be largely preferred by all actors in any given setting. The puzzle, then, is why governments decide to maintain, or even create, inefficient institutions. A perfect example of this paradox is the establishment of the National Anti-corruption System (SNA) in Mexico. This is a watchdog institution, created to fight corruption, which is itself often portrayed as highly corrupted and inefficient. The limited scope of anti-corruption reforms in the country is explained by the institutional setting in which these reforms take place, where political behaviour is highly determined by embedded institutions that privilege centralized decision-making. Mexican reformers have historically privileged those reforms that increase their gains and power, and delayed and boycotted those that negatively affect them. Since anti-corruption reforms adversely affected rent extraction and diminished the power of a set of political actors, the bureaucrats who benefited from the current institutional setting embraced limited reforms or even boycotted them. Thus, to understand failed reforms it is necessary to understand the deep-rooted political institutions that shape the behaviour of political actors. This analysis is important for other modern democracies where powerful bureaucratic minorities are often able to block changes that would be costly to their interests, even if the changes would increase net gains for the country as a whole

Abstract:

In this paper we study the problem of Hamiltonization of nonholonomic systems from a geometric point of view. We use gauge transformations by 2-forms (in the sense of Ševera and Weinstein in Progr Theoret Phys Suppl 144:145–154 2001) to construct different almost Poisson structures describing the same nonholonomic system. In the presence of symmetries, we observe that these almost Poisson structures, although gauge related, may have fundamentally different properties after reduction, and that brackets that Hamiltonize the problem may be found within this family. We illustrate this framework with the example of rigid bodies with generalized rolling constraints, including the Chaplygin sphere rolling problem. We also see through these examples how twisted Poisson brackets appear naturally in nonholonomic mechanics

Abstract:

We propose a novel method for reliably inducing stress in drivers for the purpose of generating real-world participant data for machine learning, using both scripted in-vehicle stressor events and unscripted on-road stressors such as pedestrians and construction zones. On-road drives took place in a vehicle outfitted with an experimental display that led drivers to believe they had prematurely run out of charge on an isolated road. We describe the elicitation method, course design, instrumentation, data collection procedure and the post-hoc labeling of unplanned road events to illustrate how rich data about a variety of stress-related events can be elicited from study participants on-road. We validate this method with data including psychophysiological measurements, video, voice, and GPS data from (N=20) participants. Results from algorithmic psychophysiological stress analysis were validated using participant self-reports. Results of stress elicitation analysis show that our method elicited a stress-state in 89% of participants

Abstract:

Do economic incentives explain forced displacement during conflict? This paper examines this question in Colombia, which has had one of the world's most acute situations of internal displacement associated with conflict. Using data on the price of bananas along with data on historical levels of production, I find that price increases generate more forced displacement in municipalities more suitable to produce this good. However, I also show that this effect is concentrated in the period in which paramilitary power and operations reached an all-time peak. Additional evidence shows that land concentration among the rich has increased substantially in districts that produce these goods. These findings are consistent with extensive qualitative evidence that documents the link between economic interests and local political actors who collude with illegal armed groups to forcibly displace locals and appropriate their land, especially in areas with more informal land tenure systems, like those where bananas are grown more frequently
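The identification strategy described above is, in spirit, a price-times-suitability interaction design; a typical specification of this kind (illustrative; the paper's exact controls and clustering may differ) is

\text{Displacement}_{mt} = \beta \left( \ln P_t \times \text{Suitability}_m \right) + \gamma_m + \delta_t + \varepsilon_{mt},

where \gamma_m and \delta_t are municipality and year fixed effects, so \beta is identified from the differential exposure of banana-suitable municipalities to price movements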

Abstract:

This study was undertaken to explore pre-service teachers' understanding of injections and surjections. There were 54 pre-service teachers specialising in the teaching of Mathematics in Grades 10–12 curriculum who participated in the project. The concepts were covered as part of a real analysis course at a South African university. Questionnaires based on an initial genetic decomposition of the concepts of surjective and injective functions were administered to the 54 participants. Their written responses, which were used to identify the mental constructions of these concepts, were analysed using an APOS (action-process-object-schema) framework and five interviews were carried out. The findings indicated that most participants constructed only Action conceptions of bijection and none demonstrated the construction of an Object conception of this concept. Difficulties in understanding can be related to students' lack of construction of the concepts of functions and sets that are a prerequisite to working with bijections

Resumen:

El autor intenta deducir una teoría poética del escritor mexicano partiendo de su obra, que divide en tres partes; enseguida, tras un interludio –“la noche obscura del poeta”– trata la función del recuerdo como estímulo literario. Y termina con un apunte hacia el popularismo artístico de Alfonso Reyes

Abstract:

The author attempts to formulate a poetic theory of this Mexican writer based on his works, which he divides into three parts; after an interlude –“the poet’s darkest night”– he studies how remembrance works as a literary stimulus, and concludes by commenting on Alfonso Reyes’ artistic popularism

Abstract:

The cognitive domains of a communication scheme for learning physics are related to a framework based on epistemology, and the planning of an introductory calculus textbook in classical mechanics is shown as an example of application

Abstract:

Uniform inf-sup conditions are of fundamental importance for the finite element solution of problems in incompressible fluid mechanics, such as the Stokes and Navier–Stokes equations. In this work we prove a uniform inf-sup condition for the lowest-order Taylor–Hood pairs Q2×Q1 and P2×P1 on a family of affine anisotropic meshes. These meshes may contain refined edge and corner patches. We identify necessary hypotheses for edge patches to allow uniform stability and sufficient conditions for corner patches. For the proof, we generalize Verfürth’s trick and recent results by some of the authors. Numerical evidence confirms the theoretical results
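In standard notation, a uniform inf-sup (Babuška-Brezzi) condition of the kind proved here reads

\inf_{q_h \in Q_h \setminus \{0\}} \; \sup_{v_h \in V_h \setminus \{0\}} \frac{(\operatorname{div} v_h,\, q_h)}{\| v_h \|_{1} \, \| q_h \|_{0}} \;\ge\; \beta > 0,

with the constant \beta independent of the mesh, and in particular of the aspect ratio of the anisotropic elements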

Abstract:

In this work we present and analyze new inf-sup stable, and stabilised, finite element methods for the Oseen equation on anisotropic quadrilateral meshes. The meshes are formed of closed parallelograms, and the analysis is restricted to two space dimensions. Starting with the lowest-order Q1²×P0 pair, we first identify the pressure components that make this finite element pair fail to be inf-sup stable, especially with respect to the aspect ratio. We then propose a way to penalise them, both strongly, by directly removing them from the space, and weakly, by adding a stabilisation term based on jumps of the pressure across selected edges. Concerning the velocity stabilisation, we propose an enhanced grad-div term. Stability and optimal a priori error estimates are given, and the results are confirmed numerically

Abstract:

In his landmark article, Richard Morris (1981) introduced a set of rat experiments intended “to demonstrate that rats can rapidly learn to locate an object that they can never see, hear, or smell provided it remains in a fixed spatial location relative to distal room cues” (p. 239). These experimental studies have greatly impacted our understanding of rat spatial cognition. In this article, we address a spatial cognition model primarily based on hippocampus place cell computation where we extend the prior Barrera–Weitzenfeld model (2008) intended to allow navigation in mazes containing corridors. The current work extends beyond the limitations of corridors to enable navigation in open arenas where a rat may move in any direction at any time. The extended work reproduces Morris’s rat experiments through virtual rats that search for a hidden platform using visual cues in a circular open maze analogous to the Morris water maze experiments. We show results with virtual rats comparing them to Morris’s original studies with rats

Abstract:

The study of behavioral and neurophysiological mechanisms involved in rat spatial cognition provides a basis for the development of computational models and robotic experimentation of goal-oriented learning tasks. These models and robotic architectures offer neurobiologists and neuroethologists alternative platforms to study, analyze and predict spatial-cognition-based behaviors. In this paper we present a comparative analysis of spatial cognition in rats and robots by contrasting similar goal-oriented tasks in a cyclical maze, where studies in rat spatial cognition are used to develop computational system-level models of the hippocampus and striatum that integrate kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. During training, Hebbian learning and reinforcement learning, in the form of an Actor-Critic architecture, enable robots to learn the optimal route leading to a goal from a designated fixed location in the maze. During testing, robots exploit maximum expectations of reward stored within the previously acquired cognitive map to reach the goal from different starting positions. A detailed discussion of comparative experiments in rats and robots is presented, contrasting learning latency while characterizing behavioral procedures during navigation such as errors associated with the selection of a non-optimal route, body rotations, normalized length of the traveled path, and hesitations. Additionally, we present results from evaluating neural activity in rats through detection of the immediate early gene Arc to verify the engagement of the hippocampus and striatum in information processing while solving the cyclical maze task, just as robots use our corresponding models of those neural structures
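
The Actor-Critic idea mentioned above can be summarized in a few lines of tabular code (a hedged sketch under simplifying assumptions; the paper's model is a system-level neural model, and the names and learning rates here are illustrative):

```python
import numpy as np

n_states, n_actions = 20, 4
V = np.zeros(n_states)                   # critic: learned value of each place/state
prefs = np.zeros((n_states, n_actions))  # actor: action preferences per state
alpha_c, alpha_a, gamma = 0.1, 0.1, 0.95

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def choose_action(s, rng=np.random.default_rng()):
    # The actor samples actions according to its current preferences.
    return rng.choice(n_actions, p=softmax(prefs[s]))

def actor_critic_step(s, a, r, s_next):
    """One temporal-difference update after taking action a in state s."""
    delta = r + gamma * V[s_next] - V[s]  # TD error (reward-prediction error)
    V[s] += alpha_c * delta               # critic moves toward the better estimate
    prefs[s, a] += alpha_a * delta        # actor reinforces actions that did well
    return delta
```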

Abstract:

Anticipation of the sensory consequences of actions is critical for the predictive control of movement that underlies most of our sensory-motor behaviors. Numerous neuroscientific studies in humans provide evidence of anticipatory mechanisms based on internal models. Several robotic implementations of predictive behaviors have been inspired by those biological mechanisms in order to achieve adaptive agents. This paper provides an overview of such neuroscientific and robotic evidence; a high-level architecture of sensory-motor coordination based on anticipatory visual perception and internal models is then introduced; and finally, the paper concludes by discussing the relevance of the proposed architecture within the context of current research in humanoid robotics
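
A forward model of this kind can be caricatured in a few lines (a minimal sketch assuming a single scalar sensory channel; the class and method names are hypothetical, not the architecture's API):

```python
# Minimal sketch of an internal forward model: predict the sensory consequence
# of a motor command, then learn from the prediction error.

class ForwardModel:
    def __init__(self, lr=0.05):
        self.w = 0.0   # learned mapping from motor command to predicted sensation
        self.lr = lr

    def predict(self, command):
        # Anticipated sensory consequence of the command.
        return self.w * command

    def update(self, command, observed):
        # Adjust the model from the prediction error so anticipation improves.
        error = observed - self.predict(command)
        self.w += self.lr * error * command
        return error
```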

Abstract:

The study of spatial memory and learning in rats has inspired the development of multiple computational models that have led to novel robotic architectures. Evaluation of computational models and the resulting robotic architectures is usually carried out at the behavioral level, by evaluating experimental tasks similar to those performed with rats. While multiple metrics are defined to evaluate behavioral performance in rats, metrics for robot task evaluation are mostly limited to success/failure and time to complete the task. In this paper we present a set of metrics taken from rat spatial memory and learning evaluation to further analyze performance in robots. The proposed metrics, learning latency and the ability to navigate the minimal distance to the goal, should offer the robotics community additional tools to assess the performance and validity of models in biologically inspired robotic architectures at the task-performance level. We also provide a comparative evaluation using these metrics between similar spatial tasks performed by rat and robot in comparable environments
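
As an illustration of how such metrics can be made operational (a hedged sketch; the criterion threshold and exact formulas below are illustrative choices, not the paper's definitions):

```python
import numpy as np

def learning_latency(errors_per_trial, criterion=1):
    """Index of the first trial at which the error count reaches the criterion."""
    for trial, errors in enumerate(errors_per_trial):
        if errors <= criterion:
            return trial
    return None  # the criterion was never reached

def normalized_path_length(path, goal):
    """Distance traveled divided by the straight-line distance from start to goal."""
    path = np.asarray(path, dtype=float)
    traveled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    minimal = np.linalg.norm(np.asarray(goal, dtype=float) - path[0])
    return traveled / minimal  # 1.0 means the shortest possible route was taken
```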

Abstract:

In this paper we present a comparative behavioral analysis of spatial cognition in rats and robots by contrasting a similar goal-oriented task in a cyclical maze, where a computational system-level model of rat spatial cognition is used that integrates kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. A discussion of experiments in rats and robots is presented, contrasting learning latency while characterizing behavioral procedures such as body rotations during navigation and the selection of routes to the goal

Abstract:

This paper presents a robot architecture with spatial cognition and navigation capabilities that captures some properties of the rat brain structures involved in learning and memory. This architecture relies on the integration of kinesthetic and visual information derived from artificial landmarks, as well as on Hebbian learning, to build a holistic topological-metric spatial representation during exploration, and employs reinforcement learning by means of an Actor-Critic architecture to enable learning and unlearning of goal locations. From a robotics perspective, this work can be placed in the gap between mapping and map exploitation that currently exists in the SLAM literature. The exploitation of the cognitive map allows the robot to recognize places already visited and to find a target from any given departure location, thus enabling goal-directed navigation. From a biological perspective, this study aims to initiate a contribution to experimental neuroscience by providing the system as a tool to test with robots hypotheses concerned with the underlying mechanisms of rats' spatial cognition. Results from different experiments with a mobile AIBO robot inspired by classical spatial tasks with rats are described, and a comparative analysis is provided in reference to the reversal task devised by O'Keefe in 1983
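
The topological-metric representation described above can be pictured as a graph whose nodes are recognized places and whose edges record traversals; the following sketch is illustrative only (the matching rule and parameters are assumptions, not the architecture's actual mechanism):

```python
import math

class TopologicalMap:
    def __init__(self, match_radius=0.3):
        self.nodes = []          # metric positions (x, y) of recognized places
        self.edges = set()       # undirected links between traversed places
        self.match_radius = match_radius

    def localize_or_add(self, x, y):
        """Recognize an already-visited place, or create a new node for it."""
        for i, (nx, ny) in enumerate(self.nodes):
            if math.hypot(nx - x, ny - y) < self.match_radius:
                return i         # place recognized: close the loop here
        self.nodes.append((x, y))
        return len(self.nodes) - 1

    def connect(self, i, j):
        # Record that the robot traveled between places i and j.
        if i != j:
            self.edges.add((min(i, j), max(i, j)))
```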

Abstract:

A computational model of spatial cognition in rats is used to control an autonomous mobile robot while solving a spatial task within a cyclic maze. In this paper we evaluate the robot's behavior in terms of place recognition in multiple directions and goal-oriented navigation against the results derived from experimenting with laboratory rats solving the same spatial task in a similar maze. We provide a general description of the bio-inspired model, and a comparative behavioral analysis between rats and robot

Abstract:

In this paper we present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe place representation and recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model by training a robot to find the goal in a maze starting from a fixed location, and by testing its ability to reach the same target from new starting locations

Abstract:

We present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe target learning and place recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model in different maze configurations by training a robot to find the goal starting from a fixed location, and by testing its ability to reach the same target from new starting locations

Abstract:

In this paper we present a model designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations that are dynamically moved within different environments, to build a topological map, and to return home autonomously. We describe robot experimentation results from our tests in a T-maze, an 8-arm radial maze, and an extended maze

Abstract:

In this paper we present a model composed of layers of neurons designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations in different mazes and to return home autonomously by building a topological map of the environment. We describe robotic experimentation results from our tests in a T-maze, an 8-arm radial maze and a 3-T shaped maze

Abstract:

In this paper we present a biologically inspired robotic exploration and navigation model based on the neurophysiology of the rat hippocampus that allows a robot to find goals and return home autonomously by building a topological map of the environment. We present simulation and experimentation results from a T-maze testbed and discuss future research

Abstract:

The time-dependent restricted (n+1)-body problem concerns the study of a massless body (satellite) under the influence of the gravitational field generated by n primary bodies following a periodic solution of the n-body problem. We prove that the satellite has periodic solutions close to the large-amplitude circular orbits of the Kepler problem (comet solutions) and, in the case that the primaries are in a relative equilibrium, close to small-amplitude circular orbits near a primary body (moon solutions). The comet and moon solutions are constructed by applying a Lyapunov–Schmidt reduction to the action functional. In addition, using reversibility techniques, we compute numerically the comet and moon solutions for the case of four primaries following the super-eight choreography
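
For orientation, in the usual normalization (gravitational constant set to one; the paper's precise formulation may differ), the satellite position x(t) obeys

$$\ddot{x}(t) \;=\; \sum_{j=1}^{n} \frac{m_j \left( x_j(t) - x(t) \right)}{\left| x_j(t) - x(t) \right|^{3}},$$

where the x_j(t) are the prescribed periodic trajectories of the primaries and the m_j their masses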

Abstract:

We aim to associate a cytokine profile obtained through data mining with the clinical characteristics of patient subgroups with advanced non-small-cell lung cancer (NSCLC). Our results provide evidence that complex cytokine networks may be used to identify patient subgroups with different prognoses in advanced NSCLC and could serve as potential biomarkers for best treatment choices
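
Although the study's data mining pipeline is not detailed here, subgroup discovery from cytokine measurements is commonly approached with unsupervised clustering; the sketch below (illustrative only, with synthetic numbers) shows the basic idea:

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Plain k-means: group patients by similarity of their cytokine profiles."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patient to the nearest cluster center.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of its assigned patients.
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels

# Rows are patients, columns are (synthetic) cytokine levels.
X = np.array([[5.1, 0.2], [4.9, 0.3], [1.0, 3.8], [1.2, 4.1]])
print(kmeans(X))  # two candidate patient subgroups
```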

Resumen:

Hypersensitivity pneumonitis (HP) is a diffuse inflammatory disease of the lung parenchyma caused by the repeated inhalation of organic particles. Dendritic cells and their precursors play an important role not only as antigen-presenting cells but also as part of a network of immunoregulatory processes. Depending on their lineage and their state of differentiation and activation, dendritic cells can promote an intense T-lymphocyte immune response or produce a state of anergy. Objective: To phenotypically characterize the myeloid (mDC) and plasmacytoid (pDC) dendritic cells present in the bronchoalveolar lavage (BAL) of patients with subacute or chronic HP, and to compare them with those observed in patients with idiopathic pulmonary fibrosis (IPF) and control subjects

Abstract:

Hypersensitivity pneumonitis (HP) is a diffuse inflammatory disease of lung parenchyma resulting from the repetitive inhalation of organic particles. Dendritic cells and their precursors play an important role not only as antigen-presenting cells but also as part of an immunoregulatory network. Depending on their lineage and stage of differentiation and activation, dendritic cells can promote a strong T-lymphocyte-mediated immunological response or an anergy state. Objective: To phenotypically characterize myeloid (mDCs) and plasmacytoid (pDCs) dendritic cells recovered in bronchoalveolar lavage from patients with subacute or chronic HP, and to compare the results with those obtained in patients with idiopathic pulmonary fibrosis (IPF) and healthy controls. Methods: BAL cells from 8 patients with subacute HP, 8 with chronic HP, 8 with IPF and 4 healthy subjects were used. The phenotypes of the dendritic cell subpopulations were characterized by means of flow cytometry