    The remainder of the paper is as follows. Section 2 provides an introduction to RFID technology, discusses how it is currently used by large firms, and explains how it could in principle be used to measure turnover and profits in microenterprises. Section 3 provides details of our trial, including the technology used, how we selected firms, how the tagging process worked in practice, and our office trial. Section 4 presents the results, and Section 5 concludes. An online appendix provides photographic and video illustrations of the products used and the tagging process.
    Introduction Randomized controlled trials (RCTs) are an increasingly important tool for policy evaluation and the estimation of economic parameters. However, they are expensive, and efficient use of limited resources (funding, inputs from implementation partners, and researchers' time) requires that they be designed carefully. In an important contribution, Bruhn and McKenzie (2009) reviewed the stratification methods that were common in economics RCTs at the time, and showed that large gains in precision were available by adopting more sophisticated stratification methods from the clinical trials literature. These stratification methods require researchers to obtain stratification covariates from all subjects prior to randomization. However, this is not always feasible. In clinical trials, subjects are often allocated to treatment as they arrive. In field trials, operational constraints may prevent defining and surveying the full sample frame in advance. In such situations, subjects must be assigned sequentially, with the researcher only learning the values of the stratification variables for each subject at the time of enrollment and assignment. In this paper, we propose the use of D-optimal sequential allocation (Atkinson, 1982) to improve balance and power when subjects are enrolled sequentially. The D-optimal method minimizes imbalance subject to the constraint of not knowing covariate values in advance. We describe the method and its properties, and provide an algorithm for its implementation. We conduct a set of simulations, based on Bruhn and McKenzie (2009), and show that the D-optimal method offers clear benefits relative to commonly used sequential alternatives. In fact, surprisingly, optimal sequential designs achieve balance comparable to stratifications performed with full knowledge of covariates in advance. Despite these practical advantages, the method had not, to our knowledge and according to three survey articles, ever been employed in the field.
We describe our experience implementing the method in a water treatment and hygiene intervention in Dhaka, Bangladesh (Guiteras et al., 2015), and offer practical advice on its implementation under field conditions. Implementation was feasible with standard software (Stata), and produced an allocation that was well balanced both on the stratification variables chosen ex ante and, ex post, on other important variables that were not included in the stratification.
    Theory Our exposition follows Atkinson (2002), with some changes in notation. First, we lay out the model and notation. Second, we develop the theory for the traditional situation of a fixed population of N subjects, for whom covariates X have been collected in advance. Third, we introduce sequential designs using a simplified case where the researcher is concerned with the precision of all estimated parameters, both treatment effects and nuisance parameters (coefficients on stratification variables). Finally, we adapt the sequential design to the standard situation where only precisely estimated treatment effects are of interest.
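    The simplified case above, in which all estimated parameters are of equal interest, can be illustrated with a short sketch. The rule is to assign each arriving subject to the arm that maximizes the determinant of the running information matrix F'F, where each row of F stacks an intercept, a treatment indicator, and the subject's covariates. This is a minimal illustration of D-optimal sequential assignment under those assumptions, not the authors' Stata implementation; the function names and the small ridge term (used so early determinants are defined) are our own choices.

```python
import numpy as np

def design_row(x, t):
    # Regressor vector for one subject: intercept, treatment indicator, covariates.
    return np.concatenate(([1.0, float(t)], x))

def d_optimal_assignment(X, seed=0):
    """Sequentially assign each subject (rows of X) to treatment 0 or 1,
    choosing at each step the arm that maximizes det(F'F) of the running
    design matrix. Ties are broken at random."""
    rng = np.random.default_rng(seed)
    p = X.shape[1] + 2                 # intercept + treatment dummy + covariates
    M = 1e-6 * np.eye(p)               # small ridge so early determinants are nonzero
    assignments = []
    for x in X:
        dets = []
        for t in (0, 1):
            f = design_row(x, t)
            # Rank-one update of the information matrix for candidate arm t.
            dets.append(np.linalg.det(M + np.outer(f, f)))
        t_star = int(np.argmax(dets)) if dets[0] != dets[1] else int(rng.integers(2))
        assignments.append(t_star)
        f = design_row(x, t_star)
        M += np.outer(f, f)            # commit the chosen arm
    return np.array(assignments)
```

    Because the treatment indicator is one of the regressors, maximizing the determinant automatically pushes the two arms toward equal size while also balancing the covariates. Adapting the criterion so that only the treatment effect (and not the nuisance coefficients) is estimated precisely is the final step described above.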
    Inference Confidence intervals can be constructed by the usual regression-based methods, and the standard covariance matrices can also be used for t-tests of hypotheses. Shao et al. (2010) prove that controlling for the balancing variables yields tests of the correct size. As emphasized by Bruhn and McKenzie (2009), researchers should commit ex ante to controlling for the balancing variables: doing so increases power on average, whereas retaining the option to analyze the data without these controls gives the researcher a degree of freedom that can distort the size of the test.
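    The power gain from controlling for the balancing variables can be seen in a small simulation. The sketch below (our own illustrative example, with made-up data-generating parameters, not a result from the paper) compares the classical standard error on the treatment effect with and without the covariate as a control:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                    # balancing covariate
t = (rng.random(n) < 0.5).astype(float)   # treatment (simple randomization for illustration)
y = 1.0 + 0.5 * t + 2.0 * x + rng.normal(size=n)   # true treatment effect = 0.5

def ols_se(X, y):
    """OLS coefficients and classical standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, np.sqrt(np.diag(sigma2 * XtX_inv))

X_short = np.column_stack([np.ones(n), t])        # no controls
X_long = np.column_stack([np.ones(n), t, x])      # controlling for the covariate
b0, se0 = ols_se(X_short, y)
b1, se1 = ols_se(X_long, y)
# The standard error on the treatment coefficient shrinks when the
# covariate is controlled for, since it absorbs residual variance.
```

    Both regressions give an unbiased estimate of the treatment effect; committing ex ante to the longer specification is what secures the precision gain without introducing a researcher degree of freedom.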