qui log using example, replace
/***
\documentclass{beamer}
\usepackage{default}
\title{Generating PDF Slides with LaTeX Markup Using the MarkDoc Package}
\author{E. F. Haghish}
\date{August 15, 2016}
\begin{document}
\maketitle
***/
/***
\begin{frame}
\frametitle{First Slide}
Contents of the first slide
\end{frame}
***/
/***
\begin{frame}
\frametitle{Famous Composers}
\begin{center}
\begin{tabular}{|l|c|}\hline
J.\ S.\ Bach & 1685--1750 \\
W.\ A.\ Mozart & 1756--1791 \\
L.\ Beethoven & 1770--1827 \\
F.\ Chopin & 1810--1849 \\
R.\ Schumann & 1810--1856 \\
B.\ Bart\'{o}k & 1881--1945 \\ \hline
\end{tabular}
\end{center}
\end{frame}
***/
*********************** IMPORT THE SAME TABLE FROM A FILE
//IMPORT ./Beamer/table.tex
/***
\begin{frame}
\frametitle{Second Slide}
Contents of the second slide
\end{frame}
\end{document}
***/
qui log c
markdoc example, exp(slide) markup(latex) replace
// -----------------------------------------------------------------------------
qui log using example, replace
/***
\documentclass{beamer}
\title{Generating PDF Slides with LaTeX Markup Using the MarkDoc Package}
\author{E. F. Haghish}
\date{August 15, 2016}
\makeatletter
\def\verbatim@font{\ttfamily\tiny}
\makeatother
\begin{document}
\maketitle
***/
/***
\begin{frame}
\frametitle{First Slide}
Contents of the first slide
\end{frame}
***/
display "results from Stata commands"
sysuse auto, clear
summarize
/***
\begin{frame}
\frametitle{Second Slide}
Contents of the second slide
\end{frame}
\end{document}
***/
qui log c
markdoc example, exp(pdf) markup(latex) replace
// -----------------------------------------------------------------------------
qui log using example, replace
/***
\documentclass{beamer}
\title{Generating PDF Slides with LaTeX Markup Using the MarkDoc Package}
\author{E. F. Haghish}
\date{August 15, 2016}
\begin{document}
\maketitle
***/
/***
\begin{frame}
\frametitle{First Slide}
Contents of the first slide
\end{frame}
***/
display "results from Stata commands"
sysuse auto, clear
summarize
/***
\begin{frame}
\frametitle{Second Slide}
Contents of the second slide
\end{frame}
\end{document}
***/
qui log c
markdoc example, exp(pdf) markup(latex) replace
// -----------------------------------------------------------------------------
qui log using example, replace
/***
---
title: "Dynamic Slides with the MarkDoc Package"
author: E. F. Haghish
date: February 2016
output:
  beamer_presentation:
    theme: "Boadilla"
    colortheme: "lily"
    fonttheme: "structurebold"
---
Creating dynamic slides in Stata
================================
- Dynamic slides are created for statistical presentations and thus
include code and output from Stata.
- The __`markdoc`__ command allows you to
easily create a _smcl log file_ and use it for creating dynamic analysis
reports or even PDF slides. In this tutorial I demonstrate how to quickly
create slides from Stata. Here are a few capabilities of the package.
About MarkDoc
=============
MarkDoc is a general-purpose literate programming package that can export
dynamic reports in several formats from within Stata. However, this tutorial
only demonstrates the dynamic slide feature.
- Adding images and figures
- Interpreting mathematics
- Creating PDF slides
How Does it Work?
=================
- Install MarkDoc (ssc install markdoc)
- Install a full LaTeX distribution
- Provide the path to **pdfLaTeX** in the `printer()` option (see below)
- Learn from this example!
How about mathematics?
======================
The dynamic slides are compiled with LaTeX and, naturally,
LaTeX mathematical notation is supported. Here is an example:
- $f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n$
How about Stata Output?
======================
You can execute any command and present the outputs as well!
***/
sysuse auto, clear
tab price if price < 4000
//OFF
histogram price
graph export graph.png, replace
//ON
/***
Adding a graph
==========================
![Histogram of the *Price* variable](graph.png)
How to create more slides?
==========================
- Every heading creates a new slide.
- You can also use __#__ at the beginning of a line to create a new slide.
- Or place a line of equal signs under a text line (see the code).
***/
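// For instance, the heading styles listed above can be mixed in a single
// documentation block (a sketch, left commented out so it does not alter
// this demo; each heading begins a new slide):
//
// /***
// # A slide created with a number-sign heading
//
// A slide created with an underlined heading
// ==========================================
// ***/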
qui log c
markdoc example, export(slide) replace // printer("PATH/TO/pdflatex")
// -----------------------------------------------------------------------------
cap erase programming_course.pdf
capture log c
qui log using programming_course, replace
/***
---
title: "Stata as a programming language"
author: E. F. Haghish
date: March 2016
fontsize: 14pt
output:
  beamer_presentation:
    theme: "Boadilla"
    colortheme: "lily"
    fonttheme: "structurebold"
    includes:
      in_header: body.tex
---
About the workshop
================================
- Reviewing different features of Stata programming language
- Giving examples and homework for each feature
- The workshop is completely lab-based, teaching solutions to actual problems
About the workshop
================================
- The workshop includes a number of "problems" categorized as
Beginner, Intermediate, and Advanced levels
- The solutions are already provided, but you should attempt to
solve them yourself
- After each problem, we discuss possible solutions and spend a few minutes
programming
- I will cover a wide range of programs
Beginner Level
================================================================================
Intermediate Level
================================================================================
Problem 1: File
================
"file allows programmers to read and write both text and binary files, so file
could be used to write a program to input data in some complicated situation.
Files are referred to by a file handle. When you open a file, you specify the
file handle that you want to use"
- Read the following file and count the number of lines:
"https://raw.githubusercontent.com/haghish/MarkDoc/master/README.md"
- Write an ado-program that reads a file and counts the number of lines
Solution
================
***/
capture program drop countline
program countline
    syntax using/
    tempname handle
    file open `handle' using `"`using'"', read
    file read `handle' line
    local lnum 0
    while r(eof) == 0 {
        file read `handle' line
        local ++lnum
    }
    file close `handle'
    display as txt "(the file includes {bf:`lnum'} lines)"
end
countline using "https://raw.githubusercontent.com/haghish/MarkDoc/master/README.md"
/***
Advanced Level
================================================================================
Problem 1: File
================
In the first problem of the intermediate level, we wrote a program that opens
a file and counts the number of lines. Rewrite that program in Mata.
***/
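// One possible Mata sketch for the exercise above (an illustration, not an
// official solution; Mata's fget() returns J(0,0,"") at end of file, and
// fopen() with mode "r" can also read from a URL):
capture mata: mata drop countlines()
mata:
real scalar countlines(string scalar filename)
{
    real scalar   fh, n
    string matrix line

    fh = fopen(filename, "r")
    n  = 0
    while ((line = fget(fh)) != J(0, 0, "")) {
        n++
    }
    fclose(fh)
    return(n)
}
end
mata: countlines("https://raw.githubusercontent.com/haghish/MarkDoc/master/README.md")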
qui log c
markdoc programming_course, export(slide) replace linesize(120)
// -----------------------------------------------------------------------------
//cd "/Users/haghish/Desktop/new"
/***
\documentclass[11pt]{amsart}
\usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
\title{Chapter 3}
\author{E. F. Haghish}
\date{} % Activate to display a given date or no date
\begin{document}
\maketitle
\section{}
\subsection{}
\end{document}
***/
markdoc all_latex.do, export(pdf) markup(latex) replace linesize(120) ///
printer("/usr/texbin/pdflatex")
// -----------------------------------------------------------------------------
/*** DO NOT EDIT THIS LINE -----------------------------------------------------
Version: 0.0.0
Intro Description
=================
packagename -- A new module for ...
Author(s) section
=================
Author name ...
Author affiliation ...
to add more authors, leave an empty line between authors' information
Second author ...
For more information visit {browse "http://www.haghish.com/markdoc":MarkDoc homepage}
Syntax
=================
{opt exam:ple} {depvar} [{indepvars}] {ifin} using
[{it:{help filename:filename}}]
[{cmd:,} {it:options}]
{synoptset 20 tabbed}{...}
{synopthdr}
{synoptline}
{synopt :{opt rep:lace}}replace this example{p_end}
{synopt :{opt app:end}}work further on this help file{p_end}
{synopt :{opt addmore}}you can add more description for the options; moreover,
the text you write can span multiple lines {p_end}
{synopt :{opt learn:smcl}}you won't regret learning the
{help smcl:SMCL Language} {p_end}
{synoptline}
----------------------------------------------------- DO NOT EDIT THIS LINE ***/
* Note: If you would like to leave the "Intro Description" or "Author(s)"
* sections empty, erase the text but KEEP THE HEADINGS
/***
\section{Reading statistical packages}
Literate programming was adopted by statisticians for documenting the process of data analysis and generating dynamic analysis reports \cite{Literate_programming_using_noweb, 1994_Literate_programming_simplified, Literate_Statistical_Programming.Concepts_and_Tools}, and several literate programming packages have been developed particularly for statistical languages to meet the special features required for analysis reports \cite{weaver, markdoc, xie2013knitr, xieknit, leisch2002sweave, allaire2015rmarkdown, statweave}.
While statisticians strongly support literate programming for data analysis, when it comes to programming statistical packages, the best they have come up with is documenting individual functions and objects using literate programming packages such as Roxygen2 \cite{roxygen2:In-source_documentation_for_R} in R or MarkDoc \cite{markdoc} in Stata. However, as the number of script files and the structural complexity of the package (nested files or functions) increase, object documentation provides only limited information about the overall structure and logic of the package.
Any R package includes a number of R script files, and each may include one or more functions. Furthermore, each of these functions may call other functions written in different script files within the package. Thus, an R package with only a few files and functions may have a very complex nested structure. Moreover, R packages may include script files written in other programming languages, such as C or C++, which further increases the complexity of the package.
My argument is that open-source programming languages such as R, which do not require header files, should be carefully documented at both the micro-level and the macro-level to improve the readability of the package. Micro-level documentation is already advocated by literate programming packages. However, macro-level documentation -- describing the overall architecture of the package and its function dependencies -- is surprisingly ignored. To treat R packages truly as works of literature that should be read by humans and not just the compiler, as Knuth proposed, we should bear in mind that instead of instructing a ``computer what to do'', we shall ``concentrate rather on explaining to human beings what we want a computer to do'' \cite{knuth1984literate, knuth1992book}.
Yet, there has been no software for macro-level documentation of R packages. In the current article, I introduce CodeMap, an application written for the Macintosh operating system that discovers the file and function dependency tree of an R package and visualizes it interactively.
***/
markdoc example.do, export(sthlp) markup() replace linesize(120) ///
printer("/usr/texbin/pdflatex")
/***
Example
=================
explain what it does
. example command
second explanation
. example command
***/
// -----------------------------------------------------------------------------
/***
Dynamic Graph
=============
1
2
3
+---------+--------
|
+---------+-------+ +--------+ +--------------------+
| markdown|source |------>| mdddia |------+--->| processed markdown |
+---------+-------+ +--------+ | +--------------------+
+--->| image files |
-- -- +--------------------+
+-----------------+ +--------+ +--------------------+
| markdown|source |------>| mdddia |------+----| processed markdown |
+-----------------+ +--------+ | +--------------------+
| +--->| image files |
+------------------+ +--------------------+
| diagram creation |
+------------------+
| ditaa/dot/rdfdot |
+------------------+
- - -Tab
- - -
- - -
Source | SS df MS Number of obs = 74
-------------+---------------------------------- F(1, 72) = 20.26
Model | 139449474 1 139449474 Prob > F = 0.0000
Residual | 495615923 72 6883554.48 R-squared = 0.2196
-------------+---------------------------------- Adj R-squared = 0.2087
Total | 635065396 73 8699525.97 Root MSE = 2623.7
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg | -238.8943 53.07669 -4.50 0.000 -344.7008 -133.0879
cons | 11253.06 1170.813 9.61 0.000 8919.088 13587.03
------------------------------------------------------------------------------
***/
*cap prog drop markdoc
// -----------------------------------------------------------------------------
// Literate Visualization Notation
// =============================================================================
qui log using 000, replace smcl
/***
Dynamic Software Visualization
=========================
Writing software documentation and visualizing selective information can
boost active learning in students. It can also provide very useful information
about the structure of a program and encourage users to read the code and
contribute to an open-source project. The __MarkDoc package__ supports
___DOT___ (a graph description language) and can produce HTML documents
with graphs written within the source code or data analysis.
A graph can be written in several sections; MarkDoc has separate
syntax for beginning a new graph and for continuing the previous one.
To begin a new graph, use the regular documentation syntax plus
a double dollar sign; to continue the current
graph, use a single dollar sign. Here is an example:
***/
/***$$
digraph {
    Hello -> World;
}
***/
/***
Continued figures
-----------------
It is much easier to write a detailed graph in several sections and update
a section when something changes. Writing a detailed graph at once,
within a single section, makes it _less interesting_ and feels like
_extra work_ when the job is already done. However, if the graph is developed
gradually, as the program is written, it is more encouraging. For example, the
graph below is written in two separate sections.
***/
/***$$
graph ER {
    node [shape=box]; course; institute; student;
    node [shape=ellipse]; {node [label="name"] name0; name1; name2;}
    code; grade; number;
    node [shape=diamond,style=filled,color=lightgrey]; "C-I"; "S-C"; "S-I";
    name0 -- course;
    code -- course;
    course -- "C-I" [label="n",len=1.00];
    "C-I" -- institute [label="1",len=1.00];
***/
/***$
    institute -- name1;
    institute -- "S-I" [label="1",len=1.00];
    "S-I" -- student [label="n",len=1.00];
    student -- grade;
    student -- name2;
    student -- number;
    student -- "S-C" [label="m",len=1.00];
    "S-C" -- course [label="n",len=1.00];
    label = "\n\nEntity Relation Diagram\ndrawn by NEATO";
    fontsize=20;
}
***/
/***
Example 1
---------
***/
/***$$
digraph {
    subgraph cluster_0 {
        label="Subgraph A";
        a -> b;
        b -> c;
        c -> d;
    }
    subgraph cluster_1 {
        label="Subgraph B";
        a -> f;
        f -> c;
    }
}
***/
/***
Example 2
---------
***/
/***$$
digraph {
    a -> b[label="0.2",weight="0.2"];
    a -> c[label="0.4",weight="0.4"];
    c -> b[label="0.6",weight="0.6"];
    c -> e[label="0.6",weight="0.6"];
    e -> e[label="0.1",weight="0.1"];
    e -> b[label="0.7",weight="0.7"];
}
***/
/***
Example 3
---------
***/
/***$$
digraph G {
    subgraph cluster_0 {
        style=filled;
        color=lightgrey;
        node [style=filled,color=white];
        a0 -> a1 -> a2 -> a3;
        label = "process #1";
    }
    subgraph cluster_1 {
        node [style=filled];
        b0 -> b1 -> b2 -> b3;
        label = "process #2";
        color=blue
    }
    start -> a0;
    start -> b0;
    a1 -> b3;
    b2 -> a3;
    a3 -> a0;
    a3 -> end;
    b3 -> end;
    start [shape=Mdiamond];
    end [shape=Msquare];
}
***/
qui log c
markdoc 000, replace exp(html)
// -----------------------------------------------------------------------------
// empty template
// =============================================================================
qui log using 000, replace smcl
qui log c
markdoc 000, replace exp()
// -----------------------------------------------------------------------------
// Markdown notation
// =============================================================================
qui log using 000, replace smcl
/***
This is a regular heading
=========================
This is a subheading
--------------------
Text paragraph perhaps?
***/
forval num = 1/1 {
/***
Text can be included in the loop, but is only printed once
=========================================================
***/
}
qui log c
markdoc 000, replace exp()
// -----------------------------------------------------------------------------
// Stata commands & comments
// =============================================================================
qui log using 000, replace smcl
sysuse auto, clear
// this is a comment
*also this is a comment
regress price mpg
forval num = 1/1 {
// this is a comment
*also this is a comment
/*
this is a comment
*/
/****
Crazy comments?
*/
}
qui log c
markdoc 000, replace exp()
// -----------------------------------------------------------------------------
// Special Markups
// =============================================================================
qui log using 000, replace smcl
/**/ sysuse auto, clear
/**/ sysuse auto, clear
/**/ sysuse auto, clear
/**/ sysuse auto, clear
/***/ sysuse auto, clear
/***/ sysuse auto, clear
/***/ sysuse auto, clear
//OFF
forval num = 1/1 {
// this is a comment
*also this is a comment
display "try"
}
//ON
// ??? THIS DOES NOT WORK ;)
/**/ forval num = 1/1 {
// this is a comment
*also this is a comment
display "try"
}
qui log c
markdoc 000, replace exp()
// -----------------------------------------------------------------------------
// Weaver Commands
// =============================================================================
qui log using 000, replace smcl
txt "this command can be literally anywhere" _n ///
"======================================"
txt "this command can be literally anywhere" _n ///
"======================================
txt "this command can be literally anywhere" _n ///
"======================================
img markdoc.ado, title("This is the title")
img markdoc.ado, title("This is the title")
img markdoc.ado, title("This is the title")
tble (Title 1, variables, something \ 1 , 2 , 3)
tble (Title 1, variables, something \ 1 , 2 , 3)
tble (Title 1, variables, something \ 1 , 2 , 3)
qui log c
markdoc 000, replace exp()
// -----------------------------------------------------------------------------
// COMPLEX LOOP
// =============================================================================
qui log using 000, replace smcl
/**/ sysuse auto, clear
/***/ sysuse auto, clear //2
//OFF
//ON
if 100 > 12 {
local m `n'
*this is a comment
display "how about this?"
}
forval num = 1/1 {
display "yeap!"
/****
Crazy comments?
*/
display "yeap!"
qui log off
*this is a comment
//this is a comment
qui log on
/***
Text can be included in the loop, but is only printed once
=========================================================
***/
}
//OFF
/***
Does it work?
=============
this is some text paragraph.
***/
display "hi"
//ON
qui log c
markdoc 000, replace exp(docx) nonumber
// -----------------------------------------------------------------------------
// Mata
// =============================================================================
qui log using 000, replace smcl
txt "## Testing Mata"
clear mata
mata
/***
How about some more text? 2
---------------------------
***/
// COMMENT
function zeros(c)
{
    a = J(c, 1, 0)
    return(a)
}
function zeros2(c)
{
    a = J(c, 1, 0)
    return(a)
}
printf("HELLO WORLD!\n")
/***
How about some more text? 3
---------------------------
***/
/**/ printf("Hide Command!\n")
/***/ printf("Hide Output!\n")
end
qui log c
markdoc 000, replace exp() nonumber
// -----------------------------------------------------------------------------
// Docx Document Exportation
// =============================================================================
qui log using 000, replace smcl
txt "## Testing Mata" _n ///
"This is how it works!"
/***
How about some more text? 2
==========================
some more?
----------
### How about Heading 3?
***/
sysuse auto, clear
regress price mpg
qui log c
cap erase 000.docx
markdoc 000, replace exp(docx) toc nonumber style(stata)
// -----------------------------------------------------------------------------
// PDF Document Exportation
// =============================================================================
qui log using 000, replace smcl
/***
This is heading 1
***/
txt "## Testing Mata" _n ///
"This is how it works!"
sysuse auto, clear
regress price mpg
qui log c
markdoc 000, replace exp(html) nonumber style() toc statax tit("This is the title")
// -----------------------------------------------------------------------------
// Stata Journal Exportation using Markdown
// =============================================================================
qui log using 000, replace smcl
/***
This is heading 1
***/
txt "## Testing Mata" _n ///
"This is how it works!"
sysuse auto, clear
regress price mpg
qui log c
markdoc 000, replace exp(tex) nonumber texmaster style(stata) toc statax ///
tit("This is the title") author("haghish") date ///
summary("THIS IS S H DASID JI FJ FKJD FDJS FDF OEI DLFK D÷L FSOF OEF POFI DLFK")
// -----------------------------------------------------------------------------
// Literate Visualization Notation
// =============================================================================
qui log using 000, replace smcl
/***
Dynamic Software Visualization
=========================
Writing software documentation and visualizing selective information can
boost active learning in students...
Example 1
--------------------
***/
/***$$
digraph {
    a -> b;
***/
/***$
}
***/
qui log c
markdoc 000, replace exp(html)
// -----------------------------------------------------------------------------
cd "C:\Users\haghish\Dropbox\STATA\MY PROGRAMS\MarkDoc\MarkDoc 3.6.5"
********************************************************************************
cap qui prog drop markdoc
cap qui log c
qui log using torture, replace
clear
sysuse auto
reg price mpg
/***
This is a text chunk!
=====================
![alt text](graph.png "title")
[hypertext](http://haghish.com "Title")
This is a text paragraph. This is a text paragraph.
This is a text paragraph. This is a text paragraph.
***/
//Problem Discovered which merges the text output
/**/ display as text "this is a text paragraph"
/**/ display as text "this is a text paragraph"
/**/ display as text "this is a text paragraph"
/***/ display as text "this is a text paragraph"
/***/ display as text "this is a text paragraph"
/***/ display as text "this is a text paragraph"
local n `m' $thisthat
/***
The TXT Command
===============
***/
txt this is the txt command that can be used for writing dynamic text. ///
this is the txt command that can be used for writing dynamic text. ///
this is the txt command that can be used for writing dynamic text. ///
this is the txt command that can be used for writing dynamic text. ///
txt this is the txt command that can be used for writing dynamic text. ///
this is the txt command that can be used for writing dynamic text. ///
this is the txt command that can be used for writing dynamic text. ///
this is the txt command that can be used for writing dynamic text. ///
txt "this is the txt command that can be used for writing dynamic text." ///
"this is the txt command that can be used for writing dynamic text." ///
"this is the txt command that can be used for writing dynamic text." ///
"this is the txt command that can be used for writing dynamic text." ///
txt "this is a text paragraph" _n "yeah " ///
"newline?" _s(5) "skipped? :-) " _dup(2) "yo"
txt "this is a text paragraph" _n(2) "yeah " ///
"**newline?**" _s(5) "skipped? :-) " _dup(2) "yo"
local nn 123456789.1234
txt "this is a text paragraph " mm " or " %9.3f mm " or " `nn' " and more text" _n(4)
txt "this is a text paragraph " mm " or " `nn' " and more text" _n(4)
txt c "CODE " mm " or " `nn' " and more text" _n(4)
qui log c
cap qui prog drop markdoc
markdoc torture, replace linesize(100)
markdoc torture, replace linesize(100) export(pdf)
markdoc torture, replace linesize(100) export(docx)
markdoc torture, replace linesize(100) export(docx) template(markdoc_simple.docx)
markdoc torture, replace linesize(100) export(odt) template(markdoc_simple.odt)
markdoc torture, replace linesize(100) export(odt) template(torture2.odt)
exit
//creating PDF
cap erase example.pdf
markdoc example, exp(pdf) markup(html) replace
markdoc example, exp(pdf) replace statax
markdoc example, exp(pdf) replace
markdoc example, exp(pdf) markup(latex) texmaster replace
markdoc example, exp(pdf) markup(latex) printer("C:\program Files\MiKTeX 2.9\miktex\bin\x64\pdflatex.exe") texmaster replace
markdoc example, exp(pdf) markup(latex) printer("/usr/texbin/pdflatex") texmaster replace
//HTML
cap erase example.html
markdoc example, exp(html) replace
markdoc example, exp(html) replace statax
********************************************************************************
* LATEX
********************************************************************************
//empty log
//=========
cap log close
qui log using torture_latex, replace
qui log c
markdoc torture_latex, export(tex) markup(latex) replace
//Stata command ONLY
//=========================
cap log close
qui log using torture_latex, replace
sysuse auto, clear
summarize price
qui log c
markdoc torture_latex, export(tex) markup(latex) replace
// COMMENT ONLY
// =========================
cap log close
qui log using torture_latex, replace
/***
\section {SECTION}
This is a comment
***/
qui log c
markdoc torture_latex, export(tex) markup(latex) replace
//Start with Comment
//=========================
cap log close
qui log using torture_latex, replace
/***
This is a comment
***/
sysuse auto, clear
summarize price
qui log c
markdoc torture_latex, export(tex) markup(latex) replace
//Start AND END with Comment
//=========================
cap log close
qui log using torture_latex, replace
/***
This is a comment
***/
sysuse auto, clear
summarize price
/***
This is a comment
***/
qui log c
markdoc torture_latex, export(tex) markup(latex) replace
//LaTeX Heading
//=========================
cap log close
qui log using torture_latex, replace
/***
This is a comment
***/
sysuse auto, clear
summarize price
/***
This is a comment
***/
qui log c
markdoc torture_latex, export(tex) markup(latex) replace ///
template(Torture_test/LaTeX/latexHeading.tex) texmaster
markdoc torture_latex, export(tex) markup(latex) replace texmaster
set linesize 80
cap log close
qui log using torture_latex, replace
/***
\documentclass[a4paper]{article}
\usepackage{amsmath}
\usepackage[english]{babel}
\usepackage{blindtext}
\begin{document}
***/
display "{p}this is a text"
summarize mpg
txt "Loop number `x'"
display "this is a text"
/***
this is a text paragraph. this is a text paragraph.
this is a text paragraph. this is a text paragraph
***/
summarize mpg
/***
\blinddocument
\end{document}
***/
qui log c
markdoc torture_latex, export(tex) markup(latex) replace
markdoc torture_latex, export(tex) markup(latex) style(empty) replace
//test option
markdoc, test
markdoc, test pandoc("C:\ado\plus\Weaver 2\Pandoc\pandoc.exe") ///
printer("C:\ado\plus\Weaver 2\wkhtmltopdf\bin\wkhtmltopdf.exe")
local pandoc "C:\ado\plus\Weaver 2\Pandoc\pandoc.exe"
local printer "C:\ado\plus\Weaver 2\wkhtmltopdf\bin\wkhtmltopdf.exe"
markdoc example, export(html) statax linesize(120) replace pandoc("`pandoc'")
markdoc example, export(pdf) statax linesize(120) replace pandoc("`pandoc'") printer("`printer'")
// -----------------------------------------------------------------------------
capture erase example.docx
/*
MarkDoc Torture Test 001 : Main Features Test
---------------------------------------------
Author: E. F. Haghish
Institute for Medical Biometry and Statistics (IMBI)
University of Freiburg
http://haghish.com/stat
@haghish
Requirements: Install MarkDoc, Weaver, and Statax packages
Install Pandoc and Wkhtmltopdf
If you wish to produce PDF Slides, install a complete LaTeX
*/
set scheme s1manual
set linesize 80
cap qui log c
sysuse auto, clear
qui log using example, replace smcl
/***
MarkDoc Feature Test
====================
Editable mathematical notations
-------------------------------
Documenting statistical software or even teaching statistics often requires
some mathematical discussion. The notation can be included in the output
dynamically!
$f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n$
Figure 1 shows the distribution of the __Price__ variable, followed by a
regression analysis.
***/
histogram price
img, title("Figure 1. The histogram of the price variable")
regress price mpg
//OFF
mat A = r(table)
scalar p1 = A[4,1]
//ON
txt "As the output demonstrates, the average price of cars is " pmean " and " ///
"the standard deviation is " psd ". The summary of the __price__ and " ///
"__mpg__ variables is given in Table 1. " ///
"Moreover, the regression analysis reveals the coefficient of the __mpg__ " ///
"variable to be " coe1 " with a P-value of " p1 ", which is significant."
//OFF
qui summarize price
scalar pnum = r(N)
scalar pmean = r(mean)
scalar psd = r(sd)
qui summarize mpg
//ON
tbl ("__variable__", "__Observations__", "__Mean__", "__SD__" \ ///
"__Price__", pnum, pmean, psd \ ///
"__Mpg__", r(N), r(mean), r(sd) ), ///
title("_Table 1_. Summary of the price and mpg variables")
qui log c
markdoc example, install replace export(docx) ///
title("MarkDoc Main Features Test") author("E. F. Haghish") ///
affiliation("IMBI, University of Freiburg") summary("This document presents " ///
"the main features of [MarkDoc package](http://www.haghish.com/markdoc), " ///
"a multi-purpose package for creating dynamic PDF slides and analysis reports " ///
"in many formats such as __pdf__, __docx__, __html__, __latex__, and " ///
"__epub__. The package supports many features and recognizes three markup " ///
"languages, which are Markdown, HTML, and LaTeX." )
markdoc example, install replace export(slide) ///
title("MarkDoc Main Features Test") author("E. F. Haghish") ///
affiliation("IMBI, University of Freiburg") summary("This document presents " ///
"the main features of [MarkDoc package](http://www.haghish.com/markdoc), " ///
"a multi-purpose package for creating dynamic PDF slides and analysis reports " ///
"in many formats such as __pdf__, __docx__, __html__, __latex__, and " ///
"__epub__. The package supports many features and recognizes three markup " ///
"languages, which are Markdown, HTML, and LaTeX." )
// -----------------------------------------------------------------------------
/*
MarkDoc Torture Test 001 : Main Features Test
---------------------------------------------
Author: E. F. Haghish
Institute for Medical Biometry and Statistics (IMBI)
University of Freiburg
http://haghish.com/stat
@haghish
Requirements: Install MarkDoc, Weaver, and Statax packages
Install Pandoc and Wkhtmltopdf
If you wish to produce PDF Slides, install a complete LaTeX
*/
capture erase example.docx
set scheme s1manual
set linesize 80
cap qui log c
qui log using example, replace smcl
/***
MarkDoc Feature Test
====================
This document exercises the main features of MarkDoc, providing a broad
assessment of the package before every new release.
***/
sysuse auto, clear
/***
Text styling check
------------------
_this is Italic_.
__this is bold__.
___this is Italic and Bold___.
`this text line should be in monospace font`.
Variable check
--------------
Figure 1 shows the distribution of the __Price__ variable, followed by a
regression analysis.
***/
histogram price
img, title("Figure 1. The histogram of the price variable")
regress price mpg
//OFF
mat A = r(table)
scalar coe1 = A[1,1]
scalar p1 = A[4,1]
qui summarize price
scalar pnum = r(N)
scalar pmean = r(mean)
scalar psd = r(sd)
//ON
txt "As the output demonstrates, the average price of cars is " pmean " and " ///
"the standard deviation is " psd ". The summary of the __price__ and " ///
"__mpg__ variables is given in Table 1. " ///
"Moreover, the regression analysis reveals the coefficient of the __mpg__ " ///
"variable to be " coe1 " with a P-value of " p1 ", which is significant."
//OFF
qui summarize mpg
//ON
tbl ("__variable__", "__Observations__", "__Mean__", "__SD__" \ ///
"__Price__", pnum, pmean, psd \ ///
"__Mpg__", r(N), r(mean), r(sd) ), ///
title("_Table 1_. Summary of the price and mpg variables")
qui log c
markdoc example, install replace export(docx) ///
title("MarkDoc Main Features Test") author("E. F. Haghish") ///
affiliation("IMBI, University of Freiburg") summary("This document presents " ///
"the main features of [MarkDoc package](http://www.haghish.com/markdoc), " ///
"a multi-purpose package for creating dynamic PDF slides and analysis reports " ///
"in many formats such as __pdf__, __docx__, __html__, __latex__, and " ///
"__epub__. The package supports many features and recognizes three markup " ///
"languages, which are Markdown, HTML, and LaTeX." )
|
// -----------------------------------------------------------------------------
// empty template
// =============================================================================
qui log using example, replace smcl
qui log c
markdoc example, replace markup(latex) exp(tex) //texmaster
qui log using example2, replace smcl
qui log c
markdoc example2, replace markup(latex) exp(tex) texmaster
|
// -----------------------------------------------------------------------------
// begin heading with TXT
// =============================================================================
qui log using example, replace smcl
txt "\documentclass{article}" _n ///
"\usepackage{graphics}" _n ///
"\begin{document}" _n
txt "\section{This is a text heading}" _n
/***
\subsection{How about some more text?}
this is some text paragraph.
\includegraphics{graph.png}
***/
txt "\end{document}" _n
qui log c
markdoc example, replace markup(latex) exp(tex)
|
// -----------------------------------------------------------------------------
// With MATA
// =============================================================================
qui log using example, replace smcl
/***
Hi there
***/
txt "\section{This is a text heading}"
sysuse auto, clear
summarize price mpg weight
if 100 > 12 {
local m `n'
}
/***
\subsection{How about some more text?}
this is some text paragraph.
***/
clear mata
mata
/***
\section{How about Mata?}
This should also work in mata
***/
function zeros(c)
{
a = J(c, 1, 0)
return(a)
}
function zeros2(c)
{
a = J(c, 1, 0)
return(a)
}
/***
\section{How about Mata?}
This should also work in mata
***/
printf("HELLO WORLD!\n")
end
qui log c
markdoc example, replace markup(latex) export(tex) texmaster
|
// -----------------------------------------------------------------------------
// Creating Table of Content
// =============================================================================
cap erase example.pdf
qui log using example, replace smcl
txt \section{This is a text heading}
/***
\subsection{How about some more text?}
this is some text paragraph.
***/
qui log c
markdoc example, replace markup(latex) exp(pdf) toc texmaster
|
// -----------------------------------------------------------------------------
// Creating Table of Content
// =============================================================================
cap erase example.pdf
qui log using example, replace smcl
txt \section{This is a text heading}
/***
\subsection{How about some more text?}
this is some text paragraph.
***/
qui log c
markdoc example, replace markup(latex) exp(tex) toc texmaster
|
// -----------------------------------------------------------------------------
// template with a TXT & Textmaster command
// =============================================================================
qui log using example, replace smcl
txt \section{This is a text heading}
if 100 > 12 {
local m `n'
}
qui log c
markdoc example, replace markup(latex) exp(tex) //nonumber texmaster
qui log using example2, replace smcl
sysuse auto, clear
txt \section{This is a text heading}
qui log c
markdoc example2, replace markup(latex) exp(tex) texmaster
|
capture erase example.smcl
capture erase example.md
// -----------------------------------------------------------------------------
// Stata commands & comments
// =============================================================================
qui log using example, replace smcl
sysuse auto, clear
// this is a comment
*also this is a comment
regress price mpg
forval num = 1/1 {
// this is a comment
*also this is a comment
/*
this is a comment
*/
/****
Crazy comments?
*/
}
qui log c
markdoc example, replace exp()
|
capture erase example.smcl
capture erase example.md
capture erase example.docx
// -----------------------------------------------------------------------------
// Docx Document Exportation
// =============================================================================
qui log using example, replace smcl
txt "## Testing the txt command" _n ///
"This is how it works!"
/***
How about some more text? 2
==========================
some more?
----------
### How about Heading 3?
***/
sysuse auto, clear
regress price mpg
qui log c
cap erase example.docx
markdoc example, replace exp(docx) toc nonumber style(stata)
|
// -----------------------------------------------------------------------------
// empty template
// =============================================================================
qui log using example, replace smcl
qui log c
markdoc example, replace exp()
|
capture erase example.smcl
capture erase example.md
capture erase example.docx
// -----------------------------------------------------------------------------
// HTML Document Exportation
// =============================================================================
qui log using example, replace smcl
/***
This is heading 1
=================
<h1>This is also HTML written with HTML tag</h1>
![This Is a Figure!](./graph.png)
***/
txt "## Testing __`txt`__ command" _n ///
"This is how it works!"
sysuse auto, clear
regress price mpg
qui log c
markdoc example, replace exp(html) nonumber style() toc statax tit("This is the title")
|
capture erase example.smcl
capture erase example.md
// -----------------------------------------------------------------------------
// COMPLEX LOOP
// =============================================================================
qui log using example, replace smcl
/**/ sysuse auto, clear
/***/ sysuse auto, clear //2
//OFF
//ON
if 100 > 12 {
local m `n'
*this is a comment
display "how about this?"
}
forval num = 1/5 {
display "yeap!" _n
/****
Crazy comments?
*/
display "yeap!" _n
qui log off
*this is a comment
//this is a comment
qui log on
/***
Text can be included in the loop, but is only printed once
=========================================================
***/
txt "#### but dynamic text can appear in a loop many times!" _n
}
//OFF
/***
Does it work?
=============
this is some text paragraph.
***/
display "hi"
//ON
qui log c
markdoc example, replace exp(docx) nonumber
|
capture erase example.smcl
capture erase example.md
// -----------------------------------------------------------------------------
// Special Markups
// =============================================================================
qui log using example, replace smcl
/**/ sysuse auto, clear
/**/ sysuse auto, clear
/**/ sysuse auto, clear
/**/ sysuse auto, clear
/***/ sysuse auto, clear
/***/ sysuse auto, clear
/***/ sysuse auto, clear
/***
Using "ON" and "OFF" to hide loop code
======================================
Add indents before the markers to check that MarkDoc is robust to indentation
***/
//OFF
forval num = 1/1 {
// this is a comment
*also this is a comment
display "try"
}
//ON
/***
MarkDoc allows you to __IMPORT__ external files (Markdown, HTML, LaTeX)
into the dynamic document. Here I test for that! MarkDoc should be robust to
indentation.
***/
//IMPORT ./Markdown/import.md
forval num = 1/1 {
// this is a comment
*also this is a comment
display "try"
}
/***
Loops cannot be hidden
======================
***/
// ??? THIS DOES NOT WORK ;)
/**/ forval num = 1/1 {
// this is a comment
*also this is a comment
display "try"
}
qui log c
markdoc example, replace exp()
|
capture erase example.smcl
capture erase example.md
capture erase example.docx
// -----------------------------------------------------------------------------
// Mata
// =============================================================================
qui log using example, replace smcl
txt "## Testing Mata"
clear mata
mata
/***
How about some more text? 2
---------------------------
***/
// COMMENT
function zeros(c)
{
a = J(c, 1, 0)
return(a)
}
function zeros2(c)
{
a = J(c, 1, 0)
return(a)
}
printf("HELLO WORLD!\n")
/***
How about some more text? 3
---------------------------
***/
/**/ printf("Hide Command!\n")
/***/ printf("Hide Output!\n")
end
qui log c
markdoc example, replace exp() nonumber
|
capture erase example.smcl
capture erase example.md
// -----------------------------------------------------------------------------
// Markdown notation
// =============================================================================
qui log using example, replace smcl
/***
This is a regular heading
=========================
This is a subheading
--------------------
### Paragraph 3?
#### Paragraph 4?
##### Paragraph 5?
###### Paragraph 6?
Text paragraph perhaps?
***/
forval num = 1/1 {
/***
Text can be included in the loop, but is only printed once
=========================================================
So DON'T DO IT!
***/
}
qui log c
markdoc example, replace exp()
|
capture erase example.smcl
capture erase example.html
// -----------------------------------------------------------------------------
// PDF Document Exportation
// =============================================================================
qui log using example, replace smcl
/***
This is heading 1
=================
![This Is a Figure!](./graph.png)
***/
txt "## Testing __`txt`__ command" _n ///
"This is how it works!"
sysuse auto, clear
regress price mpg
qui log c
markdoc example, replace exp(pdf) nonumber style() toc statax tit("This is the title")
|
capture erase example.smcl
capture erase example.pdf
// -----------------------------------------------------------------------------
// Stata Journal Exportation using Markdown
// =============================================================================
qui log using example, replace smcl
/***
This is heading 1
***/
txt "## Testing __`txt`__ command" _n ///
"This is how it works!"
sysuse auto, clear
regress price mpg
qui log c
markdoc example, replace exp(tex) nonumber texmaster style(stata) toc statax ///
tit("This is the title") author("haghish") date ///
summary("This is a placeholder summary for testing the Stata Journal export.")
|
capture erase example.smcl
capture erase example.md
// -----------------------------------------------------------------------------
// Weaver Commands
// =============================================================================
qui log using example, replace smcl
txt "this command can be literally anywhere" _n ///
"======================================"
txt "this command can be literally anywhere" _n ///
"======================================"
txt "this command can be literally anywhere" _n ///
"======================================"
sysuse auto, clear
hist price
graph export graph.png, as(png) width(300) replace
img using graph.png, title("This is the title")
img , title("AUTOMATICALLY LOADED")
tbl (Title 1, variables, something \ 1 , 2 , 3)
tbl (Title 1, variables, something \ 1 , 2 , 3)
tbl (Title 1, variables, something \ 1 , 2 , 3)
qui log c
markdoc example, replace exp()
|
/***
`rundoc` command
================
The [`markdoc`](https://github.com/haghish/MarkDoc) command takes a `SMCL` log file
and creates a dynamic document or presentation slides. This procedure requires
the user to create a log file and convert it to a dynamic document.
The __`rundoc`__ command is simply a wrapper for MarkDoc that simplifies
typesetting dynamic documents directly from a Stata do-file, without requiring
the do-file to include a log file.
The syntax for writing comments remains identical to the
[`markdoc`](https://github.com/haghish/MarkDoc) command. This command should make
executing dynamic documents much simpler!
Features
--------
### executing Stata commands
Because __`rundoc`__ is simply a wrapper program, it preserves all of the
features of `markdoc`, such as executing Stata commands and syntax
highlighting of the Stata code using the
[`statax`](https://github.com/haghish/statax) package:
***/
display "Hello MarkDoc"
sysuse auto, clear
summarize
/***
### Writing mathematical notations
Mathematical notations are supported in PDF, HTML, Docx, ODT (OpenOffice), and LaTeX:
$$ Y = \beta_{0} + \beta_{1}x_{1} + \epsilon $$
***/
|
cap erase example.html
cap qui log c
cap qui log c
qui log using example, smcl replace
/***
# HTML Slides
- portable
- light
- fit many devices
***/
sysuse auto
summarize price
regress price mpg
forvalues n = 1/5 {
local m 12.3
global sm 23
}
/***
# In the evening
- Eat spaghetti
- Drink wine
# Conclusion
- And the answer is...
- $f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n$
***/
qui log c
*cap prog drop markdoc
markdoc example.smcl, exp(dzslide) replace
|
cap erase example.html
cap qui log c
qui log using example, smcl replace
/***
# HTML Slides
- portable
- light
- fit many devices
***/
sysuse auto
summarize price
regress price mpg
forvalues n = 1/5 {
local m 12.3
global sm 23
}
/***
# In the evening
- Eat spaghetti
- Drink wine
# Conclusion
- And the answer is...
- $f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n$
***/
qui log c
*cap prog drop markdoc
markdoc example.smcl, exp(slidy) replace title("hi")
|
/***
Adding a list
=============
* this is a text line
* this is also
* how about this?
Dot in the end
==============
It seems that if a line end with a "." dot,
and a new line is
started after,
the __sthlp__ engine will add 2 space characters.
Like this.
Is that solved?
This is not true if the text is written immediately. after the dot
Also, removing.
space after the dot does not help!
__AFTER EVALUATING THE {help smcl} file__ it seems {help markdoc} is working
fine and there is no bug. This is how Stata renders SMCL documents...
but still, there is no reason why there should be 2 spaces.
This problem,
only happens after dots.
Ending the line with accent
===========================
The grave accents are used to `make` font monospace. Can we
Unsolved
===========================
***/
markdoc "./sthlp/bugs.do", export(sthlp) replace template(empty)
|
/*** DO NOT EDIT THIS LINE -----------------------------------------------------
Version: 1.0.0
Intro Description
=================
myprogram -- A new module to print text. This description can include
multiple lines, but not several paragraphs. You may write the title section
in multiple lines. {help MarkDoc} will connect the lines to create only a
single paragraph. If you'd like to describe the package more, create a
description section below this header.
Author(s) section
=================
E. F. Haghish
University of Freiburg
haghish@imbi.uni-freiburg.de
{browse "http://www.haghish.com/markdoc"}
Syntax
=================
{opt myprogram} [{it:anything}] [, {opt b:old} {opt i:talic}]
{synoptset 20 tabbed}{...}
{synopthdr}
{synoptline}
{synopt :{opt b:old}}prints bold text{p_end}
{synopt :{opt i:talic}}prints italic text{p_end}
{synoptline}
----------------------------------------------------- DO NOT EDIT THIS LINE ***/
* Note: If you'd like to leave the "Intro Description" or "Author(s) section"
* empty, erase the text but KEEP THE HEADINGS
cap prog drop myprogram
program myprogram
syntax [anything] [, Bold Italic]
if missing("`bold'") & missing("`italic'") {
display as txt `anything'
}
else if !missing("`bold'") & missing("`italic'") {
local anything : display `anything'
display as txt "{bf:`anything'}"
}
else if missing("`bold'") & !missing("`italic'") {
local anything : display `anything'
display as txt "{it:`anything'}"
}
else {
local anything : display `anything'
display "{bf:{it:`anything'}}"
}
end
/***
: Typically, the text paragraphs in Stata help files begin with an indention,
which makes the help file easier to read (and that's important). To do so,
place a {bf:{c -(}p 4 4 2{c )-}} directive above the line to indent the text
paragraph and you're good to go.
: this is a text paragraph that is processing __Markdown__, _italic text_ or
___underscored text___. The only thing left is the link! but how far can this
__text styling go__? I mean how far does the
rabbit hole go? Do you have any idea?
- - -
: this is a text paragraph that is processing __Markdown__, _italic text_ or
___underscored text___. The only thing left is the link! but how far can this
__text styling go__? I mean how far does the
rabbit hole go? Do you have any idea?
> this is a text paragraph that is processing __Markdown__, _italic text_ or
___underscored text___. The only thing left is the link! but how far can this
__text styling go__? I mean how far does the
rabbit hole go? Do you have any idea?
Another heading
----------------------
{p 4 4 2}
Typically, the text paragraphs in Stata help files begin with an indention,
which makes the help file easier to read (and that's important). To do so,
place a {bf:{c -(}p 4 4 2{c )-}} directive above the line to indent the text
paragraph and you're good to go.
***/
/***
Example
=================
print {bf:bold} text
. myprogram "print this text in bold", bold
print {it:italic} text
. myprogram "print this text in italic", i
***/
/***
Dynamic Graph
=============
+-----------------+       +--------+            +--------------------+
| markdown source |------>| mdddia |------+---->| processed markdown |
+-----------------+       +--------+      |     +--------------------+
                                          |     +--------------------+
                                          +---->| image files        |
        +------------------+                    +--------------------+
        | diagram creation |
        +------------------+
        | ditaa/dot/rdfdot |
        +------------------+
***/
*cap prog drop markdoc
|
program myprogram
display "`0'"
end
|
// -----------------------------------------------------------------------------
// Test Header
// =============================================================================
cap erase myprogram.ado
copy ./Torture_test/sthlp/myprogram.do myprogram.ado, replace
markdoc myprogram.ado, replace export(sthlp) date
/* alternative options:
template(empty) title("myprogram") ///
summary("a module for literate programming") date author("me") aff("you") add("SHSD")
*/
|
/*** DO NOT EDIT THIS LINE -----------------------------------------------------
Version: 1.0.0
Intro Description
=================
myprogram -- A new module to print text. This description can include
multiple lines, but not several paragraphs. You may write the title section
in multiple lines. {help MarkDoc} will connect the lines to create only a
single paragraph. If you'd like to describe the package more, create a
description section below this header.
Author(s) section
=================
E. F. Haghish
University of Freiburg
haghish@imbi.uni-freiburg.de
{browse "http://www.haghish.com/markdoc"}
Syntax
=================
{opt myprogram} [{it:anything}] [, {opt b:old} {opt i:talic}]
{synoptset 20 tabbed}{...}
{synopthdr}
{synoptline}
{synopt :{opt b:old}}prints bold text{p_end}
{synopt :{opt i:talic}}prints italic text{p_end}
{synoptline}
----------------------------------------------------- DO NOT EDIT THIS LINE ***/
* Note: If you'd like to leave the "Intro Description" or "Author(s) section"
* empty, erase the text but KEEP THE HEADINGS
cap prog drop myprogram
program myprogram
syntax [anything] [, Bold Italic]
if missing("`bold'") & missing("`italic'") {
display as txt `anything'
}
else if !missing("`bold'") & missing("`italic'") {
local anything : display `anything'
display as txt "{bf:`anything'}"
}
else if missing("`bold'") & !missing("`italic'") {
local anything : display `anything'
display as txt "{it:`anything'}"
}
else {
local anything : display `anything'
display "{bf:{it:`anything'}}"
}
end
/***
: Typically, the text paragraphs in Stata help files begin with an indention,
which makes the help file easier to read (and that's important). To do so,
place a {bf:{c -(}p 4 4 2{c )-}} directive above the line to indent the text
paragraph and you're good to go.
: this is a text paragraph that is processing __Markdown__, _italic text_ or
___underscored text___. The only thing left is the link! but how far can this
__text styling go__? I mean how far does the
rabbit hole go? Do you have any idea?
- - -
: this is a text paragraph that is processing __Markdown__, _italic text_ or
___underscored text___. The only thing left is the link! but how far can this
__text styling go__? I mean how far does the
rabbit hole go? Do you have any idea?
> this is a text paragraph that is processing __Markdown__, _italic text_ or
___underscored text___. The only thing left is the link! but how far can this
__text styling go__? I mean how far does the
rabbit hole go? Do you have any idea?
Another heading
----------------------
{p 4 4 2}
Typically, the text paragraphs in Stata help files begin with an indention,
which makes the help file easier to read (and that's important). To do so,
place a {bf:{c -(}p 4 4 2{c )-}} directive above the line to indent the text
paragraph and you're good to go.
***/
/***
Example
=================
print {bf:bold} text
. myprogram "print this text in bold", bold
print {it:italic} text
. myprogram "print this text in italic", i
***/
/***
Dynamic Graph
=============
+-----------------+       +--------+            +--------------------+
| markdown source |------>| mdddia |------+---->| processed markdown |
+-----------------+       +--------+      |     +--------------------+
                                          |     +--------------------+
                                          +---->| image files        |
        +------------------+                    +--------------------+
        | diagram creation |
        +------------------+
        | ditaa/dot/rdfdot |
        +------------------+
Source | SS df MS Number of obs = 74
-------------+---------------------------------- F(1, 72) = 20.26
Model | 139449474 1 139449474 Prob > F = 0.0000
Residual | 495615923 72 6883554.48 R-squared = 0.2196
-------------+---------------------------------- Adj R-squared = 0.2087
Total | 635065396 73 8699525.97 Root MSE = 2623.7
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg | -238.8943 53.07669 -4.50 0.000 -344.7008 -133.0879
_cons | 11253.06 1170.813 9.61 0.000 8919.088 13587.03
------------------------------------------------------------------------------
***/
*cap prog drop markdoc
markdoc test/helptest.ado, replace export(sthlp)
|
qui {
/**********************************************************************************/
/* program masala_merge : Fuzzy match using masalafied levenshtein */
/*
Meta-algorithm:
1. Stata outsheets two files, each with an id and a name column.
2. Python reads the filenames and parameters from the command line, and loads the two files into two dictionaries.
3. Python outputs a single file with id, str1, str2, distance.
4. Stata reads this file back and decides which matches to keep.
See readme.md for sample usage.
*/
/***********************************************************************************/
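/*
The external lev.py script is not included in this file. As a rough,
hypothetical sketch (not the actual script), the core per-pair computation it
performs is a standard Levenshtein edit distance:

```python
# Minimal dynamic-programming Levenshtein distance. The real lev.py presumably
# adds the "masalafied" character weights and the -d distance cutoff on top of
# this core computation (assumption; the script is external to this file).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]                  # distance from a[:i] to the empty string
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```

The real script batches every candidate pair from the two outsheeted CSV files
and writes id, str1, str2, distance rows, which masala_merge then insheets.
*/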
cap prog drop masala_merge
prog def masala_merge
{
syntax [varlist] using/, S1(string) OUTfile(string) [FUZZINESS(real 1.0) quietly KEEPUSING(passthru) SORTWORDS]
/* require tmp and masala_dir folders to be set */
if mi("$tmp") | mi("$MASALA_PATH") {
disp as error "Need to set globals 'tmp' and 'MASALA_PATH' to use this program"
exit
}
/* store sort words parameter */
if !mi("`sortwords'") {
local sortwords "-s"
}
/* define maximum distance for lev.py as 0.40 + 1.25 * (largest acceptable match).
This is the threshold limit, i.e. if we accept a match at 2.1, we'll reject it
if there's another match within 0.40 + 2.1 * 1.25. (this is hardcoded below) */
local max_dist = 0.40 + 1.25 * 2.1 * `fuzziness'
/* make everything quiet until python gets called -- this output is not helpful */
qui {
/* create temporary file to store original dataset */
tempfile master
save `master', replace
/* create a random 5-6 digit number to make the temp files unique */
local time = real(subinstr(`"`c(current_time)'"', ":", "", .))
local nonce = floor(`time' * runiform() + 1)
local src1 $tmp/src1_`nonce'.txt
local src2 $tmp/src2_`nonce'.txt
local out $tmp/out_`nonce'.txt
local lev_groups $tmp/lev_groups_`nonce'.dta
preserve
keep `varlist' `s1'
sort `varlist' `s1'
/* merge two datasets on ids to produce group names */
merge m:m `varlist' using `using', keepusing(`varlist' `s1')
// generate id groups
egen g = group(`varlist')
drop if mi(g)
qui sum g
local num_groups = r(max)
// save group list
keep g `varlist'
duplicates drop
save "`lev_groups'", replace
/* now prepare group 1 */
restore
preserve
keep `varlist' `s1'
/* drop if missing string and store # observations */
keep if !mi(`s1')
qui count
local g1_count = r(N)
/* bring in group identifiers */
merge m:1 `varlist' using "`lev_groups'", keepusing(g)
/* places with missing ids won't match group */
drop if _merge == 1
/* only keep matches */
keep if _merge == 3
duplicates drop
// outsheet string group 1
outsheet g `s1' using "`src1'", comma replace nonames
// prepare group2
di "opening `using'..."
use `using', clear
keep `varlist' `s1'
/* confirm no duplicates on this side */
tempvar dup
duplicates tag `varlist' `s1', gen(`dup')
count if `dup' > 0
if `r(N)' > 0 {
display as error "`varlist' `s1' not unique on using side"
exit 123
}
drop `dup'
/* drop if missing string and store # observations */
keep if !mi(`s1')
qui count
local g2_count = r(N)
// merge in group identifiers
merge m:1 `varlist' using "`lev_groups'", keepusing(g)
/* something wrong if didn't match group ids for any observation */
drop if _merge == 1
/* only keep matches */
keep if _merge == 3
duplicates drop
// outsheet string group 2
outsheet g `s1' using "`src2'", comma replace nonames
}
// call python levenshtein program
di "Matching `g1_count' strings to `g2_count' strings in `num_groups' groups."
di "Calling lev.py:"
di `" shell python -u $MASALA_PATH/lev.py -d `max_dist' -1 "`src1'" -2 "`src2'" -o "`out'" `sortwords'"'
!python $MASALA_PATH/lev.py -d `max_dist' -1 "`src1'" -2 "`src2'" -o "`out'" `sortwords'
di "lev.py finished."
/* quietly process the python output */
qui {
/* open output lev dataset */
/* take care, this generates an error if zero matches */
capture insheet using "`out'", comma nonames clear
/* if there are zero matches, create an empty outfile and we're done */
if _rc {
disp_nice "WARNING: masala_merge: There were no matches. Empty output file will be saved."
clear
save `outfile', replace emptyok
exit
}
ren v1 g
ren v2 `s1'_master
ren v3 `s1'_using
ren v4 lev_dist
/* merge group identifiers back in */
destring g, replace
merge m:1 g using "`lev_groups'", keepusing(`varlist')
/* _m == 1 would imply that our match list has groups not in the initial set */
assert _merge != 1
/* _m == 2 are groups with zero matches. drop them */
drop if _merge == 2
/* count specificity of each match */
bys g `s1'_master: egen master_matches = count(g)
bys g `s1'_using: egen using_matches = count(g)
/* count distance to second best match */
/* calculate best match for each var */
foreach v in master using {
bys g `s1'_`v': egen `v'_dist_rank = rank(lev_dist), unique
gen tmp = lev_dist if `v'_dist_rank == 1
bys g `s1'_`v': egen `v'_dist_best = max(tmp)
drop tmp
gen tmp = lev_dist if `v'_dist_rank == 2
bys g `s1'_`v': egen `v'_dist_second = max(tmp)
drop tmp
drop `v'_dist_rank
}
drop g _m
/* apply optimal matching rule (based on 1991-2001 pop census confirmed village matches in calibrate_fuzzy.do) */
/* initialize */
gen keep_master = 1
gen keep_using = 1
/* get mean length of matched string */
gen length = floor(0.5 * (length(`s1'_master) + length(`s1'_using)))
/* 1. drop matches with too high a levenshtein distance (threshold is a function of length) */
replace keep_master = 0 if lev_dist > (0.9 * `fuzziness') & length <= 4
replace keep_master = 0 if lev_dist > (1.0 * `fuzziness') & length <= 5
replace keep_master = 0 if lev_dist > (1.3 * `fuzziness') & length <= 8
replace keep_master = 0 if lev_dist > (1.4 * `fuzziness') & inrange(length, 9, 14)
replace keep_master = 0 if lev_dist > (1.8 * `fuzziness') & inrange(length, 15, 17)
replace keep_master = 0 if lev_dist > (2.1 * `fuzziness')
/* copy these thresholds to keep_using */
replace keep_using = 0 if keep_master == 0
/* 2. never use a match that is not the best match */
replace keep_master = 0 if (lev_dist > master_dist_best) & !mi(lev_dist)
replace keep_using = 0 if (lev_dist > using_dist_best) & !mi(lev_dist)
/* 3. apply best empirical safety margin rule */
replace keep_master = 0 if (master_dist_second - master_dist_best) < (0.4 + 0.25 * lev_dist)
replace keep_using = 0 if (using_dist_second - using_dist_best) < (0.4 + 0.25 * lev_dist)
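/*
As a hypothetical sketch (not part of this program), rule 3 amounts to keeping
a group's best match only when the runner-up is sufficiently worse:

```python
# Safety-margin rule: keep the best match only if the second-best match is at
# least 0.4 + 0.25 * best_distance farther away. With no runner-up (None) the
# best match is kept, mirroring how Stata's missing values pass the comparison.
def accept_best(best, second=None):
    if second is None:  # no competing match in the group
        return True
    return (second - best) >= 0.4 + 0.25 * best

print(accept_best(1.0, 2.0))  # comfortable margin -> True
print(accept_best(1.0, 1.5))  # runner-up too close -> False
```
*/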
/* save over output file */
order `varlist' `s1'_master `s1'_using lev_dist keep_master keep_using master_* using_*
save `outfile', replace
}
restore
/* run masala_review */
use `outfile', clear
/* if quietly is not specified, call masala_review, which calls masala_process */
if mi("`quietly'") {
masala_review `varlist', s1(`s1') master(`master') using(`using')
}
/* if quietly is specified, go directly to masala_process */
else {
masala_process `varlist', s1(`s1') master(`master') using(`using')
}
di "Masala merge complete."
di " Original master file was saved here: `master'"
di " Complete set of fuzzy matches is here: `outfile'"
}
end
/* *********** END program masala_merge ***************************************** */
/**********************************************************************************/
/* program masala_lev_dist : Calculate levenshtein distance between two vars */
/* uses external python program */
/***********************************************************************************/
cap prog drop masala_lev_dist
prog def masala_lev_dist
{
syntax varlist(min=2 max=2), GEN(name)
tokenize `varlist'
foreach i in _masala_word1 _masala_word2 _masala_dist __masala_merge {
cap drop `i'
}
gen _masala_word1 = `1'
gen _masala_word2 = `2'
replace _masala_word1 = lower(trim(_masala_word1))
replace _masala_word2 = lower(trim(_masala_word2))
gen _row_number = _n
/* create temporary file for python */
outsheet _row_number _masala_word1 _masala_word2 using $tmp/masala_in.csv, comma replace nonames
/* call external python program */
di "Calling lev.py..."
shell python $MASALA_PATH/lev.py -1 $tmp/masala_in.csv -o $tmp/masala_out.csv
/* convert created file to stata format */
preserve
insheet using $tmp/masala_out.csv, clear names
save $tmp/masala_lev_dist, replace
restore
/* merge result with new dataset */
merge 1:1 _row_number using $tmp/masala_lev_dist.dta, gen(__masala_merge) keepusing(_masala_dist)
/* clean up */
destring _masala_dist, replace
ren _masala_dist `gen'
drop _masala_word1 _masala_word2 _row_number
assert __masala_merge == 3
drop __masala_merge
}
end
/* *********** END program masala_lev_dist ***************************************** */
/*****************************************************************************************************/
/* program fix_spelling : Fixes spelling in a string variable based on a supplied master key. */
/* Runs a fuzzy masala-merge of data-list to master-list within group if specified */
/* Process: take data in data_list, merge it to master list
keep if _m == 2
fuzzy merge data-list to master-list, maybe within some group.
if a single match without competition, then replace data list with the data in master list
return new version of data list */
/* Must specify gen() or replace */
/* targetfield() and targetgroup() allow varnames in the master set to differ from varnames in the key; */
/* this also allows running the program on e.g. pc01_district_name when a variable with that name
already exists in the dataset.
targetfield: variable name in the key, named according to pc`year'_`place'_name
targetgroup: variable names in the key on which to group spelling replacements, e.g.
the group is pc01_state_name when fix_spelling is run on pc01_district_name */
/* Example syntax:
fix_spelling district_name_fm, targetfield(pc01_district_name) group(state_name_fm) targetgroup(pc01_state_name) ///
src($keys/pc01_district_key) gen(new_district_name)
where target is universal naming syntax in keys, and state_name_fm / district_name_fm are names in set */
/*******************************************************************************************************/
cap prog drop fix_spelling
prog def fix_spelling
{
syntax varname(min=1 max=1), SRCfile(string) [GEN(name) GROUP(varlist) TARGETfield(string) TARGETGROUP(string) keepall replace FUZZINESS(real 2)]
/* put variable fix spelling in `varname' because we need arg "`1'" open */
tokenize `varlist'
local varname `1'
/* need to specify either generate or replace, not both */
if (mi("`gen'") & mi("`replace'")) | (!mi("`gen'") & !mi("`replace'")) {
display as error "fix_spelling: must specify exactly one of gen() or replace"
exit 1
}
/* if replace is set, create a temp var to be generated */
if !mi("`replace'") {
tempvar gen
}
/* if group is empty, need to create a master group that is the entire file */
if mi("`group'") {
tempvar __GROUP
gen `__GROUP' = 1
local nogroup = 1
local group "`__GROUP'"
}
/* for now, assume we have a source file */
/* create the master list */
qui {
preserve
/* open the synonym list: master location key */
use "`srcfile'", clear
/* if TARGETFIELD was specified, rename the target to the variable we want to match */
if !mi("`targetfield'") {
ren `targetfield' `varname'
}
/* if TARGETGROUP was specified, rename group variables to match key used to spellcheck */
if !mi("`targetgroup'") {
/* loop through each element in group and targetgroup to rename each variable from targetgroup as named in master set */
/* group -> tokens `1' -> `n', while we loop over target group. Need to do this because can't tokenize two strings */
tokenize `group'
local i = 1
foreach targetgroup_var in `targetgroup' {
cap confirm variable `targetgroup_var'
if _rc {
disp as error "ERROR fix_spelling: key file missing targetgroup var `targetgroup_var'"
exit 123
}
ren `targetgroup_var' ``i''
local i = `i' + 1
}
}
/* define a group if none exists */
if !mi("`nogroup'") gen `__GROUP' = 1
/* save clean, unique synonym list as stata file */
tempfile SPELLING_MASTER_LIST
keep `group' `varname'
duplicates drop
sort `group' `varname'
save `SPELLING_MASTER_LIST', replace
restore
/* create a list of unmatched names */
preserve
keep `varname' `group'
duplicates drop
/* get rid of exact matches - these will work well */
merge 1:1 `group' `varname' using `SPELLING_MASTER_LIST', gen(_merge1)
keep if _merge1 == 1
/* if nothing left, then the original list is fine and we're done */
qui count
if r(N) == 0 {
restore
noi di "100% of names matched exactly. No fuzzy matching necessary"
/* pass variable back into original or specified gen variable */
gen `gen' = `varname'
/* drop group var from main set if no group was specified */
if !mi("`nogroup'") drop `__GROUP'
exit
}
/* set tempfile to outfile spelling errors */
tempfile spelling_errors
/* otherwise, go to the fuzzy merge */
masala_merge `group' using `SPELLING_MASTER_LIST', s1(`varname') outfile(`spelling_errors') fuzziness(`fuzziness')
/* set tempfile for spelling corrections */
tempfile SPELLING_CORRECTIONS
/* review masala merge results */
use `spelling_errors', clear
/* exit if no matches */
count
if `r(N)' == 0 exit
/* keep best match for everything in badly-spelled set */
keep if keep_master == 1
keep `group' `varname'_master `varname'_using lev_dist
/* fix names and merge back to the original dataset */
ren `varname'_master `varname'
ren `varname'_using `gen'
ren lev_dist `gen'_dist
save `SPELLING_CORRECTIONS', replace
restore
/* tag exact matches (this merge only adds _merge_exact) */
merge m:1 `group' `varname' using `SPELLING_MASTER_LIST', gen(_merge_exact)
drop if _merge_exact == 2
/* then get fuzzy matches */
merge m:1 `group' `varname' using `SPELLING_CORRECTIONS', gen(_merge_fuzzy)
assert _merge_fuzzy != 2
/* if we have an exact match, shouldn't have a fuzzy match */
assert _merge_fuzzy == 1 if _merge_exact == 3
/* add exact matches */
replace `gen' = `varname' if _merge_exact == 3
replace `gen'_dist = 0 if _merge_exact == 3
drop _merge_exact _merge_fuzzy
/* if keepall specified, get places that didn't match */
if !mi("`keepall'") {
/* merge the spell-checked data back to the master list within the group */
ren `varname' `varname'_SP
ren `gen' `varname'
merge m:1 `group' `varname' using `SPELLING_MASTER_LIST', nogen keepusing(`varname')
ren `varname' `gen'
ren `varname'_SP `varname'
}
/* if group was not specified (or place is state so there is no group), drop group var */
if !mi("`nogroup'") drop `__GROUP'
}
/* if replace was specified */
if !mi("`replace'") {
/* show replacements made */
tempvar tag
qui egen `tag' = tag(`varname') if !mi(`gen') & `gen' != `varname'
disp_nice "Spelling fixes and Masala-Levenshtein distances:"
list `varname' `gen' `gen'_dist if `tag'
/* replace original var, show what was done, and drop the distance */
qui replace `varname' = `gen' if !mi(`gen')
drop `gen' `gen'_dist
}
}
end
/* *********** END program fix_spelling ***************************************** */
/**********************************************************************************/
/* program masala_review : Reviews masala_merge results and calls masala_process */
/***********************************************************************************/
cap prog drop masala_review
prog def masala_review
{
syntax varlist, s1(string) master(string) using(string) [keepusing(passthru)]
/* ensure a masala merge output file is open */
cap confirm var keep_master
if _rc {
di as error "You must open the masala_merge output file before running this program."
exit 111
}
/* count and report matches that are exact, but with alternatives */
/* these are places where keep_master == 0 & lev_dist == 0 */
qui bys `s1'_master: egen _min_dist = min(lev_dist)
qui bys `s1'_master: egen _max_keep = max(keep_master)
qui count if _max_keep == 0 & _min_dist == 0
if `r(N)' > 0 {
di "+-------------------------------------" _n "| These are exact matches, where alternate good matches exist." _n ///
"| keep_master is 0, but masala_process() will keep the observations with lev_dist == 0." _n ///
"+-------------------------------------"
list `varlist' `s1'* lev_dist if _max_keep == 0 & _min_dist == 0
}
qui drop _max_keep _min_dist
/* visually review places with high lev_dist that script kept -- they look good. */
qui count if keep_master == 1 & lev_dist > 1
if `r(N)' > 0 {
disp_nice "These are high cost matches, with no good alternatives. keep_master is 1."
list `varlist' `s1'* lev_dist if keep_master == 1 & lev_dist > 1
}
/* run masala_process, and then show the unmatched places */
masala_process `varlist', s1(`s1') master(`master') using(`using') `keepusing'
/* tag each name so it doesn't appear more than once */
qui egen _ntag = tag(`varlist' `s1')
/* list unmatched places in a nice order */
qui gen _matched = _masala_merge == 3
gsort _matched -_ntag `varlist' `s1'
/* ensure we don't trigger obs. nos. out of range in final list, by counting observations */
qui count
if `r(N)' < 200 {
local limit `r(N)'
}
else {
local limit 200
}
/* list unmatched places */
qui count if _masala_merge < 3 & _ntag in 1/`limit'
if `r(N)' {
disp_nice "This is a sorted list of some places that did not match. Review for ideas on how to improve"
list `varlist' `s1' _masala_merge if _masala_merge < 3 & _ntag in 1/`limit', sepby(`varlist')
}
drop _ntag _matched
}
end
/* *********** END program masala_review ***************************************** */
/**********************************************************************************/
/* program masala_process : Rejoins the initial files in a masala_merge */
/**********************************************************************************/
cap prog drop masala_process
prog def masala_process
{
syntax varlist, s1(string) master(string) using(string) [keepusing(passthru)]
qui {
/* override keep_master if lev_dist is zero. */
replace keep_master = 1 if lev_dist == 0
/* keep highly reliable matches only */
keep if keep_master == 1
/* drop all masala merge's variables */
keep `varlist' `s1'*
/* bring back master dataset */
gen `s1' = `s1'_master
merge 1:m `varlist' `s1' using `master', gen(_masala_master)
/* fill in master fuzzy-string from unmatched data on master side */
replace `s1'_master = `s1' if mi(`s1'_master)
drop `s1'
/* bring back using dataset */
gen `s1' = `s1'_using
merge m:1 `varlist' `s1' using `using', `keepusing' gen(_masala_using)
/* fill in using fuzzy-string from unmatched data on using side */
replace `s1'_using = `s1' if mi(`s1'_using)
drop `s1'
/* set `s1' to the master value */
ren `s1'_master `s1'
/* fill in using values when _m == 2 */
replace `s1' = `s1'_using if mi(`s1')
}
/* Assertion: if we couldn't match back to the using, it must be unmatched from the master side */
assert _masala_master == 2 if _masala_using == 1
/* show merge result */
disp_nice "Results of masala_merge (counting unique strings only): "
/* tag each name so it doesn't appear more than once */
qui egen ntag = tag(`varlist' `s1')
/* create a standard merge output variable */
qui gen _masala_merge = 1 if _masala_master == 2
qui replace _masala_merge = 2 if _masala_using == 2
qui replace _masala_merge = 3 if _masala_using == 3 & _masala_master == 3
drop _masala_master _masala_using
label values _masala_merge _merge
/* show results */
table _masala_merge if ntag
qui drop ntag
}
end
/* *********** END program masala_process ***************************************** */
/**********************************************************************************/
/* program review_merge : call right after a merge to review potential matches */
/***********************************************************************************/
cap prog drop review_merge
prog def review_merge
{
syntax varlist [if/], [merge(string) list(varlist) SHOWall]
tempvar tag
if mi("`merge'") {
local merge _merge
}
/* `showall' parameter determines whether we limit to _merge < 3 or show all results */
if !mi("`showall'") {
local show 1
}
else {
local show `merge' < 3
}
/* need something in `if' if nothing passed in */
if mi("`if'") {
local if 1
}
/* separate list with sepby() if more than one variable in varlist */
tokenize "`varlist'"
if !mi("`2'") {
local sepby sepby(`1')
}
/* only show each possible match once */
egen `tag' = tag(`varlist')
sort `varlist' `list'
list `varlist' `list' `merge' if `show' & `if' & `tag', `sepby'
drop `tag'
}
end
/* *********** END program review_merge ***************************************** */
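/* Illustrative usage (a sketch with hypothetical variable and file names), called
   right after a merge while _merge is still in memory:

     merge 1:1 state_id district_id using `district_key'
     review_merge state_id district_id, list(district_name)

   By default this lists each unmatched (state_id, district_id) combination once,
   separated by state_id; pass showall to list matched rows as well. */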
/***********************************************************************************/
/* program create_merge_fragments : Call right after a merge to create separate
files of the unmatched pieces */
/***********************************************************************************/
cap prog drop create_merge_fragments
prog def create_merge_fragments
{
/* idea is we had:
file1: merge_vars a b c
file2: merge_vars d e f
we want to create file1 and file2 leftovers. hard part is getting the variables right.
syntax option:
- call with the completed merge file open, pass original master() and using() files back in.
*/
syntax anything, master(string) using(string) [merge(string) suffix(string)]
/* set default values for merge and suffix locals */
if mi("`merge'") local merge _merge
if mi("`suffix'") local suffix unmatched
/* hack work to get m:1, 1:1, or 1:m */
local merge_type = substr("`anything'", 1, 3)
if !inlist("`merge_type'", "1:1", "m:1", "1:m") {
di as error "create_merge_fragments: must specify 1:1, m:1 or 1:m as in merge syntax"
exit 198
}
local master_type = substr("`merge_type'", 1, 1)
local using_type = substr("`merge_type'", 3, 1)
local varlist = substr("`anything'", 4, .)
/* confirm varlist is a varlist */
confirm var `varlist'
/* we want to leave this unaltered, so wrap everything in a preserve / restore */
preserve
/* keep only the matches and drop _merge */
keep if `merge' == 3
drop `merge'
/* save the file with the matches. all we need is the varlist */
keep `varlist'
/* we only need one copy of each match, this allows everything below to be m:1 */
duplicates drop
tempfile merge3
save `merge3', replace
/* create master only file */
use `master', clear
/* merge it to list of matches */
/* 1:m is con->village. merged file will have many repeated cons */
/* m:1 is village->con. merged file will have each village once */
/* 1:1 obviously has each side once */
merge 1:`using_type' `varlist' using `merge3'
/* now we want to keep only the non-matches */
keep if `merge' == 1
/* there should not be any using, if we just ran this merge */
assert `merge' != 2
/* drop _merge and save fragment file */
drop `merge'
save `master'_`suffix', replace
/* repeat process for using side */
use `using', clear
/* merge it to list of matches */
/* 1:m is con->village. merged file will have each village once */
/* m:1 is village->con. merged file will have cons repeated */
/* 1:1 obviously has each side once */
merge `master_type':1 `varlist' using `merge3'
/* now we want to keep only the non-matches */
keep if `merge' == 1
/* there should not be any using, if we just ran this merge */
assert `merge' != 2
/* drop _merge and save fragment file */
drop `merge'
save `using'_`suffix', replace
restore
/* report what happened */
di "Created files with merge fragments:"
di " master: `master'_`suffix'"
di " using: `using'_`suffix'"
}
end
/* *********** END program create_merge_fragments ***************************************** */
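/* Illustrative usage (a sketch; the variable and file names are hypothetical), with
   the completed merge still in memory:

     use $tmp/villages, clear
     merge m:1 state_id using $tmp/state_key
     create_merge_fragments m:1 state_id, master($tmp/villages) using($tmp/state_key)

   This writes $tmp/villages_unmatched and $tmp/state_key_unmatched, each holding
   the rows from that side that did not match. */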
/*****************************************************************************************************/
/* program synonym_fix : Replace strings for names that have multiple variants but one main version, */
/* using an externally-supplied dictionary */
/* i.e. uttaranchal -> uttarakhand */
/* external file has the following columns:
"master" : the target name
"pc01_district_name" : variable that we are replacing
"`group'" : a list of variables that define the context in which to match the name
named according to standard pc`year'_`place'_name
external file must be in CSV format.
*/
/* options targetfield and targetgroup allow variables in dataset to differ from variables in .csv
targetfield: refers to column name in external csv file named according to pc`year'_`place'_name
targetgroup: refers to the columns after the main name column on which to group replacements, i.e.
synonym_fix pc01_district_name has group pc01_state_name
*/
/*****************************************************************************************************/
cap prog drop synonym_fix
prog def synonym_fix
{
syntax varname, SYNfile(string) [GENerate(name) replace group(varlist) TARGETfield(string) TARGETGROUP(string) INSHEET]
/* put variable to replace using synonym_fix in `varname' */
tokenize `varlist'
local varname `1'
/* require generate or replace */
if (mi("`generate'") + mi("`replace'")) != 1 {
display as error "synonym_fix: must specify exactly one of generate or replace"
exit 1
}
/* if no generate specified, make replacements to passed in variable */
if mi("`generate'") {
local name = "`varname'"
}
/* if generate specified, copy the variable into the new slot and then change it gradually */
else {
gen `generate' = `varname'
local name = "`generate'"
}
qui {
/* verify 'master' variable is not in use in order to make replacements */
/* master refers to first column in csv that contains the correct string replacement value */
cap confirm variable master
if !_rc {
display as error "'master' variable already exists. synonym_fix() needs this varname to be free."
exit 123
}
/* insheet external csv from ~/iecmerge/pc[]/place_synonyms/ */
preserve
/* if insheet is specified, read the synfile by using insheet using */
if !mi("`insheet'") {
insheet using "`synfile'", clear delim(",") names
}
/* if insheet is not specified, read the synfile by using import delimited */
else {
import delimited "`synfile'", clear delim(",") varn(1)
}
/* targetfield: renaming the target name variable from the replacement file to match the master dataset */
/* if a target variable field was specified, rename the insheeted target field variable to match the varname in master dataset */
if !mi("`targetfield'") {
cap confirm variable `targetfield'
if _rc {
disp as error "ERROR synonym_fix: input file missing target var `targetfield'"
exit 123
}
ren `targetfield' `varname'
}
/* otherwise, if these names are the same, do a clean check to confirm synfix has the right field */
else {
cap confirm variable `varname'
if _rc {
disp as error "ERROR synonym_fix: input file missing variable `varname'"
exit 123
}
}
/* targetgroup: renaming group variables from replacement file to match master dataset */
/* if a target group was specified, rename the target group names from csv to match master group varnames passed in with group() */
/* if targetgroup is specified it is implicit that group has been specified */
if !mi("`targetgroup'") {
/* loop through each element in group and targetgroup to rename each variable from targetgroup as named in master set */
/* group -> tokens `1' -> `n', while we loop over target group. Need to do this because can't tokenize two strings */
tokenize `group'
local i = 1
foreach targetgroup_var in `targetgroup' {
cap confirm variable `targetgroup_var'
if _rc {
disp as error "ERROR synonym_fix: input file missing targetgroup var `targetgroup_var'"
exit 123
}
ren `targetgroup_var' ``i''
local i = `i' + 1
}
}
/* assert no duplicates on replacement value */
cap bys `varname' `group': assert _N == 1
if _rc {
display as error ""
display as error "ERROR synonym_fix: Synonym file must be unique on value to be replaced (column 2) and optional merge variables."
noi di "Target group was: `group'"
noi duplicates list `varname' `group'
exit 123
}
/* write the dictionary of replacements */
tempfile dict
/* the csv should already be formatted lower and trimmed but perform replace in case it's not */
replace master = trim(lower(master))
foreach item in `group' {
replace `item' = trim(lower(`item'))
}
/* drop empty rows in case stata insheeted blank rows from the csv */
drop if mi(master)
save "`dict'", replace
restore
/* prepare group vars for merge */
if !mi("`group'") {
foreach v of varlist `group' {
tostring `v', replace force
replace `v' = trim(lower(`v'))
}
}
/* merge passed in variable to the synonym column */
merge m:1 `group' `varname' using `dict', keepusing(master)
drop if _merge == 2
}
/* make the actual string replacement - put out of quietly block so user can see what happened */
replace `name' = master if !mi(master)
drop master _merge
}
end
/* *********** END program synonym_fix ***************************************** */
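/* Illustrative usage (a sketch; the csv layout and file path below are hypothetical
   but follow the column order described in the header -- "master" first, then the
   variable being fixed, then the group variables):

     master,pc01_district_name,pc01_state_name
     uttarkashi,uttar kashi,uttarakhand
     uttarkashi,utterkashi,uttarakhand

     synonym_fix pc01_district_name, synfile($tmp/district_synonyms.csv) replace group(pc01_state_name)

   Every variant of the district name in column 2 is replaced with the value in
   "master", matching within pc01_state_name. */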
/********************************************************/
/* program masala_merge2 : Placeholder for masala_merge */
/********************************************************/
cap prog drop masala_merge2
prog def masala_merge2
{
syntax [varlist] using, S1(passthru) OUTfile(passthru) [FUZZINESS(passthru) quietly(passthru) KEEPUSING(passthru) SORTWORDS(passthru)]
masala_merge `varlist' `using', `s1' `outfile' `fuzziness' `quietly' `keepusing' `sortwords'
}
end
/* *********** END program masala_merge2 ***************************************** */
}
*CCR# Female Form Cleaning
**Make sure you are using the Female Database with a date
*Should automatically use the FRQ but if not, use the directory below
*use "$datadir/CCR#_FRQ_$date.dta"
**Examples
*drop duplicates, drop forms that are incorrectly created
*Drop duplicate RE Celine Dion
*Two forms for same person in EA1/structure1/household1, first one is only partially complete, second is complete
*drop if metainstanceID=="uuid:5734bec2-e9a7-4dd5-8c80-5475b13f04bd"
***drops duplicate Female**
*RE Jennifer Lopez
*created forms without linking household for EA1/structure1/household1
*drop if metainstanceID=="somemetainstanceID"
/*
*Christine-Nafula 66 Alice & Leah: the data are not completely the same, but the name is; kept the form from the 3rd visit
drop if metainstanceID=="uuid:7919e3b6-3e59-45d3-91d7-14bb2aaada3b"
drop if metainstanceID=="uuid:b7a9d82e-679c-45bb-8afb-408b2a750532"
replace structure=162 if structure==192 & metainstanceID=="uuid:68437931-e4d2-45f3-8d66-3a412244f19d"
*/
save, replace
*CCR# Household Form Cleaning
**Make sure you are using the Household Database with a date
**Should not need to update the dataset directory but if you do, use the command below
*use "$datadir/CCR#_Combined_$date"
***RE/EA specific cleaning
*IF there are exact duplicates, drop
*If there are households that were entered more than once and hh_duplicate_check is yes for one and no for the other, drop no
*If there are households with the same number, but different people listed, add 1000 to one of the household numbers to indicate that the number
*was changed from the original, but the correct number is unknown
**Examples
*drop if metainstanceID=="uuid:3dd03052-4073-454b-9bee-26179f35047a"
*replace household=household+1000 if metainstanceID=="uuid:f10aad48-7ca4-4da3-9ddf-54718bba9fb4"
*replace FRS_form_name="" if metainstanceID=="uuid:a6a4e656-ecff-4fbd-b239-cf3a9ebf41eb" & FRS_form_name=="FR:new_town-5-2-Alberta-23"
*RE Rachel McAdams entered same person twice in EA#/structure#/household#
*drop if member_number=="uuid:6b13a668-503e-4ec6-813f-497840b65b37/HH_member[7]"
*replace num_HH_members=6 if metainstanceID=="uuid:6b13a668-503e-4ec6-813f-497840b65b37"
**Changing structure numbers**
save, replace
// ========= Drop duplicate Households =========
*=========== Clean Structure and Household number errors ==========================
****PMA 2020 Data Quality Checks****
** Original Version Written in Bootcamp July 21-23, 2014****
set more off
local CCRX $CCRX
******************************
use `CCRX'_Combined_$date.dta, clear
*Check if there are any remaining duplicates
duplicates report member_number
duplicates report FQmetainstanceID
capture drop dupFQmeta
duplicates tag FQmetainstanceID, gen(dupFQmeta)
duplicates drop FQmetainstanceID if FQmetainstanceID!="", force
save, replace
********************************************************************************************************************
******************************All country specific variables need to be encoded here********************
/*Section 1 is questions/variables that are in either the household or female questionnaire in all countries
Section 2 is questions/variables only in one country
***Household and Female
*Household
*Update corrected date of interview if phone had incorrect settings. Update to year/month of data collection
**Assets
**Livestock
**Floor
**Roof
**Walls
*Female
*Update corrected date of interview if phone had incorrect settings. Update to year/month of data collection
**School
**FP Provider
*/
local level1 state
local level2 lga
local level3 locality
*local level4 location
*Household DOI
capture drop doi*
gen doi=system_date
replace doi=manual_date if manual_date!="." & manual_date!=""
split doi, gen(doisplit_)
capture drop wrongdate
gen wrongdate=1 if doisplit_3!="2018"
replace wrongdate=1 if doisplit_1!="Apr" & doisplit_1!="May" & doisplit_1!="Jun" & doisplit_1!=""
*If survey spans across 2 years
/*replace wrongdate=1 if doisplit_3!="2018"
replace wrongdate=1 if doisplit_1!="Jan" & doisplit_1!=""
*/
gen doi_corrected=doi
replace doi_corrected=SubmissionDate if wrongdate==1 & SubmissionDate!=""
drop doisplit*
*Assets
split assets, gen(assets_)
local x=r(nvars)
foreach var in electricity radio tv mobile landline refrigerator cable_tv ///
electric_gen ac computer elec_iron fan watch bicycle motorcycle animalcart ///
car canoe boatmotor {
gen `var'=0 if assets!="" & assets!="-99"
forval y=1/`x' {
replace `var'=1 if assets_`y'=="`var'"
}
}
drop assets_*
*Livestock
foreach x in cows_bulls horses goats sheep chickens pigs {
capture rename owned_`x'* `x'_owned
capture label var `x'_owned "Total number of `x' owned"
destring `x'_owned, replace
}
*Roof/Wall/Floor
**Numeric codes come from country specific DHS questionnaire
label define floor_list 11 earth 12 dung 21 planks 22 palm_bamboo 31 parquet 32 vinyl_asphalt 33 ceramic_tiles 34 cement ///
35 carpet 96 other -99 "-99"
encode floor, gen(floorv2) lab(floor_list)
label define roof_list 11 no_roof 12 thatched 21 rustic_mat 22 palm_bamboo 23 wood_planks 24 cardboard ///
31 metal 32 wood 34 ceramic_tiles 35 cement 36 shingles 37 asbestos 96 other -99 "-99"
encode roof, gen(roofv2) lab(roof_list)
label define walls_list 11 no_walls 12 cane_palm 13 dirt 21 bamboo_mud 22 stone_mud 24 plywood 25 cardboard ///
26 reused_wood 31 cement 32 stone_lime 33 bricks 34 cement_blocks 36 wood_planks_shingles 96 other -99 "-99"
encode walls, gen(wallsv2) lab(walls_list)
*Language
capture label define language_list 1 english 2 hausa 3 igbo 4 yoruba 5 pidgin 96 other
encode survey_language, gen(survey_languagev2) lab(language_list)
label var survey_languagev2 "Language of household interview"
****************************************************************
*************************** Female ************************
**Country specific female questionnaire changes
*Year and month of data collection.
gen FQwrongdate=1 if thisyear!=2018 & thisyear!=.
replace FQwrongdate=1 if thismonth!=4 & thismonth!=5 & thismonth!=6 & thismonth!=.
*If survey spans across 2 years
/*replace FQwrongdate=1 if thisyear!=2018 & thisyear!=.
replace FQwrongdate=1 if thismonth!=1 & thismonth!=.
*/
gen FQdoi=FQsystem_date
replace FQdoi = FQmanual_date if FQmanual_date!="." & FQmanual_date!=""
gen FQdoi_corrected=FQdoi
replace FQdoi_corrected=FQSubmissionDate if FQwrongdate==1 & FQSubmissionDate!=""
*Education Categories
label def school_list 0 never 1 primary 2 secondary 3 higher -99 "-99"
encode school, gen(schoolv2) lab(school_list)
*Methods
**The only code that needs updating is 5: in countries with only one injectables option it should be injectables instead of injectables_3mo
label define methods_list 1 female_sterilization 2 male_sterilization 3 implants 4 IUD 5 injectables ///
6 injectables_1mo 7 pill 8 emergency 9 male_condoms 10 female_condoms 11 diaphragm ///
12 foam 13 beads 14 LAM 15 N_tablet 16 injectables_sc 30 rhythm 31 withdrawal ///
39 other_traditional -99 "-99"
encode first_method, gen(first_methodnum) lab(methods_list)
order first_methodnum, after(first_method)
encode current_recent_method, gen(current_recent_methodnum) lab(methods_list)
order current_recent_methodnum, after(current_recent_method)
encode recent_method, gen(recent_methodnum) lab(methods_list)
order recent_methodnum, after(recent_method)
encode pp_method, gen(pp_methodnum) lab(methods_list)
order pp_methodnum, after(pp_method)
capture encode penultimate_method, gen(penultimate_methodnum) lab(methods_list)
*Drop variables not included in country
*In variable list on the foreach line, include any variables NOT asked about in country
foreach var of varlist injectables3 injectables1 N_tablet {
sum `var'
if r(min)==0 & r(max)==0 {
drop `var'
}
}
capture confirm var sayana_press
if _rc==0 {
replace sayana_press=1 if regexm(current_method, "sayana_press") & FRS_result==1
}
*Source of contraceptive supplies
label define providers_list 11 govt_hosp 12 govt_health_center 13 FP_clinic 14 mobile_clinic_public 15 fieldworker_public ///
21 private_hospital 22 pharmacy 23 chemist 24 private_doctor 25 mobile_clinic_private 26 fieldworker_private ///
31 shop 32 church 33 friend_relative 34 NGO 35 market ///
96 other -88 "-88" -99 "-99"
capture encode fp_provider_rw, gen(fp_provider_rwv2) lab(providers_list)
*FQ Language
capture label define language_list 1 english 2 hausa 3 igbo 4 yoruba 5 pidgin 96 other
capture encode FQsurvey_language, gen(FQsurvey_languagev2) lab(language_list)
capture label var FQsurvey_languagev2 "Language of Female interview"
***************************************************************************************************
***SECTION 2: COUNTRY SPECIFIC QUESTIONS
capture confirm var religion
if _rc==0 {
label define religion_list 1 catholic 2 other_christian 3 islam 4 traditionalist 96 other -77 "-77" -99 "-99"
encode religion, gen(religionv2) lab(religion_list)
sort metainstanceID religionv2
bysort metainstanceID: replace religionv2 =religionv2[_n-1] if religionv2==.
label var religionv2 "Religion of household head"
}
capture confirm var ethnicity
if _rc==0 {
label define ethnicity_list 1 afo_gwandara 2 alago 3 eggon 4 fufulde 5 hausa 6 igbo 7 izon_ijaw 8 katab_tyap ///
9 mada 10 mambila 11 mumuye 12 ogoni 13 rundawa 14 wurkum 15 yoruba 96 other -99 "-99"
encode ethnicity, gen(ethnicityv2) lab(ethnicity_list)
sort metainstanceID ethnicityv2
bysort metainstanceID: replace ethnicityv2=ethnicityv2[_n-1] if ethnicityv2==.
label var ethnicityv2 "Ethnicity of household head"
}
//REVISION: BL 01Nov2017 follow-up consent
capture confirm var flw_*
if _rc==0 {
label var flw_willing "Willing to participate in another survey"
encode flw_willing, gen(flw_willingv2) lab(yes_no_dnk_nr_list)
label var flw_number_yn "Owns a phone"
encode flw_number_yn, gen(flw_number_ynv2) lab(yes_no_dnk_nr_list)
label var flw_number_typed "Phone number"
}
unab vars: *v2
local stubs: subinstr local vars "v2" "", all
foreach var in `stubs' {
rename `var' `var'QZ
order `var'v2, after(`var'QZ)
}
rename *v2 *
drop *QZ
//Kenya R6
capture label var hh_location_ladder "Location of house on wealth ladder: 1 = poorest, 10 = wealthiest"
***************************************************************************************************
********************************* COUNTRY SPECIFIC WEIGHT GENERATION *********************************
***************************************************************************************************
**Import sampling fraction probabilities and urban/rural
**NEED TO UPDATE PER COUNTRY
/*
merge m:1 EA using "C:/Users/Shulin/Dropbox (Gates Institute)/PMADataManagement_Uganda/Round5/WeightGeneration/UGR5_EASelectionProbabilities_20170717_lz.dta", gen(weightmerge)
drop region subcounty district
tab weightmerge
**Need to double check the weight merge accuracy
capture drop if weightmerge!=3
label define urbanrural 1 "URBAN" 2 "RURAL"
label val URCODE urbanrural
rename URCODE ur
capture rename EASelectionProbabiltiy EASelectionProbability
gen HHProbabilityofselection=EASelectionProbability * ($EAtake/HHTotalListed)
replace HHProbabilityofselection=EASelectionProbability if HHTotalListed<$EAtake
generate completedhh=1 if (HHQ_result==1) & metatag==1
*Denominator is any household that was found (NOT dwelling destroyed, vacant, entire household absent, or not found)
generate hhden=1 if HHQ_result<6 & metatag==1
*Count completed and total households in EA
bysort ur: egen HHnumtotal=total(completedhh)
bysort ur: egen HHdentotal=total(hhden)
*HHweight is 1/HHprobability * the non-response adjustment
gen HHweight=(1/HHProbability)*(1/(HHnumtotal/HHdentotal)) if HHQ_result==1
**Generate Female weight based off of Household Weight
**total eligible women in the EA
gen eligible1=1 if eligible==1 & (last_night==1)
bysort ur: egen Wtotal=total(eligible1)
**Count FQforms up and replace denominator of eligible women with forms uploaded
*if there are more female forms than estimated eligible women
gen FQup=1 if FQmetainstanceID!=""
gen FQup1=1 if FQup==1 & (last_night==1)
bysort ur: egen totalFQup=total(FQup1)
drop FQup1
replace Wtotal=totalFQup if totalFQup>Wtotal & Wtotal!=. & totalFQup!=.
**Count the number of completed or partly completed forms (numerator)
gen completedw=1 if (FRS_result==1) & (last_night==1) //completed forms only (FRS_result==1)
bysort ur: egen Wcompleted=total(completedw)
*Gen FQweight as HHweight * missing weight
gen FQweight=HHweight*(1/(Wcompleted/Wtotal)) if eligible1==1 & FRS_result==1 & last_night==1
gen HHweightorig=HHweight
gen FQweightorig=FQweight
**Normalize the HHweight by dividing the HHweight by the mean HHweight (at the household level, not the member level)
preserve
keep if metatag==1
su HHweight
replace HHweight=HHweight/r(mean)
sum HHweight
tempfile temp
keep metainstanceID HHweight
save `temp', replace
restore
drop HHweight
merge m:1 metainstanceID using `temp', nogen
**Normalize the FQweight
sum FQweight
replace FQweight=FQweight/r(mean)
sum FQweight
drop weightmerge HHProbabilityofselection completedhh-HHdentotal eligible1-Wcompleted
rename REGIONCODEUR strata
*/
***************************************************************************************************
********************************* GENERIC DONT NEED TO UPDATE *********************************
********************************************************************************************************************
*1. Drop unnecessary variables
rename consent_obtained HQconsent_obtained
drop consent* FQconsent FQconsent_start *warning* ///
respondent_in_roster roster_complete ///
deviceid simserial phonenumber *transfer *label* ///
witness_manual *prompt* *check* *warn* FQKEY ///
unlinked* error_*heads metalogging eligibility_screen* ///
more_hh_members* *GeoID* dupFRSform deleteTest dupFQ FQresp error *note* ///
HHmemberdup waitchild
capture drop why_not_using_c
capture drop last_time_sex_lab menstrual_period_lab *unlinked close_exit
capture drop begin_using_lab
capture drop anychildren
capture drop yeschildren
capture drop childmerge
capture drop dupFQmeta
capture drop *Section*
rename HQconsent_obtained consent_obtained
capture drop if EA=="9999"
capture drop if EA==9999
sort metainstanceID member_number
/***************** RECODE CURRENT METHOD **********************************
1. Recent EC users recoded to current users
2. LAM Users who are not using LAM recoded
3. Female sterilization users who do not report using sterilization are recoded
4. SP users recoded to SP
********************************************************************/
**Recode recent EC users to current users
gen current_methodnum=current_recent_methodnum if current_user==1
label val current_methodnum methods_list
gen current_methodnumEC=current_recent_methodnum if current_user==1
replace current_methodnumEC=8 if current_recent_methodnum==8 & current_user!=1
label val current_methodnumEC methods_list
gen current_userEC=current_user
replace current_userEC=. if current_methodnumEC==-99
replace current_userEC=1 if current_recent_methodnum==8 & current_user!=1
gen recent_userEC=recent_user
replace recent_userEC=. if current_recent_methodnum==8
gen recent_methodEC=recent_method
replace recent_methodEC="" if recent_method=="emergency"
gen recent_methodnumEC=recent_methodnum
replace recent_methodnumEC=. if recent_methodnum==8
label val recent_methodnumEC methods_list
gen fp_ever_usedEC=fp_ever_used
replace fp_ever_usedEC=1 if current_recent_methodnum==8 & fp_ever_used!=1
gen stop_usingEC=stop_using
gen stop_usingSIFEC=stop_usingSIF
replace stop_using_why_cc=subinstr(stop_using_why_cc, "difficult_to_conceive", "diff_conceive", .)
replace stop_using_why_cc=subinstr(stop_using_why_cc, "interferes_with_body", "interf_w_body", .)
foreach reason in infrequent pregnant wanted_pregnant husband more_effective no_method_available health_concerns ///
side_effects no_access cost inconvenient fatalistic diff_conceive interf_w_body other {
gen stop_usingEC_`reason'=stop_using_`reason'
replace stop_usingEC_`reason'=. if current_recent_methodnum==8
}
replace stop_usingEC="" if current_recent_methodnum==8
replace stop_usingSIFEC=. if current_recent_methodnum==8
gen future_user_not_currentEC=future_user_not_current
replace future_user_not_currentEC=. if current_recent_methodnum==8
gen future_user_pregnantEC=future_user_pregnant
replace future_user_pregnantEC=. if current_recent_methodnum==8
gen ECrecode=0
replace ECrecode=1 if (regexm(current_recent_method, "emergency"))
*******************************************************************************
* RECODE LAM
*******************************************************************************
tab LAM
* CRITERIA 1. Birth in last six months
* Calculate time between last birth and date of interview
* FQdoi_corrected is the corrected date of interview
gen double FQdoi_correctedSIF=clock(FQdoi_corrected, "MDYhms")
format FQdoi_correctedSIF %tc
* Number of months since birth=number of hours between date of interview and date
* of most recent birth divided by number of hours in the month
gen tsincebh=hours(FQdoi_correctedSIF-recent_birthSIF)/730.484
gen tsinceb6=tsincebh<6
replace tsinceb6=. if tsincebh==.
* If tsinceb6=1 then had birth in last six months
* CRITERIA 2. Currently amenorrheic
gen ammen=0
* Amenorrheic if last period was before last birth
replace ammen=1 if menstrual_period==6
* Amenorrheic if months since last period is greater than months since last birth
g tsincep = menstrual_period_value if menstrual_period==3 // months
replace tsincep = int(menstrual_period_value/30) if menstrual_period==1 // days
replace tsincep = int(menstrual_period_value/4.3) if menstrual_period==2 // weeks
replace tsincep = menstrual_period_value*12 if menstrual_period==4 // years
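* Example of the unit conversion above (illustrative values only): a report of
* "8 weeks" gives tsincep = int(8/4.3) = 1 month; "45 days" gives
* tsincep = int(45/30) = 1 month; "2 years" gives tsincep = 2*12 = 24 months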
replace ammen=1 if tsincep>tsincebh & tsincep!=.
* Only women who are both amenorrheic and had a birth in the last six months can be LAM
gen lamonly=1 if current_method=="LAM"
replace lamonly=0 if current_methodnumEC==14 & (regexm(current_method, "rhythm") | regexm(current_method, "withdrawal") | regexm(current_method, "other_traditional"))
gen LAM2=1 if current_methodnumEC==14 & ammen==1 & tsinceb6==1
tab current_methodnumEC LAM2, miss
replace LAM2=0 if current_methodnumEC==14 & LAM2!=1
* Replace women who do not meet criteria as traditional method users
capture rename lam_probe_current lam_probe
capture confirm variable lam_probe
if _rc==0 {
capture noisily encode lam_probe, gen(lam_probev2) lab(yes_no_dnk_nr_list)
drop lam_probe
rename lam_probev2 lam_probe
replace current_methodnumEC=14 if LAM2==1 & lam_probe==1
replace current_methodnumEC=30 if lam_probe==0 & lamonly==0 & regexm(current_method, "rhythm")
replace current_methodnumEC=31 if current_methodnumEC==14 & lam_probe==0 & lamonly==0 & regexm(current_method, "withdrawal") & !regexm(current_method, "rhythm")
replace current_methodnumEC=39 if current_methodnumEC==14 & lam_probe==0 & lamonly==0 & regexm(current_method, "other_traditional") & !regexm(current_method, "withdrawal") & !regexm(current_method, "rhythm")
replace current_methodnumEC=39 if lam_probe==1 & current_methodnumEC==14 & LAM2==0
replace current_methodnumEC=. if current_methodnumEC==14 & lam_probe==0 & lamonly==1
replace current_userEC=0 if current_methodnumEC==. | current_methodnumEC==-99
}
else {
replace current_methodnumEC=39 if LAM2==0
}
drop tsince* ammen
*******************************************************************************
* RECODE First Method Female Sterilization
*******************************************************************************
replace current_methodnumEC=1 if first_methodnum==1
capture replace current_methodnumEC=1 if sterilization_probe==1
*******************************************************************************
* RECODE Injectables_SC
*injectable
*******************************************************************************
capture replace current_methodnumEC=16 if (injectable_probe_current==2 | injectable_probe_current==3) ///
& regexm(current_recent_method,"injectable")
capture replace recent_methodnumEC=16 if (injectable_probe_recent==2 | injectable_probe_recent==3)
gen first_methodnumEC=first_methodnum
capture replace first_methodnumEC=16 if injectable_probe_first==2
capture replace pp_methodnum=16 if (injectable_probe_pp==2 | injectable_probe_pp==3)
*******************************************************************************
* Define CP, MCP, TCP and longacting
*******************************************************************************
gen cp=0 if HHQ_result==1 & FRS_result==1 & (last_night==1)
replace cp=1 if HHQ_result==1 & current_methodnumEC>=1 & current_methodnumEC<=39 & FRS_result==1 & (last_night==1)
label var cp "Current use of any contraceptive method"
gen mcp=0 if HHQ_result==1 & FRS_result==1 & (last_night==1)
replace mcp=1 if HHQ_result==1 & current_methodnumEC>=1 & current_methodnumEC<=19 & FRS_result==1 & (last_night==1)
label var mcp "Current use of any modern contraceptive method"
gen tcp=0 if HHQ_result==1 & FRS_result==1 & (last_night==1)
replace tcp=1 if HHQ_result==1 & current_methodnumEC>=30 & current_methodnumEC<=39 & FRS_result==1 & (last_night==1)
label var tcp "Current user of any traditional contraceptive method"
gen longacting=current_methodnumEC>=1 & current_methodnumEC<=4 & mcp==1
label variable longacting "Current use of long acting contraceptive method"
label val cp mcp tcp longacting yes_no_dnk_nr_list
sort metainstanceID member_number
gen respondent=1 if firstname!="" & (HHQ_result==1 | HHQ_result==5)
replace respondent=0 if (HHQ_result==1 | HHQ_result==5) & respondent!=1
bysort metainstanceID: egen totalresp=total(respondent)
replace respondent=0 if totalresp>1 & totalresp!=. & relationship!=1 & relationship!=2
recast str244 names, force
saveold `CCRX'_Combined_ECRecode_$date.dta, replace version(12)
****************** KEEP GPS ONLY *******************
********************************************************************
preserve
keep if FQmetainstanceID!=""
keep FQLatitude FQLongitude FQAltitude FQAccuracy RE FQmetainstanceID $GeoID household structure EA
export excel using "`CCRX'_FQGPS_$date.csv", firstrow(var) replace
restore
preserve
keep if metatag==1
keep locationLatitude locationLongitude locationAltitude locationAccuracy RE metainstanceID $GeoID household structure EA
rename location* HQ*
export excel using "`CCRX'_HHQGPS_$date.csv", firstrow(var) replace
restore
****************** REMOVE IDENTIFYING INFORMATION *******************
*******************************************************************
capture rename facility_name* facility_nm*
drop *name* *Name*
drop *Latitude *Longitude *Altitude *Accuracy location*
capture drop *GPS*
capture rename facility_nm* facility_name*
capture drop flw_number_type
saveold `CCRX'_NONAME_ECRecode_$date.dta, replace version(12)
****PMA 2020 Data Quality Checks****
***Version Jan 17 2017****
**VERSION HISTORY AT END OF DO FILE
**First do file in series
/*Code is based on Stata 12, but equivalent commands for Stata 11 and below are included and commented out with *.
To use Stata 11 and below, comment out the Stata 12 commands and remove the * before the Stata 11 commands
This do file is designed to clean and check data. Information from Briefcase will need to be downloaded and exported as csv.
The do file will then (by country):
Step 1
a. Appends all versions of the Household Questionnaire into one file and destrings, codes, and labels variables as appropriate
b. Appends all versions of the Household Roster into one file and destrings, codes, and labels variables as appropriate
c. Appends all versions of the Female Questionnaire into one file and destrings, codes, and labels variables as appropriate
*All duplicates are tagged and dropped if they are complete duplicates
Step 2
a. Merge the Household Questionnaire, the Household Roster, and the Female Questionnaire into one file
*Also identifies any female questionnaires that exist but do not merge with a household form and all
*female questionnaires that are identified by an HHRoster but that do not have a FQ
It then runs the following checks by RE and EA (in some cases, REs may conduct interviews in EAs other
than their own; this catches any potential outside surveys):
1. Total number of HHQs that are uploaded
2. Total number of HHQs that are marked as complete
3. Total number of HHQs that are marked as refused
4. Total number of HHQs that are marked as No One Home
5. Total number of eligible women identified
6. Total number of FQ forms that are uploaded to the server
7. Total number of FQ forms that were completed
8. Total number of FQ forms that were refused
9. Total number of FQ forms that were not at home
Additional information includes the minimum time to complete surveys and the number of HHQ and FQ
forms that are missing GPS coordinates or whose GPS accuracy is worse than 6 m
**********************************************************************************
*/
clear matrix
clear
set more off
set maxvar 30000
*******************************************************************************
* SET MACROS: UPDATE THIS SECTION FOR EACH COUNTRY/ROUND
*******************************************************************************
*BEFORE USE THE FOLLOWING NEED TO BE UPDATED:
*Country/Round/Abbreviations
global Country CD
global Round Round7
global round 7
global country CD
global CCRX CDR7
*Locals (Don't need to update)
local Country "$Country"
local Round "$Round"
local CCRX "$CCRX"
*Year of the Survey
local SurveyYear 2018
local SYShort 18
******CSV FILE NAMES ****
*HHQ CSV File name
global HHQcsv CDR7_Household_Questionnaire_v6
*FQ CSV File name
global FQcsv CDR7_Female_Questionnaire_v6
***If the REs used a second version of the form, update these
*If they did not use a second version, DONT UPDATE
*global HHQcsv2 GHR5_Household_Questionnaire_v6
*global FQcsv2 GHR5_Female_Questionnaire_v6
**************
**************
*******DO FILE NAMES******
*HHQ_DataChecking File Name
local HHQdofile CCRX_HHQ_Datachecking_v19.0_25Sep2018_AR
*FRQ_DataChecking File Name
local FRQdofile CCRX_FRQ_DataChecking_v27.0_15Jun2018_AR
*HHQmember_DataChecking File Name
local HHQmemberdofile CCRX_HHQmember_DataChecking_v7.0_30Oct2017_BL
*CleaningByRE_Female date and initials
local CleanFemaledate 05Oct2015
*GPS check Spatial Data (requires to generate cleaned Listing.dta beforehand)
local hhq_monit CCRX_HHQ_GeoMonitoring
*CleaningByRE_HHQ date and initials
local CleanHHQdate 05Oct2015
*Country Specific Clean Weight and initials
local CountrySpecificdate v30.1_28Sep2018
*Country/Round specific module
local module1do CCRX_CCP_v1_14Nov2017_SJ
/*local module2do CCRX_InjectablesSC_v8_28Mar2018_BL
local module3do CCRX_Abortion_v06_13Jul2018_sob
local module4do CCRX-AbtModuleDataChecking-v06-13Jul2018-sob
*/
//REVISION: SJ 17AUG2017 add Analytics to parent file
*local analytics RJR3_HHQFQ_Analytics_Dofile_v7_10Aug2017_NS
************
**** GEOGRAPHIC IDENTIFIERS ****
global GeoID "level1 level2 level3 level4 EA"
*Geographic Identifier lower than EA to household
global GeoID_SH "structure household"
*rename level1 variable to the geographic highest level, level2 second level
*done in the final data cleaning before dropping other geographic identifiers
global level1name level1
global level2name level2
global level3name level3
global level4name level4
*Number of households selected per EA
global EAtake=35
**************
**** DIRECTORIES****
**Global directory for the dropbox where the csv files are originally stored
global csvdir "/Users/ealarson/Dropbox (Gates Institute)/7 DRC/PMADataManagement_DRC/Round7/Data/CSV_Files"
**Create a global data directory -
global datadir "/Users/ealarson/Documents/DRC/Data_NotShared/Round7/HHQFQ"
**Create a global do file directory
global dofiledir "/Users/ealarson/Dropbox (Gates Institute)/7 DRC/PMADataManagement_DRC/Round7/Cleaning_DoFiles/Current"
*******************************************************************************************
******* Stop Updating Macros Here *******
*******************************************************************************************
/*Define locals for dates. The current date will automatically update to the day you are running the do
file and save a version with that day's date*/
local today=c(current_date)
local c_today= "`today'"
global date=subinstr("`c_today'", " ", "",.)
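* Example: if c(current_date) returns "16 Aug 2018", the line above strips the
* spaces so $date becomes "16Aug2018" and files save with names such as
* CDR7_Combined_16Aug2018.dta (date and country code here are illustrative only)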
cd "$datadir"
**The following commands should be run after the first time you run the data. They
*archive all old versions of the datasets so that data is not deleted, and if it somehow is,
*we will have backups of all old datasets. The shell command accesses the terminal in the background
*(outside of Stata). The zipfile command works on both Mac and Windows, but shell only works on Mac;
*it is not needed on Windows, where the equivalent commands are different.
*The following commands zip the old datasets and then remove them so that only the newest version is available.
*Make sure you are in the directory where you will save the data.
*Zip all of the old versions of the datasets and the excel spreadsheets.
*This replaces the archive rather than adding to it, so a new archive is created each date you run the file
capture zipfile `CCRX'*, saving (Archived_Data/ArchivedHHQFQData_$date.zip, replace)
capture shell erase `CCRX'*
**Start log
capture log close
log using `CCRX'_DataCleaningQuality_$date.log, replace
*******************************************************************************************
******* Start the cleaning *******
*******************************************************************************************
*******************************************************************************************
*Step 1. Running the following do-file command imports all versions of the forms,
*tags duplicates, renames variables, and changes the format of some of the variables
**Dataset is named `CCRX'_HHQ_$date.dta
run "$dofiledir/`HHQdofile'.do"
duplicates drop metainstanceID, force
save, replace
**This is not fully cleaned. It is just destrung and encoded with variable labels
************************************************************************************
*******************************************************************************************
* Step 2 Household Roster Information - Repeats the same steps for the Household Roster
** Generates data file `CCRX'_HHQmember_$date.dta
run "$dofiledir/`HHQmemberdofile'.do"
**This is not fully cleaned. It is just destrung and encoded with variable labels
************************************************************************************
**Merges the household and the household roster together
use `CCRX'_HHQ_$date.dta
merge 1:m metainstanceID using `CCRX'_HHQmember_$date, gen (HHmemb)
save `CCRX'_HHQCombined, replace
save `CCRX'_Combined_$date, replace
************************************************************************************************************************
******************************HOUSEHOLD FORM CLEANING SECTION*********************************************
*********************************************************************************************************
******After you initially combine the household and household member data, you will need to correct duplicate submissions.
* You will correct those errors here and in the section below so that the next time you run the files, the dataset will
* be cleaned and only errors that remain unresolved are generated.
**Write your corrections into a do file named "/Whatever/your/path/name/is/CCR#_CleaningByRE_HHQ_DateYouWriteFile.do"
run "$dofiledir/`CCRX'_CleaningByRE_HHQ_`CleanHHQdate'.do"
capture drop dupHHtag
egen GeoID=concat($GeoID), punc(-)
egen GeoID_SH=concat($GeoID structure household), punc(-)
save, replace
*******************************************************************************************
* Step 3 Female Questionnaire Information - Repeats the same steps for the Female Questionnaire
** Generates data file `CCRX'_FQ_$date.dta
************************************************************************************
run "$dofiledir/`FRQdofile'.do"
egen FQGeoID=concat($GeoID), punc(-)
egen FQGeoID_SH=concat($GeoID structure household), punc(-)
*This exports a list of duplicated female forms. Use it to track whether any REs seem to be having trouble uploading forms.
*You don't need to make changes based on this list other than dropping exact duplicates and making sure REs are being
*patient and not hitting send multiple times
preserve
keep if dupFQ!=0
sort metainstanceName
capture noisily export excel metainstanceID RE FQGeoID_SH firstname FQ_age metainstanceName using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DuplicateFemale) replace
if _rc!=198{
restore
}
else{
set obs 1
gen x="NO DUPLICATE FEMALE FORMS"
export excel x using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DuplicateFemale) replace
restore
}
duplicates drop metainstanceID, force
save, replace
************************************************************************************************************************
******************************FEMALE FORM CLEANING SECTION*********************************************
*********************************************************************************************************
******After running the dataset each time, the excel file will generate a list of errors. You will correct those errors
*here and in the section below so that the next time you run the files, the dataset will be cleaned and only errors that remain
*unfinished are generated. *This is where you will clean the female forms for duplicates
*If you find multiple female forms submitted for the same person, or if the names do not exactly match,
*you will correct those errors here.
**Write your corrections into a do file named "/Whatever/your/path/name/is/`CCRX'_CleaningByRE_FEMALE_DateYouWriteFile.do
run "$dofiledir/`CCRX'_CleaningByRE_FEMALE_`CleanFemaledate'.do"
******************************************************************************************
************************************************************************************
************************************************************************************
*Step Four: Merge the datasets together and save a copy that is NOT cleaned of unneccessary variables
clear
use `CCRX'_FRQ_$date.dta
foreach var of varlist SubmissionDate times_visited system_date manual_date ///
start end today acquainted-firstname marital_status Latitude-Accuracy {
rename `var' FQ`var'
}
duplicates list metainstanceName RE
duplicates drop metainstanceID, force
duplicates report RE metainstanceName
duplicates tag RE metainstanceName, gen(dupFRSform)
sort RE metainstanceName
*rename province-household FQprovince-FQhousehold so that missing values from unlinked forms don't merge over
foreach var in $GeoID structure household {
rename `var' FQ`var'
}
rename FQEA EA
capture replace EA=unlinkedEA if unlinked=="1"
preserve
keep if dupFRSform!=0
capture noisily export excel metainstanceID RE FQGeoID FQstructure FQhousehold ///
metainstanceName FQfirstname FQ_age FQSubmissionDate FRS_result FQstart FQend unlinkedEA using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FRS_Duplicate_in_Female) sheetreplace
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO DUPLICATE FEMALE FORM NAME IN FRQ Dataset"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FRS_Duplicate_in_Female) sheetreplace
restore
}
capture drop duplink
duplicates tag link, gen(duplink)
preserve
keep if duplink!=0
capture noisily export excel metainstanceID RE FQGeoID FQstructure FQhousehold metainstanceName ///
link FQfirstname FQ_age FQSubmissionDate FRS_result FQstart FQend unlinkedEA using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(Duplicate_Link_in_FRQ) sheetreplace
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO DUPLICATE LINK ID IN FRQ DATASET"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(Duplicate_Link_in_FRQ) sheetreplace
restore
}
rename metainstanceName FRS_form_name
rename metainstanceID FQmetainstanceID
rename available FQavailable
capture rename *int FQ*int
rename KEY FQKEY
**This lists remaining duplicate female form names that have not already been cleaned. You cannot merge with duplicate female forms
*Must update the CleaningByRE_FEMALE do file above or drop duplicates
*To merge, must drop all remaining by duplicates
*BUT BEFORE FINAL CLEANING YOU MUST IDENTIFY WHICH OF THE FEMALE FORMS IS THE CORRECT ONE!!!!
gen linktag=1 if link=="" & unlinked=="1"
gen linkn=_n if linktag==1
tostring linkn, replace
replace link=linkn if linktag==1
duplicates drop link, force
save, replace
******************* Merge in Female Questionnaire ********************************
use `CCRX'_Combined_$date
**Above code drops duplicate FRS_form_name from FRQ but also need to make sure that there are no duplicates
*in the household
*Identify any duplicate FRS forms in the household. Make sure the households are also not duplicated
* and drop any remaining duplicated female and household forms before merging
*Write the instances to drop in the CleaningByRE files
*IF there are two women in the household with the same name and age, they will have the same FRS_form_name
*Rename one of the women FRS_form_nameB in the female, find the same woman in the household and rename
duplicates tag FRS_form_name if FRS_form_name!="", gen(dupFRSform)
tab dupFRSform
preserve
keep if dupFRSform!=0 & dupFRSform!=.
sort FRS_form_name
capture noisily export excel metainstanceID member_number RE GeoID_SH names FRS_form_name using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FRS_Duplicate_in_Household) sheetreplace
if _rc!=198{
restore
}
else {
clear
set obs 1
gen x="NO DUPLICATE FRS_form_name IN HOUSEHOLD"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FRS_Duplicate_in_Household) sheetreplace
restore
}
save, replace
preserve
keep if eligible==1
rename link_transfer link
merge m:1 link using `CCRX'_FRQ_$date.dta, gen(FQresp)
tempfile FRStemp
save `FRStemp', replace
restore
drop if eligible==1
append using `FRStemp'
sort metainstanceID member_number
egen metatag=tag(metainstanceID)
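* Note: tag() flags the first row (in sort order) of each metainstanceID with 1
* and all other rows with 0; e.g. a household form with three roster members is
* tagged 1 0 0, so keeping or totaling metatag==1 counts each form exactly once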
replace link="" if linktag==1
drop linktag
save, replace
**********************************************************************************
**********************************************************************************
**********************************************************************************
*******************Clean and Check Merged Data********************
**********************************************************************************
**Now you will clean the household file of duplicate households or misnumbered houses. Save these changes in this do file
**Use cleaning file to drop problems that have been cleaned already (update this file as problems are resolved)
capture drop dupHHtag
**Complete duplicates have already been exported out. Those that have been resolved already will be cleaned using the
*previous do file. If the observations have not been cleaned yet, the data will be exported out below
*This information exports out variables that have duplicate structures and households from forms submitted multiple times
**Establish which form is correct (check based on visit number, submission date, start date and end date and work with
*supervisor and RE to identify which form is correct and which should be deleted
preserve
keep if metatag!=0
duplicates tag GeoID_SH, gen(dupHHtag)
keep if dupHHtag!=0
sort GeoID_SH RE hh_duplicate_check
capture noisily export excel metainstanceID RE GeoID_SH names times_visited hh_duplicate_check resubmit_reasons HHQ_result system_date end SubmissionDate using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DuplicateEAStructureHH) sheetreplace
if _rc!=198{
restore
}
else {
clear
set obs 1
gen x="NO DUPLICATE GeoID"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DuplicateEAStructureHH) sheetreplace
restore
}
save, replace
**This line of code identifies households and structures that have the same number but in which there is more than one group of people
*Identify if the people are actually one household and the RE created more than one form OR if they are two different households
*and the RE accidentally labeled them with the same number
*Export out one observation per household/name combination for each household that has more than one group of people
preserve
keep if metatag==1
egen HHtag=tag(RE EA GeoID_SH names)
*Checking to see if there are duplicate households and structure that do NOT have the same people listed
*Tags each unique RE EA structure household and name combination
*Totals the number of groups in a household (should only ever be 1)
bysort RE EA GeoID_SH: egen totalHHgroups=total(HHtag)
keep if HHtag==1 & totalHHgroups>1 & metatag==1
capture noisily export excel metainstanceID RE GeoID_SH names hh_duplicate_check using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DuplicateHH_DifferentNames) sheetreplace
if _rc!=198 {
restore
}
else {
clear
set obs 1
gen x="NO DUPLICATE HOUSEHOLDS WITH DIFFERENT NAMES"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DuplicateHH_DifferentNames) sheetreplace
restore
}
/*IF THERE ARE ANY FEMALE FORMS THAT DO NOT MERGE or eligible females that do not have
a form merged to them, these will be flagged and exported for followup */
gen error=1 if FQresp==2
replace error=1 if FQresp==1 & eligible==1
save, replace
preserve
keep if error==1
gsort FQresp -unlinked RE
capture noisily export excel RE metainstanceID GeoID_SH link FRS_form_name firstname ///
FQmetainstanceID FQfirstname unlinked SubmissionDate FQSubmissionDate using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FRQmergeerror) sheetreplace
*Stata 12 or above use export excel
if _rc!=198{
restore
}
else{
clear
set obs 1
gen x="NO FEMALE QUESTIONNAIRE MERGE ERRORS"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FRQmergeerror) sheetreplace
restore
}
*Stata 11 or below use outsheet
*capture outsheet RE FRS_form_name FQmetainstanceID-FQfirstname using `CCRX'_HHQFQErrors_FRQmerge_$date.csv, comma replace
/* This line of code will identify if there are duplicate observations in the household. Sometimes the entire
roster duplicates itself. This will check for duplicate name, age, and relationships in the household*/
duplicates tag metainstanceID firstname age relationship if metainstanceID!="", gen(HHmemberdup)
preserve
drop if FQresp==2
keep if HHmemberdup!=0
sort RE
capture noisily export excel RE metainstanceID member_number GeoID_SH firstname age relationship using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DupHHmember) sheetreplace
*Stata 12 or above use export excel
if _rc!=198{
restore
}
else{
clear
set obs 1
gen x="No duplicated records of household members"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DupHHmember) sheetreplace
restore
}
save, replace
clear
**********************************************************************************
**********************************************************************************
**********************************************************************************
*Step Three: Run basic checks on the Household Questionnaire
use `CCRX'_Combined_$date.dta, replace
gen totalint=minutes(endSIF-startSIF)
gen FQtotalint=minutes(FQendSIF-FQstartSIF)
save, replace
capture drop HHtag
**Check: Number of files uploaded by RE
/*Count the total number of surveys, the total number of surveys by version uploaded, and the total number
of completions and refusals by RE and EA (since in some cases, REs may go to more than one EA). Also
calculate the mean number of hhmembers per household*/
preserve
keep if metatag==1
forval x = 1/9 {
gen HHQ_result_`x'=1 if HHQ_result==`x'
}
collapse (count) HHQ_result_* HHQ_result, by (RE $GeoID)
rename HHQ_result HQtotalup
rename HHQ_result_1 HQcomplete
rename HHQ_result_2 HQnothome
rename HHQ_result_4 HQrefusal
rename HHQ_result_8 HQnotfound
gen HQresultother=HHQ_result_5 + HHQ_result_6 + HHQ_result_3 + HHQ_result_7 + HHQ_result_9
save `CCRX'_ProgressReport_$date, replace
restore
**********************************************************************************
**********************************************************************************
*** Run basic checks on the Household Member Questionnaire
/*Number of eligible women identified and average number of eligible women per household*/
preserve
*Counting total number of eligible women identified in EA based only on COMPLETED FORMS
collapse (sum) eligible if HHQ_result==1, by(RE $GeoID)
rename eligible totaleligible
label var totaleligible "Total eligible women identified in EA - COMPLETED HH FORMS ONLY"
tempfile collapse
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
restore
**********************************************************************************
**********************************************************************************
*** Run basic checks on the Female Respondent Questionnaire
**Number of female surveys completed
**Female survey response rate
**Non-response of sensitive questions
preserve
**Number of female surveys uploaded, number of female surveys that do not link (error)
collapse (count) FQresp if FQresp!=1, by (RE $GeoID)
rename FQresp FQtotalup
label var FQtotalup "Total Female Questionnaires Uploaded (including errors)"
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
*Number of female questionnaires that are in the FQ database but do not link to a household
restore
preserve
capture collapse (count) FQresp if FQresp==2, by (RE $GeoID)
if _rc!=2000{
rename FQresp FQerror
label var FQerror "Female Questionnaires that do not match Household"
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
*Number of eligible women who are missing a female questionnaire (this should always be zero!)
restore
preserve
capture collapse (count) FQresp if eligible==1 & FQresp==1, by (RE $GeoID)
if _rc!=2000{
rename FQresp FQmiss
label var FQmiss "Eligible women missing female questionnaires"
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
**Completion and refusal rates for female questionnaire
restore
preserve
forval x = 1/6 {
gen FRS_result_`x'=1 if FRS_result==`x'
}
collapse (count) FRS_result_* FRS_result if FRS_result!=., by (RE $GeoID)
*Count the number of surveys with each completion code
rename FRS_result_1 FQcomplete
rename FRS_result_4 FQrefusal
rename FRS_result_2 FRS_resultothome
gen FQresultother = FRS_result_3 + FRS_result_5 + FRS_result_6
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
restore
preserve
keep if metatag==1
keep if HHQ_result==1
drop if totalint<0
keep if totalint<=10
sort RE
capture noisily export excel RE metainstanceID GeoID_SH names totalint assets num_HH_members water_sources_all sanitation_all using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(HQInterview10min)
*Stata 12 or above use export excel
if _rc!=198{
restore
}
else{
clear
set obs 1
gen x="NO COMPLETE HOUSEHOLD INTERVIEWS LESS THAN 10 MINUTES"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(HQInterview10min)
restore
}
preserve
keep if metatag==1
keep if HHQ_result==1
drop if totalint<0
keep if totalint<10
capture collapse (count) totalint , by(RE $GeoID)
if _rc!=2000{
rename totalint HHQintless10
tempfile collapse
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
use `CCRX'_ProgressReport_$date , clear
gen HHQintless10=0
save, replace
}
restore
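/* Illustrative sketch of the tally-and-merge pattern used repeatedly in this
   file: `collapse' exits with r(2000) ("no observations") when nothing
   satisfies the if-condition, so `capture' plus an _rc check either merges
   the counts into the progress report or records a zero. `flag' and
   `condition' are placeholders:

       capture collapse (count) flag if condition, by(RE $GeoID)
       if _rc!=2000 {
           save `collapse', replace
           use `CCRX'_ProgressReport_$date
           merge 1:1 RE $GeoID using `collapse', nogen
           save, replace
       }
       else {
           use `CCRX'_ProgressReport_$date, clear
           gen flag=0
           save, replace
       }
*/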
preserve
**Minimum time to COMPLETED FQ form
keep if FRS_result==1 & HHQ_result==1
drop if FQtotalint<0
keep if FQtotalint<=10
sort RE
capture noisily export excel RE FQmetainstanceID GeoID_SH FRS_form_name FQtotalint FQ_age FQmarital_status current_user using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FQInterview10min)
*Stata 12 or above use export excel
if _rc!=198{
restore
}
else{
clear
set obs 1
gen x="NO COMPLETE FEMALE INTERVIEWS LESS THAN 10 MINUTES"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FQInterview10min)
restore
}
preserve
keep if FRS_result==1 & HHQ_result==1
drop if FQtotalint<0
keep if FQtotalint<10
capture collapse (count) FQtotalint , by(RE $GeoID)
if _rc!=2000{
rename FQtotalint FQintless10
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
use `CCRX'_ProgressReport_$date, clear
gen FQintless10=0
save, replace
}
restore
**Add GPS checks for anything over 6m (or missing)
destring locationAccuracy, replace
gen GPSmore6=1 if locationAccuracy>6 | locationAccuracy==.
egen tag=tag(RE $GeoID structure household)
preserve
keep if GPSmore6==1 & metatag==1
sort RE
capture noisily export excel RE metainstanceID GeoID_SH names locationAccuracy using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(HHGPSmore6m)
if _rc!=198 {
restore
}
else {
clear
set obs 1
gen x="NO HH GPS MISSING OR MORE THAN 6M"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(HHGPSmore6m)
restore
}
preserve
keep if metatag==1
sort RE
capture collapse (count) metatag if locationAccuracy>6 | locationAccuracy==., by(RE $GeoID)
if _rc!=2000{
rename metatag HHQGPSAccuracymore6
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
clear
use `CCRX'_ProgressReport_$date
gen HHQGPSAccuracymore6=0
save, replace
}
restore
**GPS Spatial data error-checks - By RE & Full list
preserve
do "$dofiledir/`hhq_monit'.do"
restore
**Repeat for Female Accuracy
drop GPSmore6
capture destring FQAccuracy, replace
gen GPSmore6=1 if (FQAccuracy>6 | FQAccuracy==.) & FRS_result!=.
preserve
keep if GPSmore6==1 & FRS_result!=.
capture noisily export excel RE metainstanceID GeoID_SH FRS_form_name FQAccuracy using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FQGPSmore6m)
if _rc!=198 {
restore
}
else {
clear
set obs 1
gen x="NO FQ GPS MISSING OR MORE THAN 6M"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FQGPSmore6m)
restore
}
preserve
keep if FRS_result==1
capture collapse (count) GPSmore6 if FRS_result!=., by(RE $GeoID)
if _rc!=2000{
rename GPSmore6 FQGPSAccuracymore6
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
restore
***** Creating 14/15 and 49/50 Age ratios for Females by RE/EA
preserve
foreach y in 14 15 49 50{
gen age`y'=1 if age==`y' & gender==2
}
capture collapse (sum) age14 age15 age49 age50, by(RE $GeoID)
if _rc!=2000 {
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
restore
** Exporting out forms that identify Bottled water and Refill water as only source of water AND/OR that identify bottled water/refill water for cooking/washing
**tag forms
gen bottletag=1 if (water_sources_all=="bottled" | water_sources_all=="sachet" | water_sources_all=="refill" | water_sources_all=="bottled sachet" | water_sources_all=="bottled refill")
replace bottletag=1 if (water_main_drinking=="bottled"| water_main_drinking=="sachet" | water_main_drinking=="refill") & (water_uses_cooking==1 | water_uses_washing==1)
preserve
keep if bottletag==1 & metatag==1
tab bottletag
sort RE
capture noisily export excel metainstanceID RE GeoID_SH water_sources_all water_main_drinking water_uses_cooking water_uses_washing using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(Bottledwater) sheetreplace
di _rc
if _rc!=198{
restore
}
else{
clear
set obs 1
gen x="NO PROBLEM WITH BOTTLED/REFILL WATER"
capture noisily export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(Bottledwater) sheetreplace
restore
}
preserve
capture collapse (sum) bottletag if metatag==1, by(RE $GeoID)
if _rc!=2000{
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID EA using `collapse', nogen
save, replace
}
else{
use `CCRX'_ProgressReport_$date
gen bottletag=.
save, replace
}
restore
***Checking data quality for HH integer variables
*Identify if there are any HH integer variables with a value of 77, 88, or 99 indicating a potential mistype on the part of the RE or in the Cleaning file
preserve
keep if metatag==1
keep country-link
sort level1-household RE
**Checking if numeric variables have the values
gen mistype=0
gen mistype_var=""
foreach var of varlist _all{
capture destring *_ow*, replace
capture confirm numeric var `var'
if _rc==0 {
replace mistype=mistype+1 if (`var'==77 | `var'==88 | `var'==99)
replace mistype_var=mistype_var+" "+"`var'" if `var'==77 | `var'==88 | `var'==99
}
}
*Exclude entries for structure and household
recode mistype 0=.
replace mistype_var=strtrim(mistype_var)
replace mistype=. if mistype_var=="structure" | mistype_var=="household" | mistype_var=="structure household"
replace mistype_var="" if mistype_var=="structure" | mistype_var=="household" | mistype_var=="structure household"
*Keep all variables that have been mistyped
levelsof mistype_var, local(typo) clean
keep if mistype!=.
keep metainstanceID RE GeoID_SH `typo'
capture drop structure
capture drop household
capture drop minAge
order metainstanceID RE GeoID_SH, first
capture noisily export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(HH_Potential_Typos) sheetreplace
if _rc!=198 {
restore
}
else {
clear
set obs 1
gen x="NO NUMERIC VARIABLES WITH A VALUE OF 77, 88, OR 99"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(HH_Potential_Typos) sheetreplace
restore
}
***Checking data quality for FQ integer variables
*Identify if there are any FQ integer variables with a value of 77, 88, or 99 indicating a potential mistype on the part of the RE or in the Cleaning file
preserve
keep FQSubmissionDate-FQresp GeoID_SH RE
sort GeoID_SH RE
**Checking if numeric variables have the values
gen mistype=0
gen mistype_var=""
foreach var of varlist _all{
capture confirm numeric var `var'
if _rc==0 {
replace mistype=mistype+1 if (`var'==77 | `var'==88 | `var'==99)
replace mistype_var=mistype_var+" "+"`var'" if `var'==77 | `var'==88 | `var'==99
}
}
*Exclude entries for structure and household
recode mistype 0=.
replace mistype_var=strtrim(mistype_var)
replace mistype=. if mistype_var=="FQstructure" | mistype_var=="FQhousehold" | mistype_var=="FQstructure FQhousehold"
replace mistype_var="" if mistype_var=="FQstructure" | mistype_var=="FQhousehold" | mistype_var=="FQstructure FQhousehold"
*Keep all variables that have been mistyped
levelsof mistype_var, local(typo) clean
keep if mistype!=.
keep FQmetainstanceID RE GeoID_SH `typo'
capture drop FQstructure
capture drop FQhousehold
order FQmetainstanceID RE GeoID_SH, first
capture noisily export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FQ_Potential_Typos) sheetreplace
if _rc!=198 {
restore
}
else {
clear
set obs 1
gen x="NO NUMERIC VARIABLES WITH A VALUE OF 77, 88, OR 99"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(FQ_Potential_Typos) sheetreplace
restore
}
**Flag forms where the number of household members listed in the dataset is not equal to the number calculated by ODK
gen numberpeopletag=1 if KEY!=""
bysort metainstanceID: egen numberpeoplelisted=total(numberpeopletag)
drop numberpeopletag
gen numberpeopletag =1 if numberpeoplelisted!=num_HH_members
preserve
keep if numberpeopletag==1 & metatag==1 & (HHQ_result==1 | HHQ_result==5)
sort RE
capture noisily export excel metainstanceID RE GeoID_SH names numberpeoplelisted num_HH_members ///
using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(Number_HH_member)
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NUMBER OF HOUSEHOLD MEMBERS IN ODK AND IN DATASET IS CONSISTENT IN ALL FORMS"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(Number_HH_member) sheetreplace
restore
}
preserve
capture collapse (sum) numberpeopletag if metatag==1 & (HHQ_result==1 | HHQ_result==5), by (RE $GeoID)
if _rc!=2000{
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
use `CCRX'_ProgressReport_$date
gen numberpeopletag=.
save, replace
}
restore
**Export out forms and total the number of forms where the date is entered incorrectly
split system_date, gen (system_date_)
capture confirm var system_date_3
if _rc!=0{
drop system_date_*
split system_date, gen(system_date_) parse(/ " ")
}
gen datetag=1 if system_date_3!="`SurveyYear'" & system_date_3!="`SYShort'"
drop system_date_*
split start, gen(start_)
capture confirm var start_3
if _rc!=0{
drop start_*
split start, gen(start_) parse(/ " ")
}
replace datetag=1 if start_3!="`SurveyYear'" & start_3!="`SYShort'"
drop start_*
split end, gen(end_)
capture confirm var end_3
if _rc!=0{
drop end_*
split end, gen(end_) parse(/ " ")
}
replace datetag=1 if end_3!="`SurveyYear'" & end_3!="`SYShort'"
drop end_*
preserve
keep if datetag==1 & metatag==1
sort RE
capture noisily export excel metainstanceID RE GeoID_SH names system_date start end datetag ///
using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(PhoneDateFlag)
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO FORMS WITH AN INCORRECT DATE"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(PhoneDateFlag) sheetreplace
restore
}
preserve
capture collapse (sum) datetag if metatag==1, by (RE $GeoID)
if _rc!=2000{
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
use `CCRX'_ProgressReport_$date
gen datetag=.
save, replace
}
restore
**Flag any forms where at least one observation for household member info is missing
egen missingroster=rowmiss(gender age relationship usually last_night) if HHQ_result==1
replace missingroster=missingroster+1 if marital_status==. & age>=10
egen noresponseroster=anycount(gender age relationship usually last_night) if HHQ_result==1, values(-99 -88)
replace noresponseroster=noresponseroster+1 if marital_status==-99 & age>=10 & HHQ_result==1
gen missinginfo_roster=missingroster+noresponseroster
preserve
keep if missinginfo_roster>0 & missinginfo_roster!=.
sort RE
capture noisily export excel metainstanceID RE GeoID_SH firstname-last_night missinginfo_roster ///
using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(MissingRosterInfo)
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO OBSERVATIONS HAVE MISSING/NONRESPONSE INFORMATION IN THE ROSTER"
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(MissingRosterInfo) sheetreplace
restore
}
preserve
gen missinginfotag=1 if missinginfo_roster!=0 & missinginfo_roster!=.
capture collapse (sum) missinginfotag if metatag==1, by (RE $GeoID)
if _rc!=2000{
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
use `CCRX'_ProgressReport_$date
gen missinginfotag=.
save, replace
}
restore
**Total the number of DNK for first marriage year, recent marriage year, age at first birth, age at first sex by RE
gen DNKfirstmarriage=1 if firstmarriageyear==2020
gen DNKcurrentmarriage=1 if recentmarriageyear==2020
gen DNKfirstbirth=1 if regexm(first_birth, "2020")
gen DNKrecentbirth=1 if regexm(recent_birth, "2020")
capture gen DNKpenultbirth=1 if regexm(penultimate_birth, "2020")
gen DNKNRfirstsex=1 if age_at_first_sex==-88 | age_at_first_sex==-99
gen DNKNRlastsex=1 if last_time_sex==-88 | last_time_sex==-99
preserve
keep if FQmetainstanceID!=""
capture collapse (sum) DNK* , by (RE $GeoID)
if _rc!=2000{
egen DNKNRtotal=rowtotal(DNK*)
save `collapse', replace
use `CCRX'_ProgressReport_$date
merge 1:1 RE $GeoID using `collapse', nogen
save, replace
}
else {
use `CCRX'_ProgressReport_$date
gen DNKNRtotal=.
save, replace
}
restore
use `CCRX'_ProgressReport_$date, clear
drop FRS_result_*
save, replace
gen date="$date"
order date, before(RE)
preserve
order EA HQtotalup HQcomplete HQrefusal HQnothome HQnotfound HQresultother totaleligible FQtotalup FQcomplete FQrefusal FRS_resultothome FQresultother, last
collapse (sum) HQtotalup-FQresultother (min) HHQGPS* FQGPS* HHQintless10 FQintless10, by(date RE $GeoID)
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(SupervisorChecks) sheetreplace
restore
preserve
collapse (min) age14 age15 age49 age50 (sum) bottletag numberpeopletag datetag missinginfotag, by(date RE $GeoID)
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(AdditionalCleaning) sheetreplace
restore
export excel RE $GeoID DNK* using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(DNK_NR_Count) sheetreplace
***Overall counts
preserve
collapse (sum) HQtotalup HQcomplete-HHQ_result_5 totaleligible FQtotalup FQcomplete
label var HQtotalup "Total HH uploaded"
label var HQcomplete "Total HH complete"
gen HHresponse=HQcomplete/(HQcomplete + HQnothome + HHQ_result_3+ HQrefusal + HHQ_result_5)
label var HHresponse "Household response rate"
label var FQtotalup "Total FQ uploaded"
label var FQcomplete "Total FQ completed"
gen FQresponse=FQcomplete/FQtotalup
label var FQresponse "Female response rate"
tempfile temp
save `temp', replace
restore
clear
use `CCRX'_Combined_$date.dta
preserve
gen male=1 if gender==1
gen female=1 if gender==2
egen EAtag=tag($GeoID)
bysort $GeoID: egen EAtotal=total(metatag)
gen EAcomplete=1 if EAtotal==$EAtake & EAtag==1
collapse (sum) male female EAtag EAcomplete
gen sexratio=male/female
label var sexratio "Sex ratio - male:female"
label var EAtag "Number of EAs with any data submitted"
label var EAcomplete "Number of EAs with $EAtake HH forms submitted"
tempfile temp2
save `temp2'
use `temp'
append using `temp2'
keep HQtotalup HQcomplete HHresponse FQtotalup FQcomplete FQresponse sexratio EAtag EAcomplete
export excel using `CCRX'_HHQFQErrors_$date.xls, firstrow(varlabels) sh(OverallTotals) sheetreplace
restore
clear
**After the data is merged, use cleaning program and analysis program for basic checks
*****************************************************************************
********************************* Country and Round Specific Cleaning ***********************************
use `CCRX'_Combined_$date.dta, clear
capture noisily do "$dofiledir/`module1do'"
capture noisily do "$dofiledir/`module2do'"
capture noisily do "$dofiledir/`module3do'"
capture noisily do "$dofiledir/`module4do'"
save `CCRX'_Combined_$date.dta, replace
do "$dofiledir/`CCRX'_CountrySpecific_CleanWeight_`CountrySpecificdate'.do"
************************************************************************************
translate `CCRX'_DataCleaningQuality_$date.log `CCRX'_DataCleaningQuality_$date.pdf, replace
/****PMA 2020 Indonesia Data Quality Checks****
***Version Created in Bootcamp July 21-24, 2014
**Fourth do file in series***
This do file labels each variable in the Female Respondent questionnaire */
set more off
**If you want to run this file separately from the parent file, change the working directory below
cd "$datadir"
*All of the data is imported automatically as string
clear
clear matrix
local CCRX $CCRX
local FQcsv $FQcsv
local FQcsv2 $FQcsv2
clear
capture insheet using "$csvdir/`FQcsv'.csv", comma case
tostring *, replace force
save `CCRX'_FRQ.dta, replace
/*If you need to add an extra version of the forms, this will check if that
version number exists and add it. If the version does not, it will continue*/
clear
capture insheet using "$csvdir/`FQcsv2'.csv", comma case
if _rc==0 {
tostring *, replace force
append using `CCRX'_FRQ.dta, force
save `CCRX'_FRQ.dta, replace
}
use `CCRX'_FRQ.dta
save, replace
***REVISION HISTORY OF LARGE SCALE CHANGES
rename name_grp* *
rename date_group* *
rename location_information* *
rename location_* Zlocation_*
rename location* *
rename Zlocation_* location_*
rename *_grpfirst* **
rename *_grpcurrent* **
rename *_grprecent* **
rename *_method_method* *_method*
rename *_grpfp_provider* **
rename *_grpwhy_not_using* **
rename *grpfp_ad_* **
rename geographic_info_* *
rename unlinked*_* unlinked*
capture drop birthdate_grpbday_note birthdate_grpbday_note_unlinked
capture confirm var EA
if _rc!=0{
capture rename quartier EA
}
**Dropping variables from form re-programming April 2016
capture drop why_not_using_grp*
capture drop FQ_age
capture rename FQAage FQ_age
capture drop FQA*
capture rename AFSage_at_first_sex age_at_first_sex
capture drop AFS*
capture drop rec_birth_date
capture rename MOPmonths_pregnant months_pregnant
capture drop MOP*
capture drop births_live*
capture confirm var more_children_some
if _rc==0 {
replace more_children_none=more_children_some if more_children_some!=""
drop more_children_some
rename more_children_none more_children
replace wait_birth_none=wait_birth_some if wait_birth_some!=""
rename wait_birth_none wait_birth
drop wait_birth_some
gen pregnancy_last_desired=PDEpregnancy_desired if pregnant=="no"
gen pregnancy_current_desired=PDEpregnancy_desired if pregnant=="yes"
drop PDE*
replace visited_fac_none=visited_fac_some if visited_fac_some!=""
drop visited_fac_some
rename visited_fac_none visited_a_facility
}
capture drop rec_husband_date
capture rename BUSbegin_using begin_using
capture replace begin_using=SUSante_start_using if SUSante_start_using!=""
capture rename sussus_m ante_begin_using_month
capture rename susante_start_using ante_begin_using
capture rename busbus_m begin_using_month
capture drop BUS*
capture drop SUS*
capture drop age_begin_using
capture drop fp_provider_grp*
capture rename LTSlast_time_sex last_time_sex
capture drop LTS*
capture drop re_name calc_space deleteTest metalogging
capture rename mhm_grp* *
capture rename birthdate_grp* *
capture rename HCF* *
capture rename HCS* *
capture rename FB* *
capture rename RB* *
capture rename PB* *
capture rename CD* *
drop SPUstop_using_full_lab
rename SPU* *
gen day=substr(begin_using,-2,.)
gen month=substr(begin_using,6,2)
gen year=substr(begin_using,1,4)
gen str begin_usingv2=month + "/" + day + "/" + year if month!="" & day!="" & year!=""
drop begin_using
rename begin_usingv2 begin_using
destring day month year, replace
gen begin_usingSIF=mdy(month,day,year)
format begin_usingSIF %td
foreach date in birthdate hcf hcs fb rb spu {
replace `date'_y=subinstr(`date'_y, "Jan", "Feb", .) if `date'_m=="1"
replace `date'_y=subinstr(`date'_y, "Jan", "Mar", .) if `date'_m=="2"
replace `date'_y=subinstr(`date'_y, "Jan", "Apr", .) if `date'_m=="3"
replace `date'_y=subinstr(`date'_y, "Jan", "May", .) if `date'_m=="4"
replace `date'_y=subinstr(`date'_y, "Jan", "Jun", .) if `date'_m=="5"
replace `date'_y=subinstr(`date'_y, "Jan", "Jul", .) if `date'_m=="6"
replace `date'_y=subinstr(`date'_y, "Jan", "Aug", .) if `date'_m=="7"
replace `date'_y=subinstr(`date'_y, "Jan", "Sep", .) if `date'_m=="8"
replace `date'_y=subinstr(`date'_y, "Jan", "Oct", .) if `date'_m=="9"
replace `date'_y=subinstr(`date'_y, "Jan", "Nov", .) if `date'_m=="10"
replace `date'_y=subinstr(`date'_y, "Jan", "Dec", .) if `date'_m=="11"
}
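/* The loop above assumes each `date'_y string was exported as "Jan <year>"
   with the real month stored separately in `date'_m (offset by one), so
   subinstr() rewrites the month name in place. A standalone illustration
   with a hypothetical value:

       display subinstr("Jan 2014", "Jan", "Jul", .)   // `date'_m=="6" gives "Jul 2014"
*/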
rename birthdate_y birthdatev2
rename hcf_y husband_cohabit_start_firstv2
rename hcs_y husband_cohabit_start_recentv2
rename fb_y first_birthv2
rename rb_y recent_birthv2
rename spu_y stop_usingv2
replace birthmonth="-88" if birthdate_m=="-88"
rename *_m *_month
rename survey_language FQsurvey_language
****
capture label def yes_no_dnk_nr_list 0 no 1 yes -77 "-77" -88 "-88" -99 "-99"
label var times_visited "Visit number to female respondent"
label var your_name "Resident Enumerator name"
label var your_name_check "To RE: Is this your name?"
label var name_typed "RE Name if not correct in FQ C"
label var system_date "Date and Time"
label var system_date_check "Confirm Correct Date and Time"
label var manual_date "Correct Date and Time if not correct in FQ D1"
label var today "Date of Interview"
label var location_pro "Prompt"
label var EA "EA"
label var structure "Structure number"
label var household "Household number"
label var location_con "Confirmation screen"
label var name_check "Confirmation interviewing correct woman"
label var aquainted "How well acquainted with the respondent"
label var available "Respondent present and available at least once"
label var consent_start "Consent screen"
label var consent "Consent screen"
label var begin_interview "May I begin the interview now?"
label var consent_obtained "Informed consent obtained"
label var witness_auto "Interviewer name"
label var witness_manual "Name check"
label var firstname "Name of respondent"
label var birthdate "Birth date"
label var birthyear "Year of Birth"
label var birthmonth "Month of Birth"
label var thismonth "Month of Interview - Used for age calculations"
label var thisyear "Year of Interview - Used for age calculations"
label var FQ_age "Age in Female Respondent Questionnaire"
capture label var age_check "Same age in Household Roster?"
capture label var age_label "Label regarding age"
capture label var age "Age in Household Roster"
label var school "Highest level of school attended"
label var marital_status "Marital status"
label var marriage_history "Been married once or more than once"
label var husband_cohabit_start_first "Month and year started living with first partner"
label var firstmarriagemonth "Month started living with first partner"
label var firstmarriageyear "Year started living with first partner"
label var husband_cohabit_start_recent "Year started living with current or most recent partner"
label var young_marriage_recent "Women married only once - less than 10"
label var marriage_warning_recent "Confirm less age 10 current marriage"
label var young_marriage_first "Women married more than once - first marriage less than 10"
label var marriage_warning_first "Confirm less age 10 first marriage"
capture label var other_wives "Partner have other wives"
rename birth_events birth_events_rw
capture label var birth_events_rw "How many times have you given birth"
label var first_birth "Date of FIRST live birth"
label var recent_birth "Most recent birth?"
capture label var days_since_birth "Days since most recent birth"
label var menstrual_period "Last menstrual period"
label var menstrual_period_value "Value if days, weeks, months, or years"
capture label var months_since_last_period
label var pregnant "Pregnancy status"
label var month_calculation "Months since last birth"
capture label var pregnant_hint "Hint if recent pregnancy"
label var months_pregnant "How many months pregnant"
label var more_children "Prefer to have another child or no more children - not pregnant"
label var more_children_pregnant "Prefer to have another child or no more children - currently pregnant"
label var wait_birth "How long would you like to wait until next child - not pregnant"
label var wait_birth_pregnant "How long would you like to wait until next child - pregnant"
label var wait_birth_value "How long to wait - value if months or years"
label var pregnancy_last_desired "Last birth - did you want it then, later, not at all - not pregnant"
label var pregnancy_current_desired "Current birth - did you want it then, later, not at all - currently pregnant"
capture label var fp_ever_user "Done anything to delay or avoid getting pregnant"
label var fp_ever_used "Ever used method of FP"
label var age_at_first_use "Age at first use of FP"
label var age_at_first_use_children "Number of living children at first use of FP"
label var first_method "First FP method used"
label var current_user "Currently using method of FP?"
label var current_method "Current FP method"
capture rename sterlization_permanent_inform sterilization_permanent_inform
capture label var sterilization_permanent_inform "Sterilization - did provider tell you it was permanent"
label var future_user_not_current "Do you think you will use a method in the future - not pregnant"
label var future_user_pregnant "Do you think you will use a method in the future - currently pregnant"
label var recent_user "Used a method last 12 months?"
label var recent_method "FP Method used in last 12 months"
label var current_or_recent_user
label var current_recent_method
label var current_recent_label
label var begin_using "When did you begin using method"
label var stop_using "When did you stop using method"
rename stop_using_why stop_using_why_cc
label var stop_using_why_cc "Why did you stop using method"
capture rename fp_provider fp_provider_rw
label var fp_provider_rw "Where did you obtain method when you started using"
label var fp_provider_check
label var method_fees "How much did you pay the last time FP was obtained?"
label var fp_side_effects "When obtained method, told about side effects?"
label var fp_side_effects_instructions "Told what to do if experienced side effects?"
label var fp_told_other_methods "Told about FP methods you could use?"
label var fp_obtain_desired "Did you obtain the method you wanted?"
label var fp_obtain_desired_whynot "If not, why not?"
label var fp_final_decision "Who made final decision about the method?"
label var return_to_provider "Would you return to provider?"
label var refer_to_relative "Refer provider to relative or friend?"
label var why_not_using "Why not using a method?"
label var visited_by_health_worker "Visited by health worker about FP last 12 months"
label var visited_a_facility "Visited health facility last 12 months"
label var facility_fp_discussion "Talked to about FP at health facility"
label var partner_know "Partner aware of FP use"
label var penultimate_method_yn "Were you doing something before current method to delay or avoid pregnancy"
label var penultimate_method "Method used before current method"
label var pp_method_units "Specify days, months or years"
label var pp_method_value "Value in days, months or years"
label var pp_method_yn "Used something to avoid/delay pregnancy post most recent birth"
label var pp_method "Method used post most recent birth"
label var fp_ad_radio "Heard about FP on radio"
label var fp_ad_tv "Heard about FP on television"
label var fp_ad_magazine "Read about FP in newspaper/magazine"
label var fp_ad_call "Receive FP in voice or text message"
label var age_at_first_sex "Age at first sex"
label var years_since_first_sex
label var months_since_first_sex
label var last_time_sex "Last time you had sex - days, weeks, months, years"
label var last_time_sex_value "Last time you had sex - value"
label var thankyou
label var Latitude "Latitude"
label var Longitude "Longitude"
label var Altitude "Altitude"
label var Accuracy "Accuracy"
label var FRS_result "Result"
label var start "Start time"
label var end "End time"
label var emergency_12mo_yn "Used EC in the last 12 months"
label var FQsurvey_language "Language in which female survey was conducted"
destring times_visited, replace
destring EA, replace
destring consent_obtained, replace
destring structure, replace
destring household, replace
destring birthmonth, replace
destring birthyear, replace
destring thismonth, replace
destring thisyear, replace
destring FQ_age, replace
capture destring age, replace
destring young_marriage_first, replace
destring young_marriage_recent, replace
destring marriage_warning_first, replace
destring first_method_che, replace
destring recent_method_c, replace
destring current_or_recent_user, replace
destring recentmarriagemonth, replace
destring recentmarriageyear, replace
destring firstmarriagemonth, replace
destring firstmarriageyear, replace
capture destring birth_events_rw, replace
capture destring days_since_birth, replace
destring menstrual_period_value, replace
capture destring months_since_last_period, replace
destring month_calculation, replace
destring months_pregnant, replace
destring wait_birth_value, replace
destring age_at_first_use, replace
destring age_at_first_use_children, replace
destring method_fees, replace
destring age_at_first_sex, replace
destring years_since_first_sex, replace
destring months_since_first_sex, replace
destring last_time_sex_value, replace
destring pp_method_value, replace
destring age_ante_begin_using, replace
destring age_first_reported_use, replace
destring age_first_birth, replace
destring months_last_sex, replace
destring Latitude, replace
destring Longitude, replace
destring Altitude, replace
destring Accuracy, replace
capture label def yes_no_dnk_nr_list 0 no 1 yes -77 "-77" -88 "-88" -99 "-99"
foreach var of varlist your_name_check system_date_check location_con name_check ///
available begin_interview ///
pregnant fp_ever_used current_user ///
future_user_not_current future_user_pregnant recent_user fp_side_effects ///
fp_side_effects_instructions fp_told_other_methods fp_obtain_desired return_to_provider ///
refer_to_relative visited_by_health_worker visited_a_facility facility_fp_discussion ///
fp_ad_radio fp_ad_tv fp_ad_magazine fp_ad_call fp_ever_user penultimate_method_yn pp_method_yn ///
partner_know emergency_12mo_yn {
encode `var', gen(`var'v2) lab(yes_no_dnk_nr_list)
}
capture encode other_wives, gen(other_wivesv2) lab(yes_no_dnk_nr_list)
capture encode sterilization_permanent_inform, gen(sterilization_permanent_informv2) lab(yes_no_dnk_nr_list)
label def acquainted_list 1 very_well_acquainted 2 well_acquainted 3 not_well_acquainted 4 not_acquainted
encode aquainted, gen(aquaintedv2) lab(acquainted_list)
label define FQmarital_status_list 5 never_married 1 currently_married 2 currently_living_with_man 3 divorced 4 widow -99 "-99"
encode marital_status, gen(marital_statusv2) lab(FQmarital_status_list)
label define lived_list 1 once 2 more_than_once -99 "-99"
encode marriage_history, gen(marriage_historyv2) lab(lived_list)
capture label drop dwmy_list
label define dwmy_list 1 "days" 2 "weeks" 3 "months" 4 "years" -99 "-99" -88 "-88"
label define menstrual_list 1 days 2 weeks 3 months 4 years 5 menopausal_hysterectomy 6 before_birth 7 never -99 "-99"
encode menstrual_period, gen(menstrual_periodv2) lab(menstrual_list)
label define more_children_list 1 have_child 2 no_children 3 infertile -88 "-88" -99 "-99"
encode more_children, gen(more_childrenv2) lab(more_children_list)
encode more_children_pregnant, gen(more_children_pregnantv2) lab(more_children_list)
label define wait_child_list 1 months 2 years 3 soon 4 infertile 5 other -88 "-88" -99 "-99"
encode wait_birth, gen(wait_birthv2) lab(wait_child_list)
encode wait_birth_pregnant, gen(wait_birth_pregnantv2) lab(wait_child_list)
label define pregnancy_desired_list 1 then 2 later 3 not_at_all -99 "-99"
encode pregnancy_last_desired, gen(pregnancy_last_desiredv2) lab(pregnancy_desired_list)
encode pregnancy_current_desired, gen(pregnancy_current_desiredv2) lab(pregnancy_desired_list)
replace stop_using_why_cc=subinstr(stop_using_why_cc, "difficult_to_conceive", "diff_conceive", .)
replace stop_using_why_cc=subinstr(stop_using_why_cc, "interferes_with_body", "interf_w_body", .)
foreach reason in infrequent pregnant wanted_pregnant husband more_effective no_method_available health_concerns ///
side_effects no_access cost inconvenient fatalistic diff_conceive interf_w_body other {
gen stop_using_`reason'=0 if stop_using_why_cc!="" & stop_using_why_cc!="-99"
replace stop_using_`reason'=1 if (regexm(stop_using_why_cc, "`reason'"))
}
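* Note (illustrative caution, not part of the original logic): regexm() matches
* substrings, so "pregnant" in the loop above also matches "wanted_pregnant".
* A stricter word-boundary check could pad the string with spaces, e.g.:
* gen stop_using_pregnant_strict=0 if stop_using_why_cc!="" & stop_using_why_cc!="-99"
* replace stop_using_pregnant_strict=1 if regexm(" " + stop_using_why_cc + " ", " pregnant ")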
label define whynot_list 1 not_married 2 infrequent_sex 3 menopausal_hysterectomy 4 infecund 5 not_menstruated ///
6 breastfeeding 7 husband_away 8 fatalistic 9 respondent_opposed 10 partner_opposed 11 others_opposed ///
12 religion 13 no_knowledge 14 no_source_known 15 side_effects 16 health 17 no_access 18 cost ///
19 preferred_unavailable 20 no_method_available 21 inconvenient 22 interferes_with_body 23 other -88 "-88" -99 "-99"
label define decision_list 1 you_alone 2 provider 3 partner 4 you_and_provider 5 you_and_partner 6 other -99 "-99" -88 "-88"
encode fp_final_decision, gen(fp_final_decisionv2) lab(decision_list)
label define whynomethod_list 1 out_of_stock 2 unavailable 3 untrained 4 different 5 ineligible 6 decided_not_to_adopt ///
7 cost 8 other -88 "-88" -99 "-99"
encode fp_obtain_desired_whynot, gen(fp_obtain_desired_whynotv2) lab(whynomethod_list)
encode last_time_sex, gen(last_time_sexv2) lab(dwmy_list)
label define dwmy_future_list 1 days 2 weeks 3 months 4 years -99 "-99"
encode pp_method_units, gen(pp_method_unitsv2) lab(dwmy_future_list)
label define FRS_result_list 1 completed 2 not_at_home 3 postponed 4 refused 5 partly_completed 6 incapacitated
encode FRS_result, gen(FRS_resultv2) lab(FRS_result_list)
*Participated in previous survey
capture label var previous_PMA "Previously participated in PMA 2020 survey?"
capture encode previous_PMA, gen(previous_PMAv2) lab(yes_no_dnk_nr_list)
/*Additional questions added in July 2016 to core*/
capture confirm var ever_birth
if _rc==0 {
label var ever_birth "Ever given birth"
label var partner_decision "Before using method, did you discuss decision to avoid pregnancy with partner"
label var partner_overall "Using contraception is your decision, husband's decision or together?"
label var rhythm_final "Who made final decision to use rhythm"
label var lam_final "Who made final decision to use LAM"
encode rhythm_final, gen(rhythm_finalv2) lab(decision_list)
encode lam_final, gen(lam_finalv2) lab(decision_list)
label var why_not_decision "Whose decision is it not to use contraception"
label define partner_overall_list 1 "respondent" 2 "husband" 3 "joint" 96 "other" -99 "-99"
encode partner_overall, lab(partner_overall_list) gen(partner_overallv2)
encode why_not_decision, lab(partner_overall_list) gen(why_not_decisionv2)
foreach var in ever_birth partner_decision {
encode `var', gen(`var'v2) lab(yes_no_dnk_nr_list)
}
}
unab vars: *v2
local stubs: subinstr local vars "v2" "", all
foreach var in `stubs'{
rename `var' `var'QZ
order `var'v2, after (`var'QZ)
}
rename *v2 *
drop *QZ
*****************************Change the date variables into Stata time***************************
**Change start and end times into SIF to calculate time
*Have to do the same procedures. Using the end time of the survey as the day of the survey
**Extract portion of string variable that has information on month/day/year
gen double todaySIF=clock(today, "YMD")
format todaySIF %tc
gen double startSIF=clock(start, "MDYhms")
gen double manual_dateSIF=clock(manual_date, "MDYhms")
format startSIF %tc
format manual_dateSIF %tc
gen double endSIF=clock(end, "MDYhms")
format endSIF %tc
gen double birthdateSIF=clock(birthdate, "MDY")
format birthdateSIF %tc
gen double husband_cohabit_start_firstSIF=clock(husband_cohabit_start_first, "MDY")
format husband_cohabit_start_firstSIF %tc
replace husband_cohabit_start_firstSIF=. if regexm(husband_cohabit_start_first, "2020")
gen double husband_cohabit_start_recentSIF=clock(husband_cohabit_start_recent, "MDY")
format husband_cohabit_start_recentSIF %tc
replace husband_cohabit_start_recentSIF=. if regexm(husband_cohabit_start_recent, "2020")
capture replace first_birth=recent_birth if children_born==1
capture replace first_birth=recent_birth if birth_events==1
capture replace first_birth=recent_birth if birth_events==1 & children_born==2
gen double first_birthSIF=clock(first_birth, "MDY")
format first_birthSIF %tc
replace first_birthSIF=. if regexm(first_birth, "2020")
gen double recent_birthSIF=clock(recent_birth, "MDY")
format recent_birthSIF %tc
replace recent_birthSIF=. if regexm(recent_birth, "2020")
gen double stop_usingSIF=clock(stop_using, "MDY")
format stop_usingSIF %tc
replace stop_usingSIF=. if regexm(stop_using, "2020")
unab vars: *SIF
local stubs: subinstr local vars "SIF" "", all
foreach var in `stubs'{
order `var'SIF, after (`var')
}
rename todaySIF FQtodaySIF
rename startSIF FQstartSIF
rename manual_dateSIF FQmanual_dateSIF
rename endSIF FQendSIF
rename your_name RE
replace RE=name_typed if your_name_check==0 | your_name_check==.
***************************************************************************************************
********************************* REPROGRAM FEMALE RESPONDENT *********************************
***************************************************************************************************
replace current_recent_method="" if recent_user!=1 & current_user!=1
replace current_method="" if current_user!=1
replace recent_method="" if recent_user!=1
*Current Use
gen femalester=0 if FRS_result==1
gen malester=0 if FRS_result==1
gen IUD=0 if FRS_result==1
gen injectables3=0 if FRS_result==1
gen injectables1=0 if FRS_result==1
gen injectables=0 if FRS_result==1
gen implant=0 if FRS_result==1
gen pill=0 if FRS_result==1
gen malecondom=0 if FRS_result==1
gen femalecondom=0 if FRS_result==1
gen LAM=0 if FRS_result==1
gen EC=0 if FRS_result==1
gen diaphragm=0 if FRS_result==1
gen N_tablet=0 if FRS_result==1
gen foamjelly=0 if FRS_result==1
gen stndrddays=0 if FRS_result==1
gen rhythm=0 if FRS_result==1
gen withdrawal=0 if FRS_result==1
gen othertrad=0 if FRS_result==1
split current_method, gen(current_method_temp)
forval y=1/10{
capture confirm variable current_method_temp`y'
if _rc==0{
replace femalester=1 if current_method_temp`y'=="female_sterilization" & FRS_result==1
replace malester=1 if current_method_temp`y'=="male_sterilization" & FRS_result==1
replace IUD=1 if current_method_temp`y'== "IUD" & FRS_result==1
replace injectables3=1 if current_method_temp`y'== "injectables_3mo" & FRS_result==1
replace injectables1=1 if current_method_temp`y'== "injectables_1mo" & FRS_result==1
replace injectables=1 if current_method_temp`y'=="injectables" & FRS_result==1
replace implant=1 if current_method_temp`y'=="implants" & FRS_result==1
replace pill=1 if current_method_temp`y'=="pill" & FRS_result==1
replace malecondom=1 if current_method_temp`y'=="male_condoms" & FRS_result==1
replace femalecondom=1 if current_method_temp`y'== "female_condoms" & FRS_result==1
replace LAM=1 if current_method_temp`y'== "LAM" & FRS_result==1
replace EC=1 if current_method_temp`y'=="emergency" & FRS_result==1
replace diaphragm=1 if current_method_temp`y'== "diaphragm" & FRS_result==1
replace N_tablet=1 if current_method_temp`y'== "N_tablet" & FRS_result==1
replace foamjelly=1 if current_method_temp`y'=="foam" & FRS_result==1
replace stndrddays=1 if current_method_temp`y'== "beads" & FRS_result==1
replace rhythm=1 if current_method_temp`y'== "rhythm" & FRS_result==1
replace withdrawal=1 if current_method_temp`y'=="withdrawal" & FRS_result==1
replace othertrad=1 if current_method_temp`y'=="other_traditional" & FRS_result==1
}
}
drop current_method_temp*
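* Illustrative check (sketch, commented out so it does not interrupt a run):
* after the split/loop above, a woman reporting current_method=="pill male_condoms"
* has current_method_temp1=="pill" and current_method_temp2=="male_condoms",
* so pill==1 and malecondom==1; a spot check could be:
* assert pill==1 if regexm(current_method, "pill") & FRS_result==1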
split why_not_using, gen(why_not_using_)
local x=r(nvars)
foreach var in not_married infrequent_sex menopausal_hysterectomy infecund not_menstruated ///
breastfeeding husband_away fatalistic respondent_opposed partner_opposed others_opposed religion ///
no_knowledge no_source_known side_effects health no_access cost preferred_unavailable ///
no_method_available inconvenient interferes_with_body other {
gen wn`var'=0 if why_not_using!="" & why_not_using!="-99"
forval y=1/`x' {
replace wn`var'=1 if why_not_using_`y'=="`var'"
label values wn`var' yes_no_dnk_nr_list
}
}
drop why_not_using_*
rename wnnot_married why_not_usingnotmarr
rename wninfrequent_sex why_not_usingnosex
rename wnmenopausal_hysterectomy why_not_usingmeno
rename wninfecund why_not_usingsubfec
rename wnnot_menstruated why_not_usingnomens
rename wnbreastfeeding why_not_usingbreastfd
rename wnhusband_away why_not_usinghsbndaway
rename wnfatalistic why_not_usinguptogod
rename wnrespondent_opposed why_not_usingrespopp
rename wnpartner_opposed why_not_usinghusbopp
rename wnothers_opposed why_not_usingotheropp
rename wnreligion why_not_usingrelig
rename wnno_knowledge why_not_usingdkmethod
rename wnno_source_known why_not_usingdksource
rename wnside_effects why_not_usingfearside
rename wnhealth why_not_usinghealth
rename wnno_access why_not_usingaccess
rename wncost why_not_usingcost
rename wnpreferred_unavailable why_not_usingprfnotavail
rename wnno_method_available why_not_usingnomethod
rename wninconvenient why_not_usinginconv
rename wninterferes_with_body why_not_usingbodyproc
rename wnother why_not_usingother
order why_not_usingnotmarr-why_not_usingother, after(why_not_using)
*Awareness
unab vars: heard_*
local stubs: subinstr local vars "heard_" "", all
foreach var in `stubs'{
label var heard_`var' "Have you ever heard of `var'"
encode heard_`var', gen(heard_`var'v2) lab(yes_no_dnk_nr_list)
order heard_`var'v2, after(heard_`var')
drop heard_`var'
rename heard_`var'v2 heard_`var'
}
capture rename heard_gel heard_foamjelly
*Replace skipped questions with values
replace fp_ever_user=1 if (current_user==1 | recent_user==1) & (current_recent_method!="-99")
capture replace age_at_first_use_children=0 if birth_events==0 & fp_ever_user==1
duplicates tag metainstanceName, gen(dupFQ)
duplicates tag link, gen(duplink)
duplicates report
duplicates drop
save `CCRX'_FRQ_$date.dta, replace
clear
clear matrix
clear mata
capture log close
set maxvar 15000
set more off
numlabel, add
/*******************************************************************************
*
* FILENAME: CCRX_HHQFQ_Wealth_Unmet_Generation_$date.do
* PURPOSE: PMA2020 HHQ/FQ wealth and unmet need generation in preparation for data analysis
* CREATED: Qingfeng Li (qli28@jhu.edu)
* DATA IN: CCRX_$date_ForAnalysis.dta
* DATA OUT: CCRX_WealthScore_$date.dta
*			CCRX_UnmetNeed_$date.dta
* CCRX_WealthWeightAll_$date.dta
* CCRX_WealthWeightCompleteHH_$date.dta
* CCRX_WealthWeightFemale_$date.dta
*******************************************************************************/
*******************************************************************************
* INSTRUCTIONS
*******************************************************************************
*
* 1. Update macros/directories in first section
*
*******************************************************************************
* SET MACROS AND CORRECT DOI: UPDATE THIS SECTION FOR EACH COUNTRY/ROUND
*******************************************************************************
* Set local macros for country and round
local country "Nigeria-Oyo"
local round "Round5Oyo"
local CCRX "NGOyoR5"
local assets "electricity-boatmotor"
local water "water_pipe2dwelling-water_sachet"
local wealthgroup 5
* Set macro for date of the HHQ/FQ EC recode dataset that you intend to use
local datadate "20Dec2017"
* Set directory for country and round
global datadir "/Users/pma2020/Dropbox (Gates Institute)/Nigeria/Data_NotShared/Round5Oyo/Data/HHQFQ"
cd "$datadir"
* Read in data
use "/Users/pma2020/Dropbox (Gates Institute)/Nigeria/PMANG_Datasets/RoundOyo/Prelim95/NGROyo_NONAME_ECRecode_20Dec2017.dta"
* Rename and save data
save "`CCRX'_NONAME_ECRecode_`datadate'_100Prelim.dta", replace
*If running before weights are incorporated, will gen weights=1
capture confirm var HHweight
if _rc!=0{
gen HHweight=1
gen FQweight=1
}
tempfile temp
save `temp', replace
*******************************************************************************
* RENAME VARIABLES
*******************************************************************************
* Rename variables
capture rename wall_clock wallclock
*******************************************************************************
* STOP UPDATING MACROS
*******************************************************************************
cd "$datadir"
log using "`CCRX'_HHQFQ_Wealth_Unmet_Generation_$date.log", replace
* Set local/global macros for current date
local today=c(current_date)
local c_today= "`today'"
global date=subinstr("`c_today'", " ", "",.)
local todaystata=clock("`today'", "DMY")
* Keep only completed surveys
keep if HHQ_result==1
* First, double check HHtag and metatag
* metatag tags one observation from each form, while HHtag tags one observation from each household.
* If multiple forms are submitted for the same household, metatag will not equal HHtag; be clear on
* whether you want to count the number of households identified or the number of forms completed.
* For weight calculation, use metatag
* If metatag does not already exist, generate
capture drop metatag
egen metatag=tag(metainstanceID)
keep if metatag==1
*******************************************************************************
* GENERATE WEALTH QUINTILE
*******************************************************************************
* Tab concatenated assets variable and compare to dichotomous asset variables
tab1 assets
codebook assets
tab1 `assets', mis
foreach var of varlist `assets' {
sum `var', det
local m=r(mean)
replace `var'=int(`m') if `var'==.
}
* Create dichotomous livestock owned variables
foreach var of varlist *_owned {
recode `var' -88 -99 88 99 =.
sum `var', det
local m=r(p50)
recode `var' 0/`m'=0 .=0 else=1, gen(`var'01)
tab1 `var', miss
recode `var' .=0
mean `var'
}
* Main material of the floor
tab1 floor, miss
tab1 floor, nolab
recode floor 10/19 96 =1 -99 .=. else=0, gen(floor_natural)
recode floor 20/29=1 -99 .=. else=0, gen(floor_rudimentary)
recode floor 30/39=1 -99 .=. else=0 ,gen(floor_finished)
//recode floor 11=1 .=. else=0, gen(floor_other)
* Main material of the roof
tab1 roof
tab roof, nolab
recode roof 10/19 96=1 -99 .=. else=0, gen(roof_natural)
recode roof 20/29=1 -99 .=. else=0, gen(roof_rudimentary)
recode roof 30/39=1 .=. else=0, gen(roof_finished)
//recode roof 14=1 .=. else=0, gen(roof_other)
* Main material of the exterior walls
tab1 walls
tab walls, nolab
recode walls 10/19 96=1 -99 .=. else=0, gen(wall_natural)
recode walls 20/29=1 -99 .=. else=0, gen(wall_rudimentary)
recode walls 30/39=1 -99 .=. else=0, gen(wall_finished)
* Recode wall, floor, and roof variables
recode wall_finished .=0
recode wall_rudimentary .=1
recode wall_natural .=0
recode floor_natural .=1
recode floor_rudimentary .=0
recode floor_finished .=0
recode roof_natural .=1
recode roof_rudimentary .=0
recode roof_finished .=0
* Check the page for Country DHS; PDF page 358 of Country 2011 DHS
* Improved drinking water sources include: water from pipe/tap, public tap, borehole or pump, protected well, protected spring or rainwater.
* Improved water sources do not include: vendor-provided water, bottled water, tanker trucks or unprotected wells and springs.
* Generate dichotomous water source variables
tab water_sources_all, mis
gen water_pipe2dwelling=regexm(water_sources_all, "piped_indoor")
gen water_pipe2yard=regexm(water_sources_all, "piped_yard")
gen water_publictap=regexm(water_sources_all, "piped_public")
gen water_tubewell=regexm(water_sources_all, "tubewell")
gen water_protectedwell=regexm(water_sources_all, "protected_dug_well")
gen water_unprotectedwell=regexm(water_sources_all, "unprotected_dug_well")
gen water_protectedspring=regexm(water_sources_all, "protected_spring")
gen water_unprotectedspring=regexm(water_sources_all, "unprotected_spring")
gen water_rainwater=regexm(water_sources_all, "rainwater")
gen water_tankertruck=regexm(water_sources_all, "tanker")
gen water_cart=regexm(water_sources_all, "cart")
gen water_surfacewater=regexm(water_sources_all, "surface_water")
gen water_bottled=regexm(water_sources_all, "bottled")
gen water_sachet=regexm(water_sources_all, "sachet")
gen water_refill=regexm(water_sources_all, "refill")
* Generate dichotomous toilet facility variables
tab sanitation_all, mis
gen toilet_pipedsewer=regexm(sanitation_all, "flush_sewer")
gen toilet_septictank=regexm(sanitation_all, "flush_septic")
gen toilet_flushpit=regexm(sanitation_all, "flushpit")
gen toilet_elsewhere=regexm(sanitation_all, "flush_elsewhere")
gen toilet_unknown=regexm(sanitation_all, "flush_unknown")
gen toilet_ventilatedpitlatrine=regexm(sanitation_all, "vip")
gen toilet_pitlatrinewslab=regexm(sanitation_all, "pit_with_slab")
gen toilet_pitlatrinewoslab=regexm(sanitation_all, "pit_no_slab")
gen toilet_compostingtoilet=regexm(sanitation_all, "composting")
gen toilet_buckettoilet=regexm(sanitation_all, "bucket")
gen toilet_hangingtoilet=regexm(sanitation_all, "hanging")
gen toilet_nofacility=regexm(sanitation_all, "bush")
gen toilet_other=regexm(sanitation_all, "other")
gen toilet_missing=regexm(sanitation_all, "-99")
* Rename toilet facility options so not included in wealth quintile generation
rename toilet_nofacility notoilet_nofacility
capture rename toilet_bushwater notoilet_bushwater
rename toilet_missing notoilet_missing
* Create temp file for all variables used to generate wealth quintile
tab1 `water' `assets' floor_* wall_* roof_* toilet_* *_owned01, mis
tempfile tem
save `tem', replace
* Create mean tempfile for all variables used to generate wealth quintile
preserve
collapse (mean) `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
tempfile mean
save `mean', replace
restore
* Create standard deviation tempfile for all variables used to generate wealth quintile
preserve
collapse (sd) `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
tempfile sd
save `sd', replace
restore
* Create count (N) tempfile for all variables used to generate wealth quintile
preserve
collapse (count) `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
tempfile N
save `N', replace
restore
* Create minimum tempfile for all variables used to generate wealth quintile
preserve
collapse (min) `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
tempfile min
save `min', replace
restore
* Create maximum tempfile for all variables used to generate wealth quintile
preserve
collapse (max) `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
tempfile max
save `max', replace
restore
use `mean', clear
append using `sd'
append using `N'
append using `min'
append using `max'
* Use principal component analysis to generate wealth quintile
use `tem', clear
su `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
pca `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
predict score
alpha `water' `assets' floor_* wall_* roof_* toilet_* *_owned01
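* Optional diagnostics (sketch, commented out; assumes the pca above ran):
* inspect the loadings and how much variance the first component ("score") captures.
* estat loadings
* display e(trace)   // total variance; first eigenvalue / e(trace) = share explained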
* Put into quintiles/tertiles based on weighted households
gen wealthcat=`wealthgroup'
if wealthcat==3 {
xtile wealthtertile=score [pw=HHweight], nq(3)
cap label define wealthtert 1 "Lowest tertile" 2 "Middle tertile" 3 "Highest tertile"
label value wealthtertile wealthtert
}
else {
xtile wealthquintile=score [pweight=HHweight], nq(5)
cap label define wealthquint 1 "Lowest quintile" 2 "Lower quintile" 3 "Middle quintile" 4 "Higher quintile" 5 "Highest quintile"
label value wealthquintile wealthquint
}
drop wealthcat
*******************************************************************************
* SAVE AND MERGE WEALTH QUINTILE DATA
*******************************************************************************
* Keep only wealth quintile, pca score variable, and metainstanceID and save as dataset
keep metainstanceID score wealth*
tempfile wealthscore
save `wealthscore', replace
* Merge wealth score dataset into dataset for analysis
use `temp'
*use `CCRX'_`date'_ForAnalysis.dta
merge m:1 metainstanceID using `wealthscore', nogen
save "`CCRX'_WealthWeightAll_$date.dta", replace
*******************************************************************************
* GENERATE UNMET NEED
*******************************************************************************
/* Note from DHS:
Stata program to create Revised unmet need variable as described in
Analytical Study 25: Revising Unmet Need for Family Planning
by Bradley, Croft, Fishel, and Westoff, 2012, published by ICF International
measuredhs.com/pubs/pdf/AS25/AS25.pdf
Program written by Sarah Bradley and edited by Trevor Croft, last updated 23 January 2011
SBradley@icfi.com
This program will work for most surveys. If your results do not match
Revising Unmet Need for Family Planning, the survey you are analyzing may require
survey-specific programming. See survey-specific link at measuredhs.com/topics/Unmet-Need.cfm */
* Use weighted female dataset with wealth quintile
use "`CCRX'_WealthWeightAll_$date.dta", clear
keep if FRS_result==1 & HHQ_result==1
tempfile temp
save `temp',replace
* Check for missing values
codebook FQdoi_corrected
* Split FQdoi_corrected
split FQdoi_corrected, gen(doi_)
* Generate doimonth (month of interview) from first split variable
gen doimonth=doi_1
tab1 doimonth, mis
replace doimonth=lower(doimonth)
tab1 doimonth, mis
replace doimonth="12" if doimonth=="dec"
replace doimonth="1" if doimonth=="jan"
replace doimonth="2" if doimonth=="feb"
replace doimonth="3" if doimonth=="mar"
replace doimonth="4" if doimonth=="apr"
replace doimonth="5" if doimonth=="may"
replace doimonth="6" if doimonth=="jun"
replace doimonth="7" if doimonth=="jul"
replace doimonth="8" if doimonth=="aug"
replace doimonth="9" if doimonth=="sep"
replace doimonth="10" if doimonth=="oct"
replace doimonth="11" if doimonth=="nov"
tab1 doimonth, mis
* Generate doiyear (year of interview) from third split variable
gen doiyear=doi_3
* Destring doimonth and doiyear
destring doimonth doiyear, replace
* Calculate doi in century month code (months since January 1900)
gen doicmc=(doiyear-1900)*12+doimonth
tab doicmc, mis
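* Example (sanity check, assuming doimonth/doiyear parsed correctly):
* an interview in June 2017 gives doicmc = (2017-1900)*12 + 6
display (2017-1900)*12+6   // doicmc for June 2017: 1410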
* Check the dates to make sure they make sense and correct if they do not
egen tagdoicmc=tag(doicmc)
*br SubmissionDate wrongdate start end system* doi* this* if tagdoicmc==1
* Drop unnecessary variables used to generate doicmc
drop tagdoicmc
drop doi_*
* Confirm have only completed HHQ/FQ surveys
keep if FRS_result==1 & HHQ_result==1
codebook metainstanceID
* Generate unmet need variable
capture g unmet=.
* Set unmet need to NA for unmarried women if survey only included ever-married women
tab FQmarital_status, mis
tab FQmarital_status, mis nolab
recode FQmarital_status -99 =.
* Tab desire for more children variable(s)
des more_children*
tab more_children, mis
tab more_children, mis nolab
* Tab length of time want to wait until next birth variable
tab wait_birth, miss
tab wait_birth, miss nolab
tab wait_birth_pregnant, miss nolab
*******************************************************************************
* GROUP 1: CONTRACEPTIVE USERS
*******************************************************************************
* Using to limit if wants no more, sterilized, or declared infecund
recode unmet .=4 if cp==1 & (more_children==2 | femalester==1 | malester==1 | more_children==3)
* Using to space - all other contraceptive users
recode unmet .=3 if cp==1
*******************************************************************************
* GROUP 2: PREGNANT OR POSTPARTUM AMENORRHEIC (PPA) WOMEN
*******************************************************************************
* Determine who should be in Group 2
* Generate time since last birth (i.e. gap between date of interview and date of last birth)
* Generate month and year of last birth variables
split recent_birth, gen(lastbirth_)
rename lastbirth_1 lastbirthmonth
rename lastbirth_3 lastbirthyear
drop lastbirth_*
* Create numeric month of last birth variable
replace lastbirthmonth=lower(lastbirthmonth)
replace lastbirthmonth="1" if lastbirthmonth=="jan"
replace lastbirthmonth="2" if lastbirthmonth=="feb"
replace lastbirthmonth="3" if lastbirthmonth=="mar"
replace lastbirthmonth="4" if lastbirthmonth=="apr"
replace lastbirthmonth="5" if lastbirthmonth=="may"
replace lastbirthmonth="6" if lastbirthmonth=="jun"
replace lastbirthmonth="7" if lastbirthmonth=="jul"
replace lastbirthmonth="8" if lastbirthmonth=="aug"
replace lastbirthmonth="9" if lastbirthmonth=="sep"
replace lastbirthmonth="10" if lastbirthmonth=="oct"
replace lastbirthmonth="11" if lastbirthmonth=="nov"
replace lastbirthmonth="12" if lastbirthmonth=="dec"
* Destring last birth month and year variables
destring lastbirth*, replace
tab1 lastbirthmonth lastbirthyear
* Replace last birth month and year equal to missing if year is 2020 (i.e. missing)
replace lastbirthmonth=. if lastbirthyear==2020
recode lastbirthyear 2020=.
* Generate last birth date in century month code
gen lastbirthcmc=(lastbirthyear-1900)*12+lastbirthmonth
* Generate time since last birth in months variable
gen tsinceb=doicmc-lastbirthcmc
* Generate time since last period in months from v215, time since last menstrual period
replace menstrual_period=. if menstrual_period==-99
* Tab menstrual_period
tab menstrual_period, mis
tab menstrual_period, mis nolab
**Some women who report years since last menstrual period give the actual year instead
replace menstrual_period_value = menstrual_period_value-doiyear if menstrual_period_value>2000 ///
& menstrual_period_value!=. & menstrual_period==4 // years
* Generate time since menstrual period variable in months
g tsincep = menstrual_period_value if menstrual_period==3 // months
replace tsincep = int(menstrual_period_value/30) if menstrual_period==1 // days
replace tsincep = int(menstrual_period_value/4.3) if menstrual_period==2 // weeks
replace tsincep = menstrual_period_value*12 if menstrual_period==4 // years
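* Example conversions (sketch): 45 days -> int(45/30) = 1 month; 9 weeks -> int(9/4.3) = 2 months
display int(45/30)   // 1
display int(9/4.3)   // 2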
* Initialize pregnant (1) or currently postpartum amenorrheic (PPA) women who have not had period since before birth (6)
g pregPPA=1 if pregnant==1 | menstrual_period==6
* For women with missing data or "period not returned" on date of last menstrual period, use information from time since last period
* If last period is before last birth in last 5 years
replace pregPPA=1 if tsincep> tsinceb & tsinceb<60 & tsincep!=. & tsinceb!=.
* Or if said "before last birth" to time since last period in the last 5 years
replace pregPPA=1 if menstrual_period==-99 & tsinceb<60 & tsinceb!=.
* Select only women who are pregnant or PPA for <24 months
g pregPPA24=1 if pregnant==1 | (pregPPA==1 & tsinceb<24)
* Classify based on wantedness of current pregnancy/last birth
* Generate variable for whether or not wanted last/current pregnancy then, later, or not at all
gen wantedlast=pregnancy_current_desired // currently pregnant
gen m10_1=pregnancy_last_desired // not currently pregnant
replace wantedlast = m10_1 if (wantedlast==. | wantedlast==-99) & pregnant!=1
tab wantedlast
replace wantedlast=. if wantedlast==-99
* Recode as no unmet need if wanted current pregnancy/last birth then/at that time
recode unmet .=7 if pregPPA24==1 & wantedlast==1
* Recode as unmet need for spacing if wanted current pregnancy/last birth later
recode unmet .=1 if pregPPA24==1 & wantedlast==2
* Recode as unmet need for limiting if wanted current pregnancy/last birth not at all
recode unmet .=2 if pregPPA24==1 & wantedlast==3
* Recode unmet need as missing value if "wantedlast" missing and if has been post-partum amenorrheic for less than 24 months
recode unmet .=99 if pregPPA24==1 & wantedlast==.
* Determine if sexually active in last 30 days: less than 4 weeks or less than 30 days
gen sexact=0
replace sexact=1 if (last_time_sex==2 & last_time_sex_value<=4 & last_time_sex_value>=0) | (last_time_sex==1 & last_time_sex_value<=30 & last_time_sex_value>=0) ///
| (last_time_sex==3 & last_time_sex_value<=1 & last_time_sex_value>=0)
* If unmarried and not sexually active in last 30 days, assume no need
recode unmet .=97 if FQmarital_status~=1 & FQmarital_status~=2 & sexact!=1
*******************************************************************************
* GROUP 3: DETERMINE FECUNDITY
*******************************************************************************
* Boxes refer to the Figure 2 flowchart in DHS Analytical Study 25
* Box 1 (applicable only to currently married/cohabiting women)
* Married 5+ years ago, no children in past 5 years, never used contraception, excluding pregnant and PPA <24 months
* husband_cohabit_start_current husband_cohabit_start_recent
* husband_cohabit_start_recent is first marriage, FQfirstmarriagemonth and FQfirstmarriageyear are dates of first marriage (women married more than once)
* husband_cohabit_start_current is current marriage (if woman married only once, only have current marriage)
* If first marriage more than five years ago, never used contraception, and never had child, then infecund
* Recode month and year of marriage as missing if year of marriage is 2020 (i.e. missing)
tab1 *marriagemonth *marriageyear, mis
replace firstmarriagemonth=. if firstmarriageyear==2020
replace recentmarriagemonth=. if recentmarriageyear==2020
recode firstmarriageyear 2020=.
recode recentmarriageyear 2020=.
* Generate marriage century month code variable
gen marriagecmc=(firstmarriageyear-1900)*12+firstmarriagemonth
replace marriagecmc=(recentmarriageyear-1900)*12 + recentmarriagemonth if marriage_history==1
* Generate time since marriage century month code variable
gen v512=int((doicmc-marriagecmc)/12)
tab v512, mis //years since marriage
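* Worked example of the century month code (CMC) arithmetic above, using
* hypothetical dates: a first marriage in June 2015 has
* marriagecmc = (2015-1900)*12 + 6 = 1386; with an interview in March 2018
* (doicmc = (2018-1900)*12 + 3 = 1419), years since marriage is
* int((1419-1386)/12) = int(2.75) = 2
display int(((2018-1900)*12+3 - ((2015-1900)*12+6))/12)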
* Generate dichotomous infecund variable
g infec=1 if (FQmarital_status==1 | FQmarital_status==2) & v512>=5 & v512!=. & (tsinceb>59 | ever_birth==0) & fp_ever_used==0 & pregPPA24!=1
* Box 2
* Declared infecund on future desires for children
replace infec=1 if more_children==3 // v605==7
* Box 3
* Menopausal/hysterectomy as reason not using contraception - slightly different recoding in DHS III and IV+
replace infec=1 if why_not_usingmeno==1
* Box 4
* Time since last period is >=6 months and not PPA
replace infec=1 if tsincep>=6 & tsincep!=. & pregPPA!=1
* Box 5
* Menopausal/hysterectomy for time since last period response
replace infec=1 if menstrual_period==5
* Never menstruated for time since last period response, unless had a birth in the last 5 years
replace infec=1 if menstrual_period==7 & (tsinceb>59 | tsinceb==.)
* Box 6
* Time since last birth>= 60 months and last period was before last birth
replace infec=1 if menstrual_period==6 & tsinceb>=60 & tsinceb!=.
* Never had a birth, but last period reported as before last birth - assume code should have been something other than 6
replace infec=1 if menstrual_period==6 & ever_birth==0
* Exclude pregnant and PP amenorrheic < 24 months
replace infec=. if pregPPA24==1
* Recode unmet need
recode unmet .=9 if infec==1
*******************************************************************************
* GROUP 4: FECUND WOMEN
*******************************************************************************
* Recode as no unmet need if wants child within 2 years
recode unmet .=7 if more_children==1 & ((wait_birth==1 & wait_birth_value<24) | (wait_birth==2 & wait_birth_value<2) | (wait_birth==3) ) // v605==1, wants within 2 years; wait_birth: 4 months; 5 years
* Recode as unmet need for spacing if wants in 2+ years, wants undecided timing, or unsure if wants
recode unmet .=1 if more_children==-88 | (more_children==1 & ( (wait_birth==1 & wait_birth_value>=24) | (wait_birth==2 & wait_birth_value>=2)) | (wait_birth==-88) | (wait_birth==-99)) //v605>=2 & v605<=4
* Recode as unmet need for limiting if wants no more children
recode unmet .=2 if more_children==2
* Recode any remaining missing values as "99"
recode unmet .=99
* Label unmet need
capture la def unmet ///
1 "unmet need for spacing" ///
2 "unmet need for limiting" ///
3 "using for spacing" ///
4 "using for limiting" ///
7 "no unmet need" ///
9 "infecund or menopausal" ///
97 "not sexually active" ///
98 "unmarried - EM sample or no data" ///
99 "missing"
la val unmet unmet
* Generate and label dichotomous unmet need variable
capture recode unmet (1/2=1 "unmet need") (else=0 "no unmet need"), g(unmettot)
* Tab unmet need variable for married or unmarried women
tab unmet
tab unmet if FQmarital_status==1 | FQmarital_status==2
* Keep only certain variables
keep unmet unmettot FQmetainstanceID doi* *cmc tsince*
* Drop duplicates
duplicates drop FQmetainstanceID, force
*******************************************************************************
* SAVE AND MERGE UNMET NEED DATA, CLOSE LOG
*******************************************************************************
* Save unmet need dataset
tempfile unmet
saveold `unmet', replace
* Merge unmet need dataset with weighted, wealth quintile dataset containing all data
use "`CCRX'_WealthWeightAll_$date.dta"
merge m:1 FQmetainstanceID using `unmet', nogen
saveold "`CCRX'_WealthWeightAll_$date.dta", replace version(12)
log close
clear
clear matrix
clear mata
capture log close
set maxvar 15000
set more off
numlabel, add
/*******************************************************************************
*
* FILENAME: PMA_HHQFQ_2Page_Analysis_$date.do
* PURPOSE: PMA2020 HHQ/FQ two page data analysis
* CREATED: Linnea Zimmerman (lzimme12@jhu.edu)
* DATA IN: CCRX_WealthWeightFemale_$date.dta
* DATA OUT: CCRX_HHQFQ_2Page_Analysis_$date.dta
*
*******************************************************************************/
*******************************************************************************
* INSTRUCTIONS
*******************************************************************************
*
* 1. Update macros/directories in first section
* 2. If using source of method by method figure, uncomment last section
* and update coding
*
*******************************************************************************
* SET MACROS: UPDATE THIS SECTION FOR EACH COUNTRY/ROUND
*******************************************************************************
* Set macros for country and round
local country "Nigeria"
local round "Round5Oyo"
local CCRX "NGOyoR5"
* Set macros for contraceptive methods
local mcp "(current_methodnumEC>=1 & current_methodnumEC<=19)"
local tcp "(current_methodnumEC>=30 & current_methodnumEC<=39)"
* Set macro for date of most recently generated weighted HHQ/FQ dataset with wealth and unmet need variables that intend to use
local datadate "23Feb2018"
* Set macros for data sets
local householddata "/Users/ealarson/Documents/Côte d'Ivoire/Data_NotShared/Round1/HHQFQ/Analysis - 02.23.2018/CIR1_WealthWeightAll_8Aug2018.dta"
* Set macro for directory/file of program to calculate medians
local medianfile "/Users/ealarson/Dropbox (Gates Institute)/1 DataManagement_General/DataRoutines/PMA_SOI_2P/DoFiles/Current/PMA2020_MedianDefineFn_simple_9Mar2015.do"
*local medianfile "C:\Users\Shulin\Dropbox (Gates Institute)/DataRoutines/PMA_SOI_2P/DoFiles/Current/PMA2020_MedianDefineFn_simple_9Mar2015.do"
* Set directory for country and round
global datadir "/Users/ealarson/Documents/Côte d'Ivoire/Data_NotShared/Round1/HHQFQ/Analysis - 02.23.2018"
cd "$datadir"
* Set local/global macros for current date
local today=c(current_date)
local c_today= "`today'"
global date=subinstr("`c_today'", " ", "",.)
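* Worked example of the date macro construction above: c(current_date)
* returns a string such as "23 Feb 2018"; subinstr() strips the spaces so
* $date becomes "23Feb2018", matching the dataset naming convention
display subinstr("23 Feb 2018", " ", "", .)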
* Create log
log using "`CCRX'_HHQFQ_2Page_Analysis.log", replace
*******************************************************************************
* PREPARE DATA FOR ANALYSIS
*******************************************************************************
* First use household data to show response rates
use "`householddata'",clear
preserve
keep if metatag==1
gen responserate=0 if HHQ_result>=1 & HHQ_result<6
replace responserate=1 if HHQ_result==1
label define responselist 0 "Not complete" 1 "Complete"
label val responserate responselist
tabout responserate using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", replace ///
cells(freq col) h2("Household response rate") f(0 1) clab(n %)
restore
* Response rate among all women
gen FQresponserate=0 if eligible==1 & last_night==1
replace FQresponserate=1 if FRS_result==1 & last_night==1
label define responselist 0 "Not complete" 1 "Complete"
label val FQresponserate responselist
tabout FQresponserate if HHQ_result==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
cells(freq col) h2("Female response rate") f(0 1) clab(n %)
* Restrict analysis to women who completed questionnaire and households with completed questionnaire
keep if FRS_result==1 & HHQ_result==1
* Restrict analysis to women who slept in the house the night before (de facto)
keep if last_night==1
* Save data set so can replicate analysis results later
save "`CCRX'_HHQFQ_2Page_Analysis_$date.dta", replace
* Check for duplicates
duplicates report FQmetainstanceID
codebook FQmetainstanceID
* Set survey weights
svyset EA [pweight=FQweight]
* Generate variable that represents number of observations
gen one=FRS_result
label var one "All women"
* Generate dichotomous "married" variable to represent all women married or currently living with a man
gen married=(FQmarital_status==1 | FQmarital_status==2)
label variable married "Married or currently living with a man"
* Generate dichotomous sexually active unmarried women variable
cap drop umsexactive
gen umsexactive=0
replace umsexactive=1 if married==0 & ((last_time_sex==2 & last_time_sex_value<=4 & last_time_sex_value>=0) | (last_time_sex==1 & last_time_sex_value<=30 & last_time_sex_value>=0) ///
	| (last_time_sex==3 & last_time_sex_value<=1 & last_time_sex_value>=0))
* Generate sexually active variable
gen sexactive= (last_time_sex==2 & last_time_sex_value<=4 & last_time_sex_value>=0) | (last_time_sex==1 & last_time_sex_value<=30 & last_time_sex_value>=0) ///
| (last_time_sex==3 & last_time_sex_value<=1 & last_time_sex_value>=0)
* Generate 0/1 urban/rural variable
capture confirm var ur
if _rc==0 {
gen urban=ur==1
label variable urban "Urban/rural place of residence"
label define urban 1 "Urban" 0 "Rural"
label value urban urban
tab urban, mis
}
else {
gen urban=1
label variable urban "No urban/rural breakdown"
}
* Label yes/no response options
capture label define yesno 0 "No" 1 "Yes"
foreach x in married umsexactive sexactive {
label values `x' yesno
}
* Tabout count of all women, unweighted
*tabout one using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", replace cells(freq) h2("All women (unweighted)") f(0)
/* Tabout count of all women, weighted (should be same as unweighted count of all women)
tabout one [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append cells(freq) h2("All women (weighted)") f(0)
* Tabout count of all married women, unweighted
*tabout married if married==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append cells (freq) h2("Married women (unweighted)") f(0)
* Tabout count of all married women, weighted
tabout married if married==1 [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append cells(freq) h2("Married women (weighted)") f(0)
* Tabout count of unmarried sexually active women, unweighted
*tabout umsexactive if umsexactive==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append cells(freq) h2("Unmarried sexually active (unweighted)") f(0)
* Tabout count of unmarried sexually active women, weighted
tabout umsexactive if umsexactive==1 [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append cells(freq) h2("Unmarried sexually active (weighted)") f(0)
*/
*******************************************************************************
* FIRST PAGE
*******************************************************************************
*******************************************************************************
* CONTRACEPTIVE PREVALENCE RATE
*******************************************************************************
* Set macros for contraceptive methods
local mcp "(current_methodnumEC>=1 & current_methodnumEC<=19)"
local tcp "(current_methodnumEC>=30 & current_methodnumEC<=39)"
* Generate numeric current method variable that includes emergency contraception
capture gen current_methodnumEC=current_recent_methodEC if cp==1
label variable current_methodnumEC "Current contraceptive method, including EC (numeric)"
* Generate dichotomous current use of modern contraceptive variable
capture gen mcp=`mcp'
label variable mcp "Current use of modern contraceptive method"
* Generate dichotomous current use of traditional contraceptive variable
capture gen tcp=`tcp'
label variable tcp "Current use of traditional contraceptive method"
* Generate dichotomous current use of any contraceptive variable
capture gen cp= current_methodnumEC>=1 & current_methodnumEC<40
label variable cp "Current use of any contraceptive method"
* Generate dichotomous current use of long acting contraceptive variable
capture drop longacting
capture gen longacting=current_methodnumEC>=1 & current_methodnumEC<=4
label variable longacting "Current use of long acting contraceptive method"
* Label yes/no response options
foreach x in cp mcp tcp longacting {
label values `x' yes_no_dnk_nr_list
}
* Tabout weighted proportion of contraceptive use (overall, modern, traditional, long acting) among all women
tabout cp mcp longacting [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("CPR/mCPR/Long-acting - all women (weighted)")
* Tabout weighted proportion of contraceptive use (overall, modern, traditional, long acting) among married women
tabout cp mcp longacting if married==1 [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("CPR/mCPR/Long-acting - married women (weighted)")
*******************************************************************************
* UNMET NEED
*******************************************************************************
* Label variables for tabout
label variable unmet "Unmet need (categorical)"
label variable unmettot "Unmet need (dichotomous)"
* Generate total demand = current use + unmet need
gen totaldemand=0
replace totaldemand=1 if cp==1 | unmettot==1
label variable totaldemand "Has contraceptive demand, i.e. current user or unmet need"
* Generate total demand satisfied - CONFIRM INDICATOR CODED CORRECTLY
gen totaldemand_sat=0 if totaldemand==1
replace totaldemand_sat=1 if totaldemand==1 & mcp==1
label variable totaldemand_sat "Contraceptive demand satisfied by modern method"
* Generate categorical unmet need, traditional method, modern method variable
gen cont_unmet=0 if married==1
replace cont_unmet=1 if unmettot==1
replace cont_unmet=2 if tcp==1
replace cont_unmet=3 if mcp==1
label variable cont_unmet "Unmet need, traditional method, and modern method prevalence among married women"
label define cont_unmetl 0 "None" 1 "Unmet need" 2 "Traditional contraceptive use" 3 "Modern contraceptive use"
label values cont_unmet cont_unmetl
* Label yes/no response options
foreach x in totaldemand totaldemand_sat {
label values `x' yesno
}
* Tabout weighted proportion of unmet need (categorical and dichotomous) among all women
tabout unmettot unmet [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Unmet need (categorical and dichotomous) - all women (weighted)")
* Tabout weighted proportion of unmet need (categorical dichotomous) among married women
tabout unmettot unmet [aw=FQweight] if married==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Unmet need (categorical and dichotomous) - married women (weighted)")
* Tabout weighted proportion of unmet need (categorical dichotomous) among unmarried sexually active women
*capture tabout unmettot unmet [aw=FQweight] if umsexactive==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Unmet need (categorical and dichotomous) - unmarried sexually active women (weighted)")
* Tabout weighted proportion of total demand among all women
tabout totaldemand [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Total demand for contraception - all women (weighted)")
* Tabout weighted proportion of total demand among all women
tabout totaldemand_sat [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Contraceptive demand satisfied by modern method- all women (weighted)")
* Tabout weighted proportion of total demand among married women
tabout totaldemand [aw=FQweight] if married==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Total demand for contraception - married women (weighted)")
* Tabout weighted proportion of total demand among married women
tabout totaldemand_sat [aw=FQweight] if married==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Contraceptive demand satisfied by modern method- married women (weighted)")
* Tabout weighted proportion of total demand among married women
tabout totaldemand_sat wealth [aw=FQweight] if married==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) clab(%) npos(row) h1("Contraceptive demand satisfied by wealth - married women (weighted)")
*******************************************************************************
* FERTILITY INDICATORS
*******************************************************************************
* Generate variables like in DHS, birth date in cmc
gen birthmonth2=birthmonth
replace birthmonth=6 if birthmonth==-88
gen v011=(birthyear-1900)*12 + birthmonth
*******************************************************************************
* UNINTENDED BIRTHS
*******************************************************************************
* Codebook recent births unintended: last birth/current pregnancy wanted then, later, not at all
codebook pregnancy_last_desired
codebook pregnancy_current_desired
* Recode "-99" as "." to represent missing
recode pregnancy_last_desired -99 =.
recode pregnancy_current_desired -99 =.
* Generate wantedness variable that combines results from last birth and current pregnancy questions
gen wanted=1 if pregnancy_last_desired==1 | pregnancy_current_desired==1
replace wanted=2 if pregnancy_last_desired==2 | pregnancy_current_desired==2
replace wanted=3 if pregnancy_last_desired==3 | pregnancy_current_desired==3
label variable wanted "Intendedness of previous birth/current pregnancy (categorical): then, later, not at all"
label def wantedlist 1 "then" 2 "later" 3 "not at all"
label val wanted wantedlist
tab wanted, mis
* Generate dichotomous intendedness variables that combines births wanted "later" or "not at all"
gen unintend=1 if wanted==2 | wanted==3
replace unintend=0 if wanted==1
label variable unintend "Intendedness of previous birth/current pregnancy (dichotomous)"
label define unintendl 0 "intended" 1 "unintended"
label values unintend unintendl
* Tabout intendedness and wantedness among women who had a birth in the last 5 years or are currently pregnant
tabout unintend wanted [aw=FQweight] if tsinceb<60 | pregnant==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) ///
h2("Intendedness (dichotomous and categorical) among women who had a birth in the last 5 years or are currently pregnant (weighted)")
*******************************************************************************
* CURRENT USE AND UNMET NEED AMONG MARRIED WOMEN BY WEALTH
*******************************************************************************
* Tabout current use and unmet need among married women of reproductive age, by wealth quintile (weighted)
tabout wealth cont_unmet [aw=FQweight] if married==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(row) f(1) clab(%) npos(row) h1("Unmet need, traditional method, and modern method prevalence among married women (weighted)")
*******************************************************************************
* METHOD MIX PIE CHART
*******************************************************************************
* Label variables
label variable current_recent_method "Current or recent method"
* Tabout current/recent method if using modern contraceptive method, among married women
tabout current_methodnumEC [aweight=FQweight] if mcp==1 & married==1 & cp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Method mix - married women (weighted)")
* Tabout current/recent method if using modern contraceptive method, among unmarried sexually active women
capture tabout current_methodnumEC [aweight=FQweight] if mcp==1 & umsexactive==1 & cp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) h2("Method mix - unmarried sexually active women (weighted)")
gen current_methodnumEC2=current_methodnumEC
replace current_methodnumEC2=0 if current_methodnumEC2==. | current_methodnumEC2==-99
replace current_methodnumEC2=30 if current_methodnumEC>=30 & current_methodnumEC<=39
label copy methods_list methods_list2
label define methods_list2 0 "Not using" 30 "Traditional methods", modify
label val current_methodnumEC2 methods_list2
*******************************************************************************
* SECOND PAGE
*******************************************************************************
*******************************************************************************
* CHOICE INDICATORS BY WEALTH
*******************************************************************************
* Method chosen self or jointly
tab fp_final_decision if cp==1
gen methodchosen=1 if fp_final_decision==1 | fp_final_decision==4 | fp_final_decision==5
replace methodchosen=0 if fp_final_decision==2 | fp_final_decision==3
replace methodchosen=0 if fp_final_decision==-99 | fp_final_decision==6
label variable methodchosen "Who chose method?"
label define methodchosenl 0 "Not self" 1 "Self, self/provider, self/partner"
label values methodchosen methodchosenl
* Generate dichotomous would return to provider/refer relative to provider variable
recode return_to_provider -88 -99=0
recode refer_to_relative -88 -99=0
gen returnrefer=1 if return_to_provider==1 & refer_to_relative==1 & cp==1
replace returnrefer=0 if cp==1 & (return_to_provider==0 | refer_to_relative==0)
label variable returnrefer "Would return to provider and refer a friend or family member"
label values returnrefer yesno
* Tabout who chose method (weighted) by wealth quintile among current users
tabout methodchosen wealth [aweight=FQweight] if mcp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) clab(%) npos(row) h1("Method chosen - current modern user (weighted)")
* Tabout obtained method of choice by wealth (weighted) among current users
recode fp_obtain_desired -88 -99=0
tabout fp_obtain_desired wealth [aweight=FQweight] if mcp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) clab(%) npos(row) h1("Obtained method of choice by wealth - current modern user (weighted)")
* Tabout told of other methods by wealth (weighted) among current users
recode fp_told_other_methods -88 -99=0
tabout fp_told_other_methods wealth [aweight=FQweight] if mcp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Told of other methods by wealth - current modern user (weighted)")
* Tabout counseled on side effects by wealth (weighted) among current users
recode fp_side_effects -88 -99=0
tabout fp_side_effects wealth [aweight=FQweight] if mcp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Told about side effects by wealth - current modern user (weighted)")
gen fees_paid_lastvisit=0 if method_fees==0
replace fees_paid_lastvisit=1 if method_fees>0 & method_fees!=.
replace fees_paid_lastvisit=1 if method_fees==-88
label var fees_paid_lastvisit "Did you pay for services the last time you obtained FP?"
label define yesno_list 1 "yes" 0 "no"
label val fees_paid_lastvisit yesno_list
* Tabout paid for services by wealth (weighted) among current users
tabout fees_paid_lastvisit wealth [aweight=FQweight] if mcp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("REWORDED QUESTION FROM PREVIOUS ROUNDS Paid for FP services at last visit by wealth - current modern user (weighted)")
* Tabout would return to provider by wealth (weighted) among current users
tabout returnrefer wealth [aweight=FQweight] if mcp==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Return to/refer provider by wealth - current modern user (weighted)")
*******************************************************************************
* RECEIVED A METHOD FROM A PUBLIC SDP
*******************************************************************************
* Generate dichotomous variable for public versus not public source of family planning
recode fp_provider_rw (1/19=1 "public") (-88 -99=0) (nonmiss=0 "not public"), gen(publicfp_rw)
label variable publicfp_rw "Respondent or partner received method for the first time from public family planning provider"
* Tabout whether received contraceptive method from public facility by wealth (weighted) among current users
tabout publicfp_rw wealth if mcp==1 [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Respondent/partner received method from public facility initially by wealth - current modern user (weighted)")
* Tabout percent unintended births is the only indicator in the section not restricted to current users (all others restricted to current users)
tabout unintend wealth [aweight=FQweight] if tsinceb<60 | pregnant==1 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Percent unintended by wealth - current user (weighted)")
save "`CCRX'_HHQFQ_2Page_Analysis_$date.dta", replace
*******************************************************************************
* REASON FOR NON-USE
*******************************************************************************
* Collapse reasons into five categories
* Perceived not at risk (not married, lactating, infrequent/no sex, husband away, menopausal, subfecund, fatalistic)
* Infrequent/no sex/husband away
gen nosex=0 if why_not_using!=""
replace nosex=1 if (why_not_usinghsbndaway==1 | why_not_usingnosex==1)
* Menopausal/subfecund/amenorrheic
gen meno=0 if why_not_using!=""
replace meno=1 if why_not_usingmeno==1 | why_not_usingnomens==1 | why_not_usingsubfec==1
* Lactating
gen lactate=0 if why_not_using!=""
replace lactate=1 if why_not_usingbreastfd==1
* Combined no need
gen noneed=0 if why_not_using!=""
replace noneed=1 if nosex==1|meno==1|lactate==1|why_not_usinguptogod==1
label variable noneed "Perceived not at risk"
tab noneed [aw=FQweight]
* Not married is separate category
gen notmarried=0 if why_not_using!=""
replace notmarried=1 if why_not_usingnotmarr==1
label variable notmarried "Reason not using: not married"
tab notmarried [aw=FQweight]
* Method related includes fear of side effects, health concerns, interferes with body's natural processes, inconvenient to use
* Health concerns
gen methodrelated=0 if why_not_using!=""
replace methodrelated=1 if (why_not_usinghealth==1 | why_not_usingbodyproc==1| why_not_usingfearside==1 | why_not_usinginconv==1)
label variable methodrelated "Reason not using: method or health-related concerns"
tab methodrelated [aw=FQweight]
* Opposition includes personal, partner, other, religious
gen opposition=0 if why_not_using!=""
replace opposition=1 if why_not_usingrespopp==1|why_not_usinghusbopp==1|why_not_usingotheropp==1| why_not_usingrelig==1
label variable opposition "Reason not using: opposition to use"
tab opposition [aw=FQweight]
* Access/knowledge
gen accessknowledge=0 if why_not_using!=""
replace accessknowledge=1 if why_not_usingdksource==1 | why_not_usingdkmethod==1 | why_not_usingaccess==1 | why_not_usingcost==1 | ///
why_not_usingprfnotavail==1 | why_not_usingnomethod==1
label variable accessknowledge "Reason not using: lack of access/knowledge"
tab accessknowledge [aw=FQweight]
* Other/no response/don't know
gen othernoresp=0 if why_not_using!=""
replace othernoresp=1 if ( why_not_usingother==1 | why_not_using=="-88" | why_not_using=="-99" )
label variable othernoresp "Reason not using: other"
tab othernoresp [aweight=FQweight]
* Label yes/no response options
foreach x in noneed nosex notmarried methodrelated opposition accessknowledge othernoresp {
label values `x' yesno
}
* Tabout reasons for not using contraception among all women wanting to delay the next birth for 2 or more years
tabout notmarried noneed methodrelated opposition accessknowledge othernoresp [aweight=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", ///
append oneway c(freq col) f(0 1) npos(row) h2("Reasons for non-use - among all women wanting to delay (weighted)")
* Drop amended birthmonth and keep original coding
drop birthmonth
rename birthmonth2 birthmonth
save "`CCRX'_HHQFQ_2Page_Analysis_$date.dta", replace
*******************************************************************************
* MEANS AND MEDIANS
*******************************************************************************
* Generate age at first marriage by "date of first marriage - date of birth"
* Get the date for those married only once from FQcurrent*
* Get the date for those married more than once from FQfirst*
* marriage cmc already defined in unmet need
* Run code to generate medians
run "`medianfile'"
capture drop one
* Generate median age of first marriage
capture drop agemarriage
gen agemarriage=(marriagecmc-v011)/12
label variable agemarriage "Age at first marriage (25 to 49 years)"
*hist agemarriage if FQ_age>=25 & FQ_age<50
save, replace
save tem, replace
* Install the listtab command needed to export the median tables below
ssc install listtab, all replace
* Median age at marriage among all women who have married
preserve
pma2020mediansimple tem agemarriage 25
gen urban="All Women"
tempfile total
save `total', replace
restore
preserve
keep if urban==0
capture codebook metainstanceID
if _rc!=2000{
save tem, replace
pma2020mediansimple tem agemarriage 25
gen urban="Rural"
tempfile rural
save `rural', replace
}
restore
preserve
keep if urban==1
capture codebook metainstanceID
if _rc!=2000{
save tem, replace
pma2020mediansimple tem agemarriage 25
gen urban="Urban"
tempfile urban
save `urban', replace
}
restore
preserve
use `total', clear
capture append using `rural'
capture append using `urban'
listtab urban median , appendto("`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls") rstyle(tabdelim) headlines("Median age at marriage among all women who have married- by urban/rural (weighted)") footlines(" ")
restore
* Median age at first sex among all women who have had sex
preserve
keep if age_at_first_sex>0 & age_at_first_sex<50
save tem, replace
pma2020mediansimple tem age_at_first_sex 15
gen urban="All Women"
tempfile total
save `total', replace
restore
preserve
keep if age_at_first_sex>0 & age_at_first_sex<50 & urban==0
capture codebook metainstanceID
if _rc!=2000 {
save tem, replace
pma2020mediansimple tem age_at_first_sex 15
gen urban="Rural"
tempfile rural
save `rural', replace
}
restore
preserve
keep if age_at_first_sex>0 & age_at_first_sex<50 & urban==1
capture codebook metainstanceID
if _rc!=2000 {
save tem, replace
pma2020mediansimple tem age_at_first_sex 15
gen urban="Urban"
tempfile urban
save `urban',replace
}
restore
preserve
use `total', clear
capture append using `rural'
capture append using `urban'
listtab urban median , appendto("`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls") rstyle(tabdelim) headlines("Median age at first sex - among all women who have had sex by urban/rural(weighted)") footlines(" ")
restore
* Median age at first contraceptive use among all women who have ever used contraception
preserve
keep if fp_ever_used==1 & age_at_first_use>0
save tem, replace
pma2020mediansimple tem age_at_first_use 15
gen urban="All Women"
tempfile total
save `total', replace
restore
preserve
keep if fp_ever_used==1 & age_at_first_use>0 & urban==0
capture codebook metainstanceID
if _rc!=2000 {
save tem, replace
pma2020mediansimple tem age_at_first_use 15
gen urban="Rural"
tempfile rural
save `rural', replace
}
restore
preserve
keep if fp_ever_used==1 & age_at_first_use>0 & urban==1
capture codebook metainstanceID
if _rc!=2000 {
save tem, replace
pma2020mediansimple tem age_at_first_use 15
gen urban="Urban"
tempfile urban
save `urban', replace
}
restore
preserve
use `total', clear
append using `rural'
append using `urban'
listtab urban median , appendto("`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls") rstyle(tabdelim) headlines("Median age at first contraceptive use - among all women who have used contraception by urban/rural (weighted)") footlines(" ")
restore
* Generate age at first birth: subtract the woman's birth date from the date of her first birth and divide by the number of hours in an average year (8765.81 hours = 365.24 days x 24 hours)
capture drop agefirstbirth
capture replace first_birthSIF=recent_birthSIF if birth_events==1
capture replace first_birthSIF=recent_birthSIF if children_born==1
gen agefirstbirth=hours(first_birthSIF-birthdateSIF)/8765.81
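* Hedged sanity check (not part of the original workflow): since 8765.81 is the
* number of hours in an average year (365.2425 days x 24 = 8765.82), agefirstbirth
* is measured in years. The [8, 50) plausibility range below is an assumption for
* illustration, not a PMA rule; capture keeps the do-file running either way.
capture assert agefirstbirth>=8 & agefirstbirth<50 if agefirstbirth!=.
if _rc!=0 di as error "Warning: agefirstbirth outside [8, 50) for some records"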
* Median age at first birth among all women who have ever given birth
preserve
keep if ever_birth==1
save tem, replace
pma2020mediansimple tem agefirstbirth 25
gen urban="All Women"
tempfile total
save `total', replace
restore
preserve
keep if ever_birth==1 & birth_events_rw!=. & birth_events_rw!=-99 & urban==0
capture codebook metainstanceID
if _rc!=2000 {
save tem, replace
pma2020mediansimple tem agefirstbirth 25
gen urban="Rural"
tempfile rural
save `rural', replace
}
restore
preserve
keep if ever_birth==1 & birth_events_rw!=. & birth_events_rw!=-99 & urban==1
capture codebook metainstanceID
if _rc!=2000 {
save tem, replace
pma2020mediansimple tem agefirstbirth 25
gen urban="Urban"
tempfile urban
save `urban', replace
}
restore
preserve
use `total', clear
append using `rural'
append using `urban'
listtab urban median , appendto("`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls") rstyle(tabdelim) headlines("Median age at first birth - among all women who have given birth by urban/rural(weighted)") footlines(" ")
restore
* Percent of women age 18-24 having first birth by age 18
capture drop birth18
gen birth18=0 if FQ_age>=18 & FQ_age<25
replace birth18=1 if agefirstbirth<18 & birth18==0
label variable birth18 "Birth by age 18 (18-24)"
tab birth18 [aw=FQweight]
tab urban birth18 [aw=FQweight], row
* Percent received FP information from visiting provider or health care worker at facility
recode visited_by_health_worker -99=0
recode facility_fp_discussion -99=0
gen healthworkerinfo=0
replace healthworkerinfo=1 if visited_by_health_worker==1 | facility_fp_discussion==1
label variable healthworkerinfo "Received family planning info from provider in last 12 months"
tab healthworkerinfo [aweight=FQweight]
tab urban healthworkerinfo [aweight=FQweight], row
* Percent with exposure to family planning media in past few months
gen fpmedia=0
replace fpmedia=1 if fp_ad_radio==1 | fp_ad_magazine==1 | fp_ad_tv==1
label variable fpmedia "Exposed to family planning media in last few months"
tab fpmedia [aw=FQweight]
tab urban fpmedia [aw=FQweight], row
* Label yes/no response options
foreach x in healthworkerinfo fpmedia {
label values `x' yesno
}
* Tabout mean no. of living children at first contraceptive use among women who have ever used contraception
replace age_at_first_use_children=0 if ever_birth==0 & fp_ever_used==1
tabout urban [aweight=FQweight] if fp_ever_used==1 & age_at_first_use_children>=0 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append sum c(mean age_at_first_use_children) f(3) npos(row) h2("Mean number of children at first contraceptive use - among all women who have used contraception (weighted)")
* Tabout birth by age 18 among all women by urban/rural, weighted
tabout birth18 urban [aweight=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Birth by age 18 (18-24) - among all women (weighted)")
* Tabout received family planning information from provider in last 12 months among all women by urban/rural, weighted
tabout healthworkerinfo urban [aweight=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Received FP info. from provider in last 12 months - among all women (weighted)")
* Tabout exposure to family planning media in past few months among all women by urban/rural, weighted
tabout fpmedia urban [aweight=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append c(col) f(1) npos(row) h1("Exposed to FP media in last few months - among all women (weighted)")
*******************************************************************************
* ALTERNATIVE INDICATORS IF FEWER THAN 50 UNMARRIED SEXUALLY ACTIVE WOMEN
*******************************************************************************
* Percent women 18-24 who are married by age 18
gen married18=0 if FQ_age>=18 & FQ_age<25
replace married18=1 if agemarriage<18 & married18==0
label variable married18 "Married by age 18"
tab married18 [aw=FQweight]
tab urban married18 [aw=FQweight], row
* Percent women 18-24 who have had first birth by age 18
*Already defined
* Percent women 18-24 who have had first contraceptive use by age 18
gen fp18=0 if FQ_age>=18 & FQ_age<25
replace fp18=1 if age_at_first_use>0 & age_at_first_use<18 & fp18==0
label variable fp18 "Used contraception by age 18"
tab fp18 [aw=FQweight]
tab urban fp18 [aw=FQweight], row
* Percent women 18-24 who had first sex by age 18
gen sex18=0 if FQ_age>=18 & FQ_age<25
replace sex18=1 if age_at_first_sex>0 & age_at_first_sex<18 & sex18==0
label variable sex18 "Had first sex by age 18"
tab sex18 [aw=FQweight]
tab urban sex18 [aw=FQweight], row
* Label yes/no response options
foreach x in married18 birth18 fp18 sex18 {
label values `x' yesno
}
* Tabout married by 18, first birth before 18, contraceptive use by 18, first sex by 18 among women age 18-24 (weighted)
tabout married18 sex18 fp18 birth18 [aw=FQweight] if FQ_age>=18 & FQ_age<25 using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append oneway c(col) f(1) clab(%) npos(row) ///
h2("Married by 18, first sex by 18, contraceptive use by 18, first birth before 18 - women age 18-24 (weighted)")
* Age-specific rates of long-acting, short-acting, and traditional method use, and unmet need
gen lstu=1 if longacting==1
replace lstu=2 if longacting!=1 & mcp==1
replace lstu=3 if tcp==1
replace lstu=4 if unmettot==1
replace lstu=5 if lstu==.
label define lstul 1 "Long acting" 2 "Short acting" 3 "Traditional" 4 "Unmet need" 5 "Not using/no need"
label val lstu lstul
egen age5=cut(FQ_age), at(15(5)50)
tabout age5 lstu [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
cells(row) h1("Use/need by age - all women (weighted)") f(2)
*******************************************************************************
* DEMOGRAPHIC VARIABLES
*******************************************************************************
recode school -99=.
tabout age5 [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
c(freq col) f(0 1) clab(n %) npos(row) h2("Distribution of de facto women by age - weighted")
tabout school [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
c(freq col) f(0 1) clab(n %) npos(row) h2("Distribution of de facto women by education - weighted")
tabout FQmarital_status [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
c(freq col) f(0 1) clab(n %) npos(row) h2("Distribution of de facto women by marital status - weighted")
tabout wealth [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
c(freq col) f(0 1) clab(n %) npos(row) h2("Distribution of de facto women by wealth - weighted")
tabout sexactive [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
c(freq col) f(0 1) clab(n %) npos(row) h2("Distribution of de facto women by sexual activity - weighted")
capture tabout urban [aw=FQweight] using "`CCRX'_HHQFQ_2Page_Analysis_Output_$date.xls", append ///
c(freq col) f(0 1) clab(n %) npos(row) h2("Distribution of de facto women by urban/rural - weighted")
*******************************************************************************
* CLOSE
*******************************************************************************
log close
****PMA 2020 Indonesia Data Quality Checks****
***Version 20 January 2014****
**Third do file in series***
/*This do file labels each variable in the household roster*/
clear
set more off
cd "$datadir"
local today=c(current_date)
local c_today= "`today'"
local date=subinstr("`c_today'", " ", "",.)
local todaystata=clock("`today'", "DMY")
local CCRX $CCRX
local HHQcsv $HHQcsv
local HHQcsv2 $HHQcsv2
clear
capture noisily insheet using "$csvdir/`HHQcsv'_hh_rpt.csv", comma case
if _rc==0 {
tostring *, replace force
save `CCRX'_HHQmember.dta, replace
}
/*If you need to add an extra version of the forms, this will check if that
version number exists and add it. If the version does not, it will continue*/
clear
capture noisily insheet using "$csvdir/`HHQcsv2'_hh_rpt.csv", comma case
if _rc==0 {
tostring *, replace force
append using `CCRX'_HHQmember.dta, force
save, replace
}
use `CCRX'_HHQmember.dta, clear
************************************************************************************
rename mb_grp* *
rename ms_grp* *
rename nm_grp* *
capture confirm var EA_transfer
if _rc!=0{
capture rename quartier EA
}
**Assign variable labels
***Clean the HHQ Member information
label var PARENT_KEY "Unique id household - ODK generated"
label var firstname "First name of household member"
label var respondent_match "Is respondent in household"
label var gender "Sex of household member"
label var age "Age"
label var marital_status "Marital Status"
label var relationship "Relationship to head of household"
label var head_check "Head of household?"
capture label var family_id "Family ID"
capture rename usual_member usually_live
label var usually_live "Usually live in household"
label var last_night "Slept in the household night before"
label var eligible "Eligible female respondent"
label var FRS_form_name "Linking ID for Eligible Woman"
destring age, replace
destring respondent_match, replace
destring head_check, replace
capture destring family_id, replace
capture destring EA, replace
destring eligible, replace
numlabel, add
label define yes_no_dnk_nr_list 0 no 1 yes -88 "-88" -99 "-99"
encode more_hh_members, gen(more_hh_membersv2) lab(yes_no_dnk_nr_list)
label define gender_list 1 male 2 female -88 "-88" -99 "-99"
encode gender, gen(genderv2) lab(gender_list)
label define marital_status_list 5 never_married 1 currently_married 2 currently_living_with_partner 3 divorced 4 widow -99 "-99"
encode marital_status, gen(marital_statusv2) lab(marital_status_list)
label define relationship_list 1 head 2 spouse 3 child 4 child_in_law 5 grandchild 6 parent 7 parent_in_law 8 sibling 9 other 10 help -88 "-88" -99 "-99"
encode relationship, gen(relationshipv2) lab(relationship_list)
encode usually_live, gen(usually_livev2) lab(yes_no_dnk_nr_list)
encode last_night, gen(last_nightv2) lab(yes_no_dnk_nr_list)
unab vars: *v2
local stubs: subinstr local vars "v2" "", all
foreach var in `stubs'{
rename `var' `var'QZ
order `var'v2, after(`var'QZ)
}
rename *v2 *
drop *QZ
*Check for observations that are all duplicates
duplicates report
duplicates drop
rename PARENT_KEY metainstanceID
rename KEY member_number
duplicates drop member_number, force
rename link_transfer link
drop *_transfer*
rename link link_transfer
drop SETOFhh_rpt
drop firstname_raw
save `CCRX'_HHQmember_`date', replace
****PMA 2020 Data Quality Checks****
***Version Written in Bootcamp July 21-23, 2014****
**Second do file in series***
clear matrix
clear
cd "$datadir"
/*This do file imports the Household Questionnaire (without the roster) into Stata and then cleans it*/
set more off
/*Import the Household Questionnaire csv file into Stata. A separate spreadsheet
holds the roster information for each person; it is imported now and merged in later.
If additional questionnaire versions are fielded during the round, add their csv
file names to the HHQcsv macros below */
/*generating a dataset with one variable so that all datasets can append to it. Cannot append
to a completely empty dataset */
local CCRX $CCRX
local HHQcsv $HHQcsv
local HHQcsv2 $HHQcsv2
set obs 1
gen x=.
save `CCRX'_HHQ.dta, replace
*Create a temporary file
tempfile tempHHQ
/*If there are multiple versions during the same round, they should all be named the same thing other than ///
the version number. */
clear
capture noisily insheet using "$csvdir/`HHQcsv'.csv", comma case
tostring *, replace force
save `tempHHQ', replace
use `CCRX'_HHQ.dta
append using `tempHHQ', force
save, replace
/*If you need to add an extra version of the forms, this will check if that
version number exists and add it. If the version does not, it will continue*/
clear
capture noisily insheet using "$csvdir/`HHQcsv2'.csv", comma case
if _rc==0 {
tostring *, replace force
save `tempHHQ', replace
use `CCRX'_HHQ.dta
append using `tempHHQ', force
save, replace
}
**Drop the single empty observation and empty variable x
use `CCRX'_HHQ.dta
drop in 1
drop x
save, replace
***Generate variable for country and round
gen country="$country"
label var country "Country"
gen round="$round"
label var round "Round of data collection"
order country round, first
*****DRC ONLY
capture rename quartier EA
capture rename water_*refill water_*15
capture rename name_grp* *
rename date_group* *
rename *grp* **
rename assets_assets* assets*
rename s_* *
capture rename sanitation_all_sanitation_all sanitation_all
rename w_* *
rename livestock_* *
**Assign variable labels
label var times_visited "Visit number"
label var your_name "Resident Enumerator name"
label var your_name_check "To RE: Is this your name?"
label var name_typed "RE name is not correct"
label var system_date "Date and Time"
label var system_date_check "Confirm correct date and time"
label var manual_date "If no, enter correct date"
******
label var EA "EA"
label var structure "Structure number"
label var household "Household number"
label var hh_duplicate_check "Have you already submitted a form for this structure/household?"
label var resubmit_reasons "Reason for resubmission"
label var available "Respondent present and available at least once"
capture label var previous_survey "Previous PMA survey?"
label var begin_interview "May I begin"
label var consent_obtained "Consent obtained"
label var witness_auto "Interviewer name"
label var witness_manual "Name check"
label var num_HH_members "Number of household members in the roster"
label var heads "Number of heads of households"
label var names "Names in household"
label var respondent_in_roster "Check that respondent is in the roster"
label var roster_complete "Check that Roster is completed"
label var assets "List of Household Assets"
label var assets_check "Question 10 Check"
capture label var hh_location_ladder "Location of house on wealth ladder: 1 = poorest, 10 = wealthiest"
label var owned_ask "Do you own livestock"
label var owned_livestock_own "Total number of livestock owned"
label var floor "Main floor material"
label var roof "Main roof material"
label var walls "Main exterior wall material"
label var handwashing_place_rw "Can you show me where members most often wash their hands?"
label var handwashing_place_observations "At handwashing observe soap, water, sanitation"
rename water_main_drinking_select water_sources_main_drinking
rename water_main_other water_sources_main_other
label var water_sources_all "Which water sources used for any purpose"
label var number_of_sources "Number of sources in water_sources_all"
label var water_sources_main_drinking "Main source of drinking water"
label var water_sources_main_other "Main source of cooking/ handwashing water"
label var source_labels "Water sources mentioned"
label var water_main_drinking "Main drinking water among all sources mentioned"
label var water_uses "Use of main water source"
label var water_months_avail "Availability of main water source during year"
label var water_reliability "Availability of main water source when expected"
label var water_collection "Minutes in round trip to main water source"
label var sanitation_all "Use any of the following toilet facilities"
capture label var sanitation_all_other "Other toilet facility specified"
label var number_of_sanitation "Total number of toilet facilities"
label var sanitation_main "Main toilet facility"
label var sanitation_labels "Toilet facilities mentioned"
capture label var sanitation_vip_check "Latrine has ventilation pipe"
capture label var sanitation_pit_with_slab_check "Latrine has cement slab"
capture label var sanitation_where "Location of toilet facility"
drop the_sanitation
destring sanitation_empty_value, replace
label define dwmy_list 1 days 2 weeks 3 months 4 years -88 "-88" -99 "-99"
label define sanitation_empty_list 1 months 2 years -77 "-77" -88 "-88" -99 "-99"
encode sanitation_empty_units, gen(sanitation_empty_unitsv2) lab(sanitation_empty_list)
label var sanitation_empty_units "Days, weeks, months, or years since toilet facility was emptied"
label var sanitation_empty_value "Number of days, weeks, months, or years since toilet facility was emptied"
label var sanitation_empty_who "Last person to empty toilet facility"
label var sanitation_empty_where "Last place toilet facilities were emptied to"
capture label var sanitation_empty_where_other "Other emptied location specified"
label define sanitation_who 1 neighbors 2 provider 3 other -88 "-88" -99 "-99"
encode sanitation_empty_who, gen(sanitation_empty_whov2) lab(sanitation_who)
label define sanitation_where 1 covered_hold 2 open_water 3 open_ground ///
4 taken_facility 5 taken_dnk 6 other -88 "-88" -99 "-99"
encode sanitation_empty_where, gen(sanitation_empty_wherev2) lab(sanitation_where)
label var sanitation_frequency_cc "How often use toilet facility"
label var shared_san "Share toilet facility with other households/public"
label var shared_san_hh "Number of HH that share toilet facility"
label var bush_use "How many people use bush"
label var minAge "Minimum age of children listed in household"
label var thankyou "Thank you"
label var locationLatitude "Latitude"
label var locationLongitude "Longitude"
label var locationAltitude "Altitude"
label var locationAccuracy "Accuracy"
capture label var HH_photo "Hyperlink to photo"
label var HHQ_result "HHQ Result"
label var start "Start time"
label var end "End time"
label var deviceid "deviceid"
label var simserial "simserial"
label var metainstanceID "Household Unique ID - ODK"
capture label var handwashing_container_show "Moveable container available in the household"
*All variables are forced to string on import to make sure nothing is dropped when appending. Most variables must be
*destringed (if numbers are stored as string) or encoded (if values are text but can be treated as categories) before analysis
capture destring version, replace
destring times_visited, replace
destring structure, replace
destring household, replace
destring num_HH_members, replace
destring heads, replace
destring assets_check, replace
destring EA, replace
destring owned_livestock_own, replace
destring number_of_sources, replace
destring number_of_sanitation, replace
destring shared_san_hh, replace
destring water_collection*, replace
destring bush_use, replace
destring minAge, replace
destring consent_obtained, replace
*Assign value labels. This should be done before the encoding step to ensure the right number of values are encoded
capture label drop _all
label define yes_no_dnk_nr_list 0 no 1 yes -88 "-88" -99 "-99"
foreach var of varlist your_name_check system_date_check hh_duplicate_check available ///
begin_interview owned_ask roster_complete {
encode `var', gen(`var'v2) lab(yes_no_dnk_nr_list)
}
label val consent_obtained yes_no_dnk_nr_list
capture encode garden, gen(gardenv2) lab(yes_no_dnk_nr_list)
label define resubmit_reason_list 1 new_members 2 correction 3 disappeared 4 not_received 5 other
replace water_sources_main_drinking=water_sources_all if number_of_sources==1
replace water_sources_main_other=water_sources_all if number_of_sources==1
label define water_source_list 1 piped_indoor 2 piped_yard 3 piped_public 4 tubewell ///
5 protected_dug_well 6 unprotected_dug_well 7 protected_spring ///
8 unprotected_spring 9 rainwater 10 tanker 11 cart 12 surface_water ///
13 bottled 14 sachet 15 refill -99 "-99"
encode water_sources_main_drinking, gen(water_sources_main_drinkingv2) lab(water_source_list)
encode water_sources_main_other, gen(water_sources_main_otherv2) lab(water_source_list)
replace sanitation_main=sanitation_all if number_of_sanitation==1
label define sanitation_list 1 flush_sewer 2 flush_septic 3 flush_elsewhere 4 flush_unknown 5 vip ///
6 pit_with_slab 7 pit_no_slab 8 composting 9 bucket 10 hanging 11 other 12 bush 13 flushpit 14 bush_water_body -99 "-99"
encode sanitation_main, gen(sanitation_mainv2) lab(sanitation_list)
label define handwash_list 1 observed_fixed 2 observed_mobile 3 not_here 4 no_permission ///
5 not_observed_other -99 "-99"
encode handwashing_place_rw, gen(handwashing_place_rwv2) lab(handwash_list)
label define frequency_of_use_list_v2 1 always 2 mostly 3 occasionally -99 "-99"
label define shared_san_list 1 not_shared 2 shared_under_ten_HH 3 shared_above_ten_HH ///
4 shared_public -99 "-99"
encode sanitation_frequency_cc, gen(sanitation_frequency_ccv2) lab(frequency_of_use_list_v2)
encode shared_san, gen(shared_sanv2) lab(shared_san_list)
replace shared_sanv2=. if shared_san==""
capture encode shared_san_fp, gen(shared_san_fpv2) lab(shared_san_list)
label define continuity_list 1 always 2 predictable 3 unpredictable -99 "-99"
encode water_reliability, gen(water_reliabilityv2) lab(continuity_list)
*************
label define hhr_result_list 1 completed 2 not_at_home 3 postponed 4 refused 5 partly_completed 6 vacant 7 destroyed ///
8 not_found 9 absent_extended_period
encode HHQ_result, gen(HHQ_resultv2) lab(hhr_result_list)
*Participated in previous survey
capture label var previous_survey "Previously participated in PMA 2020 survey - household"
capture encode previous_survey, gen(previous_surveyv2) lab(yes_no_dnk_nr_list)
unab vars: *v2
local stubs: subinstr local vars "v2" "", all
foreach var in `stubs'{
rename `var' `var'QZ
order `var'v2, after(`var'QZ)
}
rename *v2 *
drop *QZ
foreach use in drinking cooking washing livestock business gardening {
gen water_uses_`use'=.
replace water_uses_`use'=0 if water_uses!=""
replace water_uses_`use'=1 if (regexm(water_uses, "`use'"))
replace water_uses_`use'=. if water_uses=="-99"
}
*****************************Changing time variables*****************************
**Change the date variables into clock time
**Change date variable of upload from scalar to stata time (SIF)
*Drop the day of the week of the interview and the UST
gen double SubmissionDateSIF=clock(SubmissionDate, "MDYhms")
format SubmissionDateSIF %tc
**Change start and end times into SIF to calculate time
*Apply the same procedure, using the end time of the survey as the survey date
gen double startSIF=clock(start, "MDYhms")
format startSIF %tc
gen double endSIF=clock(end, "MDYhms")
format endSIF %tc
rename your_name RE
replace RE=name_typed if your_name_check==0 | your_name_check==.
**Check any complete duplicates, duplicates of metainstanceid, and duplicates of structure and household numbers
duplicates report
duplicates report metainstanceID
duplicates tag metainstanceID, gen (dupmeta)
*******Round specific questions
capture confirm var collect_water_dry
if _rc==0{
label var collect_water_dry "Time collect water - DRY season"
label var collect_water_dry_value "Value - collect water dry"
label var collect_water_wet "Time collect water - WET season"
label var collect_water_wet_value "Value - collect water wet"
label define collect_water_list 1 minutes 2 hours 3 someone_else 4 no_one -88 "-88" -99 "-99"
encode collect_water_dry, gen(collect_water_dryv2) lab(collect_water_list)
encode collect_water_wet, gen(collect_water_wetv2) lab(collect_water_list)
destring collect_water_dry_value, replace
destring collect_water_wet_value, replace
}
*Child Feces
label var child_feces "What do you do with children's waste"
*child_feces is multi-select; split into one binary indicator per response
*(stub = variable suffix, pattern = response text matched by regexm)
local stubs burn latdisp bury garbage manure leave waste_water latused
local patterns burn latrine_disposal bury garbage manure leave waste_water latrine_used
local n : word count `stubs'
forvalues i=1/`n' {
	local s : word `i' of `stubs'
	local p : word `i' of `patterns'
	gen child_feces_`s'=0 if child_feces!=""
	replace child_feces_`s'=1 if regexm(child_feces, "`p'")
}
save `CCRX'_HHQ_$date.dta, replace
** GPS checks for HHQ survey
** Authors Ann Rogers - Sally Ann Safi - Beth Larson - Julien Nobili
** Requirements: Listing.dta file (needs to be generated beforehand)
** version 1.1 (April 2019)
**********************Set directory**************************************
local datadir $datadir
local dofiles $dofiledir
local csv_results $datadir
local Geo_ID $Geo_ID
local CCRX $CCRX
*****************Preparation of the Listing file*************************
* Import the full country-round listing file; requires first generating a clean version of listing.dta using Listing.do;
** then save the output in your datadir (non-Dropbox) **
*** NEEDS TO BE UPDATED BEFORE USE ***
clear
use BFR5_Listing_22Apr2019.dta
drop if HH_SDP=="SDP"
duplicates drop
destring GPS_HHLatitude, replace
destring GPS_HHLongitude, replace
* Generate XY average per EA (EA centroids creation)
bysort EA: egen centro_latt=mean(GPS_HHLatitude) if GPS_HHLatitude!=0
bysort EA: egen centro_long=mean(GPS_HHLongitude) if GPS_HHLongitude!=0
* Save one observation per EA with average geo-coordinates
egen tag=tag(EA)
preserve
keep if tag==1
keep EA centro_latt centro_long
* Save as temp_listing
tempfile temp_listing_centroids
save `temp_listing_centroids', replace
restore
drop if Occupied_YN_HH=="no"
drop if GPS_HHLatitude==. | GPS_HHLongitude==.
* Convert vars to string and concatenate vars EA + structure_number
egen conc= concat(EA number_structure_HH), punct("-")
* Keep useful vars and drop duplicates
keep EA GPS_HHLatitude GPS_HHLongitude conc
duplicates drop conc, force
* Save as temp_listing_ready
tempfile temp_listing_ready
save `temp_listing_ready', replace
**********************Preparation of the HHQ file****************
* Use cleaned HHQ dataset from PARENT
clear
use "`datadir'/`CCRX'_HHQ_$date.dta"
* Keep vars and generate concatenated var EA + structure_number
egen conc= concat(EA structure), punct("-")
keep RE `Geo_ID' EA locationLatitude locationLongitude locationAccuracy metainstanceID conc
******* Merge: temp_listing_centroids + HHQ file, then listing_ready + (HHQ+ listing centro)**************
merge m:1 EA using `temp_listing_centroids', gen(centroid_merge)
drop centroid_merge
merge m:1 conc using `temp_listing_ready', gen(ready_merge)
drop if ready_merge==2
************* Gen distance vars (distance from HH to centroid, and HH to listing's structure) ***********
destring locationLatitude, replace
destring locationLongitude, replace
destring locationAccuracy, replace
* Planar distance in meters: 1 degree is treated as approximately 111,295 m
gen distance_2_cent=(((locationLatitude-centro_latt)^2+(locationLongitude-centro_long)^2)^(1/2))*111295
gen distance_2_list=(((locationLatitude-GPS_HHLatitude)^2+(locationLongitude-GPS_HHLongitude)^2)^(1/2))*111295
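* Hedged alternative sketch (assumption: the user-written -geodist- package from
* SSC is available): the planar formula above approximates one degree as a fixed
* ~111,295 m, which drifts with latitude. -geodist- computes geodesic distance
* (in km by default) and could replace the manual formula where precision matters.
* The variable name distance_2_cent_km is illustrative, not part of this workflow.
capture which geodist
if _rc!=0 capture ssc install geodist
capture geodist locationLatitude locationLongitude centro_latt centro_long, gen(distance_2_cent_km)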
*********** Generate mean and standard-dev using var distance_cent *************
bysort EA: egen mean_distance_cent=mean(distance_2_cent)
bysort EA: egen sd_distance_cent=sd(distance_2_cent)
************************ Generate issue variables **********************************
gen missing_coordinates=1 if locationLatitude==. | locationLongitude==.
gen poor_accuracy=1 if locationAccuracy>6 & !missing(locationAccuracy)
gen EA_size_issue=1 if mean_distance_cent<sd_distance_cent
gen HH_suspect_location=1 if ((distance_2_cent-mean_distance_cent)/sd_distance_cent)>=2
gen No_correspondence_HH2listing=1 if ready_merge==1
gen HH_toofar_listing=1 if distance_2_list >101 & !missing(distance_2_list)
********* Keep useful vars and save output files (GIS monitoring) **************
keep RE `Geo_ID' EA locationLatitude locationLongitude locationAccuracy metainstanceID missing_coordinates poor_accuracy EA_size_issue HH_suspect_location No_correspondence_HH2listing HH_toofar_listing
save `CCRX'_HHQ_GISFullcheck_$date.dta, replace
export delimited using `CCRX'_HHQ_GISFullcheck_$date.csv, replace
************* Output spreadsheet (errors by RE) Errors.xls *********************
collapse(count) missing_coordinates poor_accuracy EA_size_issue HH_suspect_location No_correspondence_HH2listing HH_toofar_listing, by(RE)
export excel RE missing_coordinates poor_accuracy EA_size_issue HH_suspect_location No_correspondence_HH2listing HH_toofar_listing using `CCRX'_HHQFQErrors_$date.xls, firstrow(variables) sh(GPS_check_by_RE) sheetreplace
********************************* Done! ****************************************
clear
clear matrix
clear mata
capture log close
set maxvar 15000
set more off
numlabel, add
*******************************************************************************
*
* FILENAME: CCRX_Listing_v1_$date.do
* PURPOSE: Generates listing database and cleans duplicate submissions
* CREATED: Linnea Zimmerman (lzimme12@jhu.edu)
* DATA IN: CDR4_Listing_v#.csv
* DATA OUT: CDR4_Listing_$date.dta
* UPDATES: v2-01Nov2017 Linnea Zimmerman added code to incorporate second csv file
*
*******************************************************************************
*******************************************************************************
* INSTRUCTIONS
*******************************************************************************
*
* 1. Update macros/directories in first section
*
*******************************************************************************
* SET MACROS AND CORRECT DOI: UPDATE THIS SECTION FOR EACH COUNTRY/ROUND
*******************************************************************************
clear matrix
clear
set more off
local today=c(current_date)
local c_today= "`today'"
local date=subinstr("`c_today'", " ", "",.)
di "`date'"
global date=subinstr("`c_today'", " ", "",.)
**Create a folder where you will store Round 1 Data
*In that folder, create a folder called CSV_Files and a folder called Listing
*Within the Listing folder, create a folder called Archived_data
*Use this folder as your data directory and update the global datadir
local Country DRC
local CCRX CDR7
local Round Round7
local Listingcsv CDR7_Listing_v6
local Listingcsv2 CDR7_Listing_v6
local dofiledate 17Jun2015
local GeoID "level1 level2 level3 level4 EA"
local StructureHH "number_structure_HH"
local StructureSDP "number_SDP"
**************** Update to the directories where:
**The csv files are saved
global csvdir "/Users/ealarson/Dropbox (Gates Institute)/7 DRC/PMADataManagement_DRC/Round7/Data/CSV_Files"
**Where you want the datasets to save (DO NOT SAVE ON DROPBOX)
global datadir "/Users/ealarson/Documents/DRC/Data_NotShared/Round7/Listing"
**Where the do files are saved
global dofiledir "/Users/ealarson/Dropbox (Gates Institute)/7 DRC/PMADataManagement_DRC/Round7/Cleaning_DoFiles/Current"
*******************************************************************************************
******* Stop Updating Macros Here *******
*******************************************************************************************
cd "$datadir/Listing"
*Create folder called Archived_Data in the Listing Folder
* Zip all of the old versions of the datasets. Updates the archive rather than replacing so all old versions are still in archive
capture zipfile `CCRX'*, saving (Archived_Data/ArchivedListing_$date.zip, replace)
*Delete old versions. Old version still saved in ArchivedData.zip
shell rm `CCRX'_*
*Added second csv file for multiple versions
tempfile tempList
clear
capture insheet using "$csvdir/`Listingcsv'.csv", comma case
tostring *, replace force
save `CCRX'_Listing_$date.dta, replace
clear
capture insheet using "$csvdir/`Listingcsv2'.csv", comma case
if _rc==0 {
tostring *, replace force
save `tempList', replace
use `CCRX'_Listing_$date.dta
append using `tempList', force
save `CCRX'_Listing_$date.dta, replace
}
**********************
clear
**Merge household information into the structure information
clear
capture insheet using "$csvdir/`Listingcsv'_HH_grp.csv", comma case
tostring *, replace force
save `CCRX'_ListingHH_$date, replace
clear
capture insheet using "$csvdir/`Listingcsv2'_HH_grp.csv", comma case
if _rc==0 {
tostring *, replace force
save `tempList', replace
use `CCRX'_ListingHH_$date.dta
append using `tempList', force
save `CCRX'_ListingHH_$date.dta, replace
}
use `CCRX'_ListingHH_$date.dta
save, replace
**The first database contains only the structures; there should be no duplicate households yet because households are merged in below
use `CCRX'_Listing_$date
***Make sure Block Census code for Indonesia is consistent in all entries
capture replace blok_sensus=upper(blok_sensus)
capture rename ea EA
capture rename name_grp* *
capture rename B_grpB your_name
capture rename B_grpB2 name_typed
capture rename B3 your_name_check
replace your_name=name_typed if your_name_check=="no"
rename your_name RE
*Check for duplicates
destring number_structure_HH, replace
destring number_HH, replace
destring number_SDP, replace
duplicates drop metainstanceID, force
save, replace
clear
use `CCRX'_ListingHH_$date
capture confirm variable PARENT_KEY
capture duplicates drop KEY, force
rename PARENT_KEY metainstanceID
save, replace
use `CCRX'_Listing_$date.dta
*Merge household group information into the structure information
merge 1:m metainstanceID using `CCRX'_ListingHH_$date
drop if EA=="9999"
save `CCRX'_ListingCombined_$date, replace
******************************************************************************************************
*****************CLEAN THE LISTING FORM OF DUPLICATE SUBMISSIONS OR NUMBERING ERRORS*************
**Clean duplicates that are listed below using the Listing File
run "$dofiledir/`CCRX'_CleaningByRE_LISTING_`dofiledate'.do"
save, replace
*******************************************************************************************************
use `CCRX'_ListingCombined_$date.dta
egen metatag=tag(metainstanceID)
save, replace
keep if metatag==1
sort `GeoID' `StructureHH'
rename duplicate_check IsThisResubmission
duplicates tag `GeoID' `StructureHH' HH_SDP if HH_SDP=="HH", gen(dupHHstructure)
capture duplicates tag `GeoID' `StructureSDP' HH_SDP if HH_SDP=="SDP", gen(dupSDPstructure)
**Keep only the structure numbers that are duplicates. Even if they are multiHH dwellings, there should not be duplicate structure numbers
**First, export the list of duplicate households
preserve
capture confirm variable dupHHstructure
if _rc!=111{
keep if dupHHstructure!=0 & dupHHstructure!=.
sort RE `GeoID' `StructureHH'
save `CCRX'_ListingDuplicates_$date.dta, replace
**Dropping the unnecessary variables
order metainstanceID RE `GeoID' `StructureHH' HH_SDP Occupied_YN_HH mult_HH number_HH address_description_HH* IsThis* dupHHstructure
keep metainstanceID RE `GeoID' `StructureHH' HH_SDP Occupied_YN_HH mult_HH number_HH address_description_HH* IsThis* dupHHstructure
capture noisily export excel using "`CCRX'_ListingErrors_$date.xls", sh("DupHHStructures") firstrow(variables) replace
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO DUPLICATE HOUSEHOLD STRUCTURES"
export excel using `CCRX'_ListingErrors_$date.xls, firstrow(variables) sh("DupHHStructures") sheetreplace
restore
}
}
else{
restore
}
***Second, export the list of duplicate SDPs. Do this separately because there may not be SDPs immediately,
*and the code will error if the SDP and HH checks are combined
capture confirm var dupSDPstructure
if _rc!=111{
preserve
keep if dupSDPstructure!=0
sort `GeoID' RE `StructureSDP'
save `tempList', replace
use `CCRX'_ListingDuplicates_$date.dta, replace
append using `tempList'
save, replace
**Dropping the unnecessary variables
restore
preserve
keep if dupSDPstructure!=0 & dupSDPstructure!=.
order metainstanceID RE `GeoID' HH_SDP `StructureSDP' address_description_SDP* IsThis*
keep metainstanceID RE `GeoID' HH_SDP `StructureSDP' address_description_SDP* IsThis* dupSDP*
capture noisily export excel using "`CCRX'_ListingErrors_$date.xls", sh("DupSDPStructures") firstrow(variables) sheetreplace
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO DUPLICATE SDP STRUCTURES"
export excel using `CCRX'_ListingErrors_$date.xls, firstrow(variables) sh("DupSDPStructures") sheetreplace
restore
}
}
preserve
gen HH=1 if HH_SDP=="HH"
gen SDP=1 if HH_SDP=="SDP"
collapse (count) HH SDP, by(RE `GeoID')
sort RE EA
rename HH HHStructures
rename SDP SDPStructures
save `CCRX'_ListingErrors_$date.dta, replace
restore
clear
use `CCRX'_ListingCombined_$date, replace
*Check whether multi-household dwellings have more than one entry
gen HH=1 if HH_SDP=="HH"
gen SDP=1 if HH_SDP=="SDP"
save, replace
bysort metainstanceID: egen HHcount=total(HH)
preserve
sort `GeoID' RE number_structure_HH
order RE `GeoID' HH_SDP `StructureHH' Occupied_YN_HH mult_HH number_HH address*
keep RE `GeoID' HH_SDP `StructureHH' Occupied_YN_HH mult_HH number_HH metatag HHcount address*
capture export excel using "`CCRX'_ListingErrors_$date.xls" if metatag==1 & HHcount==1 & mult_HH=="yes", sh(MultiHHError) sheetreplace firstrow(variables)
if _rc!=198{
restore
}
if _rc==198 {
clear
set obs 1
gen x="NO STRUCTURES LISTED AS MULTI-HOUSEHOLD WITH ONLY ONE HOUSEHOLD"
export excel using `CCRX'_ListingErrors_$date.xls, firstrow(variables) sh("MultiHHError") sheetreplace
restore
}
**Generating the average number of households in the EA and the number of errors (multi with only one HH)
generate multiHHerr=1 if mult_HH=="yes" & HHcount==1
save, replace
tempfile collapse
preserve
keep if HH==1
collapse (count) HH multiHHerr, by (RE `GeoID' )
rename HH NumberHHperEA
rename multiHHerr SingleHHlabeledMulti
save `collapse', replace
restore
use `CCRX'_ListingErrors_$date.dta
merge 1:1 RE `GeoID' using `collapse', nogen
sort `GeoID' RE
save, replace
use `CCRX'_ListingCombined_$date.dta
**Get the average number of households per structure. Keep only one record per structure
preserve
keep if HH==1 & metatag==1
collapse (mean) HHcount, by (RE `GeoID' )
rename HHcount AverageHHperStrc
save `collapse', replace
restore
use `CCRX'_ListingErrors_$date.dta
merge 1:1 RE `GeoID' using `collapse', nogen
sort `GeoID' RE
save, replace
*Export forms and total the number of forms where the date was entered incorrectly
use `CCRX'_ListingCombined_$date.dta
preserve
gen datetag=.
split start, gen(start_)
replace datetag=1 if start_3!="2015"
drop start_*
split end, gen(end_)
replace datetag=1 if end_3!="2015"
drop end_*
keep if datetag==1 & metatag==1
capture collapse (sum) datetag if metatag==1, by (RE `GeoID')
if _rc!=2000{
save `collapse', replace
use `CCRX'_ListingErrors_$date
merge 1:1 RE `GeoID' using `collapse', nogen
save, replace
}
else {
clear
use `CCRX'_ListingErrors_$date
gen datetag=.
save, replace
}
export excel using "`CCRX'_ListingErrors_$date.xls", sh(TotalsByRE) sheetreplace firstrow(variables)
restore
*Export the total number of forms uploaded
use `CCRX'_ListingCombined_$date.dta
preserve
egen EAtag=tag(`GeoID')
tab EAtag
collapse (sum) HH EAtag SDP
label var HH "Total Households Listed and Uploaded"
label var SDP "Total Private SDP Listed and Uploaded"
label var EAtag "Total EAs with any forms uploaded"
order EAtag HH SDP
save `collapse', replace
export excel using "`CCRX'_ListingErrors_$date.xls", sh(OverallTotal) sheetreplace firstrow(varlabels)
restore
/*
The purpose of this do file is to select households for reinterview.
1. Update the macros below to be country and round specific
2. Update the EA that you are selecting
3. Update the variable names that you want to export and keep
4. The information will export to an excel spreadsheet with the EA name/number
5. Send that information to the supervisor. They will need to complete the entire
reinterview questionnaire but will be able to directly compare some information
in the field.
**Use the combined dataset with names to identify households for selection
for reinterview.
*May 2, 2016: fixed an error so that the geographic identifiers export
*/
*******************************************************************************
* SET MACROS: UPDATE THIS SECTION FOR EACH EA THAT NEEDS RE_INTERVIEW INFORMATION
*******************************************************************************
*Dataset date - this is the date of the data you are using to select
*the households to reinterview
local datadate 2May2016
*Update the EA name/number
local EA "Odapu_Ogaji"
*******************************************************************************
* SET MACROS: UPDATE THIS SECTION FOR EACH COUNTRY/ROUND - ONLY NEEDS TO BE DONE ONCE
*******************************************************************************
*BEFORE USE, UPDATE THE FOLLOWING:
*Country
local Country Nigeria
*Round Number
local Round Round3
*Country and Round Abbreviation
local CCRX NGR3
*Combined Dataset Name (must be the version with names)
local CombinedDataset `CCRX'_Combined_`datadate'.dta
*Geographic Identifier
global GeoID "state LGA Locality EA"
*Geographic identifiers below the EA level, down to the household
global GeoID_SH "structure household"
*variables to keep
local varstokeep "RE firstname gender age eligible usual last_night"
**Directory where the Combined dataset is stored
global datadir "/Users/pma2020/Dropbox (Gates Institute)/Nigeria/Data_NotShared/Round3/Data/HHQFQ"
**Directory where the excel files will be exported to
global selectiondir "/Users/pma2020/Dropbox (Gates Institute)/Nigeria/Data_NotShared/Round3/Data/Selection"
*******************************************************************************************
******* Stop Updating Macros Here *******
*******************************************************************************************
cd "$selectiondir"
set seed 7261982
use "$datadir/`CombinedDataset'", clear
tempfile completetemp
save `completetemp', replace
keep if EA=="`EA'"
keep if metatag==1
sample 5, count
gen reinterview=1
tempfile temp
keep metainstanceID reinterview
save `temp', replace
use `completetemp', clear
merge m:1 metainstanceID using `temp', nogen
keep if reinterview==1
keep $GeoID $GeoID_SH `varstokeep'
export excel using `CCRX'_ReinterviewInformation_`EA'.xls, firstrow(variables) replace
clear
clear matrix
clear mata
capture log close
set maxvar 15000
set more off
numlabel, add
/*******************************************************************************
*
* FILENAME: CCRX_Selection_11May2015
* PURPOSE: Generate database of households selected for interview
* CREATED: Linnea Zimmerman (lzimme12@jhu.edu)
* DATA IN: CCRX_Selection_vX.csv
* DATA OUT: `CCRX'_Selection_$date.dta
* UPDATES: 08/11/2015 by Linnea Zimmerman
*
*******************************************************************************/
*******************************************************************************
* INSTRUCTIONS
*******************************************************************************
*
* 1. Update macros/directories in first section
*
*******************************************************************************
* SET MACROS: UPDATE THIS SECTION FOR EACH COUNTRY/ROUND
*******************************************************************************
clear matrix
clear
set more off
local Country Niger
local CCRX NER2
local Selectioncsv Selection_NER2_v13
local Round Round2
*Supervisory area
local supervisoryarea "level3"
*Update the data directory
global datadir "~/Dropbox (Gates Institute)/`Country'/Data_NotShared/`Round'/Data"
**Global directory for the dropbox where the csv files are originally stored
global csvdir "~/Dropbox (Gates Institute)/`Country'/pmadatamanagement_`Country'/`Round'/Data/CSV_Files"
*Country specific data directory
*global datadir "~/PMA2020/`Round'/Data"
**Do file directory
global dofiledir "~/Dropbox (Gates Institute)/`Country'/pmadatamanagement_`Country'/`Round'/Cleaning_DoFiles/Current"
cd "$datadir/Selection"
*******************************************************************************
*Stop Updating here
*******************************************************************************
*Create a folder called Archived_Data in the Selection folder
* Zip all old versions of the datasets. The archive is updated rather than replaced, so all old versions are retained
*capture zipfile `CCRX'*, saving(Archived_Data/ArchivedSelection_$date.zip, replace)
*Delete old versions. Old versions are still saved in the archived zip file
capture shell rm `CCRX'_*
clear
capture insheet using "$csvdir/`Selectioncsv'.csv", comma case
tostring _all, replace
save `CCRX'_Selection_$date.dta,replace
clear
capture insheet using "$csvdir/`Selectioncsv'_HH_selectionrpt.csv", comma case
tostring _all, replace
save `CCRX'_SelectionHousehold_$date.dta,replace
tempfile temp
rename PARENT_KEY metainstanceID
save `temp', replace
use `CCRX'_Selection_$date.dta
merge 1:m metainstanceID using `temp', nogen
save, replace
tostring name_typed, replace
rename name_grp* *
rename date_group* *
rename HH_selectiongrp* *
replace your_name=name_typed if your_name_check!="yes"
replace system_date=manual_date if system_date_check!="yes"
replace RE_name=RE_name_other if RE_name_other!=""
rename your_name Supervisor
drop SETOFHH_selectionrpt your_name_check name_typed system_date_check manual_date ///
HH_selectionprompt HH_selectioncheck_grp v* goback goback_2 SDP_selection_prompt start end ///
deviceid simserial phonenumber KEY RE* max_num_HH too* *logging SDP_check
order SubmissionDate system_date all_selected_HH, last
order metainstanceID, first
sort Supervisor `supervisoryarea' EA structure household
save, replace
pr ste_odkmeta
vers 11.2
_find_project_root
loc dir "`r(path)'/src/main/write"
#d ;
ste using `"`dir'/templates"',
base(`dir'/ODKMetaBaseWriter.mata)
control(ODKMetaController)
complete(`dir'/ODKMetaWriter.mata)
;
#d cr
end
pr write_odkmeta_ado
vers 11.2
qui ste_odkmeta
_find_project_root
#d ;
writeado
using `"`r(path)'/src/build/odkmeta.ado"',
stata(main/odkmeta.do)
class_declarations(
Collection
Group
Repeat
Field
)
mata(
// Classes with declaration dependencies
Collection
Group
Repeat
Field
// External dependencies
DoFileWriter
// Functions
csv
error
stata
string
BaseOptions
SurveyOptions
ChoicesOptions
AttribProps
Attrib
AttribSet
FormFields
List
FormLists
ODKMetaBaseWriter
ODKMetaController
ODKMetaWriter
odkmeta
)
;
#d cr
end
* -odkmeta- cscript
* -version- intentionally omitted for -cscript-.
* 1 to complete test 124; 0 not to.
local test124 0
* 1 to complete live-project testing; 0 not to.
local project 0
* 1 to execute profile.do after completion; 0 not to.
local profile 1
/* -------------------------------------------------------------------------- */
/* initialize */
* Check the parameters.
assert inlist(`test124', 0, 1)
assert inlist(`project', 0, 1)
assert inlist(`profile', 0, 1)
* Check that -renvars- is installed.
which renvars
c odkmeta
cd src/cscript
cap log close odkmeta
log using odkmeta, name(odkmeta) s replace
di c(username)
di "`:environment computername'"
clear all
if c(maxvar) < 5450 ///
set maxvar 5450
set varabbrev off
set type float
vers 11.2: set seed 889305539
set more off
adopath ++ `"`c(pwd)'/../build"'
adopath ++ `"`c(pwd)'/ado"'
write_odkmeta_ado
timer clear 1
timer on 1
* Preserve select globals.
loc FASTCDPATH : copy glo FASTCDPATH
cscript odkmeta adofile odkmeta
* Check that Mata issues no warning messages about the source code.
if c(stata_version) >= 13 {
matawarn odkmeta.ado
assert !r(warn)
cscript
}
* Restore globals.
glo FASTCDPATH : copy loc FASTCDPATH
* Define -shortnames-, a program to change an -odkmeta- do-file so that it
* attempts to name variables using their short names.
* Syntax: shortnames filename
run shortnames
* Define -get_warnings-, a program to return the warning messages of an
* -odkmeta- do-file.
* Syntax: -get_warnings do_file_name-
run get_warnings
cd tests
* Erase do-files and .dta files not in an "expected" directory.
loc dirs : dir . dir *
foreach dir of loc dirs {
loc dtas : dir "`dir'" file "*.dta"
foreach dta of loc dtas {
erase "`dir'/`dta'"
}
loc dofiles : dir "`dir'" file "*.do"
foreach do of loc dofiles {
erase "`dir'/`do'"
}
}
/* -------------------------------------------------------------------------- */
/* example test */
* Test 1
cd 1
odkmeta using "ODK to Stata.do", ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* basic */
* Test 2
cd 2
conf new f "ODK to Stata.do"
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv)
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 19
cd 19
forv i = 1/2 {
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
}
cd ..
* Test 48
cd 48
odkmeta using "ODK to Stata.do", ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 52
cd 52
#d ;
odkmeta using "ODK to Stata.do",
csv(Audio audit test.csv)
survey(survey.csv, type(A) name(B) label(C) disabled(D))
choices(choices.csv, listname(A) name(B) label(C))
replace
;
#d cr
run "ODK to Stata"
compall expected
cd ..
* Test 54
cd 54
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 71
cd 71
cap erase "ODK to Stata.do"
odkmeta using "ODK to Stata", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
conf f "ODK to Stata.do"
run "ODK to Stata"
compall expected
cd ..
* Test 73
cd 73
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2) survey(survey) choices(choices) replace
run "ODK to Stata"
compall expected
cd ..
* Test 75
cd 75
odkmeta using "ODK to Stata.do", ///
csv("odkmetatest2.csv") survey("survey.csv") choices("choices.csv") replace
run "ODK to Stata"
compall expected
cd ..
* Test 76
cd 76
#d ;
odkmeta using "ODK to Stata.do",
csv("Audio audit test.csv")
survey("survey.csv", type(A) name(B) label(C) disabled(D))
choices("choices.csv", listname(A) name(B) label(C))
replace
;
#d cr
run "ODK to Stata"
compall expected
cd ..
/*
* Test 77
cd 77
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv, type(t\`ype)) ///
choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
*/
* Test 89
cd 89
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 90
cd 90
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 96
cd 96
odkmeta using "ODK to Stata.do", ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 108
cd 108
odkmeta using import, ///
csv("survey data") survey("survey survey") choices("survey choices") replace
run import
compall expected
cd ..
* Test 109
cd 109
odkmeta using import, ///
csv(survey data) survey(survey survey) choices(survey choices) replace
run import
compall expected
cd ..
* Test 157
cd 157
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 199
cd 199
odkmeta using "ODK to Stata.do", ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 208
cd 208
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv)
run "ODK to Stata"
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* .csv file */
* Test 3
cd 3
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest3.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 4
cd 4
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest3.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 5
cd 5
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest5.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 6
cd 6
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest5.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 7
cd 7
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest5.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 8
cd 8
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest8.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 9
cd 9
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest9.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 10
cd 10
odkmeta using "ODK to Stata.do", ///
csv("odkmetatest10 data.csv") survey("odk survey.csv") choices("odk choices.csv") replace
run "ODK to Stata"
compall expected
cd ..
* Test 27
cd 27
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest27.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == `""""'
assert "`r(badname_vars)'" == "v3"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 28
cd 28
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest28.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == `""" "" "" """'
assert "`r(badname_vars)'" == "v3 v6 v9 v12"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 29
cd 29
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest29.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 80
cd 80
odkmeta using "ODK to Stata.do", ///
csv(data.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == `""" """'
assert "`r(badname_vars)'" == "v2 v3"
assert `"`r(datanotform_repeats)'"' == `""" """'
assert "`r(datanotform_vars)'" == "_uuid _submission_time"
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 84
cd 84
odkmeta using import, ///
csv(odkmetatest84.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings import
assert `"`r(badname_repeats)'"' == `""""'
assert "`r(badname_vars)'" == "v3"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 85
cd 85
odkmeta using import, ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
tempfile new
filefilter import.do `new', from(fields)
assert !r(occurrences)
get_warnings import
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 86
cd 86
odkmeta using import, ///
csv(odkmetatest86.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings import
assert `"`r(badname_repeats)'"' == `""" "" "" "" "" "" """'
assert "`r(badname_vars)'" == "v2 v3 v4 v5 v6 v7 v8"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 91
cd 91
odkmeta using import, ///
csv(odkmetatest91.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 103
cd 103
odkmeta using import, csv(data) survey(survey) choices(choices) replace
tempfile temp
filefilter import.do `temp', ///
from("* Rename any variable names that are difficult for -split-.") ///
to("renvars thirtytwocharactersagainonfruits \BS fruits32")
copy `temp' import.do, replace
get_warnings import
assert `"`r(badname_repeats)'"' == `""" "" "" """'
assert "`r(badname_vars)'" == "v7 v9 v13 v17"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 104
cd 104
odkmeta using import, csv(odkmetatest104) ///
survey(survey) choices(choices) replace
get_warnings import
assert `"`r(badname_repeats)'"' == `""" "" """'
assert "`r(badname_vars)'" == "v3 v11 v12"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 148
cd 148
odkmeta using import, ///
csv(Audio.audit.test.csv) survey(survey) choices(choices) replace
run import
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* field types */
* Test 50
cd 50
odkmeta using import, ///
csv(odkmetatest50.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 102
cd 102
odkmeta using import, ///
csv(odkmetatest102.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 141
cd 141
odkmeta using "import.do", ///
csv("data.csv") ///
survey("survey.csv") choices("choices.csv") replace
tempfile temp
filefilter import.do `temp', ///
from("local datetimemask MDYhms") to("local datetimemask DMYhms")
assert r(occurrences) == 1
copy `temp' import.do, replace
run import
compall expected
cd ..
* Test 145
cd 145
odkmeta using "import.do", csv(data) survey(survey) choices(choices)
tempfile temp
filefilter import.do `temp', ///
from("local datetimemask MDYhms") to("local datetimemask DMYhms") replace
assert r(occurrences) == 1
copy `temp' import.do, replace
filefilter import.do `temp', ///
from("local timemask hms") to("local timemask hm") replace
assert r(occurrences) == 1
copy `temp' import.do, replace
filefilter import.do `temp', ///
from("local datemask MDY") to("local datemask DMY") replace
assert r(occurrences) == 1
copy `temp' import.do, replace
run import
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* attributes */
* Test 11
cd 11
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) choices(choices.csv) replace ///
survey(survey.csv, type(Type))
run "ODK to Stata"
compall expected
cd ..
* Test 24
cd 24
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest24.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 25
cd 25
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest25.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 101
cd 101
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest101.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* lists */
* Test 12
cd 12
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest12.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 20
cd 20
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest20.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 21
cd 21
odkmeta using import, ///
csv(odkmetatest21.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 22
cd 22
odkmeta using import, ///
csv(odkmetatest22.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 23
cd 23
odkmeta using import, ///
csv(odkmetatest23.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 36
cd 36
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == `""" """'
assert "`r(datanotform_vars)'" == "_uuid _submission_time"
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 41
cd 41
odkmeta using "import online.do", ///
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace ///
oneline
run "import online"
compall expected
tempfile new
filefilter "import online.do" `new', from(;)
assert r(occurrences) == 4
filefilter "import online.do" `new', from(#delimit) replace
assert r(occurrences) == 4
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
filefilter "ODK to Stata.do" `new', from(;) replace
assert r(occurrences) > 4
filefilter "ODK to Stata.do" `new', from(#delimit) replace
assert r(occurrences) > 4
cd ..
* Test 47
cd 47
odkmeta using import, ///
csv(odkmetatest47.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 56
cd 56
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest56.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 57
cd 57
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest56.csv) survey(survey.csv) choices(choices.csv) replace ///
other(max)
run "ODK to Stata"
compall expected
cd ..
* Test 58
cd 58
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest56.csv) survey(survey.csv) choices(choices.csv) replace ///
other(min)
run "ODK to Stata"
compall expected
cd ..
* Test 59
cd 59
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest56.csv) survey(survey.csv) choices(choices.csv) replace ///
other(99)
run "ODK to Stata"
compall expected
cd ..
* Test 60
cd 60
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest56.csv) survey(survey.csv) choices(choices.csv) replace ///
other(.o)
run "ODK to Stata"
compall expected
cd ..
* Test 72
cd 72
odkmeta using import, ///
csv(odkmetatest72.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 83
cd 83
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
run "ODK to Stata"
compall expected
cd ..
* Test 87
cd 87
odkmeta using import, ///
csv(odkmetatest87.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 92
cd 92
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 93
cd 93
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 94
cd 94
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 106
cd 106
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 107
cd 107
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 137
cd 137
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 138
cd 138
odkmeta using import, ///
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 139
cd 139
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 140
cd 140
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 142
cd 142
odkmeta using "import.do", ///
csv("data.csv") ///
survey("survey.csv") choices("choices.csv") replace
run import
compall expected
cd ..
* Test 143
cd 143
odkmeta using "import.do", ///
csv("data.csv") ///
survey("survey.csv") choices("choices.csv") replace
run import
compall expected
cd ..
* Test 144
cd 144
odkmeta using "import.do", ///
csv("data.csv") ///
survey("survey.csv") choices("choices.csv") replace
run import
compall expected
cd ..
* Test 197
cd 197
odkmeta using "import.do", ///
csv("data.csv") ///
survey("survey.csv") choices("choices.csv") replace
run import
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* -specialexp()- */
* Test 191
cd 191
odkmeta using import, ///
csv(odkmetatest21.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 192
cd 192
odkmeta using import, ///
csv(odkmetatest21.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 193
cd 193
odkmeta using import, ///
csv(odkmetatest21.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 194
cd 194
odkmeta using import, ///
csv(odkmetatest21.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 195
cd 195
odkmeta using import, ///
csv(odkmetatest21.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
* Test 196
cd 196
odkmeta using import, ///
csv(odkmetatest12.csv) survey(survey.csv) choices(choices.csv) replace
run import
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* groups and repeats */
* Test 98
cd 98
odkmeta using import, ///
csv(odkmetatest84.csv) survey(survey.csv) choices(choices.csv) replace
get_warnings import
assert `"`r(badname_repeats)'"' == `""""'
assert "`r(badname_vars)'" == "v3"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 110
cd 110
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 111
cd 111
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 112
cd 112
odkmeta using import, ///
csv(odkmetatest112) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 113
cd 113
odkmeta using import, ///
csv(odkmetatest113) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 114
cd 114
odkmeta using import, ///
csv(odkmetatest114) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 115
cd 115
odkmeta using import, ///
csv(odkmetatest115) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 116
cd 116
odkmeta using import, csv(data) survey(survey) choices(choices) replace
run import
* Check the dataset.
compall expected
* Check the do-file.
infix str line 1-244 using import.do, clear
gen ln = _n
su ln if line == "* begin group G1"
assert r(N) == 1
assert line[r(min) + 1] == "* begin group G2"
su ln if line == "* end group G2"
assert r(N) == 1
assert line[r(min) + 1] == "* end group G1"
cd ..
* Test 117
cd 117
odkmeta using import, ///
csv(odkmetatest117) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 124
cd 124
* Do-file only
if `test124' {
forv i = 1/25 {
drop _all
set obs `=ceil(10 * runiform())'
gen order = _n
gen type = "text"
gen name = "field" + strofreal(order)
foreach collec in repeat group {
forv j = 1/`=floor(301 * runiform())' {
preserve
drop _all
* "+ 2" for "begin group/repeat" and "end group/repeat"
set obs `=ceil(3 * runiform()) + 2'
gen collecorder = _n
gen type = ""
gen name = ""
replace type = "begin `collec'" in 1
replace name = "`collec'`j'" in 1
replace type = "end `collec'" in L
replace type = "integer" if mi(type)
replace name = "`collec'`j'_field" + strofreal(_n) if mi(name)
tempfile temp
sa `temp'
restore
sca neworder = (_N + 1) * runiform()
assert order != neworder
assert !mi(order)
append using `temp'
replace order = neworder if mi(order)
sort order collecorder
drop collecorder
replace order = _n
}
}
gen repeatlevel = sum(type == "begin repeat") - sum(type == "end repeat")
replace repeatlevel = repeatlevel + 1 if type == "end repeat"
gen label = strofreal(repeatlevel) if repeatlevel
drop repeatlevel
outsheet using survey`i'.csv, c replace
odkmeta using import, csv(124) survey(survey`i') choices(choices) replace
}
}
cd ..
* Test 146
cd 146
odkmeta using import, ///
csv(odkmetatest113) survey(survey) choices(choices) replace
tempfile temp
filefilter import.do `temp', ///
from("* Merge repeat groups.") ///
to("ex")
assert r(occurrences) == 1
copy `temp' import.do, replace
run import
compall expected
cd ..
* Test 147
cd 147
odkmeta using import, ///
csv(odkmetatest115) survey(survey) choices(choices) replace
tempfile temp
filefilter import.do `temp', ///
from("* Merge repeat groups.") ///
to("ex")
assert r(occurrences) == 1
copy `temp' import.do, replace
run import
compall expected
cd ..
* Test 162
cd 162
odkmeta using import, ///
csv(odkmetatest162) survey(survey) choices(choices)
get_warnings import
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 163
cd 163
odkmeta using import, ///
csv(odkmetatest163) survey(survey) choices(choices)
get_warnings import
assert `"`r(badname_repeats)'"' == `"lindseyrepeat"'
assert "`r(badname_vars)'" == "v4"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 164
cd 164
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 165
cd 165
odkmeta using import, ///
csv(odkmetatest165) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 166
cd 166
odkmeta using import, ///
csv(odkmetatest166) survey(survey) choices(choices) replace
run import
compall expected
cd ..
* Test 167
cd 167
odkmeta using import, ///
csv(odkmetatest167) survey(survey) choices(choices)
get_warnings import
assert `"`r(badname_repeats)'"' == `"lindseyrepeat"'
assert "`r(badname_vars)'" == "v4"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 184
cd 184
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace
get_warnings import
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == "roster"
assert "`r(datanotform_vars)'" == "DATA_NOT_FORM"
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 203
cd 203
odkmeta using import, ///
csv(odkmetatest203) survey(survey) choices(choices) replace
get_warnings import
assert `"`r(badname_repeats)'"' == `""" """'
assert "`r(badname_vars)'" == "v3 v6"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
* Test 209
cd 209
odkmeta using import, csv(odkmetatest209) survey(survey) choices(choices) replace
get_warnings import
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == ""
assert "`r(formnotdata_vars)'" == ""
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* -dropattrib()-, -keepattrib()- */
* Test 171
cd 171
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) ///
dropattrib(type "constraint message" constraint_message) replace
run import
compall expected
cd ..
* Test 172
cd 172
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) ///
dropattrib(_all) replace
run import
compall expected
cd ..
* Test 173
cd 173
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) ///
dropattrib(_all type) replace
run import
compall expected
cd ..
* Test 174
cd 174
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) ///
keepattrib(type "constraint message" constraint_message) replace
run import
compall expected
cd ..
* Test 175
cd 175
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) ///
keepattrib(_all) replace
run import
compall expected
cd ..
* Test 176
cd 176
odkmeta using import, ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv) ///
keepattrib(_all type) replace
run import
compall expected
cd ..
* Test 178
cd 178
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace ///
dropattrib(type)
run import
compall expected
cd ..
* Test 179
cd 179
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace ///
dropattrib(_all)
run import
compall expected
cd ..
* Test 180
cd 180
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace ///
keepattrib(type)
run import
compall expected
cd ..
* Test 181
cd 181
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace ///
keepattrib(_all)
run import
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* -relax- */
* Test 74
cd 74
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2) survey(survey) choices(choices) replace
rcof `"noi do "ODK to Stata""' == 111
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2) survey(survey) choices(choices) replace ///
relax
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == `""""'
assert "`r(formnotdata_vars)'" == "DoesntExist"
compall expected
cd ..
* Test 81
cd 81
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest28.csv) survey(survey.csv) choices(choices.csv) replace ///
relax
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == `""" "" "" """'
assert "`r(badname_vars)'" == "v3 v6 v9 v12"
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == `""""'
assert "`r(formnotdata_vars)'" == "DoesntExist"
compall expected
cd ..
* Test 122
cd 122
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace ///
relax
get_warnings import
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == `""" roster roster"'
assert "`r(formnotdata_vars)'" == "DoesntExist1 DoesntExist2 DoesntExist3"
compall expected
cd ..
* Test 170
cd 170
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest28.csv) survey(survey.csv) choices(choices.csv) replace ///
relax
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == `""" "" "" """'
assert "`r(badname_vars)'" == "v3 v6 v9 v12"
assert `"`r(datanotform_repeats)'"' == `""""'
assert "`r(datanotform_vars)'" == "_submission_time"
assert `"`r(formnotdata_repeats)'"' == `""""'
assert "`r(formnotdata_vars)'" == "DoesntExist"
compall expected
cd ..
* Test 182
cd 182
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest28.csv) survey(survey.csv) choices(choices.csv) replace ///
relax dropattrib(_all)
get_warnings "ODK to Stata"
assert `"`r(badname_repeats)'"' == `""" "" "" """'
assert "`r(badname_vars)'" == "v3 v6 v9 v12"
assert `"`r(datanotform_repeats)'"' == `""""'
assert "`r(datanotform_vars)'" == "_submission_time"
assert `"`r(formnotdata_repeats)'"' == `""""'
assert "`r(formnotdata_vars)'" == "DoesntExist"
compall expected
cd ..
* Test 183
cd 183
odkmeta using import, ///
csv(odkmetatest111) survey(survey) choices(choices) replace ///
relax
get_warnings import
assert `"`r(badname_repeats)'"' == ""
assert "`r(badname_vars)'" == ""
assert `"`r(datanotform_repeats)'"' == ""
assert "`r(datanotform_vars)'" == ""
assert `"`r(formnotdata_repeats)'"' == "roster"
assert "`r(formnotdata_vars)'" == "DoesntExist"
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* shortnames */
* Test 210
cd 210
odkmeta using import, ///
csv(odkmetatest210) survey(survey) choices(choices) replace ///
shortnames
run import
compall expected
cd ..
/* -------------------------------------------------------------------------- */
/* live-project testing */
if `project' {
loc curdir "`c(pwd)'"
c odkmeta_development
cd cscript
do "Project tests"
cd "`curdir'"
}
/* -------------------------------------------------------------------------- */
/* help file examples */
loc curdir "`c(pwd)'"
cd ../../../doc/help/example
cap erase import.do
odkmeta using import.do, csv("ODKexample.csv") survey("survey.csv") choices("choices.csv")
run import
compall expected
odkmeta using import.do, csv("ODKexample.csv") ///
survey("survey_fieldname.csv", name(fieldname)) choices("choices.csv") ///
replace
run import
odkmeta using import.do, csv("ODKexample.csv") ///
survey("survey_fieldname.csv", name(fieldname)) ///
choices("choices_valuename.csv", name(valuename)) replace
run import
odkmeta using import.do, ///
csv("ODKexample.csv") survey("survey.csv") choices("choices.csv") ///
dropattrib(hint) replace
run import
odkmeta using import.do, ///
csv("ODKexample.csv") survey("survey.csv") choices("choices.csv") ///
dropattrib(_all) replace
run import
odkmeta using import.do, ///
csv("ODKexample.csv") survey("survey.csv") choices("choices.csv") ///
other(99) replace
run import
cd "`curdir'"
/* -------------------------------------------------------------------------- */
/* user mistakes */
* Tests 33, 38
foreach i of numlist 33 38 {
cd `i'
#d ;
rcof `"
noi odkmeta using "ODK to Stata.do",
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv)
replace
"' == 198
;
#d cr
cd ..
}
* Test 34
cd 34
#d ;
rcof `"
noi odkmeta using "ODK to Stata.do",
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
"' == 198
;
#d cr
cd ..
* Test 35
cd 35
#d ;
rcof `"
noi odkmeta using "ODK to Stata.do",
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
"' == 111
;
#d cr
cd ..
* Tests 37, 39, 40
foreach i of numlist 37 39 40 {
cd `i'
#d ;
rcof `"
noi odkmeta using "ODK to Stata.do",
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv)
"' == 111
;
#d cr
cd ..
}
* Test 53
cd 53
#d ;
rcof `"
noi odkmeta using "ODK to Stata.do",
csv(Audio audit test.csv)
survey(survey.csv, type(Type))
choices(choices.csv) replace
"' == 111
;
#d cr
cd ..
* Test 55
cd 55
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest2.csv) survey(survey.csv) choices(choices.csv) replace
rcof `"noi do "ODK to Stata""' == 612
cd ..
* Tests 61, 62, 63, 64, 65, 66, 67, 68
forv i = 61/68 {
cd `i'
#d ;
rcof `"
noi odkmeta using "ODK to Stata.do",
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv)
replace
"' == 198
;
#d cr
cd ..
}
* Test 82
cd 82
odkmeta using "ODK to Stata.do", ///
csv(odkmetatest36.csv) survey(survey.csv) choices(choices.csv) replace
rcof `"run "ODK to Stata""' == 9
loc dtas : dir . file "*.dta"
assert !`:list sizeof dtas'
cd ..
* Test 119
cd 119
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 120
cd 120
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 149
cd 149
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 150
cd 150
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 151
cd 151
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 152
cd 152
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 153
cd 153
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 154
cd 154
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 155
cd 155
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 156
cd 156
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 158
cd 158
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 159
cd 159
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 160
cd 160
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 161
cd 161
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" ///
== 198
cd ..
* Test 177
cd 177
#d ;
rcof "noi odkmeta using import,
csv(data) survey(survey) choices(choices)
dropattrib(name) keepattrib(type)"
== 198
;
#d cr
cd ..
* Test 185
cd 185
mkdir temp
loc dest temp/lspecialexp.mlib
copy "`c(sysdir_plus)'l/lspecialexp.mlib" "`dest'"
erase "`c(sysdir_plus)'l/lspecialexp.mlib"
mata: mata mlib index
cap findfile lspecialexp.mlib
assert _rc
cap mata: mata which specialexp()
assert _rc
rcof ///
"noi odkmeta using import, csv(Audio audit test) survey(survey) choices(choices)" ///
== 198
copy "`dest'" "`c(sysdir_plus)'l/lspecialexp.mlib"
erase "`dest'"
rmdir temp
mata: mata mlib index
cd ..
* Test 186
cd 186
#d ;
rcof
"noi odkmeta using import,
csv(odkmetatest56) survey(survey) choices(choices)
other(junk)"
== 198
;
#d cr
cd ..
* Test 187
cd 187
odkmeta using "ODK to Stata.do", ///
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv)
#d ;
rcof
`"noi odkmeta using "ODK to Stata.do",
csv(Audio audit test.csv) survey(survey.csv) choices(choices.csv)"'
== 602
;
#d cr
cd ..
* Test 188
cd 188
#d ;
rcof
"noi odkmeta using import,
csv(Audio audit test) survey(survey, type(DoesntExist)) choices(choices)"
== 111
;
#d cr
cd ..
* Test 189
cd 189
#d ;
rcof
"noi odkmeta using import,
csv(Audio audit test) survey(survey) choices(choices)"
== 198
;
#d cr
cd ..
* Test 190
cd 190
odkmeta using import, csv(odkmetatest111) survey(survey) choices(choices)
rcof "noi run import" == 9
cd ..
* Test 198
cd 198
rcof "noi odkmeta, csv(data) survey(survey) choices(choices)" == 100
cd ..
* Test 200
cd 200
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
* Test 201
cd 201
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
* Test 202
cd 202
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
* Test 204
cd 204
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
* Test 205
cd 205
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
* Test 206
cd 206
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
* Test 207
cd 207
rcof ///
"noi odkmeta using import, csv(data) survey(survey) choices(choices)" == ///
198
cd ..
/* -------------------------------------------------------------------------- */
/* finish up */
timer off 1
timer list 1
if `profile' {
cap conf f C:\ado\profile.do
if !_rc ///
run C:\ado\profile
}
timer list 1
log close odkmeta
* Gets the warning messages of an -odkmeta- do-file.
pr get_warnings, rclass
vers 9
syntax anything(name=do id=filename)
gettoken do rest : do
if `:length loc rest' {
di as err "invalid filename"
ex 198
}
if !c(noisily) {
di as err "quietly not allowed"
ex 198
}
* Set parameters.
loc trace 0
if c(trace) == "on" {
loc trace 1
set trace off
}
loc linesize = c(linesize)
set linesize 80
* Create the log file.
qui log query _all
if "`r(numlogs)'" != "" {
forv i = 1/`r(numlogs)' {
loc logs `logs' `r(name`i')'
}
}
loc i 0
* Using __get_warnings# rather than a tempname so that -get_warnings- does
* not affect the temporary names in the -odkmeta- do-file.
loc log __get_warnings`++i'
while `:list log in logs' {
loc log __get_warnings`++i'
}
tempfile temp
qui log using `temp', t name(`log')
run `"`do'"'
qui log close `log'
* Restore parameters.
if `trace' ///
set trace on
set linesize `linesize'
preserve
qui infix str line 1-244 using `temp', clear
gen n = _n
gen dashes = line == "`:di _dup(65) "-"'"
#d ;
loc warnings
badname "The following variables' names differ from their field names, which could not be\ninsheeted:"
datanotform "The following variables appear in the data but not the form:"
formnotdata "The following fields appear in the form but not the data:"
;
#d cr
assert mod(`:list sizeof warnings', 2) == 0
while `:list sizeof warnings' {
gettoken name warnings : warnings
gettoken text warnings : warnings
gen warning = 1
loc npieces = 1 + (strlen("`text'") - ///
strlen(subinstr("`text'", "\n", "", .))) / 2
forv i = 1/`npieces' {
loc pos = strpos("`text'", "\n")
if !`pos' ///
loc pos .
loc piece = substr("`text'", 1, `pos' - 1)
qui replace warning = warning & line[_n + `i' - 1] == "`piece'"
loc text = subinstr("`text'", "`piece'", "", 1)
loc text = subinstr("`text'", "\n", "", 1)
}
qui cou if warning
if r(N) {
assert r(N) == 1
assert line[_n + `npieces'] == "" if warning
assert itrim(line[_n + `npieces' + 1]) == ///
"repeat group variable name" ///
if warning
assert dashes[_n + `npieces' + 2] == 1 if warning
qui su n if warning
* "msgn" for "message n"
loc msgn = r(min)
qui su n if dashes & n > `msgn' + `npieces' + 2
loc first = `msgn' + `npieces' + 3
assert r(N) & r(min) > `first'
loc repeats
loc vars
forv i = `first'/`=r(min) - 1' {
assert inlist(wordcount(line[`i']), 1, 2)
if wordcount(line[`i']) == 1 {
loc repeats "`repeats' """
loc vars `vars' `=line[`i']'
}
else {
loc repeats "`repeats' `=word(line[`i'], 1)'"
loc vars `vars' `=word(line[`i'], 2)'
}
}
loc repeats : list retok repeats
loc vars : list retok vars
ret loc `name'_repeats "`repeats'"
ret loc `name'_vars "`vars'"
}
drop warning
}
end
* Changes an -odkmeta- do-file so that it attempts to name variables using their
* short field names.
pr shortnames
vers 9
syntax anything(name=fn id="filename")
gettoken fn rest : fn
if `:length loc rest' {
di as err "invalid filename"
ex 198
}
conf f `"`fn'"'
mata: shortnames(st_local("fn"))
end
vers 9
loc RS real scalar
loc SS string scalar
loc SC string colvector
mata:
void function shortnames(`SS' _fn)
{
`RS' fh, row, rows, i
`SS' findline
`SC' lines, newlines
transmorphic t
// Read the file.
fh = fopen(_fn, "r")
fseek(fh, 0, 1)
i = ftell(fh)
fseek(fh, 0, -1)
t = tokeninit("", (char(13) + char(10), char(10), char(13)), "")
tokenset(t, fread(fh, i))
fclose(fh)
lines = tokengetall(t)'
// Find the line.
findline = char(9) + `"use "\`dta'", clear"'
assert(sum(lines :== findline) == 1)
row = select(1::rows(lines), lines :== findline)
assert(row != rows(lines))
// Add the new code.
newlines =
char(13) + char(10) \
char(9) + "foreach var of varlist _all {" \
2 * char(9) + `"if "\`:char \`var'[Odk_group]'" != "" {"' \
3 * char(9) + `"local name = "\`:char \`var'[Odk_name]'" + ///"' \
4 * char(9) + `"cond(\`:char \`var'[Odk_is_other]', "_other", "") + ///"' \
4 * char(9) + `""\`:char \`var'[Odk_geopoint]'""' \
3 * char(9) + `"local newvar = strtoname("\`name'")"' \
3 * char(9) + "capture rename \`var' \`newvar'" \
2 * char(9) + "}" \
char(9) + "}"
newlines = newlines +
(((1::rows(newlines)) :!= rows(newlines)) :* (char(13) + char(10)))
lines = lines[|1 \ row|] \ newlines \ lines[|row + 1 \ .|]
// Write the new file.
unlink(_fn)
fh = fopen(_fn, "w")
rows = rows(lines)
for (i = 1; i <= rows; i++) {
fwrite(fh, lines[i])
}
fclose(fh)
}
end
* Uses -compdta- to compare two directories' sets of datasets.
pr compall
vers 9
args dir1 dir2
if `:length loc 3' ///
err 198
if !`:length loc dir1' {
di as err "dirname1 required"
ex 198
}
if "`dir2'" == "" {
loc dir2 "`dir1'"
loc dir1 .
}
forv i = 1/2 {
loc dtas`i' : dir "`dir`i''" file "*.dta"
}
* Check that the lists of .dta files are the same.
loc 1not2 : list dtas1 - dtas2
if `:list sizeof 1not2' {
di as err "in `dir1' but not `dir2':"
macro li _1not2
ex 198
}
loc 2not1 : list dtas2 - dtas1
if `:list sizeof 2not1' {
di as err "in `dir2' but not `dir1':"
macro li _2not1
ex 198
}
foreach dta of loc dtas1 {
di as txt "Comparing {res:`dta'}..."
compdta "`dir1'/`dta'" "`dir2'/`dta'", ///
form(not) char(drop("^(destring|tostring|ReS_.+)$"))
}
end
* Compare the list of tests actually completed in the cscript to those described
* in Tests.md.
vers 13.1
* 1 to compare odkmeta.do against Tests.md;
* 0 to compare "Project tests.do" against "Project tests.xlsx" (off GitHub).
loc github 1
set varabbrev off
if `github' ///
c odkmeta
else ///
c odkmeta_development
cd cscript
loc do = cond(`github', "odkmeta.do", "Project tests.do")
infix str line 1-`c(maxstrvarlen)' using "`do'", clear
loc ws [ `=char(9)']
gen tests = regexs(1) if regexm(line, "^`ws'*\*`ws'+Tests?`ws'+(.*)$")
assert regexm(tests, "^[0-9]+$") if wordcount(tests) == 1
* Check -cd-.
assert regexm(line, "^`ws'*cd " + tests[_n - 1] + "$") ///
if wordcount(tests[_n - 1]) == 1
* Check -compall-.
replace tests = tests[_n - 1] if mi(tests)
drop if mi(tests)
assert _N
gen compall = regexm(line, "^`ws'*compall(`ws'|\$)")
gen rcof = regexm(line, "^`ws'*rcof(`ws'|\$)")
gen doonly = line == "* Do-file only"
foreach var of var compall rcof doonly {
bys tests (`var'): replace `var' = `var'[_N]
}
drop line
duplicates drop
isid tests
assert compall | rcof | doonly
assert (compall | rcof) + doonly == 1
* Split variable tests.
split tests, gen(test) p(,)
drop tests
gen i = _n
reshape long test, i(i)
drop if mi(test)
drop i _j
destring test, replace
assert test > 0 & test == floor(test)
isid test
levelsof test, loc(tests)
loc dirs : dir Tests dir *
assert `:list tests in dirs'
if `github' {
di `"`:list dirs - tests'"'
assert `:list dirs === tests'
}
gen expected = .
foreach test of loc tests {
mata: st_local("expected", strofreal(direxists("Tests/`test'/Expected")))
replace expected = `expected' if test == `test'
}
assert !mi(expected)
assert compall + expected != 1
tempfile cscript
sa `cscript'
if `github' ///
do "Auxiliary/Import tests md"
else {
import excel using "Project tests.xlsx", sh(Tests) first case(low) clear
ren projecttestid id
ren formdescription desc
drop oldtestid
}
drop if inlist(checkedby, "NA", "")
keep id
ren id test
merge 1:1 test using `cscript'
assert _merge == 3
drop _merge
* Import Tests.md as a Stata dataset.
vers 13.1
set varabbrev off
infix str line 1-`c(maxstrvarlen)' using Tests.md, clear
replace line = strtrim(line)
assert !strpos(line, char(9))
gen order = _n
su order if line == "<table>"
assert r(N) == 1
drop in 1/`r(min)'
drop order
assert line == "</table>" in L
drop in L
assert mod(_N, 6) == 0
assert (line == "<tr>") + (mod(_n, 6) == 1) != 1
assert (line == "</tr>") + (mod(_n, 6) == 0) != 1
drop if inlist(line, "<tr>", "</tr>")
assert regexm(line, "^<th>.*</th>$") + inrange(_n, 1, 4) != 1
drop in 1/4
assert !strpos(line, "th>")
assert regexm(line, "^<td>.*</td>$")
replace line = regexs(1) if regexm(line, "^<td>(.*)</td>$")
gen j = ""
gen mod4 = mod(_n, 4)
assert mod4[_N] == 0
replace j = "id" if mod4 == 1
replace j = "checkedby" if mod4 == 2
replace j = "area" if mod4 == 3
replace j = "desc" if mod4 == 0
drop mod4
gen i = ceil(_n / 4)
reshape wide line, i(i) j(j) str
drop i
ren line* *
compress
destring id, replace
conf numeric var id
order id checkedby area desc
pr odkmeta
vers 11.2
cap mata: mata which specialexp()
if _rc {
* [ID 185]
di as err "SSC package specialexp required"
di as err "to install, type {cmd:ssc install specialexp}"
di as err "{p}after installation, you may need to " ///
"{help mata mlib:index Mata} or restart Stata{p_end}"
ex 198
}
#d ;
syntax using/, csv(str)
/* survey options */
Survey(str asis) [DROPattrib(str asis) KEEPattrib(str asis) RELax SHORTnames]
/* choices options */
CHOices(str asis) [OTHer(str) ONEline]
/* other options */
[replace]
;
#d cr
/* Abbreviations:
-csv-: no known -CSV*- option.
-Survey()- from -streg-'s -predict, Surv-; multiple -S*- options.
-survey(, Type())- from -ds, has(Type)-.
-survey(, Disabled())- from -cluster subcommand, Dissimilarity-; multiple
-D*- options.
-DROPattrib()- from -drop-.
-KEEPattrib()- from -keep- and -cluster subcommand, KEEPcenters-.
-RELax- from -sem, RELiability()-; multiple -REL*- options.
-SHORTnames- from -separate, SHORTlabel-.
-CHOices()- from -nlogittree, CHOice()-.
-choices(, LIstname())- would be -Listname()- (from -list-) except for
-choices(, label)-; instead from -return LIst-.
-OTHer()- would be -Other()- (-cluster subcommand, Other()-) except for
-oneline-; instead from -query OTHer-.
-ONEline- would be -Oneline- (-matlist-) except for -other()-; instead from
-xtregar, rhotype(ONEstep)-.
*/
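/* For illustration only (a hypothetical call, not executed here): given the
capitalization in the -syntax- statement above, the minimal abbreviations
work out such that

odkmeta using import, csv(data) s(survey) cho(choices) oth(max) one rel

is equivalent to spelling out survey(), choices(), other(), oneline, and
relax in full; csv() and shortnames (min. -short-) cannot be shortened
further without ambiguity. */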
* Parse -csv()-.
loc stcsv st_local("csv")
mata: if (postmatch(`stcsv', ".csv")) ///
st_local("csv", substr(`stcsv', 1, strlen(`stcsv') - 4));;
* Check -dropattrib()- and -keepattrib()-.
if `:list sizeof dropattrib' & `:list sizeof keepattrib' {
* [ID 177]
di as err "options dropattrib() and keepattrib() are mutually exclusive"
ex 198
}
* Parse -other()-.
if "`other'" == "" ///
loc other max
else if !inlist("`other'", "min", "max") {
cap conf integer n `other'
if _rc & !(strlen("`other'") == 2 & inrange("`other'", ".a", ".z")) {
* [ID 186]
di as err "option other() invalid"
ex 198
}
}
* Add the .do extension to `using' if necessary.
mata: if (pathsuffix(st_local("using")) == "") ///
st_local("using", st_local("using") + ".do");;
* Check -using- and option -replace-.
cap conf new f `"`using'"'
if ("`replace'" == "" & _rc) | ///
("`replace'" != "" & !inlist(_rc, 0, 602)) {
* [ID 187]
conf new f `"`using'"'
/*NOTREACHED*/
}
preserve
#d ;
mata: odkmeta(
// Main
"using",
"csv",
"survey",
"choices",
// Fields
"dropattrib",
"keepattrib",
"relax",
"shortnames",
// Lists
"other",
"oneline",
// Non-option values
"0"
);
#d cr
end
/* -------------------------------------------------------------------------- */
/* error message programs */
pr error_parsing
syntax anything(name=rc id="return code"), opt(name) [SUBopt(str)]
mata: error_parsing(`rc', "`opt'", "`subopt'")
/*NOTREACHED*/
end
pr error_overlap
syntax anything(name=overlap id=overlap), ///
opts(namelist min=2 max=2) [SUBopts]
mata: error_overlap(strip_quotes(st_local("overlap")), tokens("`opts'"), ///
"`subopts'" != "")
end
/* error message programs */
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/* parse user input */
pr check_col
syntax varname(str) if/, opt(passthru) [SUBopt(name) LIstvars(varlist)]
tempvar touse
gen `touse' = `if'
qui cou if `touse'
if r(N) {
* Determine the first problematic row.
tempvar order
gen `order' = _n
* Add an extra observation so the row number is correct.
qui set obs `=_N + 1'
qui replace `touse' = 0 in L
qui replace `order' = 0 in L
sort `order'
qui replace `order' = _n
qui su `order' if `touse'
loc first = r(min)
if "`listvars'" != "" ///
li `listvars' in `first', ab(32)
* [ID 61], [ID 62], [ID 63], [ID 64], [ID 65], [ID 66], [ID 67], [ID 68]
mata: errprintf("invalid %s attribute '%s'\n", ///
st_global("`varlist'[Column_header]"), ///
st_sdata(`first', "`varlist'"))
error_parsing 198, ///
`opt' `=cond("`subopt'" != "", "sub(`subopt'())", "")'
/*NOTREACHED*/
}
end
/* -parse_survey- performs the checks that involve single rows of the survey
	sheet; the rest are done in `FormFields'. Unlike `FormFields',
	-parse_survey- displays the problematic row, so where possible it is
	better to implement a check in -parse_survey- rather than in
	`FormFields'. However, complex checks that require Mata are best
	placed in `FormFields'. */
pr parse_survey, sclass
cap noi syntax anything(name=fn id=filename equalok everything), ///
[Type(str) name(str) LAbel(str) Disabled(str)]
loc opt opt(survey)
if _rc {
error_parsing `=_rc', `opt'
/*NOTREACHED*/
}
* Check column names.
loc opts type name label disabled
forv i = 1/`:list sizeof opts' {
gettoken opt1 opts : opts
if `"``opt1''"' == "" ///
loc `opt1' `opt1'
foreach opt2 of loc opts {
if `"``opt1''"' == `"``opt2''"' {
error_overlap `"``opt1''"', opts(`opt1' `opt2') sub
error_parsing 198, `opt'
/*NOTREACHED*/
}
}
}
mata: st_local("fn", strip_quotes(st_local("fn")))
mata: load_csv("optvars", st_local("fn"), ("type", "name", "label"), ///
"survey")
gettoken typevar optvars : optvars
gettoken namevar optvars : optvars
if !_N {
* [ID 35]
di as err "no fields in survey sheet"
error_parsing 198, `opt'
/*NOTREACHED*/
}
unab all : _all
loc listvars listvars(`all')
tempvar nonmiss
egen `nonmiss' = rownonmiss(_all), str
tempvar stdtype
loc matastd stdtype(st_sdata(., "`typevar'"))
mata: st_sstore(., st_addvar(smallest_vartype(`matastd'), "`stdtype'"), ///
`matastd')
* Check the word count of `typevar'.
tempvar select
gen `select' = inlist(word(`stdtype', 1), "select_one", "select_multiple")
* [ID 65]
check_col `typevar' if `select' & wordcount(`stdtype') != ///
2 + (word(`stdtype', wordcount(`stdtype')) == "or_other"), ///
`opt' sub(type) `listvars'
* Check that the list names specified to select variables are Stata names.
* [ID 66], [ID 67], [ID 68]
check_col `typevar' if `select' & ///
(word(`stdtype', 2) != strtoname(word(`stdtype', 2)) | ///
strpos(word(`stdtype', 2), "`")), ///
`opt' sub(type) `listvars'
* Check the word count of `namevar'.
* [ID 200]
check_col `namevar' if wordcount(`namevar') != 1 & ///
!regexm(`stdtype', "^end (group|repeat)$") & `nonmiss', ///
`opt' sub(name) `listvars'
sret loc fn "`fn'"
sret loc type "`type'"
sret loc name "`name'"
sret loc label "`label'"
sret loc disabled "`disabled'"
end
pr parse_choices, sclass
cap noi syntax anything(name=fn id=filename equalok everything), ///
[LIstname(str) name(str) LAbel(str)]
loc opt opt(choices)
if _rc {
error_parsing `=_rc', `opt'
/*NOTREACHED*/
}
* Check column names.
if "`listname'" == "" ///
loc listname list_name
loc opts listname name label
forv i = 1/`:list sizeof opts' {
gettoken opt1 opts : opts
if `"``opt1''"' == "" ///
loc `opt1' `opt1'
foreach opt2 of loc opts {
if `"``opt1''"' == `"``opt2''"' {
error_overlap `"``opt1''"', opts(`opt1' `opt2') sub
error_parsing 198, `opt'
/*NOTREACHED*/
}
}
}
mata: st_local("fn", strip_quotes(st_local("fn")))
mata: load_csv("optvars", st_local("fn"), ("listname", "name", "label"), ///
"choices")
gettoken listnamevar optvars : optvars
gettoken namevar optvars : optvars
gettoken labelvar optvars : optvars
unab all : _all
loc listvars listvars(`all')
tempvar nonmiss
egen `nonmiss' = rownonmiss(_all), str
* [ID 61], [ID 62], [ID 63], [ID 64]
check_col `listnamevar' ///
if (strtrim(`listnamevar') != strtoname(strtrim(`listnamevar')) | ///
strpos(`listnamevar', "`")) & `nonmiss', ///
`opt' sub(listname) `listvars'
* [ID 201]
check_col `namevar' if mi(strtrim(`namevar')) & `nonmiss', ///
`opt' sub(name) `listvars'
* [ID 202]
check_col `labelvar' if mi(`labelvar') & `nonmiss', ///
`opt' sub(label) `listvars'
sret loc fn "`fn'"
sret loc listname "`listname'"
sret loc name "`name'"
sret loc label "`label'"
end
/* parse user input */
/* -------------------------------------------------------------------------- */
|
* Attach value labels.
ds, not(vallab)
if "`r(varlist)'" != "" ///
ds `r(varlist)', has(char <%= char_name("list_name") %>)
foreach var in `r(varlist)' {
if !`:char `var'[<%= char_name("is_other") %>]' {
capture confirm string variable `var', exact
if !_rc {
replace `var' = ".o" if `var' == "other"
destring `var', replace
}
local list : char `var'[<%= char_name("list_name") %>]
if !`:list list in labs' {
display as err "list `list' not found in choices sheet"
exit 9
}
label values `var' `list'
}
}
|
compress
|
* Date and time variables
<% // Add a type attribute to SubmissionDate. %>
capture confirm variable SubmissionDate, exact
if !_rc {
local type : char SubmissionDate[<%= char_name("type") %>]
assert !`:length local type'
char SubmissionDate[<%= char_name("type") %>] datetime
}
local datetime date today time datetime start end
tempvar temp
ds, has(char <%= char_name("type") %>)
foreach var in `r(varlist)' {
local type : char `var'[<%= char_name("type") %>]
if `:list type in datetime' {
capture confirm numeric variable `var'
if !_rc {
tostring `var', replace
replace `var' = "" if `var' == "."
}
if inlist("`type'", "date", "today") {
local fcn date
local mask <%= `DateMask' %>
local format %tdMon_dd,_CCYY
}
else if "`type'" == "time" {
local fcn clock
local mask <%= `TimeMask' %>
local format %tchh:MM:SS_AM
}
else if inlist("`type'", "datetime", "start", "end") {
local fcn clock
local mask <%= `DatetimeMask' %>
local format %tcMon_dd,_CCYY_hh:MM:SS_AM
}
generate double `temp' = `fcn'(`var', "``mask''")
format `temp' `format'
count if missing(`temp') & !missing(`var')
if r(N) {
display as err "{p}"
display as err "`type' variable `var'"
if "`repeat'" != "" ///
display as err "in repeat group `repeat'"
display as err "could not be converted using the mask ``mask''"
display as err "{p_end}"
exit 9
}
move `temp' `var'
foreach char in `:char `var'[]' {
mata: st_global("`temp'[`char']", st_global("`var'[`char']"))
}
drop `var'
rename `temp' `var'
}
}
<% // Remove the type attribute from SubmissionDate. %>
capture confirm variable SubmissionDate, exact
if !_rc ///
char SubmissionDate[<%= char_name("type") %>]
|
capture mata: mata drop `values' `text'
set varabbrev `varabbrev'
* Display warning messages.
quietly {
noisily display
#delimit ;
local problems
<% df.indent() %>
allbadnames
<% df.indent() %>
"The following variables' names differ from their field names,
which could not be {cmd:insheet}ed:"
<% df.indent(-1) %>
alldatanotform
<% df.indent() %>
"The following variables appear in the data but not the form:"
<% df.indent(-1) %>
<% if (relax) { %>
allformnotdata
<% df.indent() %>
"The following fields appear in the form but not the data:"
<% df.indent(-1) %>
<% } %>
<% df.indent(-1) %>
;
#delimit cr
while `:list sizeof problems' {
gettoken local problems : problems
gettoken desc problems : problems
local any 0
foreach vars of local `local' {
local any = `any' | `:list sizeof vars'
}
if `any' {
noisily display as txt "{p}`desc'{p_end}"
noisily display "{p2colset 0 34 0 2}"
noisily display as txt "{p2col:repeat group}variable name{p_end}"
noisily display as txt "{hline 65}"
forvalues i = 1/`:list sizeof repeats' {
local repeat : word `i' of `repeats'
local vars : word `i' of ``local''
foreach var of local vars {
noisily display as res "{p2col:`repeat'}`var'{p_end}"
}
}
noisily display as txt "{hline 65}"
noisily display "{p2colreset}"
}
}
}
|
<% // Notes at the top of the do-file %>
* Created on <%= current_date() %> at <%= c("current_time") %> by the following -odkmeta- command:
* odkmeta <%= command_line %>
* -odkmeta- version 1.1.0 was used.
<% // -version- %>
version 9
<% // User parameters not covered by an option %>
* Change these values as required by your data.
* The mask of date values in the .csv files. See -help date()-.
* Fields of type date or today have these values.
local <%= `DateMask' %> MDY
* The mask of time values in the .csv files. See -help clock()-.
* Fields of type time have these values.
local <%= `TimeMask' %> hms
* The mask of datetime values in the .csv files. See -help clock()-.
* Fields of type datetime, start, or end have these values.
local <%= `DatetimeMask' %> MDYhms
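* As an illustration (editor's example, hypothetical value): with the
* default date mask of MDY, a .csv cell containing "08/15/2016" is
* converted below via date("08/15/2016", "MDY").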
/* -------------------------------------------------------------------------- */
* Start the import.
* Be cautious about modifying what follows.
<% // Set system parameters, saving their current values so they can be %>
<% // restored at the end of the do-file. %>
local varabbrev = c(varabbrev)
set varabbrev off
* Find unused Mata names.
foreach var in values text {
mata: st_local("external", invtokens(direxternal("*")'))
tempname `var'
while `:list `var' in external' {
tempname `var'
}
}
label drop _all
|
<% `RS' i
`SR' drop
drop = select(fields.attributes()->vals("char"),
!fields.attributes()->vals("keep"))
if (length(drop) == 0)
return
drop = sort(drop', 1)' %>
foreach var of varlist _all {
<% if (length(drop) <= 3) { %>
<% for (i = 1; i <= length(drop); i++) { %>
char `var'[<%= drop[i] %>]
<% } %>
<% } %>
<% else { %>
foreach char in <%= invtokens(drop) %> {
char `var'[`char']
}
<% } %>
}
|
* Drop note variables.
ds, has(char <%= char_name("type") %>)
foreach var in `r(varlist)' {
if "`:char `var'[<%= char_name("type") %>]'" == "note" ///
drop `var'
}
|
save, replace
}
|
foreach dta of local dtas {
use "`dta'", clear
unab all : _all
gettoken first : all
local repeat : char `first'[<%= char_name("repeat") %>]
|
replace `var' = `temp'
drop `temp'
encode `var', gen(`temp') label(`list') noextend
move `temp' `var'
foreach char in `:char `var'[]' {
mata: st_global("`temp'[`char']", st_global("`var'[`char']"))
}
drop `var'
rename `temp' `var'
}
}
|
args(`SS' strlists)
* Encode fields whose list contains a noninteger name.
local lists <%= strlists %>
tempvar temp
ds, has(char <%= char_name("list_name") %>)
foreach var in `r(varlist)' {
local list : char `var'[<%= char_name("list_name") %>]
if `:list list in lists' & !`:char `var'[<%= char_name("is_other") %>]' {
capture confirm numeric variable `var'
if !_rc {
tostring `var', replace format(<%= `RealFormat' %>)
if !`:list list in sysmisslabs' ///
replace `var' = "" if `var' == "."
}
generate `temp' = `var'
|
* Attach field labels as variable labels and notes.
ds, has(char <%= char_name("long_name") %>)
foreach var in `r(varlist)' {
* Variable label
local label : char `var'[<%= char_name("label") %>]
mata: st_varlabel("`var'", st_local("label"))
* Notes
if `:length local label' {
char `var'[note0] 1
mata: st_global("`var'[note1]", "Question text: " + ///
st_global("`var'[<%= char_name("label") %>]"))
mata: st_local("temp", ///
" " * (strlen(st_global("`var'[note1]")) + 1))
#delimit ;
local fromto
<% df.indent() %>
{ "`temp'"
} "{c )-}"
"`temp'" "{c -(}"
' "{c 39}"
"`" "{c 'g}"
"$" "{c S|}"
<% df.indent(-1) %>
;
#delimit cr
while `:list sizeof fromto' {
gettoken from fromto : fromto
gettoken to fromto : fromto
mata: st_global("`var'[note1]", ///
subinstr(st_global("`var'[note1]"), "`from'", "`to'", .))
}
}
}
|
args(
// The name of the .csv file to -insheet-
`SS' csv,
// `True' if all field names in the .csv file can be -insheet-ed (they are
// all `InsheetOK'); `False' otherwise.
`BooleanS' insheetable
)
<% // "qcsv" for "quote csv"
`SS' qcsv
qcsv = adorn_quotes(csv, "list")
if (!insheetable) { %>
insheet using <%= qcsv %>, comma nonames clear
local fields
foreach var of varlist _all {
local field = trim(`var'[1])
<% /* -parse_survey- already completes these checks for fields in the form.
Adding them to the do-file protects against fields not in the form whose
names cannot be -insheet-ed. For example, SubmissionDate is not in the
form, and it would become problematic if the user could add a separate
field with the same name to the form and this resulted in duplicate .csv
column names. */ %>
assert `:list sizeof field' == 1
assert !`:list field in fields'
local fields : list fields | field
}
<% } %>
insheet using <%= qcsv %>, comma names case clear
<% if (!insheetable) { %>
unab all : _all
<% } %>
|
args(
// Merge child dataset into parent
pointer(`RepeatS') scalar parent,
`SS' child,
// SET-OF variable of the parent dataset
`SS' set_of
)
<% `SS' description %>
unab before : _all
<% // Check that there is no unexpected variable list overlap. %>
local pos : list posof <%= adorn_quotes(child) %> in repeats
local child : word `pos' of `childfiles'
describe using `child', varlist
local childvars `r(varlist)'
local overlap : list before & childvars
local KEY KEY
local overlap : list overlap - KEY
quietly if `:list sizeof overlap' {
gettoken first : overlap
<% description = parent->main() ? "the main fields" : "repeat group " + parent->long_name() %>
noisily display as err "error merging <%= description %> and repeat group <%= child %>"
noisily display as err "variable `first' exists in both datasets"
noisily display as err "rename it in one or both, then try again"
exit 9
}
tempvar order
generate `order' = _n
if !_N ///
tostring KEY, replace
tempvar merge
merge KEY using `child', sort _merge(`merge')
tabulate `merge'
assert `merge' != 2
<% /* Restore the sort order.
	This sort may be redundant: -merge- may restore the order automatically.
	However, the documentation does not guarantee this, and the -reshape-
	below requires the original order (otherwise _j could be incorrect). */ %>
sort `order'
drop `order' `merge'
<% // Restore the variable order. %>
unab after : _all
local new : list after - before
foreach var of local new {
move `var' <%= set_of %>
}
drop <%= set_of %>
|
<% `SS' expression, otherval %>
* Add "other" values to value labels that need them.
local otherlabs <%= invtokens(fields.other_lists()) %>
foreach lab of local otherlabs {
mata: st_vlload("`lab'", `values' = ., `text' = "")
<% if (other == "max" | other == "min") { %>
<% expression = other == "max" ? "max(\`values') + 1" : "min(\`values') - 1" %>
mata: st_local("otherval", strofreal(<%= expression %>, "<%= `RealFormat' %>"))
<% otherval = "\`otherval'" %>
<% } %>
<% else
otherval = other %>
local othervals `othervals' <%= otherval %>
label define `lab' <%= otherval %> other, add
}
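* For example (editor's note): if a list is labeled with values 1-4 and
* -other(max)- was specified, the loop above adds "other" to the label
* with value max(1..4) + 1 = 5.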
|
* select or_other variables
forvalues i = 1/`:list sizeof otherlabs' {
local lab : word `i' of `otherlabs'
local otherval : word `i' of `othervals'
ds, has(vallab `lab')
if "`r(varlist)'" != "" ///
recode `r(varlist)' (.o=`otherval')
}
|
<% `SS' repeat
repeat = fields.has_repeat() ? "\`repeat'" : "" %>
local repeats <%= adorn_quotes("\`repeats' " + adorn_quotes(repeat)) %>
<% if (fields.has_repeat()) { %>
tempfile child
local childfiles : list childfiles | child
<% } %>
local badnames
ds, has(char <%= char_name("bad_name") %>)
foreach var in `r(varlist)' {
if `:char `var'[<%= char_name("bad_name") %>]' & ///
<% /* Exclude SET-OF variables in the parent repeat groups, since they will be
dropped. */ %>
("`:char `var'[<%= char_name("type") %>]'" != "begin repeat" | ///
("`repeat'" != "" & ///
"`:char `var'[<%= char_name("name") %>]'" == "SET-OF-`repeat'")) {
local badnames : list badnames | var
}
}
local allbadnames `"`allbadnames' "`badnames'""'
ds, not(char <%= char_name("name") %>)
local datanotform `r(varlist)'
local exclude SubmissionDate KEY PARENT_KEY metainstanceID
local datanotform : list datanotform - exclude
local alldatanotform `"`alldatanotform' "`datanotform'""'
|
args(pointer(`RepeatS') scalar repeat)
<% `SS' mergekey %>
<% // Drop KEY and the SET-OF variable, which will be unused. %>
<% if (repeat->child_set_of()->insheet() == `InsheetOK') { %>
drop KEY <%= repeat->child_set_of()->st_long() %>
<% } %>
<% else { %>
drop KEY
foreach var of varlist _all {
if "`:char `var'[<%= char_name("name") %>]'" == "SET-OF-<%= repeat->name() %>" {
drop `var'
continue, break
}
}
<% } %>
<% mergekey = "PARENT_KEY" %>
* Add an underscore to variable names that end in a number.
ds <%= mergekey %>, not
foreach var in `r(varlist)' {
if inrange(substr("`var'", -1, 1), "0", "9") & length("`var'") < <%= strofreal(32 - repeat->level()) %> {
capture confirm new variable `var'_
if !_rc ///
rename `var' `var'_
}
}
if _N {
tempvar j
sort <%= mergekey %>, stable
by <%= mergekey %>: generate `j' = _n
ds <%= mergekey %> `j', not
reshape wide `r(varlist)', i(<%= mergekey %>) j(`j')
* Restore variable labels.
foreach var of varlist _all {
mata: st_varlabel("`var'", st_global("`var'[<%= char_name("label") %>]"))
}
}
else {
ds <%= mergekey %>, not
foreach var in `r(varlist)' {
ren `var' `var'1
}
drop <%= mergekey %>
gen <%= mergekey %> = ""
}
rename PARENT_KEY KEY
local pos : list posof <%= adorn_quotes(repeat->long_name()) %> in repeats
local child : word `pos' of `childfiles'
save `child'
|
args(`SS' repeat)
<% `SS' dta
dta = csv + (repeat != "") * "-" + repeat + (strpos(csv, ".") ? ".dta" : "") %>
local dta `"<%= adorn_quotes(dta) %>"'
save `dta', <%= fields.has_repeat() ? "orphans " : "" %>replace
local dtas : list dtas | dta
<% if (relax) { %>
local allformnotdata `"`allformnotdata' "`formnotdata'""'
<% } %>
|
* Save label information.
label dir
local labs `r(names)'
foreach lab of local labs {
quietly label list `lab'
* "nassoc" for "number of associations"
local nassoc `nassoc' `r(k)'
}
|
args(`SS' repeat)
local setof
foreach var of varlist _all {
if "`:char `var'[<%= char_name("name") %>]'" == "SET-OF-<%= repeat %>" {
local setof `var'
continue, break
}
}
assert "`setof'" != ""
|
foreach var of varlist _all {
if "`:char `var'[<%= char_name("group") %>]'" != "" & "`:char `var'[<%= char_name("type") %>]'" != "begin repeat" {
local name = "`:char `var'[<%= char_name("name") %>]'" + ///
cond(`:char `var'[<%= char_name("is_other") %>]', "_other", "") + ///
"`:char `var'[<%= char_name("geopoint") %>]'"
local newvar = strtoname("`name'")
capture rename `var' `newvar'
}
}
|
* Split select_multiple variables.
ds, has(char <%= char_name("type") %>)
foreach typevar in `r(varlist)' {
if strmatch("`:char `typevar'[<%= char_name("type") %>]'", "select_multiple *") & ///
!`:char `typevar'[<%= char_name("is_other") %>]' {
* Add an underscore to the variable name if it ends in a number.
local var `typevar'
local list : char `var'[<%= char_name("list_name") %>]
local pos : list posof "`list'" in labs
local nparts : word `pos' of `nassoc'
if `:list list in otherlabs' & !`:char `var'[<%= char_name("or_other") %>]' ///
local --nparts
if inrange(substr("`var'", -1, 1), "0", "9") & ///
length("`var'") < 32 - strlen("`nparts'") {
numlist "1/`nparts'"
local splitvars " `r(numlist)'"
local splitvars : subinstr local splitvars " " " `var'_", all
capture confirm new variable `var'_ `splitvars'
if !_rc {
rename `var' `var'_
local var `var'_
}
}
capture confirm numeric variable `var', exact
if !_rc ///
tostring `var', replace format(<%= `RealFormat' %>)
split `var'
local parts `r(varlist)'
local next = `r(nvars)' + 1
destring `parts', replace
forvalues i = `next'/`nparts' {
local newvar `var'`i'
generate byte `newvar' = .
local parts : list parts | newvar
}
local chars : char `var'[]
local label : char `var'[<%= char_name("label") %>]
local len : length local label
local i 0
foreach part of local parts {
local ++i
foreach char of local chars {
mata: st_global("`part'[`char']", st_global("`var'[`char']"))
}
if `len' {
mata: st_global("`part'[<%= char_name("label") %>]", st_local("label") + ///
(substr(st_local("label"), -1, 1) == " " ? "" : " ") + ///
"(#`i'/`nparts')")
}
move `part' `var'
}
drop `var'
}
}
|
*! version 1.0.0 03apr2022 Ben Jann
program colorcheck
version 14.2
syntax [, NOGRaph ///
TItle(passthru) GRopts(passthru) HORizontal VERTical ///
span LABels(passthru) LColor(passthru) LWidth(passthru) ///
BARWidth(passthru) NONUMbers sort SORT2(passthru) * ]
if "`horizontal'"!="" & "`vertical'"!="" {
di as err "may not combine options horizontal and vertical"
exit 198
}
local noinfo 0
if `"`r(ptype)'"'!="color" local noinfo 1
else if `"`r(p)'"'=="" local noinfo 1
if `noinfo' {
di as err "no color info found; need to run {bf:colorpalette} " /*
*/ "before applying {bf:colorcheck}"
exit 498
}
tempname rhold
_return hold `rhold'
_return restore `rhold', hold
capt n _colorcheck, `options'
if _rc {
        _return restore `rhold'
exit _rc
}
else {
_return drop `rhold'
}
if "`change'`nograph'"!="" {
_colorcheck_display
}
if "`nograph'"!="" exit
_return hold `rhold'
_return restore `rhold', hold
capt n _colorcheck_graph, `title' `gropts' `horizontal' /*
*/ `vertical' `labels' `lcolor' `lwidth' `barwidth' /*
*/ `nonumbers' `sort' `sort2'
    _return restore `rhold'
if _rc exit _rc
end
program _colorcheck_display
di
di _col(20) as txt "Number of colors =" as res %8.0g r(n)
di _col(20) as txt "N. of comparisons =" as res %8.0g comb(r(n),2)
di _col(20) as txt "CVD severity =" as res %8.0g r(cvd)
di _col(20) as txt "Proportion of gray =" as res %8.0g r(mono)
di _col(20) as txt "Grayscale method =" as res %8s r(mono_method)
di _col(20) as txt "Difference metric =" as res %8s r(metric)
matlist r(delta), rowtitle("Delta E") border(rows) twidth(14)
end
program _colorcheck, rclass
// defaults
local d_metric "Jab"
local d_mono 1
local d_mono_method "LCh"
local d_cvd 1
// syntax
syntax [, Metric(str) mono(str) cvd(numlist max=1 >=0 <=1) ]
_parse_mono `mono'
if `"`metric'"'=="" local metric "`d_metric'"
if `"`mono'"'=="" local mono `d_mono'
if `"`mono_method'"'=="" local mono_method "`d_mono_method'"
if `"`cvd'"'=="" local cvd `d_cvd'
// check whether results are already computed and settings did not change
capt confirm matrix r(delta)
if _rc==1 exit _rc
if _rc==0 {
// r(delta) exists
local change 0
if `"`metric'"'!=`"`r(metric)'"' local change 1
else if `"`mono'"'!=`"`r(mono)'"' local change 1
else if `"`mono_method'"'!=`"`r(mono_method)'"' local change 1
else if `"`cvd'"'!=`"`r(cvd)'"' local change 1
if `change'==0 {
return add
exit // nothing to do
}
}
c_local change "change"
// obtain color info
local pclass `"`r(pclass)'"'
local colors `"`r(p)'"'
return add
// compute results
tempname delta
mata: colorcheck()
mat coln `delta' = minimum maximum mean
mat rown `delta' = "normal sight" /*
*/ monochromacy deuteranomaly protanomaly tritanomaly
// return results
return local metric `"`metric'"'
return scalar mono = `mono'
return local mono_method `"`mono_method'"'
return scalar cvd = `cvd'
return matrix delta = `delta'
return local p_mono `"`p_mono'"'
return local p_deut `"`p_deut'"'
return local p_prot `"`p_prot'"'
return local p_trit `"`p_trit'"'
end
program _parse_mono
_parse comma p opts : 0
gettoken comma opts : opts, parse(",")
local 0 `", mono(`p') `opts'"'
syntax [, mono(numlist >=0 <=1) * ]
if "`mono'"=="" local mono 1
c_local mono `mono'
c_local mono_method `"`options'"'
end
program _colorcheck_graph
syntax [, TItle(passthru) GRopts(str asis) HORizontal VERTical ///
LABels(passthru) LColor(passthru) LWidth(passthru) ///
BARWidth(passthru) NONUMbers sort SORT2(str) ]
if `"`sort2'"'!="" local sort sort
if "`sort'"!="" _parse_sort, `sort2'
if `"`title'"'=="" local title title(`r(pname)')
    local pct `:di %7.0g r(mono)*100'
    local cvd `:di %7.0g r(cvd)*100'
if "`vertical'"!="" {
if `"`labels'"'=="" {
local haspct = !(`pct'==100 & `cvd'==100)
local lbl "Normal sight"
if `haspct' local lbl `""`lbl'" " ""'
local labels `"`"`lbl'"'"'
local lbl "Monochromacy"
if `pct'!=100 local lbl `""`lbl'" "(`pct' percent)""'
else if `haspct' local lbl `""`lbl'" " ""'
local labels `"`labels' `"`lbl'"'"'
local i 2
foreach nm in Deuter Prot Trit {
local ++i
if `cvd'!=100 local lbl `""`nm'anomaly" "(`cvd' percent)""'
else if `haspct' local lbl `""`nm'anopia" " ""'
else local lbl "`nm'anopia"
local labels `"`labels' `"`lbl'"'"'
}
local labels labels(`labels')
}
local mlab
forv i = 1/5 {
local di `: di %9.1f el(r(delta),`i',1)'
local mlab `"`mlab'`i' "{&Delta}E {&ge} `di'" "'
}
local gopts xlabel(,labgap(4)) xmlabel(`mlab', labsize(medsmall) notick labgap(-1))
}
else {
local txt .5 .5 "Normal sight"
if `pct'==100 local txt `txt' 1.5 .5 "Monochromacy"
else local txt `txt' 1.5 .5 "Monochromacy (`pct' percent)"
if `cvd'==100 local txt `txt' /*
*/ 2.5 .5 "Deuteranopia" 3.5 .5 "Protanopia" 4.5 .5 "Tritanopia"
else local txt `txt' 2.5 .5 "Deuteranomaly (`cvd' percent)" /*
*/ 3.5 .5 "Protanomaly (`cvd' percent)" /*
*/ 4.5 .5 "Tritanomaly (`cvd' percent)"
local gopts text(`txt', place(e))
if `"`labels'"'=="" {
forv i = 1/5 {
local di `: di %9.1f el(r(delta),`i',1)'
local labels `"`labels'"{&Delta}E {&ge} `di'" "'
}
local labels labels(`labels')
local gopts `gopts' yscale(alt)
}
if `"`barwidth'"'=="" local barwidth barwidth(0.6)
}
mata: colorcheck_sort(`"`sort2'"')
colorpalette, `title' `horizontal' `vertical' `lcolor' `lwidth' /*
*/ `barwidth' `nonumbers' `labels' gropts(`gopts' `gropts'): /*
*/ `p_normal' / `p_mono' / `p_deut' / `p_prot' / `p_trit'
end
program _parse_sort
syntax [, Normal Mono Deuter Prot Trit ]
local sort `normal' `mono' `deuter' `prot' `trit'
if `:list sizeof sort'>1 {
        di as err "sort(): too many keywords specified"
exit 198
}
if "`sort'"=="" local sort normal
c_local sort2 `sort'
end
version 14
mata:
mata set matastrict on
void colorcheck()
{
real scalar i, gs_p, cvd_p
string scalar metric, gs_space
string rowvector cvd
real matrix delta, RGB, P
real colvector E
class ColrSpace scalar S
// settings
gs_p = strtoreal(st_local("mono"))
cvd_p = strtoreal(st_local("cvd"))
gs_space = st_local("mono_method")
metric = st_local("metric")
// import colors and prepare analysis
S.colors(st_local("colors"))
if (S.N()<=1) {
stata(`"di as err "need at least two colors""')
exit(error(498))
}
RGB = S.get("RGB1")
P = colorcheck_subsets(S.N())
delta = J(5,3,.)
// original
E = S.delta(P, metric)
delta[1,] = minmax(E), mean(E)
// grayscale
S.gray(gs_p, gs_space)
E = S.delta(P, metric)
delta[2,] = minmax(E), mean(E)
st_local("p_mono", S.colors())
// CVD
cvd = ("deut","prot","trit")
for (i=length(cvd);i;i--) {
S.set(RGB, "RGB1")
S.cvd(cvd_p, cvd[i])
E = S.delta(P, metric)
delta[2+i,] = minmax(E), mean(E)
st_local("p_"+cvd[i], S.colors())
}
// return results
st_matrix(st_local("delta"), delta)
}
real matrix colorcheck_subsets(real scalar n)
{
real scalar i, j, k
real matrix P
P = J(comb(n, 2),2,.)
k = 0
for (i=1; i<=n; i++) {
for (j=i+1;j<=n;j++) P[++k,] = (i,j)
}
return(P)
}
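// Worked example (editor's note): colorcheck_subsets(3) returns the
// comb(3,2) = 3 index pairs, one row per unordered pair of colors:
//     (1,2)
//     (1,3)
//     (2,3)
// These rows are the P argument passed to S.delta() above.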
void colorcheck_sort(string scalar sort)
{
real colvector p
string scalar xlab
class ColrSpace scalar S
if (sort=="") {
st_local("p_normal", st_global("r(p)"))
st_local("p_mono", st_global("r(p_mono)"))
st_local("p_deut", st_global("r(p_deut)"))
st_local("p_prot", st_global("r(p_prot)"))
st_local("p_trit", st_global("r(p_trit)"))
return
}
if (sort=="mono") {
S.colors(st_global("r(p_mono)"))
p = order(S.get("RGB1")[,1],1)
}
else if (sort=="deuter") {
S.colors(st_global("r(p_deut)"))
p = order(S.get("HSL")[,1],1)
}
else if (sort=="prot") {
S.colors(st_global("r(p_prot)"))
p = order(S.get("HSL")[,1],1)
}
else if (sort=="trit") {
S.colors(st_global("r(p_trit)"))
p = order(S.get("HSL")[,1],1)
}
else if (sort=="normal") {
S.colors(st_global("r(p)"))
p = order(S.get("HSL")[,1],1)
}
st_local("p_normal", invtokens((`"""':+tokens(st_global("r(p)")):+`"""')[p]))
st_local("p_mono", invtokens((`"""':+tokens(st_global("r(p_mono)")):+`"""')[p]))
st_local("p_deut", invtokens((`"""':+tokens(st_global("r(p_deut)")):+`"""')[p]))
st_local("p_prot", invtokens((`"""':+tokens(st_global("r(p_prot)")):+`"""')[p]))
st_local("p_trit", invtokens((`"""':+tokens(st_global("r(p_trit)")):+`"""')[p]))
if (st_local("nonumbers")=="") {
xlab = invtokens((strofreal(1::S.N()):+" ":+`"""':+strofreal(p):+`"""')')
if (st_local("vertical")!="") st_local("gopts", st_local("gopts") +
" ylabel(" + xlab+", nogrid notick angle(hor))")
else st_local("gopts", st_local("gopts") +
" xlabel(" + xlab+", notick)")
st_local("nonumbers", "nonumbers")
}
}
end
exit
|
*! version 1.2.3 13apr2022 Ben Jann
if c(stata_version)<14.2 {
di as err "{bf:colorpalette} requires version 14.2 of Stata" _c
if c(stata_version)>=9.2 {
di as err "; use command {helpb colorpalette9}" _c
}
di as err ""
error 9
}
capt findfile lcolrspace.mlib
if _rc {
di as error "-colrspace- is required; type {stata ssc install colrspace}"
error 499
}
program colorpalette
version 14.2
capt _on_colon_parse `0'
if _rc==0 {
local 0 `"`s(after)'"'
local opts `"`s(before)'"'
_parse comma lhs opts : opts
if `"`lhs'"'!="" error 198
Parse_palettes `0'
forv i = 1/`r(n)' {
local palettes `"`palettes'`"`r(p`i')'"' "'
}
Graph2 `palettes' `opts'
exit
}
Parse_palettes `0'
if r(n)==1 {
Get_Global_Opts `0'
Palette_Get `0'
}
else {
local n = r(n) - 1
forv i = 1/`n' {
Global_Opts_Not_Allowed `r(p`i')'
local palettes `"`palettes'`"`r(p`i')'"' "'
}
Get_Global_Opts `r(p`r(n)')'
_parse comma p 0 : 0
if `"`p'"'!="" { // last element has palette
local palettes `"`palettes'`"`p'`0'"'"'
local 0
}
Palette_Get2 `palettes' `0'
}
if `"`GLOBALS'`GLOBALS2'"'!="" {
if "`GRAPH'"=="" local NOGRAPH nograph
Macros global, `GLOBALS2'
}
if `"`LOCALS'`LOCALS2'"'!="" {
if "`GRAPH'"=="" local NOGRAPH nograph
Macros local, `LOCALS2'
di _n as txt "locals:"
foreach name of local mnames {
gettoken p mvalues : mvalues
c_local `name' `"`p'"'
di as txt %22s "`name'" " : " as res `"`p'"'
}
}
if `"`STYLEFILES'`STYLEFILES2'"'!="" {
if "`GRAPH'"=="" local NOGRAPH nograph
Parse_Stylefiles, `STYLEFILES2'
Macros local, `LOCALS2'
Stylefiles `"`mnames'"' `"`mvalues'"' `"`STYLEFILES2'"'
}
if "`NOGRAPH'"=="" {
tempname hcurrent
_return hold `hcurrent'
_return restore `hcurrent', hold // make copy
Graph, `GROPTS'
_return restore `hcurrent'
}
end
/*----------------------------------------------------------------------------*/
/* retrieve palette(s) */
/*----------------------------------------------------------------------------*/
program Parse_palettes, rclass
local i 0
while (`"`0'"'!="") {
gettoken p 0 : 0, parse("/") quotes bind // "..." is separate token
if `"`p'"'=="/" {
return local p`++i' `"`palette'"'
local palette
local space
continue
}
local palette `"`palette'`space'`p'"'
local space " "
}
if `"`palette'"'!="" {
return local p`++i' `"`palette'"'
}
return scalar n = max(1,`i')
end
program Get_Global_Opts
_parse comma lhs 0 : 0
syntax [, ///
GLobals GLobals2(passthru) ///
LOCals LOCals2(passthru) ///
STYLEFiles STYLEFiles2(passthru) ///
NOGRaph GRAPH ///
GRopts(str asis) TItle(passthru) rows(passthru) ///
names NOINFO NONUMbers * ]
c_local GLOBALS `globals'
c_local GLOBALS2 `globals2'
c_local LOCALS `locals'
c_local LOCALS2 `locals2'
c_local STYLEFILES `stylefiles'
c_local STYLEFILES2 `stylefiles2'
c_local NOGRAPH `nograph'
c_local GRAPH `graph'
c_local GROPTS `gropts' `title' `rows' `names' `noinfo' `nonumbers'
if `"`options'"'!="" local options `", `options'"'
c_local 0 `"`lhs'`options'"'
end
program Global_Opts_Not_Allowed
Get_Global_Opts `0'
local 0 , `GLOBALS' `GLOBALS2' `LOCALS' `LOCALS2' /*
*/`STYLEFILES' `STYLEFILES2' `NOGRAPH' `GRAPH' `GROPTS'
syntax [, _somew3irednamedopt ]
end
program Palette_Get2, rclass
_parse comma palettes 0 : 0
syntax [, name(str) * ]
local space
local i 0
foreach p of local palettes {
_parse comma pnm popts : p
if `"`popts'"'=="" local popts ,
Palette_Get `pnm' `popts' `options'
local plist `"`plist'`space'`r(p)'"'
local space " "
forv j = 1/`r(n)' {
local ++i
return local p`i' `"`r(p`j')'"'
return local p`i'name `"`r(p`j'name)'"'
return local p`i'info `"`r(p`j'info)'"'
}
}
return scalar n = `i'
return local p `"`plist'"'
if `"`name'"'!="" {
return local pname `"`name'"'
}
else {
return local pname "custom"
}
return local ptype "color"
end
program Palette_Get, rclass
local opts N(numlist max=1 integer >=1) ///
Select(numlist integer) drop(numlist integer) ///
order(numlist integer) REVerse shift(numlist max=1) ///
INtensity(numlist >=0 missingokay) ///
OPacity(numlist int >=0 missingokay) ///
IPolate(str) ///
intensify(numlist >=0 missingokay) ///
saturate(str) luminate(str) ///
GScale GScale2(str) ///
CBlind CBlind2(str) ///
NOEXPAND class(str) name(str) FORCErgb
syntax [anything(name=palette id="palette" everything equalok)] ///
[, `opts' ///
/// palette-specific options
Hue(passthru) Chroma(passthru) SATuration(passthru) ///
Luminance(passthru) VALue(passthru) ///
POWer(passthru) DIRection(passthru) ///
RAnge(passthru) * ]
remove_repeatedopts `"`opts'"' `", `options'"'
local passthruopts `hue' `chroma' `saturation' `luminance' `value' /*
*/`power' `direction' `range' `options'
parse_psopts, `hue' `chroma' `saturation' `luminance' `value' /*
*/`power' `direction' `range'
if `"`opacity'"'!="" {
if c(stata_version)<15 {
di as err "{bf:opacity()} requires Stata 15"
exit 9
}
}
if `"`select'"'!="" {
if `"`drop'"'!="" {
di as err "only one of select() and drop() allowed"
exit 198
}
if `"`order'"'!="" {
di as err "only one of select() and order() allowed"
exit 198
}
}
if `"`ipolate'"'!="" parse_ipolate `ipolate'
if `"`saturate'"'!="" parse_saturate `saturate'
if `"`luminate'"'!="" parse_luminate `luminate'
if `"`gscale2'"'!="" local gscale gscale
if "`gscale'"!="" parse_gscale `gscale2'
if `"`cblind2'"'!="" local cblind cblind
if "`cblind'"!="" parse_cblind `cblind2'
if `"`class'"'!="" capture parse_class, `class'
// get colors
local ptype 0
if `"`palette'"'=="" { // use s2 if palette is empty
local palette s2
local ptype 2
}
else { // check whether palette is mata(name)
capt parse_mata, `palette' // returns local mataname
if _rc==0 local ptype 3
}
if `ptype'==0 {
// check whether palette has more than 3 words
// (ColrSpace currently has no palette names with more than 3 words)
if `:list sizeof palette'>3 {
local ptype 1
}
// check whether palette is list of colors in (...)
else {
gettoken pal rest : palette, match(paren)
if "`paren'"=="(" {
if `"`rest'"'!="" local palette `"`pal' `rest'"'
else local palette `"`pal'"'
local ptype 1
}
}
}
mata: getpalette(strtoreal("`n'"), `ptype')
// return palette
local i 0
foreach p of local plist {
local ++i
gettoken pnm pnames : pnames
gettoken pi pinfo : pinfo
return local p`i' `"`p'"'
return local p`i'name `"`pnm'"'
return local p`i'info `"`pi'"'
}
return local p `"`plist'"'
return local pclass `"`class'"'
return local psource `"`source'"'
return local pnote `"`note'"'
if `"`name'"'!="" {
local palette `"`name'"'
}
return local pname `"`palette'"'
return local ptype "color"
return scalar n = `i'
end
program remove_repeatedopts
args opts 0
syntax [, `opts' * ]
c_local options `"`options'"'
end
program _Palette_Get
gettoken palette 0 : 0, parse(" ,")
syntax [, n(numlist max=1 integer >0) * ]
colorpalette_`palette', n(`n') `options'
if `"`P'"'=="" { // palettes that define P#
local min 1
while (`"`P`min''"'=="") {
local ++min
if `min'>100 {
c_local n 0
exit // emergency exit
}
}
local max `min'
while (`"`P`max''"'!="") {
local ++max
}
local --max
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
local P `"`P`n''"'
}
c_local plist `"`P'"'
c_local pnames `"`N'"'
c_local pinfo `"`I'"'
if `"`class'"'!="" c_local class `"`class'"'
if `"`name'"'!="" c_local palette `"`name'"'
c_local note `"`note'"'
c_local source `"`source'"'
end
program parse_ipolate
_parse comma n opts : 0
gettoken comma opts : opts, parse(",")
local 0 `", n(`n') `opts'"'
syntax [, n(numlist max=1 integer >=1) RAnge(numlist max=2) ///
POWer(numlist max=1 >0) POSitions(numlist) PADded * ]
if "`n'"=="" exit // no number specified
c_local ipolate_n `n'
c_local ipolate_range "`range'"
c_local ipolate_power "`power'"
c_local ipolate_positions "`positions'"
c_local ipolate_pad "`padded'"
c_local ipolate_space `"`options'"'
end
program parse_saturate
_parse comma p opts : 0
gettoken comma opts : opts, parse(",")
local 0 `", p(`p') `opts'"'
syntax [, p(numlist missingokay) level * ]
if "`p'"=="" exit // no numbers specified
c_local saturate_p `"`p'"'
c_local saturate_level "`level'"
c_local saturate_method `"`options'"'
end
program parse_luminate
_parse comma p opts : 0
gettoken comma opts : opts, parse(",")
local 0 `", p(`p') `opts'"'
syntax [, p(numlist missingokay) level * ]
if "`p'"=="" exit // no numbers specified
c_local luminate_p `"`p'"'
c_local luminate_level "`level'"
c_local luminate_method `"`options'"'
end
program parse_gscale
_parse comma p opts : 0
gettoken comma opts : opts, parse(",")
local 0 `", gscale(`p') `opts'"'
syntax [, gscale(numlist >=0 <=1 missingokay) * ]
if "`gscale'"=="" local gscale 1
c_local gscale_p `gscale'
c_local gscale_method `"`options'"'
end
program parse_cblind
_parse comma p opts : 0
gettoken comma opts : opts, parse(",")
local 0 `", cblind(`p') `opts'"'
syntax [, cblind(numlist >=0 <=1 missingokay) * ]
if "`cblind'"=="" local cblind 1
c_local cblind_p `cblind'
c_local cblind_method `"`options'"'
end
program parse_class
syntax [, Qualitative CATegorical Sequential Diverging CYClic CIRCular ]
local class `qualitative' `categorical' `sequential' `diverging' `cyclic' `circular'
if `:list sizeof class'>1 exit 198
c_local class `class'
end
program parse_mata
syntax, Mata(name)
c_local mataname `"`mata'"'
end
program parse_psopts // options for color generators and matplotlib colormaps
syntax [, Hue(numlist max=2) ///
Chroma(numlist max=2 >=0) SATuration(numlist max=2 >=0 <=1) ///
Luminance(numlist max=2 >=0 <=100) VALue(numlist max=2 >=0 <=1) ///
POWer(numlist max=2 >0) DIRection(numlist int max=1) ///
RAnge(numlist max=2 >=0 <=1) ]
if "`saturation'"!="" {
        if "`chroma'"!="" {
di as err "chroma() and saturation() not both allowed"
exit 198
}
local chroma `saturation'
}
if "`value'"!="" {
        if "`luminance'"!="" {
di as err "luminance() and value() not both allowed"
exit 198
}
local luminance `value'
}
if "`direction'"!="" {
        if "`power'"!="" {
di as err "power() and direction() not both allowed"
exit 198
}
if !inlist(`direction',1,-1) {
di as err "{bf:direction()} must be 1 or -1"
exit 198
}
local power = (`direction'==-1)
}
c_local hue `hue'
c_local chroma `chroma'
c_local luminance `luminance'
c_local power `power'
c_local range `range'
end
program parse_optnotallowed
syntax [, _somew3irednamedopt ]
end
/*----------------------------------------------------------------------------*/
/* return macros */
/*----------------------------------------------------------------------------*/
program Macros
// syntax
_parse comma macro 0 : 0
if "`macro'"=="local" {
local lmax 31
local uscore "_"
}
else {
local lmax 32
local uscore
}
syntax [, `macro's2(str) ] // get contents of option
local 0 `"``macro's2'"'
capt n syntax [anything] [, NONames Prefix(str) Suffix(str) ]
if _rc==1 exit _rc
if _rc {
di as err "(error in option {bf:`macro's()})"
exit _rc
}
if `"`prefix'"'!="" {
local 0 `", prefix(`uscore'`prefix')"'
capt n syntax [, prefix(name) ]
if _rc==1 exit _rc
if _rc {
di as err "(error in option {bf:`macro's()})"
exit _rc
}
if "`macro'"=="local" {
local prefix = substr("`prefix'",2,.) // remove "_"
}
else if substr("`prefix'",1,1)=="_" {
di as err "global macro name may not start with '_'"
di as err "(error in option {bf:`macro's()})"
exit 198
}
}
while (`"`anything'"'!="") {
gettoken name anything : anything, quotes
if `"`anything'"'=="" { // last element
if substr(`"`name'"',-1,1)=="*" {
local name = substr(`"`name'"',1,strlen(`"`name'"')-1)
capt confirm name `uscore'`name'
if _rc==1 exit _rc
if _rc {
di as err "'" `"`name'*"' "' not allowed in {bf:`macro's()}"
exit _rc
}
local prefix1 `name'
if "`macro'"!="local" {
if substr("`prefix1'",1,1)=="_" {
di as err "global macro name may not start with '_'"
di as err "(error in option {bf:`macro's()})"
exit 198
}
}
continue, break
}
}
if "`prefix'"!="" {
local name `prefix'`name'
local name = substr("`name'",1,`lmax')
}
capt n confirm name `uscore'`name'
if _rc==1 exit _rc
if _rc {
di as err "(error in option {bf:`macro's()})"
exit _rc
}
if "`macro'"!="local" {
if substr("`name'",1,1)=="_" {
di as err "global macro name may not start with '_'"
di as err "(error in option {bf:`macro's()})"
exit 198
}
}
local names `names' `name'
}
if `"`prefix1'"'=="" local prefix1 `prefix'
if "`macro'"!="local" & `"`prefix1'"'=="" local prefix1 "p"
if `"`suffix'"'!="" {
local 0 `", suffix(_`suffix')"'
capt n syntax [, suffix(name) ]
if _rc==1 exit _rc
if _rc {
di as err "(error in option {bf:`macro's()})"
exit _rc
}
local suffix = substr(`"`suffix'"',2,.)
}
// return macros
local n = r(n)
local ls = strlen(`"`suffix'"')
local prefix1 = substr("`prefix1'",1,`lmax'-floor(log10(`n')))
forv i = 1/`n' {
local p `"`r(p`i')'"'
if `: list sizeof p'>1 {
local p `""`p'""'
}
gettoken name names : names
if "`name'`nonames'"=="" {
local pname `"`r(p`i'name)'"' // name available in palette
if `"`pname'"'!="" {
local name = ustrtoname(`"`uscore'`pname'"', 0)
if "`macro'"=="local" {
local name = substr(`"`name'"',2,.)
}
}
else {
capt confirm name `p' // color code is a name
if _rc==0 {
local name `prefix'`p'
local name = substr("`name'",1,`lmax')
}
}
}
if "`name'"=="" local name `prefix1'`i'
if `ls' {
local name = substr("`name'",1,`lmax'-`ls')
local name `name'`suffix'
}
local mnames `mnames' `name'
local mvalues `"`mvalues' `"`p'"'"'
}
if "`macro'"=="local" {
c_local mnames `mnames'
c_local mvalues `"`mvalues'"'
exit
}
di _n as txt "globals:"
foreach name of local mnames {
gettoken p mvalues : mvalues
global `name' `"`p'"'
di as txt %22s "`name'" " : " as res `"${`name'}"'
}
end
/*----------------------------------------------------------------------------*/
/* write style files */
/*----------------------------------------------------------------------------*/
program Parse_Stylefiles
syntax [, stylefiles2(str) ] // get contents of option
local 0 `"`stylefiles2'"'
capt n syntax [anything] [, PERsonal path(passthru) replace * ]
if _rc==1 exit _rc
if _rc {
di as err "(error in option {bf:stylefiles()})"
exit _rc
}
if "`personal'"!="" {
if `"`path'"'!="" {
di as err "{bf:personal} and {bf:path()} not both allowed"
di as err "(error in option {bf:stylefiles()})"
exit 198
}
local path "personal"
}
if `"`options'"'!="" {
local options `", `options'"'
}
local locals2 `"`anything'`options'"'
if `"`locals2'"'!="" {
local locals2 locals2(`locals2')
}
c_local LOCALS2 `"`locals2'"'
c_local STYLEFILES2 `"`path' `replace'"'
end
program Stylefiles
// syntax
args names values opts
local 0 `", `opts'"'
syntax [, PERsonal path(str) replace ]
// determine path
if "`personal'"!="" {
mata: st_local("path", pathjoin(pathsubsysdir("PERSONAL"),"style"))
}
else if `"`path'"'=="" {
local path "style"
}
mata: colorpalette_mkdir(st_local("path"))
// check existing files
if "`replace'"=="" {
foreach name of local names {
local fn `"color-`name'.style"'
mata: st_local("fn", pathjoin(st_local("path"), st_local("fn")))
capt n confirm new file `"`fn'"'
if _rc==1 exit _rc
if _rc {
di as err "specify {bf:stylefiles(, replace)} to overwrite existing files"
exit _rc
}
}
}
// write style files
local i 0
tempname fh
di _n as txt "color styles:"
foreach name of local names {
gettoken p values : values
capt numlist `p', int min(3) max(3) range(>=0 <=255)
if _rc {
di as txt %22s "`name'" " : (color definition not RGB; skipped)"
continue
}
local ++i
local fn `"color-`name'.style"'
mata: st_local("fn", pathjoin(st_local("path"), st_local("fn")))
quietly file open `fh' using `"`fn'"', write replace
file write `fh' `"set rgb `p'"'
file close `fh'
di as txt %22s "`name'" " : " as res `"`p'"'
}
if `i' {
di _n as txt `"(style files written to directory `path')"'
}
else {
di _n as txt `"(no style files written)"'
}
end
/*----------------------------------------------------------------------------*/
/* graph of single palette */
/*----------------------------------------------------------------------------*/
program Graph
syntax [, rows(int 5) TItle(passthru) names NOINFO NONUMbers * ]
local n = r(n)
local c = max(3,ceil(sqrt(`n'/12*3)))
local cut = max(`rows',ceil(`n'/`c'))
local rows = max(5, `cut')
local c = max(3,ceil(`n'/`rows'))
local size = (100-10)/(1.5*`rows')
local lblgap = `size'/6
local infogap = `size'/3.75
local rgap = (100-5)/`c'
local j 1
local r 0
forv i=1/`n' {
if `i'>(`cut'*`j') {
local ++j
local r 0
}
local ++r
local pi `"`r(p`i')'"'
if `"`pi'"'=="" continue
local jlab `j'
local plots `plots' (scatteri `r' `j', mlw(vthin) mlc(black) ///
msymbol(square) msize(`size') mfcolor(`"`pi'"'))
local pnum `pnum' `r' `j' "`i'"
local pinfo `"`r(p`i'name)'"'
local plbl
if "`names'"!="" {
if `"`pinfo'"'!="" {
local plbl `"`pinfo'"'
local pinfo
}
}
if `"`plbl'"'=="" local plbl `"`pi'"'
local lbl `lbl' `r' `jlab' `"`plbl'"'
if `"`pinfo'"'==`"`pi'"' local pinfo // do not repeat names
if `"`pinfo'"'=="" local pinfo `"`r(p`i'info)'"' // use info if no name
if `"`pinfo'"'!="" {
local info `info' `r' `jlab' `"`pinfo'"'
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
if `rows'>=30 {
local pnumsize tiny
local lblsize tiny
local infosize half_tiny
}
else if `rows'>=15 {
local pnumsize small
local lblsize vsmall
local infosize tiny
}
else if `rows'>=10 {
local pnumsize small
local lblsize small
local infosize vsmall
}
else {
        local pnumsize medium
local lblsize medsmall
local infosize small
}
if "`nonumbers'"=="" {
local pnum (scatteri `pnum', ms(i) msize(`size') mlabpos(9) ///
mlabgap(`lblgap') mlabsize(`pnumsize') mlabcolor(black))
}
else local pnum
if `"`lbl'"'!="" {
local lbl (scatteri `lbl', ms(i) msize(`size') mlabpos(3) ///
mlabgap(`lblgap') mlabsize(`lblsize') mlabcolor(black))
}
if "`noinfo'"=="" {
if `"`info'"'!="" {
local info (scatteri `info', ms(i) msize(`size') mlabpos(4) ///
mlabgap(`infogap') mlabsize(`infosize') mlabcolor(black))
}
}
else local info
local l = `size'/2 + cond("`nonumbers'"=="", 9, 5)
local r = `size'/2 + `rgap'
local b = `size'/2 + 5
local t = `size'/2 + 4
if `"`title'"'=="" {
local title title(`"`r(pname)'"')
}
two `plots' `pnum' `lbl' `info' , `title' scheme(s2color) ///
legend(off) ylabel(none) graphr(color(white)) ///
xlabel(none) xscale(range(1 3) off) ///
yscale(range(1 `rows') off reverse) ///
plotr(margin(`l' `r' `b' `t')) graphr(margin(0 0 0 3)) ///
`source' `options'
end
/*----------------------------------------------------------------------------*/
/* graph of multiple palettes */
/*----------------------------------------------------------------------------*/
program Graph2
_parse comma palettes 0 : 0
syntax [, TItle(passthru) LABels(str asis) PLabels(str asis) ///
NONUMbers GRopts(str asis) LColor(str) LWidth(str) ///
VERTical HORizontal span BARWidth(numlist max=1) * ]
if "`barwidth'"=="" local barwidth 0.7
if `"`labels'"'!="" local plabels `"`labels'"'
local orientation `vertical' `horizontal'
if "`orientation'"=="" local orientation horizontal
if `"`lcolor'"'!="" {
local lcolor lc(`lcolor' ..)
if c(stata_version)>=15 local lcolor `lcolor' lalign(center ..)
}
if `"`lwidth'"'!="" {
local lwidth lw(`lwidth' ..)
}
else local lwidth lw(vthin ..)
local np: list sizeof palettes
local r = 4 * `np'
if (_N > `r') {
preserve
qui keep in 1/`r' // remove extra observations to speed up
}
else if (_N < `r') {
preserve
qui set obs `r'
}
tempvar y
qui generate `y' = ceil(_n/4) - `barwidth'/2 + /*
*/ inlist(mod(_n-1,4)+1,3,4)*`barwidth' in 1/`r'
if "`span'"!="" {
tempvar psize
qui generate `psize' = .
}
local nxvars 0
local i 0
local plots
local ylab
foreach p of local palettes {
local ++i
_parse comma pnm popts : p
if `"`pnm'"'=="." continue
if `"`popts'"'=="" local popts ,
Palette_Get `pnm' `popts' `options'
local colors `"`r(p)'"'
local n = r(n)
gettoken plab plabels : plabels
if `"`plab'"'=="" {
local plab `"`r(pname)'"'
}
local ylab `ylab' `i' `"`plab'"'
while (`nxvars'<`n') {
local xx0 `xx`nxvars''
if mod(`nxvars',20)==0 local xx0
local ++nxvars
tempvar x`nxvars'
local xx`nxvars' `xx0' `x`nxvars''
}
local from = (`i' - 1) * 4 + 1
local to = `i' * 4
if "`span'"!="" {
qui replace `psize' = `n' in `from'/`to'
}
local n0 0
while (`n0'<`n') {
local ctmp
while (1) {
local ++n0
gettoken ci colors : colors, quotes
local ctmp `"`ctmp'`ci' "'
if `n0'==`n' continue, break
if mod(`n0',20)==0 continue, break
}
local plots `plots' ///
(scatter `xx`n0'' `y' in `from'/`to', color(`ctmp') ///
`lcolor' `lwidth' fintensity(100 ..) ///
recast(area) `orientation' nodropbase)
}
}
if `"`plots'"'=="" {
        di as txt "(nothing to display)"
exit
}
forv i=1/`nxvars' {
qui gen `x`i'' = `i' + inlist(mod(_n-1,4)+1,2,3) - .5
}
if "`span'"!="" {
forv i=1/`nxvars' {
qui replace `x`i'' = (`x`i'' - .5) / `psize'
}
}
if "`nonumbers'"=="" {
local xlab = ceil(`nxvars'/20)
numlist "1(`xlab')`nxvars'"
local xlab `r(numlist)'
}
else local xlab none
if "`orientation'"=="horizontal" {
if "`span'"!="" {
local xscale xscale(off range(0 1))
local xlabel
}
else {
local xscale xscale(lstyle(none) range(1 `nxvars'))
local xlabel xlabel(`xlab', notick)
}
local yscale yscale(lstyle(none) range(0.65 `np'.35) reverse)
local ylabel ylabel(`ylab', nogrid notick angle(hor))
}
else {
local xscale xscale(lstyle(none) range(0.65 `np'.35) alt)
local xlabel xlabel(`ylab', notick)
if "`span'"!="" {
local yscale yscale(off range(0 1) reverse)
local ylabel ylabel(, nogrid)
}
else {
local yscale yscale(lstyle(none) range(1 `nxvars') reverse)
local ylabel ylabel(`xlab', nogrid notick angle(hor))
}
}
twoway `plots', `xscale' `xlabel' xti("") `yscale' `ylabel' yti("") ///
legend(off) graphr(margin(l=2 t=2 b=1 r=2) color(white)) ///
scheme(s2color) `title' `gropts'
end
/*----------------------------------------------------------------------------*/
/* mata */
/*----------------------------------------------------------------------------*/
version 14
mata:
mata set matastrict on
void colorpalette_mkdir(path)
{
real scalar i
string scalar d
string rowvector dlist
pragma unset d
pragma unset dlist
if (direxists(path)) return
if (path=="") return
printf("{txt}directory %s does not exist\n", path)
printf("{txt}press any key to create the directory, or Break to abort\n")
more()
while (1) {
pathsplit(path, path, d)
dlist = dlist, d
if (path=="") break
if (direxists(path)) break
}
for (i=cols(dlist); i>=1; i--) {
path = pathjoin(path, dlist[i])
mkdir(path)
}
}
void getpalette(real scalar n, real scalar ptype)
{
real scalar ip_n, rc
string scalar pal, libname
class ColrSpace scalar S
pragma unset S
pointer scalar p
// Step 1: determine type of palette and parse additional options
// ptype 0 = <not defined>
// 1 = color list
// 2 = ColrSpace palette
// 3 = mata() object (already set)
// 4 = colorpalette_<palette>.ado
pal = st_local("palette")
if (ptype==0) {
// check whether palette exists in ColrSpace
checkpalette(S, ptype, pal)
}
if (ptype==0) {
// by now, if palette has multiple words, it must be a color list
if (length(tokens(pal))>1) ptype = 1
}
if (ptype==0) {
// if only one word: check whether resulting program name is valid
if (_stata("confirm name colorpalette_"+pal, 1)) ptype = 1
}
if (ptype==0) {
// check whether palette program exists; if yes: run it
if (_stata("local junk: properties colorpalette_"+pal, 1)) ptype = 1
else {
rc = _stata("_Palette_Get "+pal+", n(\`n') \`passthruopts'")
if (rc) exit(rc)
st_local("options", "")
ptype = 4
}
}
// make sure no extra options are left
rc = _stata("parse_optnotallowed, "+st_local("options"))
if (rc) exit(rc)
// Step 2: collect palette/colors and apply options
// get colors
if (ptype==1) {
S.colors(pal) // space-separated list of color specifications
st_local("palette", "custom") // assign default palette name
}
else if (ptype==2) { // ColrSpace palette
(void) S.pexists(pal, libname="")
if (substr(libname, 1, strlen("generators"))=="generators") {
S.palette(pal, n, strtoreal(tokens(st_local("hue"))),
strtoreal(tokens(st_local("chroma"))),
strtoreal(tokens(st_local("luminance"))),
strtoreal(tokens(st_local("power"))))
}
else if (substr(libname, 1, strlen("lsmaps"))=="lsmaps" |
substr(libname, 1, strlen("rgbmaps"))=="rgbmaps") {
S.palette(pal, n, strtoreal(tokens(st_local("range"))))
}
else S.palette(pal, n, st_local("noexpand")!="")
st_local("palette", S.name())
}
else if (ptype==3) { // palette is Mata object
// mataname: name of object
p = findexternal(st_local("mataname"))
if (p==NULL) {
display("{err}mata object '" + st_local("mataname") + "' not found")
exit(498)
}
if (classname(*p)!="ColrSpace") {
display("{err}'" + st_local("mataname") + "' is not a ColrSpace() object")
exit(498)
}
S = *p
if (S.name()!="") st_local("palette", S.name())
else st_local("palette", st_local("mataname"))
}
else if (ptype==4) { // colorpalette_<palette>.ado
// plist: comma-separated list of colors
// pnames: comma-separated list of color names
// pinfo: comma-separated list of descriptions
// note: palette note
// source: palette source
S.colors(st_local("plist"), ",")
if (st_local("pnames")!="") S.names(st_local("pnames"), ",")
if (st_local("pinfo")!="") S.info(st_local("pinfo"), ",")
S.note(st_local("note"))
S.source(st_local("source"))
}
// class
if (S.pclass()=="") S.pclass(st_local("class"))
// option n()
if (ptype!=2) {
if (n<. & n!=S.N()) {
if (anyof(("qualitative","categorical"), S.pclass())) {
if (st_local("noexpand")=="") S.recycle(n)
else if (n<S.N()) S.recycle(n)
}
else if (st_local("noexpand")=="") S.ipolate(n)
}
}
// option select()
if (st_local("select")!="") S.select(strtoreal(tokens(st_local("select"))))
// option drop()
if (st_local("drop")!="") S.drop(strtoreal(tokens(st_local("drop"))))
// option order()
if (st_local("order")!="") S.order(strtoreal(tokens(st_local("order"))))
// option reverse
if (st_local("reverse")!="") S.reverse()
// option shift()
if (st_local("shift")!="") S.shift(strtoreal(st_local("shift")))
// option opacity()
if (st_local("opacity")!="") S.opacity(strtoreal(tokens(st_local("opacity"))), 1)
// option intensity()
if (st_local("intensity")!="") S.intensity(strtoreal(tokens(st_local("intensity"))), 1)
// option ipolate()
if ((ip_n = strtoreal(st_local("ipolate_n")))<.) {
S.ipolate(ip_n,
st_local("ipolate_space"),
strtoreal(tokens(st_local("ipolate_range"))),
strtoreal(st_local("ipolate_power")),
strtoreal(tokens(st_local("ipolate_positions"))),
st_local("ipolate_pad")!="")
}
// option intensify()
if (st_local("intensify")!="") {
S.intensify(strtoreal(tokens(st_local("intensify"))))
}
// option saturate()
if (st_local("saturate_p")!="") {
S.saturate(strtoreal(tokens(st_local("saturate_p"))),
st_local("saturate_method"),
st_local("saturate_level")!="")
}
// option luminate()
if (st_local("luminate_p")!="") {
S.luminate(strtoreal(tokens(st_local("luminate_p"))),
st_local("luminate_method"),
st_local("luminate_level")!="")
}
// option gscale()
if (st_local("gscale")!="") {
S.gray(strtoreal(tokens(st_local("gscale_p"))),
st_local("gscale_method"))
}
// option cblind()
if (st_local("cblind")!="") {
S.cvd(strtoreal(tokens(st_local("cblind_p"))),
st_local("cblind_method"))
}
// return colors
st_local("plist", S.colors(st_local("forcergb")!=""))
st_local("pnames", S.names())
st_local("pinfo", S.info())
st_local("note", S.note())
st_local("source", S.source())
st_local("class", S.pclass())
}
void _getpalette_ipolate(class ColrSpace scalar S, real scalar n,
string scalar space, real rowvector range, real scalar pow,
real rowvector pos, real scalar pad, string scalar cyc)
{
if (cyc=="cyclic") S.ipolate(n, space, range, pow, pos, pad, 1)
else if (cyc=="nocyclic") S.ipolate(n, space, range, pow, pos, pad, 0)
else S.ipolate(n, space, range, pow, pos, pad)
}
void checkpalette(class ColrSpace scalar S, real scalar ptype, string scalar pal0)
{
string scalar pal, p1
string rowvector PAL
// characters % * # " may occur in color specifications, but are currently
// not used in ColrSpace palette names; exit if such characters are found
if (any(strpos(pal0, ("%", "*", "#",`"""')))) return
// case 1: palette exists in ColrSpace
PAL = tokens(pal0)
if ((pal=S.pexists(pal0))!="") {
p1 = tokens(pal)[1]
if (PAL[1]!=p1) {
// if first word is not an exact match, check whether first word has
// an exact match in named colors; if yes, assume pal to be a list
// of colors (such that, e.g., "blue" will be interpreted as color
// "blue" and not as palette "Blues")
if (S.cvalid(PAL[1])==PAL[1]) return
}
pal0 = pal // return expanded palette name
ptype = 2
return
}
// case 2: check whether first word matches a palette
if ((pal=S.pexists(PAL[1]))!="") {
p1 = tokens(pal)[1]
if (PAL[1]!=p1) {
// (see note above)
if (S.cvalid(PAL[1])==PAL[1]) return
}
// modify palette such that first word is expanded
PAL[1] = p1
pal0 = invtokens(PAL)
ptype = 2
}
}
real scalar smatch(string scalar scheme, string vector schemes)
{
real scalar i, n
n = length(schemes)
for (i=1; i<=n; i++) {
if (_smatch(scheme, schemes[i])) return(1)
}
return(0)
}
real scalar _smatch(string scalar A, string scalar B)
{
if (A==substr(B, 1, strlen(A))) {
A = B
return(1)
}
return(0)
}
end
exit
*! version 1.0.3 27dec2018 Ben Jann
program colorpalette9
version 9.2
capt _on_colon_parse `0'
if _rc==0 {
local 0 `"`s(before)'"'
local rhs `"`s(after)'"'
_parse comma lhs 0 : 0
if `"`lhs'"'!="" error 198
if `"`rhs'"'=="" local rhs s2
local palettes
local palette
local space
while (`"`rhs'"'!="") {
gettoken p rhs : rhs, parse("/") quotes bind
if `"`p'"'=="/" {
local palettes `"`palettes'`"`palette'"' "'
local palette
local space
continue
}
local palette `"`palette'`space'`p'"'
local space " "
}
if `"`palette'"'!="" {
local palettes `"`palettes'`"`palette'"'"'
}
Graph2 `palettes' `0'
exit
}
gettoken p rhs : 0, parse("/") quotes bind
if `"`rhs'"'!="" {
local rhs: copy local 0
local 0
local palettes
local palette
local space
while (`"`rhs'"'!="") {
gettoken p rhs : rhs, parse("/") quotes bind
if `"`p'"'=="/" {
local palettes `"`palettes'`"`palette'"' "'
local palette
local space
continue
}
local palette `"`palette'`space'`p'"'
local space " "
}
if `"`palette'"'!="" { // handle syntax after last slash
_parse comma p rhs : palette
if `"`p'"'!="" {
Parse_Graph_Opts `rhs' // returns rhs and 0
local palettes `"`palettes'`"`p'`rhs'"'"'
}
else local 0: copy local palette
}
Palette_Get2 `palettes' `0'
}
else {
Palette_Get `0'
}
if "`GRAPH'"=="" {
tempname hcurrent
_return hold `hcurrent'
_return restore `hcurrent', hold // make copy
Graph, `GROPTS'
_return restore `hcurrent'
}
end
/*----------------------------------------------------------------------------*/
/* retrieve palette(s) */
/*----------------------------------------------------------------------------*/
program Parse_Graph_Opts
syntax [, noGRaph GRopts(passthru) TItle(passthru) rows(passthru) * ]
if `"`graph'`gropts'`title'`rows'"'!="" {
c_local 0 `", `graph' `gropts' `title' `rows'"'
}
if `"`options'"'!="" c_local rhs `", `options'"'
else c_local rhs
end
program Palette_Get2, rclass
_parse comma palettes 0 : 0
syntax [, noGRaph GRopts(str asis) TItle(passthru) rows(passthru) * ]
c_local GRAPH "`graph'"
c_local GROPTS `"`rows' `title' `gropts'"'
local space
local i 0
foreach p of local palettes {
_parse comma pnm popts : p
if `"`popts'"'=="" local popts ,
Palette_Get `pnm' `popts' `options'
local plist `"`plist'`space'`r(p)'"'
local space " "
forv j = 1/`r(n)' {
local ++i
return local p`i' `"`r(p`j')'"'
            return local p`i'info `"`r(p`j'info)'"'
}
}
return scalar n = `i'
return local p `"`plist'"'
return local pname "custom"
return local ptype "color"
end
program Palette_Get
_parse comma palette 0 : 0
syntax [, noGRaph GRopts(str asis) TItle(passthru) rows(passthru) ///
IPolate(numlist max=1 integer >=1) * ]
c_local GRAPH "`graph'"
c_local GROPTS `"`rows' `title' `gropts'"'
_Palette_Get `palette', `options'
if "`ipolate'"!="" {
Palette_Ipolate `ipolate'
}
end
program Palette_Ipolate, rclass
args n
mata: st_local("P", _invtokens_quoted(ipolate_colors(tokens(st_global("r(p)"))', `n')))
local P: list clean P
local i 0
foreach p of local P {
local ++i
return local p`i' `"`p'"'
}
return local p `"`P'"'
if `"`r(pnote)'"'!="" {
return local pnote `"`r(pnote)' (interpolated)"'
}
else return local pnote "(interpolated)"
return local pname `"`r(pname)'"'
return local ptype "color"
return scalar n = `n'
end
program _Palette_Get, rclass
syntax [anything(name=palette id="palette" everything equalok)] ///
[, N(numlist max=1 integer >=1) Select(numlist integer >=1) Reverse ///
INtensity(numlist >=0 <=255) OPacity(numlist int >=0 <=100) * ]
local n_in: list sizeof intensity
local n_op: list sizeof opacity
if `n_op' {
if c(stata_version)<15 {
di as err "{bf:opacity()} requires Stata 15"
exit 9
}
}
// get palette
if `"`palette'"'=="" local palette s2
local islist = (`: list sizeof palette'!=1)
if `islist'==0 {
capt confirm name _`palette'
if _rc local islist 1
}
if `islist'==0 {
capt __Palette_Get `palette', n(`n') `options'
if _rc==199 {
capt confirm name `palette'
if _rc { // numeric palette name: cannot be a named style
di as err `"palette `palette' not found"'
exit 198
}
local islist 1
}
else if _rc { // display error message
__Palette_Get `palette', n(`n') `options'
}
}
if `islist' {
local i 0
foreach p of local palette {
local ++i
local p`i' `"`p'"'
}
local n `i'
local palette "custom"
}
// select/order
if "`reverse'"!="" {
if "`select'"=="" {
qui numlist "`n'(-1)1"
local select `r(numlist)'
}
else {
local select0 `select'
local select
foreach s of local select0 {
local select `s' `select'
}
}
}
else if "`select'"=="" {
qui numlist "1/`n'"
local select `r(numlist)'
}
// opacity/intensity: prepare lists
local n: list sizeof select
if `n_in' {
if `n_in'<`n' {
mata: _listrecycle("intensity", `n')
local n_in `n'
}
else if `n_in'>`n' {
mata: _listrecycle("select", `n_in')
local n `n_in'
}
}
if `n_op' {
if `n_op'<`n' {
mata: _listrecycle("opacity",`n')
local n_op `n'
}
else if `n_op'>`n' {
mata: _listrecycle("select",`n_op')
local n `n_op'
if `n_in' {
mata: _listrecycle("intensity",`n_op')
local n_in `n_op'
}
}
}
// return palette
local plist
local i 0
foreach j of local select {
local pj `"`p`j''"'
if `"`pj'"'!="" {
local ++i
mata: makeRGB("pj")
gettoken in intensity : intensity
if `"`in'"'!="" {
if strpos(`"`pj'"',"*")==0 local pj `"`pj'*`in'"'
}
gettoken op opacity : opacity
if `"`op'"'!="" {
if strpos(`"`pj'"',"%")==0 local pj `"`pj'%`op'"'
}
local plist `"`plist'`space'`"`pj'"'"'
local space " "
return local p`i' `"`pj'"'
return local p`i'info `"`p`j'info'"'
}
}
local n `i'
local plist: list clean plist
return local p `"`plist'"'
return local pnote `"`note'"'
return local pname `"`palette'"'
return local ptype "color"
return scalar n = `n'
end
program __Palette_Get
gettoken palette 0 : 0, parse(" ,")
syntax [, n(numlist max=1 integer >0) * ]
colorpalette9_`palette', n(`n') `options'
if `"`P'"'!="" { // palettes that define P (and I)
mata: st_local("P", _parse_palette(st_local("P")))
mata: st_local("I", _parse_palette(st_local("I")))
local min 1
local max: list sizeof P
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
}
else { // palettes that define P#
local min 1
while (`"`P`min''"'=="") {
local ++min
if `min'>100 {
c_local n 0
exit // emergency exit
}
}
local max `min'
while (`"`P`max''"'!="") {
local ++max
}
local --max
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
local P `"`P`n''"'
mata: st_local("P", _parse_palette(st_local("P")))
mata: st_local("I", _parse_palette(st_local("I")))
}
local i 0
foreach c of local P {
gettoken info I : I
local ++i
if `i'>`n' continue, break
c_local p`i' `"`c'"'
c_local p`i'info `"`info'"'
}
c_local note `"`note'"'
c_local n `n'
end
/*----------------------------------------------------------------------------*/
/* graph of single palette */
/*----------------------------------------------------------------------------*/
program Graph
syntax [, rows(int 5) TItle(passthru) * ]
local n = r(n)
local c = max(3,ceil(sqrt(`n'/12*3)))
local cut = max(`rows',ceil(`n'/`c'))
local rows = max(5, `cut')
local c = max(3,ceil(`n'/`rows'))
local size = (100-10)/(1.5*`rows')
local lblgap = `size'/6
local infogap = `size'/3.75
local rgap = (100-5)/`c'
local j 1
local r 0
forv i=1/`n' {
if `i'>(`cut'*`j') {
local ++j
local r 0
}
local ++r
if `"`r(p`i')'"'=="" continue
local jlab `j'
local plots `plots' (scatteri `r' `j', mlw(vthin) mlc(black) ///
msymbol(square) msize(`size') mfcolor(`"`r(p`i')'"'))
local pnum `pnum' `r' `j' "`i'"
local lbl `lbl' `r' `jlab' `"`r(p`i')'"'
if `"`r(p`i'info)'"'!="" {
local info `info' `r' `jlab' `"`r(p`i'info)'"'
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
if `rows'>=30 {
local pnumsize tiny
local lblsize tiny
local infosize half_tiny
}
else if `rows'>=15 {
local pnumsize small
local lblsize vsmall
local infosize tiny
}
else if `rows'>=10 {
local pnumsize small
local lblsize small
local infosize vsmall
}
else {
local pnumsize medium
local lblsize medsmall
local infosize small
}
local pnum (scatteri `pnum', ms(i) msize(`size') mlabpos(9) ///
mlabgap(`lblgap') mlabsize(`pnumsize') mlabcolor(black))
if `"`lbl'"'!="" {
local lbl (scatteri `lbl', ms(i) msize(`size') mlabpos(3) ///
mlabgap(`lblgap') mlabsize(`lblsize') mlabcolor(black))
}
if `"`info'"'!="" {
local info (scatteri `info', ms(i) msize(`size') mlabpos(4) ///
mlabgap(`infogap') mlabsize(`infosize') mlabcolor(black))
}
else local info
local l = `size'/2 + 9
local r = `size'/2 + `rgap'
local b = `size'/2 + 5
local t = `size'/2 + 4
if `"`title'"'=="" {
if `"`r(pnote)'"'=="" local title title(`"`r(pname)'"')
else local title title(`"`r(pname)' `r(pnote)'"')
}
two `plots' `pnum' `lbl' `info' , `title' scheme(s2color) ///
legend(off) ylabel(none) graphr(color(white)) ///
xlabel(none) xscale(range(1 3) off) ///
yscale(range(1 `rows') off reverse) ///
plotr(margin(`l' `r' `b' `t')) graphr(margin(0 0 0 3)) `options'
end
/*----------------------------------------------------------------------------*/
/* graph of multiple palettes */
/*----------------------------------------------------------------------------*/
program Graph2
_parse comma palettes 0 : 0
syntax [, TItle(passthru) LABels(str asis) PLabels(str asis) ///
GRopts(str asis) LColor(str) LWidth(str) VERTical HORizontal * ]
if `"`labels'"'!="" local plabels `"`labels'"'
local orientation `vertical' `horizontal'
if "`orientation'"=="" local orientation horizontal
if `"`lcolor'"'!="" {
local lcolor lc(`lcolor' ..)
if c(stata_version)>=15 local lcolor `lcolor' lalign(center ..)
}
if `"`lwidth'"'!="" {
local lwidth lw(`lwidth' ..)
}
else local lwidth lw(vthin ..)
local np: list sizeof palettes
local r = 4 * `np'
if (_N > `r') {
preserve
qui keep in 1/`r' // remove extra observations to speed up
}
else if (_N < `r') {
preserve
qui set obs `r'
}
tempvar y
qui generate `y' = ceil(_n/4) - .35 + inlist(mod(_n-1,4)+1,3,4)*.7 in 1/`r'
local nxvars 0
local i 0
local plots
local ylab
foreach p of local palettes {
local ++i
_parse comma pnm popts : p
if `"`pnm'"'=="." continue
if `"`popts'"'=="" local popts ,
Palette_Get `pnm' `popts' `options'
local colors `"`r(p)'"'
local n = r(n)
gettoken plab plabels : plabels
if `"`plab'"'=="" {
if `"`r(pnote)'"'=="" local plab `"`r(pname)'"'
else local plab `"`r(pname)' `r(pnote)'"'
}
local ylab `ylab' `i' `"`plab'"'
while (`nxvars'<`n') {
local xx0 `xx`nxvars''
if mod(`nxvars',20)==0 local xx0
local ++nxvars
tempvar x`nxvars'
local xx`nxvars' `xx0' `x`nxvars''
}
local from = (`i' - 1) * 4 + 1
local to = `i' * 4
local n0 0
while (`n0'<`n') {
local ctmp
while (1) {
local ++n0
gettoken ci colors : colors, quotes
local ctmp `"`ctmp'`ci' "'
if `n0'==`n' continue, break
if mod(`n0',20)==0 continue, break
}
local plots `plots' ///
(scatter `xx`n0'' `y' in `from'/`to', color(`ctmp') ///
`lcolor' `lwidth' fintensity(100 ..) ///
recast(area) `orientation' nodropbase)
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
forv i=1/`nxvars' {
qui gen `x`i'' = `i' + inlist(mod(_n-1,4)+1,2,3) - .5
}
local xlab = ceil(`nxvars'/20)
numlist "1(`xlab')`nxvars'"
local xlab `r(numlist)'
if "`orientation'"=="horizontal" {
local xscale xscale(lstyle(none) range(1 `nxvars'))
local xlabel xlabel(`xlab', notick)
local yscale yscale(lstyle(none) range(0.65 `np'.35) reverse)
local ylabel ylabel(`ylab', nogrid notick angle(hor))
}
else {
local xscale xscale(lstyle(none) range(0.65 `np'.35) alt)
local xlabel xlabel(`ylab', notick)
local yscale yscale(lstyle(none) range(1 `nxvars') reverse)
local ylabel ylabel(`xlab', nogrid notick angle(hor))
}
twoway `plots', `xscale' `xlabel' xti("") `yscale' `ylabel' yti("") ///
legend(off) graphr(margin(l=2 t=2 b=1 r=2) color(white)) ///
scheme(s2color) `title' `gropts'
end
/*----------------------------------------------------------------------------*/
/* palettes */
/*----------------------------------------------------------------------------*/
program colorpalette9_s1
c_local P dkgreen,orange_red,navy,maroon,teal,sienna,orange,magenta,cyan,red,lime,brown,purple,olive_teal,ltblue
end
program colorpalette9_s1r
c_local P yellow,lime,midblue,magenta,orange,red,ltblue,sandb,mint,olive_teal,orange_red,blue,pink,teal,sienna
end
program colorpalette9_s2
c_local P navy,maroon,forest_green,dkorange,teal,cranberry,lavender,khaki,sienna,emidblue,emerald,brown,erose,gold,bluishgray
end
program colorpalette9_economist
c_local P edkblue,emidblue,eltblue,emerald,erose,ebblue,eltgreen,stone,navy,maroon,brown,lavender,teal,cranberry,khaki
end
program colorpalette9_mono
c_local P gs6,gs10,gs8,gs4,black,gs12,gs2,gs7,gs9,gs11,gs13,gs5,gs3,gs14,gs15
end
program colorpalette9_cblind
c_local P #000000,#999999,#E69F00,#56B4E9,#009E73,#F0E442,#0072B2,#D55E00,#CC79A7
c_local I black,grey,orange,sky blue,bluish green,yellow,blue,vermillion,reddish purple
end
program colorpalette9_plottig
c_local P black, ///
97 156 255, /// plb1 - blue
0 192 175, /// plg1 - light greenish
201 152 0, /// ply1 - yellow/brownish
185 56 255, /// pll1 - purple
248 118 109, /// plr1 - red
0 176 246, /// plb2 - bluish
0 186 56, /// plg2 - greenish
163 165 0, /// ply2 - yellow/brownish
231 107 243, /// pll2 - purple
255 103 164, /// plr2 - red
0 188 216, /// plb3 - blue
107 177 0, /// plg3 - green
229 135 0, /// ply3 - orange
253 97 209 // pll3 - purple
end
program colorpalette9_538
c_local P 3 144 214, /// 538b
254 48 11, /// 538r
120 172 68, /// 538g
247 187 5, /// 538y
229 138 233, /// 538m
254 133 3, /// 538o
242 242 242, /// 538background
205 205 206, /// 538axis
155 155 155, /// 538label
162 204 246, /// 538bs6 (ci)
254 181 167, /// 538rs6 (ci2)
42 161 237, /// 538bs1 (contour_begin)
255 244 241 // 538rs11 (contour_end)
c_local I ,,,,,,(background),(axes etc.),(labels),(ci),(ci2),(contour_begin),(contour_end)
end
program colorpalette9_tfl
c_local P 220 36 31, /// tflred
0 25 168, /// tflblue
0 114 41, /// tflgreen
232 106 16, /// tflorange
137 78 36, /// tflbrown
117 16 86, /// tflpurple
255 206 0, /// tflyellow
65 75 86 // tflgrey
end
program colorpalette9_mrc
c_local P 33 103 126, /// mrcblue
106 59 119, /// mrcpurple
130 47 90, /// mrcred
208 114 50, /// mrcorange
255 219 0, /// mrcyellow
181 211 52, /// mrcgreen
138 121 103 // mrcgrey
end
program colorpalette9_burd
c_local P 33 102 172, /// Bu from RdBu-7
178 24 43, /// Rd from RdBu-7
27 120 55, /// Gn from PRGn-7
230 97 1, /// Or from PuOr-7
1 102 94, /// BG from BrBG-7
197 27 125, /// Pi from PiYG-7
118 42 131, /// Pu from PuOr-7
140 81 10, /// Br from BrBG-7
77 77 77, /// Gy from RdGy-7
103 169 207, /// (ci_arealine)
209 229 240, /// (ci_area)
239 138 98, /// (ci2_arealine)
253 219 199 // (ci2_area)
c_local I Bu from RdBu-7,Rd from RdBu-7,Gn from PRGn-7,Or from PuOr-7, ///
BG from BrBG-7,Pi from PiYG-7,Pu from PuOr-7,Br from BrBG-7, ///
Gy from RdGy-7,(ci_arealine),(ci_area),(ci2_arealine),(ci2_area)
end
program colorpalette9_lean
c_local P gs14,gs10,gs12,gs8,gs16,gs13,gs10,gs7,gs4,gs0,gs14,gs10,gs12,gs0,gs16
end
program colorpalette9_hue
// translation of pal_hue() from the -scales- package by Hadley Wickham in R
// see https://github.com/hadley/scales
syntax [, n(int 15) Hue(numlist max=2) Chroma(numlist max=1 >=0) ///
Luminance(numlist max=1 >=0 <=100) DIRection(int 1) ///
hstart(real 0) * ] // hstart() not documented; don't see any use for it
if !inlist(`direction',-1,1) {
di as err "direction must be 1 or -1"
exit 198
}
gettoken h1 h2 : hue
gettoken h2 : h2
gettoken c : chroma
gettoken l : luminance
if "`h1'"=="" local h1 = 0 + 15
if "`h2'"=="" local h2 = 360 + 15
if "`c'"=="" local c 100
if "`l'"=="" local l 65
if (mod(`h2'-`h1',360) < 1) local h2 = `h2' - 360/`n'
local P
forv i=1/`n'{
local x = `h1' + cond(`n'<=1, 0, (`i'-1) * (`h2'-`h1') / (`n'-1))
local h = mod(`x' + `hstart', 360) * `direction'
mata: st_local("P", st_local("P") + "`comma'" + HCL_to_RGB(`h', `c', `l'))
mata: st_local("I", st_local("I") + "`comma'hcl " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `h', `c', `l'))))
local comma ","
}
c_local P `P'
c_local I `I'
end
program colorpalette9_hcl
syntax [, n(int 15) Hue(numlist max=2) Chroma(numlist max=2 >=0) ///
Luminance(numlist max=2 >=0 <=100) POWer(numlist max=2 >0) ///
QUALitative intense dark light pastel ///
SEQuential blues greens grays oranges purples reds ///
heat heat2 TERrain terrain2 viridis plasma redblue ///
DIVerging bluered bluered2 bluered3 greenorange browngreen pinkgreen purplegreen * ]
local pal `qualitative' `intense' `dark' `light' `pastel' ///
`sequential' `blues' `greens' `grays' `oranges' `purples' `reds' ///
`heat' `heat2' `terrain' `terrain2' `viridis' `plasma' `redblue' ///
`diverging' `bluered' `bluered2' `bluered3' `greenorange' `browngreen' `pinkgreen' `purplegreen'
if `: list sizeof pal'>1 {
di as err `"only one of '`pal'' allowed"'
exit 198
}
if "`pal'"=="" local pal qualitative
if "`pal'"=="qualitative" local ptype qualitative
else if "`pal'"=="intense" local ptype qualitative
else if "`pal'"=="dark" local ptype qualitative
else if "`pal'"=="light" local ptype qualitative
else if "`pal'"=="pastel" local ptype qualitative
else if "`pal'"=="diverging" local ptype diverging
else if "`pal'"=="bluered" local ptype diverging
else if "`pal'"=="bluered2" local ptype diverging
else if "`pal'"=="bluered3" local ptype diverging
else if "`pal'"=="greenorange" local ptype diverging
else if "`pal'"=="browngreen" local ptype diverging
else if "`pal'"=="pinkgreen" local ptype diverging
else if "`pal'"=="purplegreen" local ptype diverging
else local ptype sequential
gettoken h1 h2 : hue
gettoken h2 : h2
gettoken c1 c2 : chroma
gettoken c2 : c2
gettoken l1 l2 : luminance
gettoken l2 : l2
gettoken p1 p2 : power
gettoken p2 : p2
if "`ptype'"=="qualitative" {
if "`pal'"=="qualitative" local def 15 . 60 70
else if "`pal'"=="intense" local def 15 . 100 65
else if "`pal'"=="dark" local def 15 . 80 60
else if "`pal'"=="light" local def 15 . 50 80
else if "`pal'"=="pastel" local def 15 . 35 85
foreach m in h1 h2 c1 l1 {
gettoken d def : def
if "``m''"=="" {
if `"`d'"'=="." local `m' = `h1' + 360*(`n'-1)/`n'
else local `m' `d'
}
}
forv i=1/`n'{
if `n'==1 local H `h1'
else local H = `h1' + (`i'-1) * (`h2'-`h1') / (`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + HCL_to_RGB(`H', `c1', `l1'))
mata: st_local("I", st_local("I") + "`comma'hcl " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `c1', `l1'))))
local comma ","
}
}
else if "`ptype'"=="sequential" {
if "`pal'"=="sequential" local def 260 . 80 10 25 95 1 .
else if "`pal'"=="blues" local def 260 . 80 10 25 95 1 .
else if "`pal'"=="greens" local def 145 125 80 10 25 95 1 .
else if "`pal'"=="grays" local def 0 . 0 0 15 95 1 .
else if "`pal'"=="oranges" local def 40 . 100 10 50 95 1 .
else if "`pal'"=="purples" local def 280 . 70 10 20 95 1 .
else if "`pal'"=="reds" local def 10 20 80 10 25 95 1 .
else if "`pal'"=="heat" local def 0 90 100 30 50 90 0.2 1.0
else if "`pal'"=="heat2" local def 0 90 80 30 30 90 0.2 2.0
else if "`pal'"=="terrain" local def 130 0 80 0 60 95 0.1 1.0
else if "`pal'"=="terrain2" local def 130 30 65 0 45 90 0.5 1.5
else if "`pal'"=="viridis" local def 300 75 35 95 15 90 0.8 1.2
else if "`pal'"=="plasma" local def 100 100 60 100 15 95 2.0 0.9
else if "`pal'"=="redblue" local def 0 -100 80 40 40 75 1.0 1.0
foreach m in h1 h2 c1 c2 l1 l2 p1 p2 {
gettoken d def : def
if "``m''"=="" {
if "`d'"=="." local `m' `last'
else local `m' `d'
}
local last ``m''
}
forv j=1/`n'{
if `n'==1 local i 1
else local i = (`n'-`j')/(`n'-1)
local H = `h2' - `i' * (`h2'-`h1')
local C = `c2' - `i'^`p1' * (`c2'-`c1')
local L = `l2' - `i'^`p2' * (`l2'-`l1')
mata: st_local("P", st_local("P") + "`comma'" + HCL_to_RGB(`H', `C', `L'))
mata: st_local("I", st_local("I") + "`comma'hcl " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `C', `L'))))
local comma ","
}
}
else if "`ptype'"=="diverging" {
if "`pal'"=="diverging" local def 260 0 80 30 95 1 .
else if "`pal'"=="bluered" local def 260 0 80 30 95 1 .
else if "`pal'"=="bluered2" local def 260 0 100 50 95 1 .
else if "`pal'"=="bluered3" local def 180 330 60 75 95 1 .
else if "`pal'"=="greenorange" local def 130 45 100 70 95 1 .
else if "`pal'"=="browngreen" local def 55 160 60 35 95 1 .
else if "`pal'"=="pinkgreen" local def 340 128 90 35 95 1 .
else if "`pal'"=="purplegreen" local def 300 128 60 30 95 1 .
foreach m in h1 h2 c1 l1 l2 p1 p2 {
gettoken d def : def
if "``m''"=="" {
if "`d'"=="." local `m' `last'
else local `m' `d'
}
local last ``m''
}
forv j=1/`n'{
if `n'==1 local i 1
else local i = (`n' - 2*`j' + 1) / (`n'-1)
local H = cond(`i'>0, `h1', `h2')
local C = `c1' * abs(`i')^`p1'
local L = `l2' - abs(`i')^`p2' * (`l2'-`l1')
mata: st_local("P", st_local("P") + "`comma'" + HCL_to_RGB(`H', `C', `L'))
mata: st_local("I", st_local("I") + "`comma'hcl " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `C', `L'))))
local comma ","
}
}
c_local P `P'
c_local I `I'
c_local note `pal'
end
program colorpalette9_hsv
syntax [, n(int 15) Hue(numlist max=2) SATuration(numlist max=2 >=0 <=1) ///
VALue(numlist max=2 >=0 <=1) POWer(numlist max=2 >0) ///
QUALitative intense dark light pastel RAINbow ///
SEQuential blues greens grays oranges purples reds heat terrain ///
DIVerging bluered bluered2 bluered3 greenorange browngreen pinkgreen purplegreen ///
heat0 terrain0 * ] // heat0/terrain0 not documented (same as heat/terrain in spmap)
local pal `qualitative' `intense' `dark' `light' `pastel' `rainbow' ///
`sequential' `blues' `greens' `grays' `oranges' `purples' `reds' `heat' `terrain' ///
`diverging' `bluered' `bluered2' `bluered3' `greenorange' `browngreen' `pinkgreen' `purplegreen' ///
`heat0' `terrain0'
if `: list sizeof pal'>1 {
di as err `"only one of '`pal'' allowed"'
exit 198
}
if "`pal'"=="" local pal qualitative
if "`pal'"=="qualitative" local ptype qualitative
else if "`pal'"=="intense" local ptype qualitative
else if "`pal'"=="dark" local ptype qualitative
else if "`pal'"=="light" local ptype qualitative
else if "`pal'"=="pastel" local ptype qualitative
else if "`pal'"=="rainbow" local ptype qualitative
else if "`pal'"=="diverging" local ptype diverging
else if "`pal'"=="bluered" local ptype diverging
else if "`pal'"=="bluered2" local ptype diverging
else if "`pal'"=="bluered3" local ptype diverging
else if "`pal'"=="greenorange" local ptype diverging
else if "`pal'"=="browngreen" local ptype diverging
else if "`pal'"=="pinkgreen" local ptype diverging
else if "`pal'"=="purplegreen" local ptype diverging
else if "`pal'"=="heat0" local ptype heat0
else if "`pal'"=="terrain0" local ptype terrain0
else local ptype sequential
gettoken h1 h2 : hue
gettoken h2 : h2
gettoken s1 s2 : saturation
gettoken s2 : s2
gettoken v1 v2 : value
gettoken v2 : v2
gettoken p1 p2 : power
gettoken p2 : p2
if "`ptype'"=="qualitative" {
if "`pal'"=="qualitative" local def 0 360*(`n'-1)/`n' .4 .85
else if "`pal'"=="intense" local def 0 360*(`n'-1)/`n' .6 .9
else if "`pal'"=="dark" local def 0 360*(`n'-1)/`n' .6 .7
else if "`pal'"=="light" local def 0 360*(`n'-1)/`n' .3 .9
else if "`pal'"=="pastel" local def 0 360*(`n'-1)/`n' .2 .9
else if "`pal'"=="rainbow" local def 0 360*(`n'-1)/`n' 1 1
foreach m in h1 h2 s1 v1 {
gettoken d def : def
if "``m''"=="" local `m' `d'
}
forv i=1/`n'{
if `n'==1 local H `h1'
else local H = `h1' + (`i'-1) * (`h2'-`h1') / (`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `s1', `v1'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `s1', `v1'))))
local comma ","
}
}
else if "`ptype'"=="sequential" {
if "`pal'"=="sequential" local def 240 . .8 .05 .6 1 1.2 .
else if "`pal'"=="blues" local def 240 . .8 .05 .6 1 1.2 .
else if "`pal'"=="greens" local def 140 120 1 .1 .3 1 1.2 .
else if "`pal'"=="grays" local def 0 . 0 0 .1 .95 1 .
else if "`pal'"=="oranges" local def 30 . 1 .1 .9 1 1.2 .
else if "`pal'"=="purples" local def 270 . 1 .1 .6 1 1.2 .
else if "`pal'"=="reds" local def 0 20 1 .1 .6 1 1.2 .
else if "`pal'"=="heat" local def 0 60 1 .2 1 1 0.3 .
else if "`pal'"=="terrain" local def 120 0 1 0 .65 .95 0.7 1.5
foreach m in h1 h2 s1 s2 v1 v2 p1 p2 {
gettoken d def : def
if "``m''"=="" {
if "`d'"=="." local `m' `last'
else local `m' `d'
}
local last ``m''
}
forv j=1/`n'{
if `n'==1 local i 1
else local i = (`n'-`j')/(`n'-1)
local H = `h2' - `i' * (`h2'-`h1')
local S = `s2' - `i'^`p1' * (`s2'-`s1')
local V = `v2' - `i'^`p2' * (`v2'-`v1')
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `S', `V'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `S', `V'))))
local comma ","
}
}
else if "`ptype'"=="diverging" {
if "`pal'"=="diverging" local def 240 0 .8 .6 .95 1.2 .
else if "`pal'"=="bluered" local def 240 0 .8 .6 .95 1.2 .
else if "`pal'"=="bluered2" local def 240 0 .6 .8 .95 1.2 .
else if "`pal'"=="bluered3" local def 175 320 .6 .8 .95 1.2 .
else if "`pal'"=="greenorange" local def 130 40 1 .8 .95 1.2 .
else if "`pal'"=="browngreen" local def 40 150 .8 .6 .95 1.2 .
else if "`pal'"=="pinkgreen" local def 330 120 .9 .6 .95 1.2 .
else if "`pal'"=="purplegreen" local def 290 120 .7 .5 .95 1.2 .
foreach m in h1 h2 s1 v1 v2 p1 p2 {
gettoken d def : def
if "``m''"=="" {
if "`d'"=="." local `m' `last'
else local `m' `d'
}
local last ``m''
}
forv j=1/`n'{
if `n'==1 local i 1
else local i = (`n' - 2*`j' + 1) / (`n'-1)
local H = cond(`i'>0, `h1', `h2')
local S = `s1' * abs(`i')^`p1'
local V = `v2' - abs(`i')^`p2' * (`v2'-`v1')
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `S', `V'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `S', `V'))))
local comma ","
}
}
else if "`ptype'"=="heat0" {
if "`h1'"=="" local h1 = 0
if "`h2'"=="" local h2 = 60
if "`s1'"=="" local s1 1
if "`s2'"=="" local s2 0
if "`v1'"=="" local v1 1
local j = trunc(`n' / 4)
local i = `n' - `j'
forv ii=1/`i'{
local H = `h1' + cond(`i'==1, 0, (`ii'-1) * (`h2'-`h1') / (`i'-1))
local S = `s1'
local V = `v1'
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `S', `V'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `S', `V'))))
local comma ","
}
local S1 = `s1' - (`s1'-`s2') / (2*`j')
local S2 = `s2' + (`s1'-`s2') / (2*`j')
forv ii=1/`j' {
local H = `h2'
local S = `S1' + cond(`j'==1, 0, (`ii'-1) * (`S2'-`S1') / (`j'-1))
local V = `v1'
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `S', `V'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `S', `V'))))
local comma ","
}
}
else if "`ptype'"=="terrain0" {
local h3 `h2'
if "`h1'"=="" local h1 = 120
if "`h3'"=="" local h3 = 0
local h2 = (`h1' + `h3')/2 // 60
if "`s1'"=="" local s1 1
if "`s2'"=="" local s2 0
if "`v1'"=="" local v1 .65
if "`v2'"=="" local v2 .9
local v3 = `v2' + (1-`v2')/2 // .95
local k = trunc(`n' / 2)
forv i=1/`k'{
local H = `h1' + cond(`k'==1, 0, (`i'-1) * (`h2'-`h1') / (`k'-1))
local S = `s1'
local V = `v1' + cond(`k'==1, 0, (`i'-1) * (`v2'-`v1') / (`k'-1))
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `S', `V'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `S', `V'))))
local comma ","
}
local k = `n' - `k' + 1
forv i=2/`k' {
local H = `h2' + (`i'-1) * (`h3'-`h2') / (`k'-1)
local S = `s1' + (`i'-1) * (`s2'-`s1') / (`k'-1)
local V = `v2' + (`i'-1) * (`v3'-`v2') / (`k'-1)
mata: st_local("P", st_local("P") + "`comma'" + HSV_to_RGB(`H', `S', `V'))
mata: st_local("I", st_local("I") + "`comma'hsv " + ///
strtrim(stritrim(sprintf("%9.3g %9.3g %9.3g", `H', `S', `V'))))
local comma ","
}
}
c_local P `P'
c_local I `I'
c_local note `pal'
end
program colorpalette9_ptol
syntax [, QUALitative DIVerging RAINbow * ]
local pal `qualitative' `diverging' `rainbow'
if `:list sizeof pal'>1 {
di as err "only one scheme allowed"
exit 198
}
if "`pal'"=="" local pal qualitative // default
if "`pal'"=="qualitative" {
c_local P1 68 119 170
c_local P2 68 119 170,204 102 119
c_local P3 68 119 170,221 204 119,204 102 119
c_local P4 68 119 170,17 119 51,221 204 119,204 102 119
c_local P5 51 34 136,136 204 238,17 119 51,221 204 119,204 102 119
c_local P6 51 34 136,136 204 238,17 119 51,221 204 119,204 102 119,170 68 153
c_local P7 51 34 136,136 204 238,68 170 153,17 119 51,221 204 119,204 102 119,170 68 153
c_local P8 51 34 136,136 204 238,68 170 153,17 119 51,153 153 51,221 204 119,204 102 119,170 68 153
c_local P9 51 34 136,136 204 238,68 170 153,17 119 51,153 153 51,221 204 119,204 102 119,136 34 85,170 68 153
c_local P10 51 34 136,136 204 238,68 170 153,17 119 51,153 153 51,221 204 119,102 17 0,204 102 119,136 34 85,170 68 153
c_local P11 51 34 136,102 153 204,136 204 238,68 170 153,17 119 51,153 153 51,221 204 119,102 17 0,204 102 119,136 34 85,170 68 153
c_local P12 51 34 136,102 153 204,136 204 238,68 170 153,17 119 51,153 153 51,221 204 119,102 17 0,204 102 119,170 68 102,136 34 85,170 68 153
}
else if "`pal'"=="diverging" {
c_local P3 153 199 236,255 250 210,245 162 117
c_local P4 0 139 206,180 221 247,249 189 126,208 50 50
c_local P5 0 139 206,180 221 247,255 250 210,249 189 126,208 50 50
c_local P6 58 137 201,153 199 236,230 245 254,255 227 170,245 162 117,210 77 62
c_local P7 58 137 201,153 199 236,230 245 254,255 250 210,255 227 170,245 162 117,210 77 62
c_local P8 58 137 201,119 183 229,180 221 247,230 245 254,255 227 170,249 189 126,237 135 94,210 77 62
c_local P9 58 137 201,119 183 229,180 221 247,230 245 254,255 250 210,255 227 170,249 189 126,237 135 94,210 77 62
c_local P10 61 82 161,58 137 201,119 183 229,180 221 247,230 245 254,255 227 170,249 189 126,237 135 94,210 77 62,174 28 62
c_local P11 61 82 161,58 137 201,119 183 229,180 221 247,230 245 254,255 250 210,255 227 170,249 189 126,237 135 94,210 77 62,174 28 62
}
else if "`pal'"=="rainbow" {
c_local P4 64 64 150,87 163 173,222 167 58,217 33 32
c_local P5 64 64 150,82 157 183,125 184 116,227 156 55,217 33 32
c_local P6 64 64 150,73 140 194,99 173 153,190 188 72,230 139 51,217 33 32
c_local P7 120 28 129,63 96 174,83 158 182,109 179 136,202 184 67,231 133 50,217 33 32
c_local P8 120 28 129,63 86 167,75 145 192,95 170 159,145 189 97,216 175 61,231 124 48,217 33 32
c_local P9 120 28 129,63 78 161,70 131 193,87 163 173,109 179 136,177 190 78,223 165 58,231 116 47,217 33 32
c_local P10 120 28 129,63 71 155,66 119 189,82 157 183,98 172 155,134 187 106,199 185 68,227 156 55,231 109 46,217 33 32
c_local P11 120 28 129,64 64 150,65 108 183,77 149 190,91 167 167,110 179 135,161 190 86,211 179 63,229 148 53,230 104 45,217 33 32
c_local P12 120 28 129,65 59 147,64 101 177,72 139 194,85 161 177,99 173 153,127 185 114,181 189 76,217 173 60,230 142 52,230 100 44,217 33 32
}
c_local note `pal'
end
program colorpalette9_d3
syntax [, 10 20 20b 20c * ]
local pal `10' `20' `20b' `20c'
if `:list sizeof pal'>1 {
di as err "only one scheme allowed"
exit 198
}
if "`pal'"=="" local pal 10 // default
if "`pal'"=="10" {
c_local P #1f77b4,#ff7f0e,#2ca02c,#d62728,#9467bd,#8c564b,#e377c2,#7f7f7f,#bcbd22,#17becf
}
else if "`pal'"=="20" {
c_local P #1f77b4,#aec7e8,#ff7f0e,#ffbb78,#2ca02c,#98df8a,#d62728,#ff9896,#9467bd,#c5b0d5,#8c564b,#c49c94,#e377c2,#f7b6d2,#7f7f7f,#c7c7c7,#bcbd22,#dbdb8d,#17becf,#9edae5
}
else if "`pal'"=="20b" {
c_local P #393b79,#5254a3,#6b6ecf,#9c9ede,#637939,#8ca252,#b5cf6b,#cedb9c,#8c6d31,#bd9e39,#e7ba52,#e7cb94,#843c39,#ad494a,#d6616b,#e7969c,#7b4173,#a55194,#ce6dbd,#de9ed6
}
else if "`pal'"=="20c" {
c_local P #3182bd,#6baed6,#9ecae1,#c6dbef,#e6550d,#fd8d3c,#fdae6b,#fdd0a2,#31a354,#74c476,#a1d99b,#c7e9c0,#756bb1,#9e9ac8,#bcbddc,#dadaeb,#636363,#969696,#bdbdbd,#d9d9d9
}
c_local note `pal'
end
program colorpalette9_lin // values obtained from brewextra.ado v 1.0.0, 21MAR2016
syntax [, TABleau CARcolor food FEATures ACTivities FRUITs VEGetables DRINKs BRANDs ///
Algorithm * ]
local pal `tableau' `carcolor' `food' `features' `activities' `fruits' `vegetables' `drinks' `brands'
if `:list sizeof pal'>1 {
di as err "only one scheme allowed"
exit 198
}
if "`pal'"=="" local pal tableau // default
if "`pal'"=="tableau" {
c_local P #1f77b4,#ff7f0e,#2ca02c,#d62728,#9467bd,#8c564b,#e377c2,#7f7f7f,#bcbd22,#17becf,#aec7e8,#ffbb78,#98df8a,#ff9896,#c5b0d5,#c49c94,#f7b6d2,#c7c7c7,#dbdb8d,#9edae5
local algorithm
}
else if "`pal'"=="carcolor" {
if "`algorithm'"!="" {
c_local P 214 39 40,199 199 199,127 127 127,44 160 44,140 86 75,31 119 180
c_local I Red,Silver,Black,Green,Brown,Blue
}
else {
c_local P 214 39 40,199 199 199,127 127 127,44 160 44,140 86 75,31 119 180
c_local I Red,Silver,Black,Green,Brown,Blue
local algorithm Turkers
}
}
else if "`pal'"=="food" {
if "`algorithm'"!="" {
c_local P 31 119 180,255 127 14,140 86 75,44 160 44,255 187 120,219 219 141,214 39 40
c_local I Sour cream,Blue cheese dressing,Porterhouse steak,Iceberg lettuce,Onions (raw),Potato (baked),Tomato
}
else {
c_local P 199 199 199,31 119 180,140 86 75,152 223 138,219 219 141,196 156 148,214 39 40
c_local I Sour cream,Blue cheese dressing,Porterhouse steak,Iceberg lettuce,Onions (raw),Potato (baked),Tomato
local algorithm Turkers
}
}
else if "`pal'"=="features" {
if "`algorithm'"!="" {
c_local P 214 39 40,31 119 180,140 86 75,255 127 14,44 160 44
c_local I Speed,Reliability,Comfort,Safety,Efficiency
}
else {
c_local P 214 39 40,31 119 180,174 119 232,44 160 44,152 223 138
c_local I Speed,Reliability,Comfort,Safety,Efficiency
local algorithm Turkers
}
}
else if "`pal'"=="activities" {
if "`algorithm'"!="" {
c_local P 140 86 75,255 127 14,31 119 180,227 119 194,214 39 40
c_local I Sleeping,Working,Leisure,Eating,Driving
}
else {
c_local P 31 119 180,214 39 40,152 223 138,44 160 44,127 127 127
c_local I Sleeping,Working,Leisure,Eating,Driving
local algorithm Turkers
}
}
else if "`pal'"=="fruits" {
if "`algorithm'"!="" {
c_local P 44 160 44,188 189 34,31 119 180,214 39 40,148 103 189,255 187 120,255 127 14
c_local I Apple,Banana,Blueberry,Cherry,Grape,Peach,Tangerine
}
else {
c_local P 146 195 51,251 222 6,64 105 166,200 0 0,127 34 147,251 162 127,255 86 29
c_local I Apple,Banana,Blueberry,Cherry,Grape,Peach,Tangerine
local algorithm expert
}
}
else if "`pal'"=="vegetables" {
if "`algorithm'"!="" {
c_local P 255 127 14,44 160 44,188 189 34,148 103 189,140 86 75,152 223 138,214 39 40
c_local I Carrot,Celery,Corn,Eggplant,Mushroom,Olive,Tomato
}
else {
c_local P 255 141 61,157 212 105,245 208 64,104 59 101,239 197 143,139 129 57,255 26 34
c_local I Carrot,Celery,Corn,Eggplant,Mushroom,Olive,Tomato
local algorithm expert
}
}
else if "`pal'"=="drinks" {
if "`algorithm'"!="" {
c_local P 140 86 75,214 39 40,227 119 194,31 119 180,44 160 44,255 127 14,148 103 189
c_local I A&W Root Beer,Coca-Cola,Dr. Pepper,Pepsi,Sprite,Sunkist,Welch's Grape
}
else {
c_local P 119 67 6,254 0 0,151 37 63,1 106 171,1 159 76,254 115 20,104 105 169
c_local I A&W Root Beer,Coca-Cola,Dr. Pepper,Pepsi,Sprite,Sunkist,Welch's Grape
local algorithm expert
}
}
else if "`pal'"=="brands" {
if "`algorithm'"!="" {
c_local P 152 223 138,31 119 180,255 127 14,140 86 75,44 160 44,214 39 40,148 103 189
c_local I Apple,AT&T,Home Depot,Kodak,Starbucks,Target,Yahoo!
}
else {
c_local P 161 165 169,44 163 218,242 99 33,255 183 0,0 112 66,204 0 0,123 0 153
c_local I Apple,AT&T,Home Depot,Kodak,Starbucks,Target,Yahoo!
local algorithm expert
}
}
if `"`algorithm'"'!="" local algorithm (`algorithm')
c_local note `pal' `algorithm'
end
program colorpalette9_spmap
syntax [, n(numlist max=1 integer) ///
BLues GREENs GREYs REDs RAINbow heat TERrain TOPological * ]
local pal `blues' `greens' `greys' `reds' `rainbow' `heat' `terrain' `topological'
if `:list sizeof pal'>1 {
di as err "only one scheme allowed"
exit 198
}
if "`pal'"=="" local pal blues // default
if "`pal'"=="blues" {
if "`n'"=="" local n 15
local n = max(2,min(`n',99))
local P
forv i = 1/`n' {
local p = (`i'-1)/(`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + ///
HSV_to_RGB(208, .2 + .8*`p', 1 - .6*`p'))
local comma ","
}
c_local P `P'
c_local n `n'
}
else if "`pal'"=="greens" {
if "`n'"=="" local n 15
local n = max(2,min(`n',99))
local P
forv i = 1/`n' {
local p = (`i'-1)/(`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + ///
HSV_to_RGB(122 + 20*`p', .2 + .8*`p', 1 - .7*`p'))
local comma ","
}
c_local P `P'
c_local n `n'
}
else if "`pal'"=="greys" {
if "`n'"=="" local n 15
local n = max(2,min(`n',99))
local P
forv i = 1/`n' {
local p = (`i'-1)/(`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + ///
HSV_to_RGB(0, 0, .88 - .88*`p'))
local comma ","
}
c_local P `P'
c_local n `n'
}
else if "`pal'"=="reds" {
if "`n'"=="" local n 15
local n = max(2,min(`n',99))
local P
forv i = 1/`n' {
local p = (`i'-1)/(`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + ///
HSV_to_RGB(20 - 20*`p', .2 + .8*`p', 1 - max((0, 1.2*(`p'-.5)))))
local comma ","
}
c_local P `P'
c_local n `n'
}
else if "`pal'"=="rainbow" {
if "`n'"=="" local n 15
local n = max(2,min(`n',99))
local P
forv i = 1/`n' {
local p = (`i'-1)/(`n'-1)
mata: st_local("P", st_local("P") + "`comma'" + ///
HSV_to_RGB(240 - 240*`p', 1, 1))
local comma ","
}
c_local P `P'
c_local n `n'
}
else if "`pal'"=="heat" {
c_local P2 255 255 0,255 0 0
c_local P3 255 255 0,255 128 0,255 0 0
c_local P4 255 255 128,255 255 0,255 128 0,255 0 0
c_local P5 255 255 128,255 255 0,255 170 0,255 85 0,255 0 0
c_local P6 255 255 128,255 255 0,255 191 0,255 128 0,255 64 0,255 0 0
c_local P7 255 255 128,255 255 0,255 204 0,255 153 0,255 102 0,255 51 0,255 0 0
c_local P8 255 255 191,255 255 64,255 255 0,255 204 0,255 153 0,255 102 0,255 51 0,255 0 0
c_local P9 255 255 191,255 255 64,255 255 0,255 213 0,255 170 0,255 128 0,255 85 0,255 42 0,255 0 0
c_local P10 255 255 191,255 255 64,255 255 0,255 219 0,255 182 0,255 146 0,255 109 0,255 73 0,255 36 0,255 0 0
c_local P11 255 255 191,255 255 64,255 255 0,255 223 0,255 191 0,255 159 0,255 128 0,255 96 0,255 64 0,255 32 0,255 0 0
c_local P12 255 255 213,255 255 128,255 255 42,255 255 0,255 223 0,255 191 0,255 159 0,255 128 0,255 96 0,255 64 0,255 32 0,255 0 0
c_local P13 255 255 213,255 255 128,255 255 42,255 255 0,255 227 0,255 198 0,255 170 0,255 142 0,255 113 0,255 85 0,255 57 0,255 28 0,255 0 0
c_local P14 255 255 213,255 255 128,255 255 42,255 255 0,255 229 0,255 204 0,255 178 0,255 153 0,255 128 0,255 102 0,255 77 0,255 51 0,255 26 0,255 0 0
c_local P15 255 255 213,255 255 128,255 255 42,255 255 0,255 232 0,255 209 0,255 185 0,255 162 0,255 139 0,255 116 0,255 93 0,255 70 0,255 46 0,255 23 0,255 0 0
c_local P16 255 255 223,255 255 159,255 255 96,255 255 32,255 255 0,255 232 0,255 209 0,255 185 0,255 162 0,255 139 0,255 116 0,255 93 0,255 70 0,255 46 0,255 23 0,255 0 0
}
else if "`pal'"=="terrain" {
c_local P2 0 166 0,242 242 242
c_local P3 0 166 0,236 177 118,242 242 242
c_local P4 0 166 0,230 230 0,236 177 118,242 242 242
c_local P5 0 166 0,230 230 0,234 182 78,238 185 159,242 242 242
c_local P6 0 166 0,99 198 0,230 230 0,234 182 78,238 185 159,242 242 242
c_local P7 0 166 0,99 198 0,230 230 0,233 189 58,236 177 118,239 194 179,242 242 242
c_local P8 0 166 0,62 187 0,139 208 0,230 230 0,233 189 58,236 177 118,239 194 179,242 242 242
c_local P9 0 166 0,62 187 0,139 208 0,230 230 0,232 195 46,235 178 94,237 180 142,240 201 192,242 242 242
c_local P10 0 166 0,45 182 0,99 198 0,160 214 0,230 230 0,232 195 46,235 178 94,237 180 142,240 201 192,242 242 242
c_local P11 0 166 0,45 182 0,99 198 0,160 214 0,230 230 0,232 199 39,234 182 78,236 177 118,238 185 159,240 207 200,242 242 242
c_local P12 0 166 0,36 179 0,76 191 0,122 204 0,173 217 0,230 230 0,232 199 39,234 182 78,236 177 118,238 185 159,240 207 200,242 242 242
c_local P13 0 166 0,36 179 0,76 191 0,122 204 0,173 217 0,230 230 0,231 203 33,233 186 67,235 177 101,237 179 135,239 190 170,240 211 206,242 242 242
c_local P14 0 166 0,29 176 0,62 187 0,99 198 0,139 208 0,182 219 0,230 230 0,231 203 33,233 186 67,235 177 101,237 179 135,239 190 170,240 211 206,242 242 242
c_local P15 0 166 0,29 176 0,62 187 0,99 198 0,139 208 0,182 219 0,230 230 0,231 206 29,233 189 58,234 179 88,236 177 118,237 182 148,239 194 179,241 214 211,242 242 242
c_local P16 0 166 0,25 175 0,53 184 0,83 193 0,116 202 0,151 211 0,189 220 0,230 230 0,231 206 29,233 189 58,234 179 88,236 177 118,237 182 148,239 194 179,241 214 211,242 242 242
}
else if "`pal'"=="topological" {
c_local P2 76 0 255,0 229 255
c_local P3 76 0 255,0 255 77,255 255 0
c_local P4 76 0 255,0 229 255,0 255 77,255 255 0
c_local P5 76 0 255,0 76 255,0 229 255,0 255 77,255 255 0
c_local P6 76 0 255,0 229 255,0 255 77,230 255 0,255 255 0,255 224 178
c_local P7 76 0 255,0 76 255,0 229 255,0 255 77,230 255 0,255 255 0,255 224 178
c_local P8 76 0 255,0 25 255,0 128 255,0 229 255,0 255 77,230 255 0,255 255 0,255 224 178
c_local P9 76 0 255,0 76 255,0 229 255,0 255 77,77 255 0,230 255 0,255 255 0,255 222 89,255 224 178
c_local P10 76 0 255,0 25 255,0 128 255,0 229 255,0 255 77,77 255 0,230 255 0,255 255 0,255 222 89,255 224 178
c_local P11 76 0 255,0 0 255,0 76 255,0 153 255,0 229 255,0 255 77,77 255 0,230 255 0,255 255 0,255 222 89,255 224 178
c_local P12 76 0 255,0 25 255,0 128 255,0 229 255,0 255 77,26 255 0,128 255 0,230 255 0,255 255 0,255 229 59,255 219 119,255 224 178
c_local P13 76 0 255,0 0 255,0 76 255,0 153 255,0 229 255,0 255 77,26 255 0,128 255 0,230 255 0,255 255 0,255 229 59,255 219 119,255 224 178
c_local P14 76 0 255,15 0 255,0 46 255,0 107 255,0 168 255,0 229 255,0 255 77,26 255 0,128 255 0,230 255 0,255 255 0,255 229 59,255 219 119,255 224 178
c_local P15 76 0 255,0 0 255,0 76 255,0 153 255,0 229 255,0 255 77,0 255 0,77 255 0,153 255 0,230 255 0,255 255 0,255 234 45,255 222 89,255 219 134,255 224 178
c_local P16 76 0 255,15 0 255,0 46 255,0 107 255,0 168 255,0 229 255,0 255 77,0 255 0,77 255 0,153 255 0,230 255 0,255 255 0,255 234 45,255 222 89,255 219 134,255 224 178
}
c_local note `pal'
end
program colorpalette9_sfso
syntax [, BRown ORange red PInk PUrple VIolet BLue LTBLue TUrquoise ///
green OLive black parties LANGuages VOTEs THemes cmyk * ]
local pal `brown' `orange' `red' `pink' `purple' `violet' `blue' ///
`ltblue' `turquoise' `green' `olive' `black' `parties' `languages' ///
`votes' `themes'
if `:list sizeof pal'>1 {
di as err "only one scheme allowed"
exit 198
}
if "`pal'"=="" local pal blue // default
if "`pal'"=="brown" {
if "`cmyk'"=="" c_local P #6b0616,#a1534e,#b67d6c,#cca58f,#ddc3a8,#eee3cd
else c_local P 0 1 .7 .6,0 .74 .57 .32,0 .56 .5 .24,0 .4 .4 .16,0 .27 .35 .1,0 .12 .22 .05
//sRGB: c_local P #6b0616,#a1524f,#b67d6c,#cca590,#ddc3a8,#eee3cd
}
else if "`pal'"=="orange" {
if "`cmyk'"=="" c_local P #92490d,#ce6725,#d68c25,#e2b224,#eccf76,#f6e7be
else c_local P 0 .75 1 .4,0 .75 1 0,0 .59 1 0,0 .4 1 0,0 .26 .68 0,0 .13 .35 0
//sRGB: c_local P #91490d,#cd6725,#d68c25,#e1b124,#eccf76,#f6e7be
}
else if "`pal'"=="red" {
if "`cmyk'"=="" c_local P #6d0724,#a61346,#c62a4f,#d17477,#dea49f,#efd6d1
else c_local P .1 1 .6 .55,.1 1 .6 .15,0 .95 .64 0,0 .71 .48 0,0 .5 .34 0,0 .25 .16 0
//sRGB: c_local P #6d0724,#a61346,#c62a4f,#d17377,#dea59f,#efd6d1
}
else if "`pal'"=="pink" {
if "`cmyk'"=="" c_local P #7c0051,#a4006f,#c0007c,#cc669d,#da9dbf,#efd7e5
else c_local P .12 1 .12 .45,.09 1 .09 .18,0 1 .09 .04,0 .75 .07 .03,0 .53 .04 .02,0 .25 .02 0
//sRGB: c_local P #7b0051,#a4006f,#c0007b,#cc669d,#da9dbf,#f0d7e5
}
else if "`pal'"=="purple" {
if "`cmyk'"=="" c_local P #5e0059,#890883,#a23392,#bf64a6,#d79dc5,#efd7e8
else c_local P .45 1 0 .45,.45 1 0 .05,.32 .9 0 0,.15 .75 0 0,.05 .53 0 0,0 .25 0 0
//sRGB: c_local P #5e0058,#890783,#a23092,#be63a6,#d79dc5,#f0d7e8
}
else if "`pal'"=="violet" {
if "`cmyk'"=="" c_local P #3a0054,#682b86,#8c58a3,#a886bc,#c5b0d5,#e1d7eb
else c_local P .75 1 0 .5,.65 .9 0 .12,.51 .75 0 0,.38 .56 0 0,.25 .38 0 0,.12 .2 0 0
//sRGB: c_local P #3a0054,#682b86,#8c58a3,#a886bc,#c5b0d4,#e1d7eb
}
else if "`pal'"=="blue" {
if "`cmyk'"=="" c_local P #1c3259,#374a83,#6473aa,#8497cf,#afbce2,#d8def2,#e8eaf7
else c_local P .83 .45 0 .7,.85 .55 0 .4,.7 .45 0 .2,.63 .36 0 0,.43 .22 0 0,.22 .1 0 0,.13 .07 0 0
//sRGB: c_local P #1c3258,#374a83,#6473aa,#8497cf,#afbce2,#d8def2,#e8eaf7
c_local I ,,,BFS-Blau,,,BFS-Blau 20%
}
else if "`pal'"=="ltblue" {
if "`cmyk'"=="" c_local P #076e8d,#1b9dc9,#76b8da,#abd0e7,#c8e0f2,#edf5fd
else c_local P .98 0 .14 .45,.98 0 .14 .05,.72 0 .1 .03,.49 0 .07 .02,.35 0 .04 0,.12 0 0 0
//sRGB: c_local P #086e8c,#159dc9,#76b7da,#abd0e7,#c8e0f2,#edf5fd
}
else if "`pal'"=="turquoise" {
if "`cmyk'"=="" c_local P #005046,#107a6d,#3aa59a,#95c6c3,#cbe1df,#e9f2f5
else c_local P 1 0 .55 .65,1 0 .55 .35,.94 0 .5 0,.6 0 .3 0,.33 0 .17 0,.15 0 .05 0
//sRGB: c_local P #005046,#107a6d,#3aa59a,#95c6c3,#cbe1df,#e9f2f5
}
else if "`pal'"=="green" {
if "`cmyk'"=="" c_local P #3b6519,#68a239,#95c15b,#b3d17f,#d3e3af,#ecf2d1
else c_local P .75 0 1 .6,.75 0 1 .15,.6 0 .85 0,.45 0 .68 0,.28 0 .45 0,.12 0 .28 0
//sRGB: c_local P #3a6419,#68a139,#95c15c,#b3d07f,#d3e3af,#ecf2d1
}
else if "`pal'"=="olive" {
if "`cmyk'"=="" c_local P #6f6f02,#a3a20a,#c5c00c,#e3df86,#eeecbc,#fefde6
else c_local P .05 0 1 .7,.05 0 1 .45,0 0 1 .3,0 0 .6 .15,0 0 .35 .09,0 0 .17 0
//sRGB: c_local P #6f6f01,#a3a20a,#c5c00c,#e3df85,#eeecbc,#fffde6
}
else if "`pal'"=="black" {
if "`cmyk'"=="" c_local P #3f3f3e,#838382,#b2b3b3,#d4d5d5,#e6e6e7,#f7f7f7
else c_local P 0 0 0 .9,0 0 0 .65,0 0 0 .43,0 0 0 .25,0 0 0 .15,0 0 0 .05
//sRGB: c_local P #3c3c3c,#828282,#b2b2b2,#d4d4d4,#e6e6e6,#f6f6f6
}
else if "`pal'"=="parties" {
if "`cmyk'"=="" c_local P #6268AF,#f39f5e,#ea546f,#547d34,#cbd401,#ffff00,#26b300,#792a8f,#9fabd9,#f0da9d,#bebebe
else c_local P .76 .6 .02 0,0 .57 .78 0,0 .85 .58 0,.8 .3 1 .2,.28 .01 .96 0,.01 0 .96 0,.72 0 1 0,.6 .92 0 0,.5 .29 0 0,0 .2 .5 0,0 0 0 .35
c_local I FDP,CVP,SP,SVP,GLP,BDP,Grüne,"small leftwing parties (PdA, Sol.)","small middle parties (EVP, CSP)","small rightwing parties (EDU, Lega)",other parties
}
else if "`pal'"=="languages" {
if "`cmyk'"=="" c_local P #c73e31,#4570ba,#4ca767,#ecce42,#7f5fa9
else c_local P 0 .9 .9 0,.9 .5 0 0,.9 0 .8 0,0 .25 .9 0,.6 .7 0 0
c_local I German,French,Italian,Rhaeto-Romanic,English
}
else if "`pal'"=="votes" {
if "`cmyk'"=="" c_local P #6d2a83,#6d2a83*.8,#6d2a83*.6,#6d2a83*.4,#6d2a83*.2,#45974d*.2,#45974d*.4,#45974d*.6,#45974d*.8,#45974d
else c_local P .6 .9 0 .15,.6 .9 0 .15*.8,.6 .9 0 .15*.6,.6 .9 0 .15*.4,.6 .9 0 .15*.2,.9 0 .9 .15*.2,.9 0 .9 .15*.4,.9 0 .9 .15*.6,.9 0 .9 .15*.8,.9 0 .9 .15
c_local I No,,,,,,,,,Yes
}
else if "`pal'"=="themes" {
c_local P .63 .36 0 0,0 .38 1 0,0 .59 1 0,0 .76 .88 0,0 .95 .64 0,0 1 .09 .04,.32 .9 0 0,.51 .57 0 0,.62 .6 0 0,.63 .22 0 .03,.8 .29 0 0,.98 0 .14 .05,.94 0 .5 0,.6 0 .85 0,.42 0 .76 0,0 0 1 .3,0 .19 .98 .35,.13 0 .6 .32,0 .36 .5 .24,0 .74 .57 .32,0 .5 .16 .14,.32 0 .18 .37
c_local I 00 Basics and overviews,01 Population,02 Territory and environment,03 Work and income,/*
*/04 National economy,05 Prices,06 Industry and services,07 Agriculture and forestry,/*
*/08 Energy,09 Construction and housing,10 Tourism,11 Mobility and transport,/*
*/"12 Money, banks and insurance",13 Social security,14 Health,15 Education and science,/*
*/"16 Culture, media, information society, sport",17 Politics,18 General Government and finance,/*
*/19 Crime and criminal justice,20 Economic and social situation of the population,/*
*/21 Sustainable development
local cmyk cmyk
}
if "`cmyk'"!="" local cmyk (CMYK)
c_local note `pal' `cmyk'
end
program colorpalette9_Accent
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 127 201 127,190 174 212,253 192 134,255 255 153,56 108 176,240 2 127,191 91 23,102 102 102
}
else {
c_local P .5 0 .5 0,.25 .25 0 0,0 .25 .4 0,0 0 .4 0,.8 .4 0 0,0 1 0 0,.25 .6 .9 0,0 0 0 .6
c_local note (CMYK)
}
end
program colorpalette9_Blues
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 222 235 247,158 202 225,49 130 189
c_local P4 239 243 255,189 215 231,107 174 214,33 113 181
c_local P5 239 243 255,189 215 231,107 174 214,49 130 189,8 81 156
c_local P6 239 243 255,198 219 239,158 202 225,107 174 214,49 130 189,8 81 156
c_local P7 239 243 255,198 219 239,158 202 225,107 174 214,66 146 198,33 113 181,8 69 148
c_local P8 247 251 255,222 235 247,198 219 239,158 202 225,107 174 214,66 146 198,33 113 181,8 69 148
c_local P9 247 251 255,222 235 247,198 219 239,158 202 225,107 174 214,66 146 198,33 113 181,8 81 156,8 48 107
}
else {
c_local P3 .13 .03 0 0,.38 .08 0 0,.82 .27 0 0
c_local P4 .08 .02 0 0,.28 .07 0 0,.57 .14 0 0,.9 .34 0 0
c_local P5 .08 .02 0 0,.28 .07 0 0,.57 .14 0 0,.82 .27 0 0,1 .45 0 .07
c_local P6 .08 .02 0 0,.24 .06 0 0,.38 .08 0 0,.57 .14 0 0,.82 .27 0 0,1 .45 0 .07
c_local P7 .08 .02 0 0,.24 .06 0 0,.38 .08 0 0,.57 .14 0 0,.75 .22 0 0,.9 .34 0 0,1 .55 0 .05
c_local P8 .03 .01 0 0,.13 .03 0 0,.24 .06 0 0,.38 .08 0 0,.57 .14 0 0,.75 .22 0 0,.9 .34 0 0,1 .55 0 .05
c_local P9 .03 .01 0 0,.13 .03 0 0,.24 .06 0 0,.38 .08 0 0,.57 .14 0 0,.75 .22 0 0,.9 .34 0 0,1 .45 0 .07,1 .55 0 .3
c_local note (CMYK)
}
end
program colorpalette9_BrBG
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 216 179 101,245 245 245,90 180 172
c_local P4 166 97 26,223 194 125,128 205 193,1 133 113
c_local P5 166 97 26,223 194 125,245 245 245,128 205 193,1 133 113
c_local P6 140 81 10,216 179 101,246 232 195,199 234 229,90 180 172,1 102 94
c_local P7 140 81 10,216 179 101,246 232 195,245 245 245,199 234 229,90 180 172,1 102 94
c_local P8 140 81 10,191 129 45,223 194 125,246 232 195,199 234 229,128 205 193,53 151 143,1 102 94
c_local P9 140 81 10,191 129 45,223 194 125,246 232 195,245 245 245,199 234 229,128 205 193,53 151 143,1 102 94
c_local P10 84 48 5,140 81 10,191 129 45,223 194 125,246 232 195,199 234 229,128 205 193,53 151 143,1 102 94,0 60 48
c_local P11 84 48 5,140 81 10,191 129 45,223 194 125,246 232 195,245 245 245,199 234 229,128 205 193,53 151 143,1 102 94,0 60 48
}
else {
c_local P3 .15 .25 .55 0,0 0 0 .05,.65 .05 .23 0
c_local P4 .35 .55 .9 0,.12 .2 .45 0,.5 0 .17 0,1 .1 .55 0
c_local P5 .35 .55 .9 0,.12 .2 .45 0,0 0 0 .05,.5 0 .17 0,1 .1 .55 0
c_local P6 .45 .6 1 0,.15 .25 .55 0,.03 .08 .2 0,.22 0 .06 0,.65 .05 .23 0,1 .3 .6 0
c_local P7 .45 .6 1 0,.15 .25 .55 0,.03 .08 .2 0,0 0 0 .05,.22 0 .06 0,.65 .05 .23 0,1 .3 .6 0
c_local P8 .45 .6 1 0,.25 .43 .8 0,.12 .2 .45 0,.03 .08 .2 0,.22 0 .06 0,.5 0 .17 0,.8 .12 .35 0,1 .3 .6 0
c_local P9 .45 .6 1 0,.25 .43 .8 0,.12 .2 .45 0,.03 .08 .2 0,0 0 0 .05,.22 0 .06 0,.5 0 .17 0,.8 .12 .35 0,1 .3 .6 0
c_local P10 .45 .6 1 .4,.45 .6 1 0,.25 .43 .8 0,.12 .2 .45 0,.03 .08 .2 0,.22 0 .06 0,.5 0 .17 0,.8 .12 .35 0,1 .3 .6 0,1 .3 .7 .4
c_local P11 .45 .6 1 .4,.45 .6 1 0,.25 .43 .8 0,.12 .2 .45 0,.03 .08 .2 0,0 0 0 .05,.22 0 .06 0,.5 0 .17 0,.8 .12 .35 0,1 .3 .6 0,1 .3 .7 .4
c_local note (CMYK)
}
end
program colorpalette9_BuGn
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 229 245 249,153 216 201,44 162 95
c_local P4 237 248 251,178 226 226,102 194 164,35 139 69
c_local P5 237 248 251,178 226 226,102 194 164,44 162 95,0 109 44
c_local P6 237 248 251,204 236 230,153 216 201,102 194 164,44 162 95,0 109 44
c_local P7 237 248 251,204 236 230,153 216 201,102 194 164,65 174 118,35 139 69,0 88 36
c_local P8 247 252 253,229 245 249,204 236 230,153 216 201,102 194 164,65 174 118,35 139 69,0 88 36
c_local P9 247 252 253,229 245 249,204 236 230,153 216 201,102 194 164,65 174 118,35 139 69,0 109 44,0 68 27
}
else {
c_local P3 .1 0 0 0,.4 0 .15 0,.83 0 .7 0
c_local P4 .07 0 0 0,.3 0 .05 0,.6 0 .3 0,.87 .1 .83 0
c_local P5 .07 0 0 0,.3 0 .05 0,.6 0 .3 0,.83 0 .7 0,1 .2 1 0
c_local P6 .07 0 0 0,.2 0 .06 0,.4 0 .15 0,.6 0 .3 0,.83 0 .7 0,1 .2 1 0
c_local P7 .07 0 0 0,.2 0 .06 0,.4 0 .15 0,.6 0 .3 0,.75 0 .55 0,.87 .1 .83 0,1 .35 1 0
c_local P8 .03 0 0 0,.1 0 0 0,.2 0 .06 0,.4 0 .15 0,.6 0 .3 0,.75 0 .55 0,.87 .1 .83 0,1 .35 1 0
c_local P9 .03 0 0 0,.1 0 0 0,.2 0 .06 0,.4 0 .15 0,.6 0 .3 0,.75 0 .55 0,.87 .1 .83 0,1 .2 1 0,1 .5 1 0
c_local note (CMYK)
}
end
program colorpalette9_BuPu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 224 236 244,158 188 218,136 86 167
c_local P4 237 248 251,179 205 227,140 150 198,136 65 157
c_local P5 237 248 251,179 205 227,140 150 198,136 86 167,129 15 124
c_local P6 237 248 251,191 211 230,158 188 218,140 150 198,136 86 167,129 15 124
c_local P7 237 248 251,191 211 230,158 188 218,140 150 198,140 107 177,136 65 157,110 1 107
c_local P8 247 252 253,224 236 244,191 211 230,158 188 218,140 150 198,140 107 177,136 65 157,110 1 107
c_local P9 247 252 253,224 236 244,191 211 230,158 188 218,140 150 198,140 107 177,136 65 157,129 15 124,77 0 75
}
else {
c_local P3 .12 .03 0 0,.38 .14 0 0,.47 .6 0 0
c_local P4 .07 0 0 0,.3 .1 0 0,.45 .3 0 0,.47 .7 0 0
c_local P5 .07 0 0 0,.3 .1 0 0,.45 .3 0 0,.47 .6 0 0,.47 .95 0 .05
c_local P6 .07 0 0 0,.25 .09 0 0,.38 .14 0 0,.45 .3 0 0,.47 .6 0 0,.47 .95 0 .05
c_local P7 .07 0 0 0,.25 .09 0 0,.38 .14 0 0,.45 .3 0 0,.45 .5 0 0,.47 .7 0 0,.5 1 0 .15
c_local P8 .03 0 0 0,.12 .03 0 0,.25 .09 0 0,.38 .14 0 0,.45 .3 0 0,.45 .5 0 0,.47 .7 0 0,.5 1 0 .15
c_local P9 .03 0 0 0,.12 .03 0 0,.25 .09 0 0,.38 .14 0 0,.45 .3 0 0,.45 .5 0 0,.47 .7 0 0,.47 .95 0 .05,.5 1 0 .4
c_local note (CMYK)
}
end
program colorpalette9_Dark2
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 27 158 119,217 95 2,117 112 179,231 41 138,102 166 30,230 171 2,166 118 29,102 102 102
}
else {
c_local P .9 0 .55 0,.15 .6 1 0,.55 .45 0 0,.05 .85 .05 0,.6 .1 1 0,.1 .3 1 0,.35 .45 .9 0,0 0 0 .6
c_local note (CMYK)
}
end
program colorpalette9_GnBu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 224 243 219,168 221 181,67 162 202
c_local P4 240 249 232,186 228 188,123 204 196,43 140 190
c_local P5 240 249 232,186 228 188,123 204 196,67 162 202,8 104 172
c_local P6 240 249 232,204 235 197,168 221 181,123 204 196,67 162 202,8 104 172
c_local P7 240 249 232,204 235 197,168 221 181,123 204 196,78 179 211,43 140 190,8 88 158
c_local P8 247 252 240,224 243 219,204 235 197,168 221 181,123 204 196,78 179 211,43 140 190,8 88 158
c_local P9 247 252 240,224 243 219,204 235 197,168 221 181,123 204 196,78 179 211,43 140 190,8 104 172,8 64 129
}
else {
c_local P3 .12 0 .12 0,.34 0 .25 0,.75 .12 0 0
c_local P4 .06 0 .08 0,.27 0 .23 0,.52 0 .15 0,.8 .2 0 0
c_local P5 .06 0 .08 0,.27 0 .23 0,.52 0 .15 0,.75 .12 0 0,1 .35 0 0
c_local P6 .06 0 .08 0,.2 0 .2 0,.34 0 .25 0,.52 0 .15 0,.75 .12 0 0,1 .35 0 0
c_local P7 .06 0 .08 0,.2 0 .2 0,.34 0 .25 0,.52 0 .15 0,.7 .05 0 0,.85 .2 0 0,1 .42 0 .05
c_local P8 .03 0 .05 0,.12 0 .12 0,.2 0 .2 0,.34 0 .25 0,.52 0 .15 0,.7 .05 0 0,.85 .2 0 0,1 .42 0 .05
c_local P9 .03 0 .05 0,.12 0 .12 0,.2 0 .2 0,.34 0 .25 0,.52 0 .15 0,.7 .05 0 0,.85 .2 0 0,1 .35 0 0,1 .5 0 .2
c_local note (CMYK)
}
end
program colorpalette9_Greens
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 229 245 224,161 217 155,49 163 84
c_local P4 237 248 233,186 228 179,116 196 118,35 139 69
c_local P5 237 248 233,186 228 179,116 196 118,49 163 84,0 109 44
c_local P6 237 248 233,199 233 192,161 217 155,116 196 118,49 163 84,0 109 44
c_local P7 237 248 233,199 233 192,161 217 155,116 196 118,65 171 93,35 139 69,0 90 50
c_local P8 247 252 245,229 245 224,199 233 192,161 217 155,116 196 118,65 171 93,35 139 69,0 90 50
c_local P9 247 252 245,229 245 224,199 233 192,161 217 155,116 196 118,65 171 93,35 139 69,0 109 44,0 68 27
}
else {
c_local P3 .1 0 .1 0,.37 0 .37 0,.81 0 .76 0
c_local P4 .07 0 .07 0,.27 0 .27 0,.55 0 .55 0,.84 .1 .83 0
c_local P5 .07 0 .07 0,.27 0 .27 0,.55 0 .55 0,.81 0 .76 0,1 .2 1 0
c_local P6 .07 0 .07 0,.22 0 .22 0,.37 0 .37 0,.55 0 .55 0,.81 0 .76 0,1 .2 1 0
c_local P7 .07 0 .07 0,.22 0 .22 0,.37 0 .37 0,.55 0 .55 0,.75 0 .7 0,.87 .1 .83 0,1 .35 .9 0
c_local P8 .03 0 .03 0,.1 0 .1 0,.22 0 .22 0,.37 0 .37 0,.55 0 .55 0,.75 0 .7 0,.87 .1 .83 0,1 .35 .9 0
c_local P9 .03 0 .03 0,.1 0 .1 0,.22 0 .22 0,.37 0 .37 0,.55 0 .55 0,.75 0 .7 0,.87 .1 .83 0,1 .2 1 0,1 .5 1 0
c_local note (CMYK)
}
end
program colorpalette9_Greys
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 240 240 240,189 189 189,99 99 99
c_local P4 247 247 247,204 204 204,150 150 150,82 82 82
c_local P5 247 247 247,204 204 204,150 150 150,99 99 99,37 37 37
c_local P6 247 247 247,217 217 217,189 189 189,150 150 150,99 99 99,37 37 37
c_local P7 247 247 247,217 217 217,189 189 189,150 150 150,115 115 115,82 82 82,37 37 37
c_local P8 255 255 255,240 240 240,217 217 217,189 189 189,150 150 150,115 115 115,82 82 82,37 37 37
c_local P9 255 255 255,240 240 240,217 217 217,189 189 189,150 150 150,115 115 115,82 82 82,37 37 37,0 0 0
}
else {
c_local P3 0 0 0 .06,0 0 0 .26,0 0 0 .61
c_local P4 0 0 0 .03,0 0 0 .2,0 0 0 .41,0 0 0 .68
c_local P5 0 0 0 .03,0 0 0 .2,0 0 0 .41,0 0 0 .61,0 0 0 .85
c_local P6 0 0 0 .03,0 0 0 .15,0 0 0 .26,0 0 0 .41,0 0 0 .61,0 0 0 .85
c_local P7 0 0 0 .03,0 0 0 .15,0 0 0 .26,0 0 0 .41,0 0 0 .55,0 0 0 .68,0 0 0 .85
c_local P8 0 0 0 0,0 0 0 .06,0 0 0 .15,0 0 0 .26,0 0 0 .41,0 0 0 .55,0 0 0 .68,0 0 0 .85
c_local P9 0 0 0 0,0 0 0 .06,0 0 0 .15,0 0 0 .26,0 0 0 .41,0 0 0 .55,0 0 0 .68,0 0 0 .85,0 0 0 1
c_local note (CMYK)
}
end
program colorpalette9_OrRd
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 254 232 200,253 187 132,227 74 51
c_local P4 254 240 217,253 204 138,252 141 89,215 48 31
c_local P5 254 240 217,253 204 138,252 141 89,227 74 51,179 0 0
c_local P6 254 240 217,253 212 158,253 187 132,252 141 89,227 74 51,179 0 0
c_local P7 254 240 217,253 212 158,253 187 132,252 141 89,239 101 72,215 48 31,153 0 0
c_local P8 255 247 236,254 232 200,253 212 158,253 187 132,252 141 89,239 101 72,215 48 31,153 0 0
c_local P9 255 247 236,254 232 200,253 212 158,253 187 132,252 141 89,239 101 72,215 48 31,179 0 0,127 0 0
}
else {
c_local P3 0 .09 .18 0,0 .27 .4 0,.1 .7 .7 0
c_local P4 0 .06 .12 0,0 .2 .4 0,0 .45 .55 0,.15 .8 .8 0
c_local P5 0 .06 .12 0,0 .2 .4 0,0 .45 .55 0,.1 .7 .7 0,.3 1 1 0
c_local P6 0 .06 .12 0,0 .17 .32 0,0 .27 .4 0,0 .45 .55 0,.1 .7 .7 0,.3 1 1 0
c_local P7 0 .06 .12 0,0 .17 .32 0,0 .27 .4 0,0 .45 .55 0,.05 .6 .6 0,.15 .8 .8 0,.4 1 1 0
c_local P8 0 .03 .06 0,0 .09 .18 0,0 .17 .32 0,0 .27 .4 0,0 .45 .55 0,.05 .6 .6 0,.15 .8 .8 0,.4 1 1 0
c_local P9 0 .03 .06 0,0 .09 .18 0,0 .17 .32 0,0 .27 .4 0,0 .45 .55 0,.05 .6 .6 0,.15 .8 .8 0,.3 1 1 0,.5 1 1 0
c_local note (CMYK)
}
end
program colorpalette9_Oranges
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 254 230 206,253 174 107,230 85 13
c_local P4 254 237 222,253 190 133,253 141 60,217 71 1
c_local P5 254 237 222,253 190 133,253 141 60,230 85 13,166 54 3
c_local P6 254 237 222,253 208 162,253 174 107,253 141 60,230 85 13,166 54 3
c_local P7 254 237 222,253 208 162,253 174 107,253 141 60,241 105 19,217 72 1,140 45 4
c_local P8 255 245 235,254 230 206,253 208 162,253 174 107,253 141 60,241 105 19,217 72 1,140 45 4
c_local P9 255 245 235,254 230 206,253 208 162,253 174 107,253 141 60,241 105 19,217 72 1,166 54 3,127 39 4
}
else {
c_local P3 0 .1 .15 0,0 .32 .5 0,.1 .65 .95 0
c_local P4 0 .07 .1 0,0 .26 .4 0,0 .45 .7 0,.15 .7 1 0
c_local P5 0 .07 .1 0,0 .26 .4 0,0 .45 .7 0,.1 .65 .95 0,.35 .75 1 0
c_local P6 0 .07 .1 0,0 .19 .3 0,0 .32 .5 0,0 .45 .7 0,.1 .65 .95 0,.35 .75 1 0
c_local P7 0 .07 .1 0,0 .19 .3 0,0 .32 .5 0,0 .45 .7 0,.05 .58 .9 0,.15 .7 1 0,.45 .78 1 0
c_local P8 0 .04 .06 0,0 .1 .15 0,0 .19 .3 0,0 .32 .5 0,0 .45 .7 0,.05 .58 .9 0,.15 .7 1 0,.45 .78 1 0
c_local P9 0 .04 .06 0,0 .1 .15 0,0 .19 .3 0,0 .32 .5 0,0 .45 .7 0,.05 .58 .9 0,.15 .7 1 0,.35 .75 1 0,.5 .8 1 0
c_local note (CMYK)
}
end
program colorpalette9_PRGn
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 175 141 195,247 247 247,127 191 123
c_local P4 123 50 148,194 165 207,166 219 160,0 136 55
c_local P5 123 50 148,194 165 207,247 247 247,166 219 160,0 136 55
c_local P6 118 42 131,175 141 195,231 212 232,217 240 211,127 191 123,27 120 55
c_local P7 118 42 131,175 141 195,231 212 232,247 247 247,217 240 211,127 191 123,27 120 55
c_local P8 118 42 131,153 112 171,194 165 207,231 212 232,217 240 211,166 219 160,90 174 97,27 120 55
c_local P9 118 42 131,153 112 171,194 165 207,231 212 232,247 247 247,217 240 211,166 219 160,90 174 97,27 120 55
c_local P10 64 0 75,118 42 131,153 112 171,194 165 207,231 212 232,217 240 211,166 219 160,90 174 97,27 120 55,0 68 27
c_local P11 64 0 75,118 42 131,153 112 171,194 165 207,231 212 232,247 247 247,217 240 211,166 219 160,90 174 97,27 120 55,0 68 27
}
else {
c_local P3 .31 .38 0 0,0 0 0 .03,.5 .05 .5 0
c_local P4 .53 .77 0 0,.23 .3 0 0,.35 0 .35 0,1 0 1 0
c_local P5 .53 .77 0 0,.23 .3 0 0,0 0 0 .03,.35 0 .35 0,1 0 1 0
c_local P6 .55 .8 .1 0,.31 .38 0 0,.09 .14 0 0,.15 0 .15 0,.5 .05 .5 0,.9 .2 .9 0
c_local P7 .55 .8 .1 0,.31 .38 0 0,.09 .14 0 0,0 0 0 .03,.15 0 .15 0,.5 .05 .5 0,.9 .2 .9 0
c_local P8 .55 .8 .1 0,.4 .49 .05 0,.23 .3 0 0,.09 .14 0 0,.15 0 .15 0,.35 0 .35 0,.65 .05 .65 0,.9 .2 .9 0
c_local P9 .55 .8 .1 0,.4 .49 .05 0,.23 .3 0 0,.09 .14 0 0,0 0 0 .03,.15 0 .15 0,.35 0 .35 0,.65 .05 .65 0,.9 .2 .9 0
c_local P10 .6 1 0 .4,.55 .8 .1 0,.4 .49 .05 0,.23 .3 0 0,.09 .14 0 0,.15 0 .15 0,.35 0 .35 0,.65 .05 .65 0,.9 .2 .9 0,1 .5 1 0
c_local P11 .6 1 0 .4,.55 .8 .1 0,.4 .49 .05 0,.23 .3 0 0,.09 .14 0 0,0 0 0 .03,.15 0 .15 0,.35 0 .35 0,.65 .05 .65 0,.9 .2 .9 0,1 .5 1 0
c_local note (CMYK)
}
end
program colorpalette9_Paired
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 166 206 227,31 120 180,178 223 138,51 160 44,251 154 153,227 26 28,253 191 111,255 127 0,202 178 214,106 61 154,255 255 153,177 89 40
}
else {
c_local P .35 .07 0 0,.9 .3 0 0,.3 0 .45 0,.8 0 1 0,0 .4 .25 0,.1 .9 .8 0,0 .25 .5 0,0 .5 1 0,.2 .25 0 0,.6 .7 0 0,0 0 .4 0,.23 .73 .98 .12
c_local note (CMYK)
}
end
program colorpalette9_Pastel1
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 251 180 174,179 205 227,204 235 197,222 203 228,254 217 166,255 255 204,229 216 189,253 218 236,242 242 242
}
else {
c_local P 0 .3 .2 0,.3 .1 0 0,.2 0 .2 0,.12 .17 0 0,0 .15 .3 0,0 0 .2 0,.1 .12 .2 0,0 .15 0 0,0 0 0 .05
c_local note (CMYK)
}
end
program colorpalette9_Pastel2
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 179 226 205,253 205 172,203 213 232,244 202 228,230 245 201,255 242 174,241 226 204,204 204 204
}
else {
c_local P .3 0 .15 0,0 .2 .25 0,.2 .1 0 0,.03 .2 0 0,.1 0 .2 0,0 .05 .3 0,.05 .1 .15 0,0 0 0 .2
c_local note (CMYK)
}
end
program colorpalette9_PiYG
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 233 163 201,247 247 247,161 215 106
c_local P4 208 28 139,241 182 218,184 225 134,77 172 38
c_local P5 208 28 139,241 182 218,247 247 247,184 225 134,77 172 38
c_local P6 197 27 125,233 163 201,253 224 239,230 245 208,161 215 106,77 146 33
c_local P7 197 27 125,233 163 201,253 224 239,247 247 247,230 245 208,161 215 106,77 146 33
c_local P8 197 27 125,222 119 174,241 182 218,253 224 239,230 245 208,184 225 134,127 188 65,77 146 33
c_local P9 197 27 125,222 119 174,241 182 218,253 224 239,247 247 247,230 245 208,184 225 134,127 188 65,77 146 33
c_local P10 142 1 82,197 27 125,222 119 174,241 182 218,253 224 239,230 245 208,184 225 134,127 188 65,77 146 33,39 100 25
c_local P11 142 1 82,197 27 125,222 119 174,241 182 218,253 224 239,247 247 247,230 245 208,184 225 134,127 188 65,77 146 33,39 100 25
}
else {
c_local P3 .07 .35 .03 0,0 0 0 .03,.37 0 .6 0
c_local P4 .15 .9 0 0,.04 .28 0 0,.28 0 .47 0,.7 0 1 0
c_local P5 .15 .9 0 0,.04 .28 0 0,0 0 0 .03,.28 0 .47 0,.7 0 1 0
c_local P6 .2 .9 .1 0,.07 .35 .03 0,0 .12 0 0,.1 0 .17 0,.37 0 .6 0,.7 .15 1 0
c_local P7 .2 .9 .1 0,.07 .35 .03 0,0 .12 0 0,0 0 0 .03,.1 0 .17 0,.37 0 .6 0,.7 .15 1 0
c_local P8 .2 .9 .1 0,.11 .52 .06 0,.04 .28 0 0,0 .12 0 0,.1 0 .17 0,.28 0 .47 0,.5 .05 .8 0,.7 .15 1 0
c_local P9 .2 .9 .1 0,.11 .52 .06 0,.04 .28 0 0,0 .12 0 0,0 0 0 .03,.1 0 .17 0,.28 0 .47 0,.5 .05 .8 0,.7 .15 1 0
c_local P10 .1 1 0 .35,.2 .9 .1 0,.11 .52 .06 0,.04 .28 0 0,0 .12 0 0,.1 0 .17 0,.28 0 .47 0,.5 .05 .8 0,.7 .15 1 0,.75 0 1 .4
c_local P11 .1 1 0 .35,.2 .9 .1 0,.11 .52 .06 0,.04 .28 0 0,0 .12 0 0,0 0 0 .03,.1 0 .17 0,.28 0 .47 0,.5 .05 .8 0,.7 .15 1 0,.75 0 1 .4
c_local note (CMYK)
}
end
program colorpalette9_PuBu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 236 231 242,166 189 219,43 140 190
c_local P4 241 238 246,189 201 225,116 169 207,5 112 176
c_local P5 241 238 246,189 201 225,116 169 207,43 140 190,4 90 141
c_local P6 241 238 246,208 209 230,166 189 219,116 169 207,43 140 190,4 90 141
c_local P7 241 238 246,208 209 230,166 189 219,116 169 207,54 144 192,5 112 176,3 78 123
c_local P8 255 247 251,236 231 242,208 209 230,166 189 219,116 169 207,54 144 192,5 112 176,3 78 123
c_local P9 255 247 251,236 231 242,208 209 230,166 189 219,116 169 207,54 144 192,5 112 176,4 90 141,2 56 88
}
else {
c_local P3 .07 .07 0 0,.35 .15 0 0,.85 .2 0 0
c_local P4 .05 .05 0 0,.26 .13 0 0,.55 .17 0 0,1 .3 0 0
c_local P5 .05 .05 0 0,.26 .13 0 0,.55 .17 0 0,.85 .2 0 0,1 .3 0 .2
c_local P6 .05 .05 0 0,.18 .12 0 0,.35 .15 0 0,.55 .17 0 0,.85 .2 0 0,1 .3 0 .2
c_local P7 .05 .05 0 0,.18 .12 0 0,.35 .15 0 0,.55 .17 0 0,.8 .2 0 0,1 .3 0 0,1 .3 0 .3
c_local P8 0 .03 0 0,.07 .07 0 0,.18 .12 0 0,.35 .15 0 0,.55 .17 0 0,.8 .2 0 0,1 .3 0 0,1 .3 0 .3
c_local P9 0 .03 0 0,.07 .07 0 0,.18 .12 0 0,.35 .15 0 0,.55 .17 0 0,.8 .2 0 0,1 .3 0 0,1 .3 0 .2,1 .3 0 .5
c_local note (CMYK)
}
end
program colorpalette9_PuBuGn
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 236 226 240,166 189 219,28 144 153
c_local P4 246 239 247,189 201 225,103 169 207,2 129 138
c_local P5 246 239 247,189 201 225,103 169 207,28 144 153,1 108 89
c_local P6 246 239 247,208 209 230,166 189 219,103 169 207,28 144 153,1 108 89
c_local P7 246 239 247,208 209 230,166 189 219,103 169 207,54 144 192,2 129 138,1 100 80
c_local P8 255 247 251,236 226 240,208 209 230,166 189 219,103 169 207,54 144 192,2 129 138,1 100 80
c_local P9 255 247 251,236 226 240,208 209 230,166 189 219,103 169 207,54 144 192,2 129 138,1 108 89,1 70 54
}
else {
c_local P3 .07 .09 0 0,.35 .15 0 0,.9 .12 .27 0
c_local P4 .03 .05 0 0,.26 .13 0 0,.6 .15 0 0,1 .15 .35 0
c_local P5 .03 .05 0 0,.26 .13 0 0,.6 .15 0 0,.9 .12 .27 0,1 .25 .65 0
c_local P6 .03 .05 0 0,.18 .12 0 0,.35 .15 0 0,.6 .15 0 0,.9 .12 .27 0,1 .25 .65 0
c_local P7 .03 .05 0 0,.18 .12 0 0,.35 .15 0 0,.6 .15 0 0,.8 .2 0 0,1 .15 .35 0,1 .3 .7 0
c_local P8 0 .03 0 0,.07 .09 0 0,.18 .12 0 0,.35 .15 0 0,.6 .15 0 0,.8 .2 0 0,1 .15 .35 0,1 .3 .7 0
c_local P9 0 .03 0 0,.07 .09 0 0,.18 .12 0 0,.35 .15 0 0,.6 .15 0 0,.8 .2 0 0,1 .15 .35 0,1 .25 .65 0,1 .5 .8 0
c_local note (CMYK)
}
end
program colorpalette9_PuOr
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 241 163 64,247 247 247,153 142 195
c_local P4 230 97 1,253 184 99,178 171 210,94 60 153
c_local P5 230 97 1,253 184 99,247 247 247,178 171 210,94 60 153
c_local P6 179 88 6,241 163 64,254 224 182,216 218 235,153 142 195,84 39 136
c_local P7 179 88 6,241 163 64,254 224 182,247 247 247,216 218 235,153 142 195,84 39 136
c_local P8 179 88 6,224 130 20,253 184 99,254 224 182,216 218 235,178 171 210,128 115 172,84 39 136
c_local P9 179 88 6,224 130 20,253 184 99,254 224 182,247 247 247,216 218 235,178 171 210,128 115 172,84 39 136
c_local P10 127 59 8,179 88 6,224 130 20,253 184 99,254 224 182,216 218 235,178 171 210,128 115 172,84 39 136,45 0 75
c_local P11 127 59 8,179 88 6,224 130 20,253 184 99,254 224 182,247 247 247,216 218 235,178 171 210,128 115 172,84 39 136,45 0 75
}
else {
c_local P3 .05 .35 .7 0,0 0 0 .03,.4 .35 0 0
c_local P4 .1 .6 1 0,0 .28 .55 0,.3 .25 0 0,.65 .7 0 0
c_local P5 .1 .6 1 0,0 .28 .55 0,0 0 0 .03,.3 .25 0 0,.65 .7 0 0
c_local P6 .3 .6 1 0,.05 .35 .7 0,0 .12 .24 0,.15 .1 0 0,.4 .35 0 0,.7 .8 .05 0
c_local P7 .3 .6 1 0,.05 .35 .7 0,0 .12 .24 0,0 0 0 .03,.15 .1 0 0,.4 .35 0 0,.7 .8 .05 0
c_local P8 .3 .6 1 0,.12 .46 .92 0,0 .28 .55 0,0 .12 .24 0,.15 .1 0 0,.3 .25 0 0,.5 .45 .05 0,.7 .8 .05 0
c_local P9 .3 .6 1 0,.12 .46 .92 0,0 .28 .55 0,0 .12 .24 0,0 0 0 .03,.15 .1 0 0,.3 .25 0 0,.5 .45 .05 0,.7 .8 .05 0
c_local P10 .5 .7 1 0,.3 .6 1 0,.12 .46 .92 0,0 .28 .55 0,0 .12 .24 0,.15 .1 0 0,.3 .25 0 0,.5 .45 .05 0,.7 .8 .05 0,.75 1 0 .4
c_local P11 .5 .7 1 0,.3 .6 1 0,.12 .46 .92 0,0 .28 .55 0,0 .12 .24 0,0 0 0 .03,.15 .1 0 0,.3 .25 0 0,.5 .45 .05 0,.7 .8 .05 0,.75 1 0 .4
c_local note (CMYK)
}
end
program colorpalette9_PuRd
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 231 225 239,201 148 199,221 28 119
c_local P4 241 238 246,215 181 216,223 101 176,206 18 86
c_local P5 241 238 246,215 181 216,223 101 176,221 28 119,152 0 67
c_local P6 241 238 246,212 185 218,201 148 199,223 101 176,221 28 119,152 0 67
c_local P7 241 238 246,212 185 218,201 148 199,223 101 176,231 41 138,206 18 86,145 0 63
c_local P8 247 244 249,231 225 239,212 185 218,201 148 199,223 101 176,231 41 138,206 18 86,145 0 63
c_local P9 247 244 249,231 225 239,212 185 218,201 148 199,223 101 176,231 41 138,206 18 86,152 0 67,103 0 31
}
else {
c_local P3 .09 .09 0 0,.2 .38 0 0,.1 .9 .15 0
c_local P4 .05 .05 0 0,.15 .25 0 0,.1 .6 0 0,.17 .95 .35 0
c_local P5 .05 .05 0 0,.15 .25 0 0,.1 .6 0 0,.1 .9 .15 0,.4 1 .47 0
c_local P6 .05 .05 0 0,.16 .23 0 0,.2 .38 0 0,.1 .6 0 0,.1 .9 .15 0,.4 1 .47 0
c_local P7 .05 .05 0 0,.16 .23 0 0,.2 .38 0 0,.1 .6 0 0,.05 .85 .05 0,.17 .95 .35 0,.43 1 .5 0
c_local P8 .03 .03 0 0,.09 .09 0 0,.16 .23 0 0,.2 .38 0 0,.1 .6 0 0,.05 .85 .05 0,.17 .95 .35 0,.43 1 .5 0
c_local P9 .03 .03 0 0,.09 .09 0 0,.16 .23 0 0,.2 .38 0 0,.1 .6 0 0,.05 .85 .05 0,.17 .95 .35 0,.4 1 .47 0,.6 1 .75 0
c_local note (CMYK)
}
end
program colorpalette9_Purples
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 239 237 245,188 189 220,117 107 177
c_local P4 242 240 247,203 201 226,158 154 200,106 81 163
c_local P5 242 240 247,203 201 226,158 154 200,117 107 177,84 39 143
c_local P6 242 240 247,218 218 235,188 189 220,158 154 200,117 107 177,84 39 143
c_local P7 242 240 247,218 218 235,188 189 220,158 154 200,128 125 186,106 81 163,74 20 134
c_local P8 252 251 253,239 237 245,218 218 235,188 189 220,158 154 200,128 125 186,106 81 163,74 20 134
c_local P9 252 251 253,239 237 245,218 218 235,188 189 220,158 154 200,128 125 186,106 81 163,84 39 143,63 0 125
}
else {
c_local P3 .06 .05 0 0,.28 .18 0 0,.55 .48 0 0
c_local P4 .05 .04 0 0,.2 .15 0 0,.38 .3 0 0,.6 .6 0 0
c_local P5 .05 .04 0 0,.2 .15 0 0,.38 .3 0 0,.55 .48 0 0,.7 .8 0 0
c_local P6 .05 .04 0 0,.14 .1 0 0,.26 .18 0 0,.38 .3 0 0,.55 .48 0 0,.7 .8 0 0
c_local P7 .05 .04 0 0,.14 .1 0 0,.26 .18 0 0,.38 .3 0 0,.5 .4 0 0,.6 .6 0 0,.75 .9 0 0
c_local P8 .01 .01 0 0,.06 .05 0 0,.14 .1 0 0,.26 .18 0 0,.38 .3 0 0,.5 .4 0 0,.6 .6 0 0,.75 .9 0 0
c_local P9 .01 .01 0 0,.06 .05 0 0,.14 .1 0 0,.26 .18 0 0,.38 .3 0 0,.5 .4 0 0,.6 .6 0 0,.7 .8 0 0,.8 1 0 0
c_local note (CMYK)
}
end
program colorpalette9_RdBu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 239 138 98,247 247 247,103 169 207
c_local P4 202 0 32,244 165 130,146 197 222,5 113 176
c_local P5 202 0 32,244 165 130,247 247 247,146 197 222,5 113 176
c_local P6 178 24 43,239 138 98,253 219 199,209 229 240,103 169 207,33 102 172
c_local P7 178 24 43,239 138 98,253 219 199,247 247 247,209 229 240,103 169 207,33 102 172
c_local P8 178 24 43,214 96 77,244 165 130,253 219 199,209 229 240,146 197 222,67 147 195,33 102 172
c_local P9 178 24 43,214 96 77,244 165 130,253 219 199,247 247 247,209 229 240,146 197 222,67 147 195,33 102 172
c_local P10 103 0 31,178 24 43,214 96 77,244 165 130,253 219 199,209 229 240,146 197 222,67 147 195,33 102 172,5 48 97
c_local P11 103 0 31,178 24 43,214 96 77,244 165 130,253 219 199,247 247 247,209 229 240,146 197 222,67 147 195,33 102 172,5 48 97
}
else {
c_local P3 .05 .45 .5 0,0 0 0 .03,.6 .15 0 0
c_local P4 .2 1 .75 0,.03 .35 .38 0,.43 .08 0 0,1 .3 0 0
c_local P5 .2 1 .75 0,.03 .35 .38 0,0 0 0 .03,.43 .08 0 0,1 .3 0 0
c_local P6 .3 .9 .7 0,.05 .45 .5 0,0 .14 .16 0,.18 .04 0 0,.6 .15 0 0,.9 .4 0 0
c_local P7 .3 .9 .7 0,.05 .45 .5 0,0 .14 .16 0,0 0 0 .03,.18 .04 0 0,.6 .15 0 0,.9 .4 0 0
c_local P8 .3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,.18 .04 0 0,.43 .08 0 0,.75 .2 0 0,.9 .4 0 0
c_local P9 .3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,0 0 0 .03,.18 .04 0 0,.43 .08 0 0,.75 .2 0 0,.9 .4 0 0
c_local P10 .6 1 .75 0,.3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,.18 .04 0 0,.43 .08 0 0,.75 .2 0 0,.9 .4 0 0,1 .5 0 .4
c_local P11 .6 1 .75 0,.3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,0 0 0 .03,.18 .04 0 0,.43 .08 0 0,.75 .2 0 0,.9 .4 0 0,1 .5 0 .4
c_local note (CMYK)
}
end
program colorpalette9_RdGy
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 239 138 98,255 255 255,153 153 153
c_local P4 202 0 32,244 165 130,186 186 186,64 64 64
c_local P5 202 0 32,244 165 130,255 255 255,186 186 186,64 64 64
c_local P6 178 24 43,239 138 98,253 219 199,224 224 224,153 153 153,77 77 77
c_local P7 178 24 43,239 138 98,253 219 199,255 255 255,224 224 224,153 153 153,77 77 77
c_local P8 178 24 43,214 96 77,244 165 130,253 219 199,224 224 224,186 186 186,135 135 135,77 77 77
c_local P9 178 24 43,214 96 77,244 165 130,253 219 199,255 255 255,224 224 224,186 186 186,135 135 135,77 77 77
c_local P10 103 0 31,178 24 43,214 96 77,244 165 130,253 219 199,224 224 224,186 186 186,135 135 135,77 77 77,26 26 26
c_local P11 103 0 31,178 24 43,214 96 77,244 165 130,253 219 199,255 255 255,224 224 224,186 186 186,135 135 135,77 77 77,26 26 26
}
else {
c_local P3 .05 .45 .5 0,0 0 0 0,0 0 0 .4
c_local P4 .2 1 .75 0,.03 .35 .38 0,0 0 0 .27,0 0 0 .75
c_local P5 .2 1 .75 0,.03 .35 .38 0,0 0 0 0,0 0 0 .27,0 0 0 .75
c_local P6 .3 .9 .7 0,.05 .45 .5 0,0 .14 .16 0,0 0 0 .12,0 0 0 .4,0 0 0 .7
c_local P7 .3 .9 .7 0,.05 .45 .5 0,0 .14 .16 0,0 0 0 0,0 0 0 .12,0 0 0 .4,0 0 0 .7
c_local P8 .3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,0 0 0 .12,0 0 0 .27,0 0 0 .47,0 0 0 .7
c_local P9 .3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,0 0 0 0,0 0 0 .12,0 0 0 .27,0 0 0 .47,0 0 0 .7
c_local P10 .6 1 .75 0,.3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,0 0 0 .12,0 0 0 .27,0 0 0 .47,0 0 0 .7,0 0 0 .9
c_local P11 .6 1 .75 0,.3 .9 .7 0,.15 .6 .57 0,.03 .35 .38 0,0 .14 .16 0,0 0 0 0,0 0 0 .12,0 0 0 .27,0 0 0 .47,0 0 0 .7,0 0 0 .9
c_local note (CMYK)
}
end
program colorpalette9_RdPu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 253 224 221,250 159 181,197 27 138
c_local P4 254 235 226,251 180 185,247 104 161,174 1 126
c_local P5 254 235 226,251 180 185,247 104 161,197 27 138,122 1 119
c_local P6 254 235 226,252 197 192,250 159 181,247 104 161,197 27 138,122 1 119
c_local P7 254 235 226,252 197 192,250 159 181,247 104 161,221 52 151,174 1 126,122 1 119
c_local P8 255 247 243,253 224 221,252 197 192,250 159 181,247 104 161,221 52 151,174 1 126,122 1 119
c_local P9 255 247 243,253 224 221,252 197 192,250 159 181,247 104 161,221 52 151,174 1 126,122 1 119,73 0 106
}
else {
c_local P3 0 .12 .08 0,0 .38 .12 0,.2 .9 0 0
c_local P4 0 .08 .08 0,0 .3 .15 0,0 .6 .1 0,.3 1 0 0
c_local P5 0 .08 .08 0,0 .3 .15 0,0 .6 .1 0,.2 .9 0 0,.5 1 0 .05
c_local P6 0 .08 .08 0,0 .23 .15 0,0 .38 .12 0,0 .6 .1 0,.2 .9 0 0,.5 1 0 .05
c_local P7 0 .08 .08 0,0 .23 .15 0,0 .38 .12 0,0 .6 .1 0,.1 .8 0 0,.3 1 0 0,.5 1 0 .05
c_local P8 0 .03 .03 0,0 .12 .08 0,0 .23 .15 0,0 .38 .12 0,0 .6 .1 0,.1 .8 0 0,.3 1 0 0,.5 1 0 .05
c_local P9 0 .03 .03 0,0 .12 .08 0,0 .23 .15 0,0 .38 .12 0,0 .6 .1 0,.1 .8 0 0,.3 1 0 0,.5 1 0 .05,.7 1 0 .15
c_local note (CMYK)
}
end
program colorpalette9_RdYlBu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 252 141 89,255 255 191,145 191 219
c_local P4 215 25 28,253 174 97,171 217 233,44 123 182
c_local P5 215 25 28,253 174 97,255 255 191,171 217 233,44 123 182
c_local P6 215 48 39,252 141 89,254 224 144,224 243 248,145 191 219,69 117 180
c_local P7 215 48 39,252 141 89,254 224 144,255 255 191,224 243 248,145 191 219,69 117 180
c_local P8 215 48 39,244 109 67,253 174 97,254 224 144,224 243 248,171 217 233,116 173 209,69 117 180
c_local P9 215 48 39,244 109 67,253 174 97,254 224 144,255 255 191,224 243 248,171 217 233,116 173 209,69 117 180
c_local P10 165 0 38,215 48 39,244 109 67,253 174 97,254 224 144,224 243 248,171 217 233,116 173 209,69 117 180,49 54 149
c_local P11 165 0 38,215 48 39,244 109 67,253 174 97,254 224 144,255 255 191,224 243 248,171 217 233,116 173 209,69 117 180,49 54 149
}
else {
c_local P3 0 .45 .55 0,0 0 .25 0,.43 .11 0 0
c_local P4 .15 .9 .8 0,0 .32 .55 0,.33 .03 0 0,.85 .3 0 0
c_local P5 .15 .9 .8 0,0 .32 .55 0,0 0 .25 0,.33 .03 0 0,.85 .3 0 0
c_local P6 .15 .8 .75 0,0 .45 .55 0,0 .12 .4 0,.12 0 0 0,.43 .11 0 0,.75 .37 0 0
c_local P7 .15 .8 .75 0,0 .45 .55 0,0 .12 .4 0,0 0 .25 0,.12 0 0 0,.43 .11 0 0,.75 .37 0 0
c_local P8 .15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .4 0,.12 0 0 0,.33 .03 0 0,.55 .15 0 0,.75 .37 0 0
c_local P9 .15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .4 0,0 0 .25 0,.12 0 0 0,.33 .03 0 0,.55 .15 0 0,.75 .37 0 0
c_local P10 .35 1 .7 0,.15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .4 0,.12 0 0 0,.33 .03 0 0,.55 .15 0 0,.75 .37 0 0,.85 .7 0 0
c_local P11 .35 1 .7 0,.15 .8 .75 0,.03 .57 .63 0,0 .35 .55 0,0 .12 .4 0,0 0 .25 0,.12 0 0 0,.33 .03 0 0,.55 .15 0 0,.75 .37 0 0,.85 .7 0 0
c_local note (CMYK)
}
end
program colorpalette9_RdYlGn
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 252 141 89,255 255 191,145 207 96
c_local P4 215 25 28,253 174 97,166 217 106,26 150 65
c_local P5 215 25 28,253 174 97,255 255 191,166 217 106,26 150 65
c_local P6 215 48 39,252 141 89,254 224 139,217 239 139,145 207 96,26 152 80
c_local P7 215 48 39,252 141 89,254 224 139,255 255 191,217 239 139,145 207 96,26 152 80
c_local P8 215 48 39,244 109 67,253 174 97,254 224 139,217 239 139,166 217 106,102 189 99,26 152 80
c_local P9 215 48 39,244 109 67,253 174 97,254 224 139,255 255 191,217 239 139,166 217 106,102 189 99,26 152 80
c_local P10 165 0 38,215 48 39,244 109 67,253 174 97,254 224 139,217 239 139,166 217 106,102 189 99,26 152 80,0 104 55
c_local P11 165 0 38,215 48 39,244 109 67,253 174 97,254 224 139,255 255 191,217 239 139,166 217 106,102 189 99,26 152 80,0 104 55
}
else {
c_local P3 0 .45 .55 0,0 0 .25 0,.43 0 .65 0
c_local P4 .15 .9 .8 0,0 .32 .55 0,.35 0 .6 0,.9 0 .9 0
c_local P5 .15 .9 .8 0,0 .35 .55 0,0 0 .25 0,.35 0 .6 0,.9 0 .9 0
c_local P6 .15 .8 .75 0,0 .45 .55 0,0 .12 .42 0,.15 0 .45 0,.43 0 .65 0,.9 0 .9 0
c_local P7 .15 .8 .75 0,0 .45 .55 0,0 .12 .42 0,0 0 .25 0,.15 0 .45 0,.43 0 .65 0,.9 0 .8 0
c_local P8 .15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,.15 0 .45 0,.35 0 .6 0,.6 0 .65 0,.9 0 .8 0
c_local P9 .15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,0 0 .25 0,.15 0 .45 0,.35 0 .6 0,.6 0 .65 0,.9 0 .8 0
c_local P10 .35 1 .7 0,.15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,.15 0 .45 0,.35 0 .6 0,.6 0 .65 0,.9 0 .8 0,1 .25 .9 0
c_local P11 .35 1 .75 0,.15 .8 .75 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,0 0 .25 0,.15 0 .45 0,.35 0 .6 0,.6 0 .65 0,.9 0 .8 0,1 .25 .9 0
c_local note (CMYK)
}
end
program colorpalette9_Reds
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 254 224 210,252 146 114,222 45 38
c_local P4 254 229 217,252 174 145,251 106 74,203 24 29
c_local P5 254 229 217,252 174 145,251 106 74,222 45 38,165 15 21
c_local P6 254 229 217,252 187 161,252 146 114,251 106 74,222 45 38,165 15 21
c_local P7 254 229 217,252 187 161,252 146 114,251 106 74,239 59 44,203 24 29,153 0 13
c_local P8 255 245 240,254 224 210,252 187 161,252 146 114,251 106 74,239 59 44,203 24 29,153 0 13
c_local P9 255 245 240,254 224 210,252 187 161,252 146 114,251 106 74,239 59 44,203 24 29,165 15 21,103 0 13
}
else {
c_local P3 0 .12 .12 0,0 .43 .43 0,.12 .82 .75 0
c_local P4 0 .1 .1 0,0 .32 .32 0,0 .59 .59 0,.2 .9 .8 0
c_local P5 0 .1 .1 0,0 .32 .32 0,0 .59 .59 0,.12 .82 .75 0,.35 .95 .85 0
c_local P6 0 .1 .1 0,0 .27 .27 0,0 .43 .43 0,0 .59 .59 0,.12 .82 .75 0,.35 .95 .85 0
c_local P7 0 .1 .1 0,0 .27 .27 0,0 .43 .43 0,0 .59 .59 0,.05 .77 .72 0,.2 .9 .8 0,.4 1 .9 0
c_local P8 0 .04 .04 0,0 .12 .12 0,0 .27 .27 0,0 .43 .43 0,0 .59 .59 0,.05 .77 .72 0,.2 .9 .8 0,.4 1 .9 0
c_local P9 0 .04 .04 0,0 .12 .12 0,0 .27 .27 0,0 .43 .43 0,0 .59 .59 0,.05 .77 .72 0,.2 .9 .8 0,.35 .95 .85 0,.6 1 .9 0
c_local note (CMYK)
}
end
program colorpalette9_Set1
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 228 26 28,55 126 184,77 175 74,152 78 163,255 127 0,255 255 51,166 86 40,247 129 191,153 153 153
}
else {
c_local P .1 .9 .8 0,.8 .3 0 0,.7 0 .8 0,.4 .65 0 0,0 .5 1 0,0 0 .8 0,.35 .6 .8 0,0 .5 0 0,0 0 0 .4
c_local note (CMYK)
}
end
program colorpalette9_Set2
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 102 194 165,252 141 98,141 160 203,231 138 195,166 216 84,255 217 47,229 196 148,179 179 179
}
else {
c_local P .6 0 .3 0,0 .45 .5 0,.45 .25 0 0,.07 .45 0 0,.35 0 .7 0,0 .15 .8 0,.1 .2 .35 0,0 0 0 .3
c_local note (CMYK)
}
end
program colorpalette9_Set3
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P 141 211 199,255 255 179,190 186 218,251 128 114,128 177 211,253 180 98,179 222 105,252 205 229,217 217 217,188 128 189,204 235 197,255 237 111
}
else {
c_local P .45 0 .15 0,0 0 .3 0,.25 .2 0 0,0 .5 .4 0,.5 .15 0 0,0 .3 .55 0,.3 0 .6 0,0 .2 0 0,0 0 0 .15,.25 .45 0 0,.2 0 .2 0,0 .07 .55 0
c_local note (CMYK)
}
end
program colorpalette9_Spectral
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 252 141 89,255 255 191,153 213 148
c_local P4 215 25 28,253 174 97,171 221 164,43 131 186
c_local P5 215 25 28,253 174 97,255 255 191,171 221 164,43 131 186
c_local P6 213 62 79,252 141 89,254 224 139,230 245 152,153 213 148,50 136 189
c_local P7 213 62 79,252 141 89,254 224 139,255 255 191,230 245 152,153 213 148,50 136 189
c_local P8 213 62 79,244 109 67,253 174 97,254 224 139,230 245 152,171 221 164,102 194 165,50 136 189
c_local P9 213 62 79,244 109 67,253 174 97,254 224 139,255 255 191,230 245 152,171 221 164,102 194 165,50 136 189
c_local P10 158 1 66,213 62 79,244 109 67,253 174 97,254 224 139,230 245 152,171 221 164,102 194 165,50 136 189,94 79 162
c_local P11 158 1 66,213 62 79,244 109 67,253 174 97,254 224 139,255 255 191,230 245 152,171 221 164,102 194 165,50 136 189,94 79 162
}
else {
c_local P3 0 .45 .55 0,0 0 .25 0,.4 0 .4 0
c_local P4 .15 .9 .8 0,0 .32 .55 0,.33 0 .33 0,.85 .25 0 0
c_local P5 .15 .9 .8 0,0 .32 .55 0,0 0 .25 0,.33 0 .33 0,.85 .25 0 0
c_local P6 .15 .75 .5 0,0 .45 .55 0,0 .12 .42 0,.1 0 .4 0,.4 0 .4 0,.82 .23 0 0
c_local P7 .15 .75 .5 0,0 .45 .55 0,0 .12 .42 0,0 0 .25 0,.1 0 .4 0,.4 0 .4 0,.82 .23 0 0
c_local P8 .15 .75 .5 0,.03 .57 .53 0,0 .32 .55 0,0 .12 .42 0,.1 0 .4 0,.33 0 .33 0,.6 0 .3 0,.82 .23 0 0
c_local P9 .15 .75 .5 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,0 0 .25 0,.1 0 .4 0,.33 0 .33 0,.6 0 .3 0,.82 .23 0 0
c_local P10 0 1 .2 .35,.15 .75 .5 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,.1 0 .4 0,.33 0 .33 0,.6 0 .3 0,.82 .23 0 0,.65 .6 0 0
c_local P11 0 1 .2 .35,.15 .75 .5 0,.03 .57 .63 0,0 .32 .55 0,0 .12 .42 0,0 0 .25 0,.1 0 .4 0,.33 0 .33 0,.6 0 .3 0,.82 .23 0 0,.65 .6 0 0
c_local note (CMYK)
}
end
program colorpalette9_YlGn
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 247 252 185,173 221 142,49 163 84
c_local P4 255 255 204,194 230 153,120 198 121,35 132 67
c_local P5 255 255 204,194 230 153,120 198 121,49 163 84,0 104 55
c_local P6 255 255 204,217 240 163,173 221 142,120 198 121,49 163 84,0 104 55
c_local P7 255 255 204,217 240 163,173 221 142,120 198 121,65 171 93,35 132 67,0 90 50
c_local P8 255 255 229,247 252 185,217 240 163,173 221 142,120 198 121,65 171 93,35 132 67,0 90 50
c_local P9 255 255 229,247 252 185,217 240 163,173 221 142,120 198 121,65 171 93,35 132 67,0 104 55,0 69 41
}
else {
c_local P3 .03 0 .27 0,.32 0 .43 0,.81 0 .76 0
c_local P4 0 0 .2 0,.24 0 .39 0,.53 0 .53 0,.87 .1 .83 0
c_local P5 0 0 .2 0,.24 0 .39 0,.53 0 .53 0,.81 0 .76 0,1 .25 .9 0
c_local P6 0 0 .2 0,.15 0 .35 0,.32 0 .43 0,.53 0 .53 0,.81 0 .76 0,1 .25 .9 0
c_local P7 0 0 .2 0,.15 0 .35 0,.32 0 .43 0,.53 0 .53 0,.75 0 .7 0,.87 .15 .83 0,1 .35 .9 0
c_local P8 0 0 .1 0,.03 0 .27 0,.15 0 .35 0,.32 0 .43 0,.53 0 .53 0,.75 0 .7 0,.87 .15 .83 0,1 .35 .9 0
c_local P9 0 0 .1 0,.03 0 .27 0,.15 0 .35 0,.32 0 .43 0,.53 0 .53 0,.75 0 .7 0,.87 .15 .83 0,1 .25 .9 0,1 .5 .9 0
c_local note (CMYK)
}
end
program colorpalette9_YlGnBu
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 237 248 177,127 205 187,44 127 184
c_local P4 255 255 204,161 218 180,65 182 196,34 94 168
c_local P5 255 255 204,161 218 180,65 182 196,44 127 184,37 52 148
c_local P6 255 255 204,199 233 180,127 205 187,65 182 196,44 127 184,37 52 148
c_local P7 255 255 204,199 233 180,127 205 187,65 182 196,29 145 192,34 94 168,12 44 132
c_local P8 255 255 217,237 248 177,199 233 180,127 205 187,65 182 196,29 145 192,34 94 168,12 44 132
c_local P9 255 255 217,237 248 177,199 233 180,127 205 187,65 182 196,29 145 192,34 94 168,37 52 148,8 29 88
}
else {
c_local P3 .07 0 .3 0,.5 0 .2 0,.85 .27 0 0
c_local P4 0 0 .2 0,.37 0 .25 0,.75 0 .1 0,.9 .45 0 0
c_local P5 0 0 .2 0,.37 0 .25 0,.75 0 .1 0,.85 .27 0 0,.9 .7 0 0
c_local P6 0 0 .2 0,.22 0 .27 0,.5 0 .2 0,.75 0 .1 0,.85 .27 0 0,.9 .7 0 0
c_local P7 0 0 .2 0,.22 0 .27 0,.5 0 .2 0,.75 0 .1 0,.9 .15 0 0,.9 .45 0 0,1 .7 0 .1
c_local P8 0 0 .15 0,.07 0 .3 0,.22 0 .27 0,.5 0 .2 0,.75 0 .1 0,.9 .15 0 0,.9 .45 0 0,1 .7 0 .1
c_local P9 0 0 .15 0,.07 0 .3 0,.22 0 .27 0,.5 0 .2 0,.75 0 .1 0,.9 .15 0 0,.9 .45 0 0,.9 .7 0 0,1 .7 0 .4
c_local note (CMYK)
}
end
program colorpalette9_YlOrBr
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 255 247 188,254 196 79,217 95 14
c_local P4 255 255 212,254 217 142,254 153 41,204 76 2
c_local P5 255 255 212,254 217 142,254 153 41,217 95 14,153 52 4
c_local P6 255 255 212,254 227 145,254 196 79,254 153 41,217 95 14,153 52 4
c_local P7 255 255 212,254 227 145,254 196 79,254 153 41,236 112 20,204 76 2,140 45 4
c_local P8 255 255 229,255 247 188,254 227 145,254 196 79,254 153 41,236 112 20,204 76 2,140 45 4
c_local P9 255 255 229,255 247 188,254 227 145,254 196 79,254 153 41,236 112 20,204 76 2,153 52 4,102 37 6
}
else {
c_local P3 0 .03 .25 0,0 .23 .65 0,.15 .6 .95 0
c_local P4 0 0 .17 0,0 .15 .4 0,0 .4 .8 0,.2 .67 1 0
c_local P5 0 0 .17 0,0 .15 .4 0,0 .4 .8 0,.15 .6 .95 0,.4 .75 1 0
c_local P6 0 0 .17 0,0 .11 .4 0,0 .23 .65 0,0 .4 .8 0,.15 .6 .95 0,.4 .75 1 0
c_local P7 0 0 .17 0,0 .11 .4 0,0 .23 .65 0,0 .4 .8 0,.07 .55 .9 0,.2 .67 1 0,.45 .78 1 0
c_local P8 0 0 .1 0,0 .03 .25 0,0 .11 .4 0,0 .23 .65 0,0 .4 .8 0,.07 .55 .9 0,.2 .67 1 0,.45 .78 1 0
c_local P9 0 0 .1 0,0 .03 .25 0,0 .11 .4 0,0 .23 .65 0,0 .4 .8 0,.07 .55 .9 0,.2 .67 1 0,.4 .75 1 0,.6 .8 1 0
c_local note (CMYK)
}
end
program colorpalette9_YlOrRd
syntax [, cmyk * ]
if "`cmyk'"=="" {
c_local P3 255 237 160,254 178 76,240 59 32
c_local P4 255 255 178,254 204 92,253 141 60,227 26 28
c_local P5 255 255 178,254 204 92,253 141 60,240 59 32,189 0 38
c_local P6 255 255 178,254 217 118,254 178 76,253 141 60,240 59 32,189 0 38
c_local P7 255 255 178,254 217 118,254 178 76,253 141 60,252 78 42,227 26 28,177 0 38
c_local P8 255 255 204,255 237 160,254 217 118,254 178 76,253 141 60,252 78 42,227 26 28,177 0 38
c_local P9 255 255 204,255 237 160,254 217 118,254 178 76,253 141 60,252 78 42,227 26 28,189 0 38,128 0 38
}
else {
c_local P3 0 .07 .35 0,0 .3 .65 0,.05 .77 .8 0
c_local P4 0 0 .3 0,0 .2 .6 0,0 .45 .7 0,.1 .9 .8 0
c_local P5 0 0 .3 0,0 .2 .6 0,0 .45 .7 0,.05 .77 .8 0,.25 1 .7 0
c_local P6 0 0 .3 0,0 .15 .5 0,0 .3 .65 0,0 .45 .7 0,.05 .77 .8 0,.25 1 .7 0
c_local P7 0 0 .3 0,0 .15 .5 0,0 .3 .65 0,0 .45 .7 0,0 .7 .75 0,.1 .9 .8 0,.3 1 .7 0
c_local P8 0 0 .2 0,0 .07 .35 0,0 .15 .5 0,0 .3 .65 0,0 .45 .7 0,0 .7 .75 0,.1 .9 .8 0,.3 1 .7 0
c_local P9 0 0 .2 0,0 .07 .35 0,0 .15 .5 0,0 .3 .65 0,0 .45 .7 0,0 .7 .75 0,.1 .9 .8 0,.25 1 .7 0,.5 1 .7 0
c_local note (CMYK)
}
end
/*----------------------------------------------------------------------------*/
/* mata */
/*----------------------------------------------------------------------------*/
version 9.2
mata:
mata set matastrict on
string scalar _parse_palette(string scalar p0)
{
real scalar i, j, blank
string rowvector p
p = tokens(p0, ",")
if (length(p)<1) return("")
blank = 1
j = 0
for (i=1;i<=length(p);i++) {
if (p[i]==",") {
if (blank) {
j++
p[j] = ""
}
blank = 1
continue
}
j++
p[j] = strtrim(p[i])
blank = 0
}
return(_invtokens_quoted(p[|1\j|]))
}
void _listrecycle(string scalar lname, real scalar n)
{
real scalar i, l, l2
string rowvector In, Out
In = tokens(st_local(lname))
l = length(In)
if (l==0) return
Out = J(1, n, "")
for (i=1; i<=n; i=i+l) {
l2 = n-i+1
if (l2<l) Out[|i \ i+l2-1|] = In[|1 \ l2|]
else Out[|i \ i+l-1|] = In
}
st_local(lname, _invtokens(Out))
}
string scalar _invtokens(string vector In)
{
real scalar i
string scalar Out
if (length(In)<1) return("")
Out = In[1]
for (i=2; i<=length(In); i++) Out = Out + " " + In[i]
return(Out)
}
string scalar _invtokens_quoted(string vector In)
{
real scalar i
string scalar Out
if (length(In)<1) return("")
Out = "`" + `"""' + In[1] + `"""' + "'"
for (i=2; i<=length(In); i++) {
Out = Out + " `" + `"""' + In[i] + `"""' + "'"
}
return(Out)
}
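// Example (editor's illustration of the two helpers above):
//   _invtokens(("a","b","c"))        -> "a b c"
//   _invtokens_quoted(("a b","c"))   -> `"a b"' `"c"'
// Compound quoting keeps elements containing blanks intact through a
// later gettoken/tokens round trip.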
void makeRGB(string scalar lname)
{
string scalar c
c = strtrim(st_local(lname))
if (substr(c,1,1)=="#") c = HEXtoRGB(c)
else if (substr(c,1,4)=="hcl ") c = HCLtoRGB(c)
else return
if (c!="") st_local(lname, c)
}
string scalar HEXtoRGB(string scalar c)
{
string scalar r, g, b, rest
real scalar l
c = strtrim(substr(c,2,.)) // get rid of #; allow blanks after #
if ((l = strpos(c,"*"))) { // take care of *... (intensity)
rest = substr(c,l,.)
c = strrtrim(substr(c,1,l-1)) // allow blanks before *
}
if ((l = strpos(c,"%"))) { // take care of %... (opacity)
rest = substr(c,l,.) + rest
c = strrtrim(substr(c,1,l-1)) // allow blanks before %
}
l = strlen(c)
if (l==3) c = substr(c,1,1)*2 + substr(c,2,1)*2 + substr(c,3,1)*2
else if (l!=6) return("")
r = strofreal(_HEXtoRGB(substr(c,1,2)))
if (r==".") return("")
g = strofreal(_HEXtoRGB(substr(c,3,2)))
if (g==".") return("")
b = strofreal(_HEXtoRGB(substr(c,5,2)))
if (g==".") return("")
return(r + " " + g + " " + b + rest)
}
real scalar _HEXtoRGB(string scalar s0)
{
real scalar d1, d2
string scalar s, digits
s = strlower(strtrim(s0))
digits = "0123456789abcdef"
d1 = strpos(digits,substr(s,1,1))
if (d1==0) return(.)
d2 = strpos(digits,substr(s,2,1))
if (d2==0) return(.)
return((d1-1)*16 + (d2-1))
}
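// Example (editor's illustration; values assume well-formed input):
//   _HEXtoRGB("1a")      -> 26 (i.e. 1*16 + 10)
//   HEXtoRGB("#1a2b3c")  -> "26 43 60"
//   HEXtoRGB("#abc")     -> "170 187 204" (3-digit form expands to "aabbcc")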
string scalar HCLtoRGB(string scalar c)
{
real scalar l
string scalar rest
string rowvector hcl
real scalar H, C, L
c = strtrim(substr(c,5,.)) // get rid of hcl; allow blanks after hcl
if ((l = strpos(c,"*"))) { // take care of *... (intensity)
rest = substr(c,l,.)
c = strrtrim(substr(c,1,l-1)) // allow blanks before *
}
if ((l = strpos(c,"%"))) { // take care of %... (opacity)
rest = substr(c,l,.) + rest
c = strrtrim(substr(c,1,l-1)) // allow blanks before %
}
hcl = tokens(c)
if (cols(hcl)!=3) return("")
H = strtoreal(hcl[1])
if (H>=.) return("")
C = strtoreal(hcl[2])
if (C>=. | C<0) return("")
L = strtoreal(hcl[3])
if (L>100 | L<0) return("")
return(HCL_to_RGB(H, C, L) + rest)
}
// The following colorspace translators are based on R's colorspace package
string scalar HSV_to_RGB(H, S, V) // H in [0,360], 0 <= S <=1, 0 <= V <=1
{
real scalar h, i, f, v, m, n
h = H/60
i = floor(h)
f = h - i
i = mod(i, 6) // wrap around if H outside [0,360)
if (mod(i,2)==0) f = 1 - f
v = V
n = V * (1 - f * S)
m = V * (1 - S)
if (i==0) return(strRGB(v, n, m))
else if (i==1) return(strRGB(n, v, m))
else if (i==2) return(strRGB(m, v, n))
else if (i==3) return(strRGB(m, n, v))
else if (i==4) return(strRGB(n, m, v))
else if (i==5) return(strRGB(v, m, n))
}
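// Example (editor's illustration): primaries at full saturation/value:
//   HSV_to_RGB(0, 1, 1)    -> "255 0 0"
//   HSV_to_RGB(120, 1, .5) -> "0 128 0"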
string scalar HCL_to_RGB(H, C, L)
{
real scalar XN, YN, ZN
real scalar h, U, V
real scalar X, Y, Z, t, x, y, uN, vN, u, v
real scalar R, G, B
// CheckWhite()
XN = 95.047; YN = 100.000; ZN = 108.883
// polarLUV_to_LUV()
h = H * pi() / 180
U = C * cos(h)
V = C * sin(h)
// LUV_to_XYZ()
if (L<=0 & U==0 & V==0) {
X = 0; Y = 0; Z = 0
}
else {
Y = YN * (L>7.999592 ? ((L + 16)/116)^3 : L / 903.3)
// XYZ_to_uv()
t = XN + YN + ZN
x = XN / t; y = YN / t
uN = 2 * x / (6 * y - x + 1.5)
vN = 4.5 * y / (6 * y - x + 1.5)
// done
u = U / (13 * L) + uN
v = V / (13 * L) + vN
X = 9.0 * Y * u/(4 * v)
Z = -X/3 - 5*Y + 3*Y/v
}
// XYZ_to_sRGB()
R = gtrans(( 3.240479 * X - 1.537150 * Y - 0.498535 * Z) / YN, 2.4)
G = gtrans((-0.969256 * X + 1.875992 * Y + 0.041556 * Z) / YN, 2.4)
B = gtrans(( 0.055648 * X - 0.204043 * Y + 1.057311 * Z) / YN, 2.4)
// return
return(strRGB(R, G, B))
}
real scalar gtrans(real scalar u, real scalar gamma)
{
if (u > 0.00304) return(1.055 * u^(1 / gamma) - 0.055)
else return(12.92 * u)
}
string scalar strRGB(real scalar r, real scalar g, real scalar b)
{
return(strofreal(min((max((0 , trunc(255 * r + 0.5))), 255))) + " " +
strofreal(min((max((0 , trunc(255 * g + 0.5))), 255))) + " " +
strofreal(min((max((0 , trunc(255 * b + 0.5))), 255))))
}
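// Example (editor's note): HCL_to_RGB(240, 50, 70) yields the sRGB triple
// for hue 240, chroma 50, luminance 70; channels that fall outside the
// sRGB gamut are clipped to [0,255] by strRGB().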
// Functions for color interpolation
struct color {
real scalar type // 0 RGB, 1 HSV, 2 CMYK (>1), 3 CMYK (<=1)
real scalar intensity, opacity
real rowvector code
}
string colvector ipolate_colors(string colvector P, real scalar n)
{
/*
P: input vector containing color codes (RGB, CMYK, HSV) or named colors,
possibly including intensity and opacity adjustment
n: number of desired output colors
*/
real scalar r, i
real scalar ctype // 0 RGB, 1 HSV, 2 CMYK (>1), 3 CMYK (<=1)
string colvector S
real matrix C
real colvector from, to, I, O
struct color scalar color
// analyze first element
if ((r = length(P))==0) return("") // empty list
color = parse_color(P[1])
ctype = color.type
// collect colors
C = J(r, cols(color.code), .)
C[1,] = color.code
if (color.intensity<.) {
I = J(r, 1, 1)
I[1] = color.intensity
}
if (color.opacity<.) {
O = J(r, 1, 100)
O[1] = color.opacity
}
for (i=2; i<=r; i++) {
color = parse_color(P[i])
if (color.type!=ctype) ERROR_incompatible_colortypes()
C[i,] = color.code
if (color.intensity<.) {
if (length(I)==0) I = J(r, 1, 1)
I[i] = color.intensity
}
if (color.opacity<.) {
if (length(O)==0) O = J(r, 1, 100)
O[i] = color.opacity
}
}
// if only one color: duplicate to make interpolation work
if (r==1) {
C = C \ C; I = I \ I; O = O \ O
r = 2
}
// interpolate
from = rangen(0, 1, r)
to = rangen(0, 1, n)
if (length(I)>0) I = _ipolate(from, I, to)
if (length(O)>0) O = _ipolate(from, O, to)
if (cols(C)==4) { // CMYK
C = _ipolate(from, C[,1], to), _ipolate(from, C[,2], to),
_ipolate(from, C[,3], to), _ipolate(from, C[,4], to)
}
else { // RGB/HSV
C = _ipolate(from, C[,1], to), _ipolate(from, C[,2], to),
_ipolate(from, C[,3], to)
}
if (ctype==0) C = round(C) // RGB
else if (ctype==1) C = round(C, (1, .001, .001)) // HSV
else if (ctype==2) C = round(C) // CMYK (>1)
else if (ctype==3) C = round(C, .001) // CMYK (<=1)
// return
return( (ctype==1 ? "hsv " : "") :+
strofreal(C[,1]) :+
" " :+ strofreal(C[,2]) :+
" " :+ strofreal(C[,3]) :+
(cols(C)==4 ? " " :+ strofreal(C[,4]) : "") :+
(length(I)>0 ? "*" :+ strofreal(I) : "") :+
(length(O)>0 ? "%" :+ strofreal(O) : "")
)
}
void ERROR_incompatible_colortypes()
{
display("{err}interpolation not possible due to incompatible color specifications")
exit(499)
}
real colvector _ipolate(real colvector x, real colvector y,
real colvector xnew, | real scalar outer)
{ /* renamed mm_ipolate() (version 1.0.6, Ben Jann, 10jul2006) from moremata */
real scalar i, j0b, j0e, j1b, j1e, r, xlo, xup, xi, y0, y1
real colvector p, pnew, ynew
r = rows(x)
if (rows(y)!=r) _error(3200)
if (r<1) return(J(rows(xnew), 1, .))
if (args()<4) outer = 0
p = order(x, 1)
pnew = order(xnew, 1)
ynew = J(rows(xnew),1,.)
xlo = x[p[1]]; xup = x[p[rows(p)]]
j0b = j1e = j0e = 1
for (i=1; i<=rows(xnew); i++) {
xi = xnew[pnew[i]]
if (outer==0) {
if (xi<xlo) continue
if (xi>xup) return(ynew)
}
while (j0e<r) {
if (x[p[j0e+1]]>xi) break
j0e++
if (x[p[j0e]]>x[p[j0b]]) j0b = j0e
}
if (j0e>=j1e) {
j1b = j0e
while (j1b<=r) {
if (x[p[j1b]]>=xi) break
j1b++
}
if (j1b>r) j1b = r
j1e = j1b
while (j1e<r) {
if (x[p[j1e+1]]>x[p[j1b]]) break
j1e++
}
}
y0 = (j0b==j0e ? y[p[j0b]] : mean(y[p[|j0b \ j0e|]],1))
y1 = (j1b==j1e ? y[p[j1b]] : mean(y[p[|j1b \ j1e|]],1))
if (outer) {
if (xi<xlo) {
ynew[pnew[i]] = y1
continue
}
if (xi>xup) {
ynew[pnew[|i \ rows(pnew)|]] = J(rows(pnew)-i+1,1,y0)
return(ynew)
}
}
ynew[pnew[i]] = ( j0e==j1e ? y0 :
y0 + (y1-y0) * (xi-x[p[j0e]])/(x[p[j1b]]-x[p[j0e]]) )
}
return(ynew)
}
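// Example (editor's illustration): with support points x=(0\1), y=(0\10):
//   _ipolate((0\1), (0\10), (0\.25\1)) -> (0\2.5\10)
// i.e. plain linear interpolation between the bracketing support points.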
struct color scalar parse_color(string scalar s)
{
/* function to parse a color specification; also reads the definitions
of named colors from style files */
real scalar l
string colvector tmp
struct color scalar color
tmp = tokens(s, " *%")
l = length(tmp)
if (l<1) ERROR_invalid_color(s)
// check for hsv prefix
if (tmp[1]=="hsv") {
if (l<4) ERROR_invalid_color(s)
color.type = 1
tmp = tmp[|2 \ l|]
l = length(tmp)
}
else if (strtoreal(tmp[1])>=.) { // named color
if (l>1) tmp = read_colorstyle(tmp[1]), tmp[|2 \ l|]
else tmp = read_colorstyle(tmp[1])
l = length(tmp)
}
if (l<3) ERROR_invalid_color(s)
// read first three codes
color.code = strtoreal(tmp[|1 \ 3|])
if (missing(color.code)) ERROR_invalid_color(s)
if (l==3) {
if (missing(color.type)) color.type = 0 // RGB
return(color)
}
tmp = tmp[|4 \ l|]
l = length(tmp)
// read fourth code (CMYK)
if (strtoreal(tmp[1])<.) {
color.code = color.code, strtoreal(tmp[1])
if (missing(color.type)) color.type = 2 // CMYK
else ERROR_invalid_color(s)
if (all(color.code:<=1)) color.type = 3 // CMYK (<=1)
if (l==1) return(color)
tmp = tmp[|2 \ l|]
l = length(tmp)
}
else if (missing(color.type)) color.type = 0 // RGB
// read intensity and transparency
if (tmp[1]=="*") {
if (l<2) ERROR_invalid_color(s)
color.intensity = strtoreal(tmp[2])
if (missing(color.intensity)) ERROR_invalid_color(s)
if (l==2) return(color)
tmp = tmp[|3 \ l|]
l = length(tmp)
if (l!=2) ERROR_invalid_color(s)
if (tmp[1]!="%") ERROR_invalid_color(s)
color.opacity = strtoreal(tmp[2])
if (missing(color.opacity)) ERROR_invalid_color(s)
return(color)
}
// read transparency and intensity
if (tmp[1]=="%") {
if (l<2) ERROR_invalid_color(s)
color.opacity = strtoreal(tmp[2])
if (missing(color.opacity)) ERROR_invalid_color(s)
if (l==2) return(color)
tmp = tmp[|3 \ l|]
l = length(tmp)
if (l!=2) ERROR_invalid_color(s)
if (tmp[1]!="*") ERROR_invalid_color(s)
color.intensity = strtoreal(tmp[2])
if (missing(color.intensity)) ERROR_invalid_color(s)
return(color)
}
// can only be reached if invalid
ERROR_invalid_color(s)
}
string rowvector read_colorstyle(string scalar s)
{
/* read RGB code of named color from style file */
real scalar fh
string scalar fn, line
string rowvector tok
string matrix EOF
if (!st_isname(s)) ERROR_invalid_color(s)
fn = findfile("color-"+s+".style")
if (fn=="") ERROR_color_not_found(s)
fh = fopen(fn, "r")
EOF = J(0, 0, "")
while ((line=fget(fh))!=EOF) {
line = strtrim(stritrim(line))
if (substr(line, 1, 8)=="set rgb ") {
tok = tokens(substr(line, 9, .)) // expect a single (quoted) token, e.g. "85 170 255"
if (length(tok)!=1) continue
fclose(fh)
return(tokens(tok[1]))
}
}
fclose(fh)
ERROR_color_not_found(s)
}
void ERROR_invalid_color(string scalar s)
{
display("{err}" + s + ": invalid color specification")
exit(499)
}
void ERROR_color_not_found(string scalar s)
{
display("{err}" + s + ": color definition not found")
exit(499)
}
end
exit
|
*! version 1.0.1 27dec2018 Ben Jann
program linepalette
version 9.2
capt _on_colon_parse `0'
if _rc==0 {
local 0 `"`s(before)'"'
local rhs `"`s(after)'"'
_parse comma lhs 0 : 0
if `"`lhs'"'!="" error 198
if `"`rhs'"'=="" local rhs default
local palettes
local palette
local space
while (`"`rhs'"'!="") {
gettoken p rhs : rhs, parse("/") quotes bind
if `"`p'"'=="/" {
local palettes `"`palettes'`"`palette'"' "'
local palette
local space
continue
}
local palette `"`palette'`space'`p'"'
local space " "
}
if `"`palette'"'!="" {
local palettes `"`palettes'`"`palette'"'"'
}
Graph2 `palettes' `0'
exit
}
Palette_Get `0'
if "`GRAPH'"=="" {
tempname hcurrent
_return hold `hcurrent'
_return restore `hcurrent', hold // make copy
Graph, `GROPTS'
_return restore `hcurrent'
}
end
/*----------------------------------------------------------------------------*/
/* retrieve palette */
/*----------------------------------------------------------------------------*/
program Palette_Get, rclass
syntax [anything(name=palette id="palette" everything equalok)] ///
[, noGRaph GRopts(str asis) TItle(passthru) LWidth(passthru) rows(passthru) ///
N(numlist max=1 integer >=1) Select(numlist integer >=1) Reverse * ]
c_local GRAPH "`graph'"
c_local GROPTS `"`rows' `title' `lwidth' `gropts'"'
// get palette
if `"`palette'"'=="" local palette default
local islist = (`: list sizeof palette'!=1)
if `islist'==0 {
capt confirm name _`palette'
if _rc local islist 1
}
if `islist'==0 {
capt _Palette_Get `palette', n(`n') `options'
if _rc==199 {
capt confirm name `palette'
if _rc { // numeric palette name: cannot be a named style
di as err `"palette `palette' not found"'
exit 198
}
local islist 1
}
else if _rc { // display error message
_Palette_Get `palette', n(`n') `options'
}
}
if `islist' {
local i 0
foreach p of local palette {
local ++i
local p`i' `"`p'"'
}
local n `i'
local palette "custom"
}
// select/order
if "`reverse'"!="" {
if "`select'"=="" {
qui numlist "`n'(-1)1"
local select `r(numlist)'
}
else {
local select0 `select'
local select
foreach s of local select0 {
local select `s' `select'
}
}
}
else if "`select'"=="" {
qui numlist "1/`n'"
local select `r(numlist)'
}
// return palette
local plist
local i 0
foreach j of local select {
if `"`p`j''"'!="" {
local ++i
local plist `"`plist'`space'`"`p`j''"'"'
local space " "
return local p`i' `"`p`j''"'
return local p`i'info `"`p`j'info'"'
}
}
local n `i'
local plist: list clean plist
return local p `"`plist'"'
return local pnote `"`note'"'
return local pname `"`palette'"'
return local ptype "line"
return scalar n = `n'
end
program _Palette_Get
gettoken palette 0 : 0, parse(" ,")
syntax [, n(numlist max=1 integer >0) * ]
linepalette_`palette', n(`n') `options'
if `"`P'"'!="" { // palettes that define P (and I)
mata: st_local("P", _parse_palette(st_local("P")))
mata: st_local("I", _parse_palette(st_local("I")))
local min 1
local max: list sizeof P
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
}
else { // palettes that define P#
local min 1
while (`"`P`min''"'=="") {
local ++min
if `min'>100 {
c_local n 0
exit // emergency exit
}
}
local max `min'
while (`"`P`max''"'!="") {
local ++max
}
local --max
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
local P `"`P`n''"'
mata: st_local("P", _parse_palette(st_local("P")))
mata: st_local("I", _parse_palette(st_local("I")))
}
local i 0
foreach c of local P {
gettoken info I : I
local ++i
if `i'>`n' continue, break
c_local p`i' `"`c'"'
c_local p`i'info `"`info'"'
}
c_local note `"`note'"'
c_local n `n'
end
/*----------------------------------------------------------------------------*/
/* graph of single palette */
/*----------------------------------------------------------------------------*/
program Graph
syntax [, rows(int 5) TItle(passthru) LWidth(passthru) * ]
if `"`lwidth'"'=="" local lwidth lwidth(medthick)
local n = r(n)
local c = max(3,ceil(sqrt(`n'/12*3)))
local cut = max(`rows',ceil(`n'/`c'))
local rows = max(5, `cut')
local lblgap = 10/`rows'
local infogap = 10/`rows'
local j 1
local r 0
forv i=1/`n' {
if `i'>(`cut'*`j') {
local ++j
local r 0
}
local ++r
if `"`r(p`i')'"'=="" continue
local jlab `j'
local j2 = `j' + .6
local jlab = `jlab' + .3
local plots `plots' (pci `r' `j' `r' `j2', recast(pcspike) ///
lpattern(`"`r(p`i')'"') `lwidth' lcolor(black))
local pnum `pnum' `r' `j' "`i'"
local lbl `lbl' `r' `jlab' `"`r(p`i')'"'
if `"`r(p`i'info)'"'!="" {
local info `info' `r' `jlab' `"`r(p`i'info)'"'
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
if `rows'>=30 {
local pnumsize vsmall
local lblsize tiny
local infosize half_tiny
}
else if `rows'>=15 {
local pnumsize small
local lblsize vsmall
local infosize tiny
}
else if `rows'>=10 {
local pnumsize medsmall
local lblsize small
local infosize vsmall
}
else {
local pnumsize medium
local lblsize medsmall
local infosize small
}
local pnum (scatteri `pnum', ms(i) msize(`size') mlabpos(9) ///
mlabgap(`lblgap') mlabsize(`pnumsize') mlabcolor(gray))
if `"`lbl'"'!="" {
local lbl (scatteri `lbl', ms(i) msize(`size') mlabpos(6) ///
mlabgap(`lblgap') mlabsize(`lblsize') mlabcolor(gray))
}
if `"`info'"'!="" {
local info (scatteri `info', ms(i) msize(`size') mlabpos(12) ///
mlabgap(`infogap') mlabsize(`infosize') mlabcolor(gray))
}
else local info
local l = 1/2 + 9
local r = 1/2 + 5
local b = 1/2 + 10
local t = 1/2 + 5
if `"`title'"'=="" {
if `"`r(pnote)'"'=="" local title title(`"`r(pname)'"')
else local title title(`"`r(pname)' `r(pnote)'"')
}
two `plots' `pnum' `lbl' `info' , `title' scheme(s2color) ///
legend(off) ylabel(none) graphr(color(white)) ///
xlabel(none) xscale(range(1 3) off) ///
yscale(range(1 `rows') off reverse) ///
plotr(margin(`l' `r' `b' `t')) graphr(margin(0 0 0 3)) `options'
end
/*----------------------------------------------------------------------------*/
/* graph of multiple palettes */
/*----------------------------------------------------------------------------*/
program Graph2
_parse comma palettes 0 : 0
syntax [, TItle(passthru) LABels(str asis) PLabels(str asis) ///
GRopts(str asis) LWidth(passthru) VERTical HORizontal * ]
if `"`labels'"'!="" local plabels `"`labels'"'
if `"`lwidth'"'=="" local lwidth lwidth(medthick)
local orientation `vertical' `horizontal'
if "`orientation'"=="" local orientation vertical
if "`orientation'"=="horizontal" {
local ii i
local jj j
}
else {
local ii j
local jj i
}
local N 1
local plots
local i 0
foreach p of local palettes {
local ++i
_parse comma pnm popts : p
if `"`popts'"'=="" local popts ,
Palette_Get `pnm' `popts' `options'
local n = r(n)
local N = max(`n',`N')
gettoken plab plabels : plabels
if `"`plab'"'=="" {
if `"`r(pnote)'"'=="" local plab `"`r(pname)'"'
else local plab `"`r(pname)' `r(pnote)'"'
}
local ylab `ylab' `i' `"`plab'"'
forv j=1/`n' {
local plots `plots' ///
(pci ``ii'' `=``jj''-.35' ``ii'' `=``jj''+.35', ///
recast(pcspike) lpattern(`"`r(p`j')'"') `lwidth' lcolor(black))
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
if "`orientation'"=="horizontal" {
local xscale xscale(lstyle(none) range(.5 `N'.5))
local xlabel xlabel(1/`N', notick)
local yscale yscale(lstyle(none) range(.5 `i'.5) reverse)
local ylabel ylabel(`ylab', nogrid notick angle(hor))
}
else {
local xscale xscale(lstyle(none) range(.5 `i'.5) alt)
local xlabel xlabel(`ylab', notick)
local yscale yscale(lstyle(none) range(.5 `N'.5) reverse)
local ylabel ylabel(1/`N', nogrid notick angle(hor))
}
two `plots', `xscale' `xlabel' xti("") `yscale' `ylabel' yti("") ///
legend(off) graphr(margin(l=2 t=2 b=2 r=2) color(white)) ///
scheme(s2color) `title' `gropts'
end
/*----------------------------------------------------------------------------*/
/* palettes */
/*----------------------------------------------------------------------------*/
program linepalette_default
c_local P solid,dash,vshortdash,longdash_dot,longdash,dash_dot,dot, ///
shortdash_dot,tight_dot,dash_dot_dot,longdash_shortdash,dash_3dot, ///
longdash_dot_dot,shortdash_dot_dot,longdash_3dot
end
program linepalette_pplain // plotplain (p#lineplot)
c_local P solid,dash,vshortdash,dot,dash_dot_dot,tight_dot,longdash,dash_dot, ///
shortdash_dot,longdash_dot,longdash_shortdash,dash_3dot,longdash_dot_dot, ///
shortdash_dot_dot,longdash_3dot
end
/*----------------------------------------------------------------------------*/
/* mata */
/*----------------------------------------------------------------------------*/
version 9.2
mata:
mata set matastrict on
string scalar _parse_palette(string scalar p0)
{
real scalar i, j, blank
string rowvector p
p = tokens(p0, ",")
if (length(p)<1) return("")
blank = 1
j = 0
for (i=1;i<=length(p);i++) {
if (p[i]==",") {
if (blank) {
j++
p[j] = ""
}
blank = 1
continue
}
j++
p[j] = strtrim(p[i])
blank = 0
}
return(_invtokens_quoted(p[|1\j|]))
}
string scalar _invtokens_quoted(string vector In)
{
real scalar i
string scalar Out
if (length(In)<1) return("")
Out = "`" + `"""' + In[1] + `"""' + "'"
for (i=2; i<=length(In); i++) {
Out = Out + " `" + `"""' + In[i] + `"""' + "'"
}
return(Out)
}
end
exit
*! version 1.0.2 27dec2018 Ben Jann
program symbolpalette
version 9.2
capt _on_colon_parse `0'
if _rc==0 {
local 0 `"`s(before)'"'
local rhs `"`s(after)'"'
_parse comma lhs 0 : 0
if `"`lhs'"'!="" error 198
if `"`rhs'"'=="" local rhs default
local palettes
local palette
local space
while (`"`rhs'"'!="") {
gettoken p rhs : rhs, parse("/") quotes bind
if `"`p'"'=="/" {
local palettes `"`palettes'`"`palette'"' "'
local palette
local space
continue
}
local palette `"`palette'`space'`p'"'
local space " "
}
if `"`palette'"'!="" {
local palettes `"`palettes'`"`palette'"'"'
}
Graph2 `palettes' `0'
exit
}
Palette_Get `0'
if "`GRAPH'"=="" {
tempname hcurrent
_return hold `hcurrent'
_return restore `hcurrent', hold // make copy
Graph, `GROPTS'
_return restore `hcurrent'
}
end
/*----------------------------------------------------------------------------*/
/* retrieve palette */
/*----------------------------------------------------------------------------*/
program Palette_Get, rclass
syntax [anything(name=palette id="palette" everything equalok)] ///
[, noGRaph GRopts(str asis) TItle(passthru) rows(passthru) ///
N(numlist max=1 integer >=1) Select(numlist integer >=1) Reverse * ]
c_local GRAPH "`graph'"
c_local GROPTS `"`rows' `title' `gropts'"'
// get palette
if `"`palette'"'=="" local palette default
local islist = (`: list sizeof palette'!=1)
if `islist'==0 {
capt confirm name _`palette'
if _rc local islist 1
}
if `islist'==0 {
capt _Palette_Get `palette', n(`n') `options'
if _rc==199 {
capt confirm name `palette'
if _rc { // numeric palette name: cannot be a named style
di as err `"palette `palette' not found"'
exit 198
}
local islist 1
}
else if _rc { // display error message
_Palette_Get `palette', n(`n') `options'
}
}
if `islist' {
local i 0
foreach p of local palette {
local ++i
local p`i' `"`p'"'
}
local n `i'
local palette "custom"
}
// select/order
if "`reverse'"!="" {
if "`select'"=="" {
qui numlist "`n'(-1)1"
local select `r(numlist)'
}
else {
local select0 `select'
local select
foreach s of local select0 {
local select `s' `select'
}
}
}
else if "`select'"=="" {
qui numlist "1/`n'"
local select `r(numlist)'
}
// return palette
local plist
local i 0
foreach j of local select {
if `"`p`j''"'!="" {
local ++i
mata: _translate_symbols("p`j'")
local plist `"`plist'`space'`"`p`j''"'"'
local space " "
return local p`i' `"`p`j''"'
return local p`i'info `"`p`j'info'"'
}
}
local n `i'
local plist: list clean plist
return local p `"`plist'"'
return local pnote `"`note'"'
return local pname `"`palette'"'
return local ptype "symbol"
return scalar n = `n'
end
program _Palette_Get
gettoken palette 0 : 0, parse(" ,")
syntax [, n(numlist max=1 integer >0) * ]
symbolpalette_`palette', n(`n') `options'
if `"`P'"'!="" { // palettes that define P (and I)
mata: st_local("P", _parse_palette(st_local("P")))
mata: st_local("I", _parse_palette(st_local("I")))
local min 1
local max: list sizeof P
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
}
else { // palettes that define P#
local min 1
while (`"`P`min''"'=="") {
local ++min
if `min'>100 {
c_local n 0
exit // emergency exit
}
}
local max `min'
while (`"`P`max''"'!="") {
local ++max
}
local --max
if "`n'"=="" local n `max'
local n = max(`min',min(`max',`n'))
local P `"`P`n''"'
mata: st_local("P", _parse_palette(st_local("P")))
mata: st_local("I", _parse_palette(st_local("I")))
}
local i 0
foreach c of local P {
gettoken info I : I
local ++i
if `i'>`n' continue, break
c_local p`i' `"`c'"'
c_local p`i'info `"`info'"'
}
c_local note `"`note'"'
c_local n `n'
end
/*----------------------------------------------------------------------------*/
/* graph of single palette */
/*----------------------------------------------------------------------------*/
program Graph
syntax [, rows(int 5) TItle(passthru) * ]
local n = r(n)
local c = max(3,ceil(sqrt(`n'/12*3)))
local cut = max(`rows',ceil(`n'/`c'))
local rows = max(5, `cut')
local size = max(2,(100-10)/(1.5*`rows')*.3)
local lblgap = `size'
local infogap = `size'*1.25
local rgap = 100/`c'
local j 1
local r 0
forv i=1/`n' {
if `i'>(`cut'*`j') {
local ++j
local r 0
}
local ++r
if `"`r(p`i')'"'=="" continue
local plots `plots' (scatteri `r' `j', msymbol(`"`r(p`i')'"') ///
msize(`size') mcolor(black))
local pnum `pnum' `r' `j' "`i'"
local lbl `lbl' `r' `j' `"`r(p`i')'"'
if `"`r(p`i'info)'"'!="" {
local info `info' `r' `j' `"`r(p`i'info)'"'
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
if `rows'>=30 {
local pnumsize vsmall
local lblsize tiny
local infosize half_tiny
}
else if `rows'>=15 {
local pnumsize small
local lblsize vsmall
local infosize tiny
}
else if `rows'>=10 {
local pnumsize medsmall
local lblsize small
local infosize vsmall
}
else {
local pnumsize medium
local lblsize medsmall
local infosize small
}
local pnum (scatteri `pnum', ms(i) msize(`size') mlabpos(9) ///
mlabgap(`lblgap') mlabsize(`pnumsize') mlabcolor(gray))
if `"`lbl'"'!="" {
local lbl (scatteri `lbl', ms(i) msize(`size') mlabpos(3) ///
mlabgap(`lblgap') mlabsize(`lblsize') mlabcolor(gray))
}
if `"`info'"'!="" {
local info (scatteri `info', ms(i) msize(`size') mlabpos(4) ///
mlabgap(`infogap') mlabsize(`infosize') mlabcolor(gray))
}
else local info
local l = `size'/2 + 9
local r = `size'/2 + `rgap'
local b = `size'/2 + 5
local t = `size'/2 + 4
if `"`title'"'=="" {
if `"`r(pnote)'"'=="" local title title(`"`r(pname)'"')
else local title title(`"`r(pname)' `r(pnote)'"')
}
two `plots' `pnum' `lbl' `info' , `title' scheme(s2color) ///
legend(off) ylabel(none) graphr(color(white)) ///
xlabel(none) xscale(range(1 3) off) ///
yscale(range(1 `rows') off reverse) ///
plotr(margin(`l' `r' `b' `t')) graphr(margin(0 0 0 3)) `options'
end
/*----------------------------------------------------------------------------*/
/* graph of multiple palettes */
/*----------------------------------------------------------------------------*/
program Graph2
_parse comma palettes 0 : 0
syntax [, TItle(passthru) LABels(str asis) PLabels(str asis) ///
GRopts(str asis) MSIZe(passthru) VERTical HORizontal * ]
if `"`labels'"'!="" local plabels `"`labels'"'
if `"`msize'"'=="" local msize msize(large)
local orientation `vertical' `horizontal'
if "`orientation'"=="" local orientation horizontal
if "`orientation'"=="horizontal" {
local ii i
local jj j
}
else {
local ii j
local jj i
}
local N 1
local plots
local i 0
foreach p of local palettes {
local ++i
_parse comma pnm popts : p
if `"`popts'"'=="" local popts ,
Palette_Get `pnm' `popts' `options'
local n = r(n)
local N = max(`n',`N')
gettoken plab plabels : plabels
if `"`plab'"'=="" {
if `"`r(pnote)'"'=="" local plab `"`r(pname)'"'
else local plab `"`r(pname)' `r(pnote)'"'
}
local ylab `ylab' `i' `"`plab'"'
forv j=1/`n' {
local plots `plots' (scatteri ``ii'' ``jj'', ///
msymbol(`"`r(p`j')'"') `msize' mcolor(black))
}
}
if `"`plots'"'=="" {
di as txt "(nothing to display)"
exit
}
if "`orientation'"=="horizontal" {
local xscale xscale(lstyle(none) range(.5 `N'.5))
local xlabel xlabel(1/`N', notick)
local yscale yscale(lstyle(none) range(.5 `i'.5) reverse)
local ylabel ylabel(`ylab', nogrid notick angle(hor))
}
else {
local xscale xscale(lstyle(none) range(.5 `i'.5) alt)
local xlabel xlabel(`ylab', notick)
local yscale yscale(lstyle(none) range(.5 `N'.5) reverse)
local ylabel ylabel(1/`N', nogrid notick angle(hor))
}
two `plots', `xscale' `xlabel' xti("") `yscale' `ylabel' yti("") ///
legend(off) graphr(margin(l=2 t=2 b=2 r=2) color(white)) ///
scheme(s2color) `title' `gropts'
end
/*----------------------------------------------------------------------------*/
/* palettes */
/*----------------------------------------------------------------------------*/
program symbolpalette_synonyms // all available symbols listed in symbolstyle
c_local P O,D,T,S,+,X,A,a,|,V,o,d,s,t,smplus,x,v,Oh,Dh,Th,Sh,oh,dh,th,sh,p,i
end
program symbolpalette_default
c_local P circle,diamond,square,triangle,X,plus,circle_hollow,diamond_hollow, ///
square_hollow,triangle_hollow,smcircle,smdiamond,smsquare,smtriangle,smx
end
program symbolpalette_lean
c_local P circle_hollow,circle,plus,diamond_hollow,diamond,square_hollow,square, ///
X,triangle_hollow,triangle,smcircle,smdiamond_hollow,smsquare,smtriangle_hollow,smx
end
program symbolpalette_tufte
c_local P circle_hollow,diamond_hollow,square_hollow,plus,circle,diamond,square, ///
X,triangle_hollow,triangle,smcircle,smdiamond_hollow,smsquare,smtriangle_hollow,smx
end
program symbolpalette_pplain // plotplain
c_local P circle_hollow,square_hollow,diamond_hollow,triangle_hollow,plus,X, ///
circle,diamond,square,triangle,smcircle,smdiamond,smsquare,smtriangle,smx
end
program symbolpalette_pblind // plotplainblind
c_local P circle_hollow,square_hollow,diamond_hollow,triangle_hollow,plus,X, ///
smcircle,smdiamond,smsquare,smtriangle,smx,circle,square,triangle
end
/*----------------------------------------------------------------------------*/
/* mata */
/*----------------------------------------------------------------------------*/
version 9.2
mata:
mata set matastrict on
string scalar _parse_palette(string scalar p0)
{
real scalar i, j, blank
string rowvector p
p = tokens(p0, ",")
if (length(p)<1) return("")
blank = 1
j = 0
for (i=1;i<=length(p);i++) {
if (p[i]==",") {
if (blank) {
j++
p[j] = ""
}
blank = 1
continue
}
j++
p[j] = strtrim(p[i])
blank = 0
}
return(_invtokens_quoted(p[|1\j|]))
}
string scalar _invtokens_quoted(string vector In)
{
real scalar i
string scalar Out
if (length(In)<1) return("")
Out = "`" + `"""' + In[1] + `"""' + "'"
for (i=2; i<=length(In); i++) {
Out = Out + " `" + `"""' + In[i] + `"""' + "'"
}
return(Out)
}
void _translate_symbols(string scalar lname)
{
real scalar i
string scalar p
string rowvector in, from, to
from = ("O", "D", "T", "S", "+", "A", "a", "|", "o", "d", "s",
"t", "x", "v", "Oh", "Dh", "Th", "Sh", "oh", "dh", "th", "sh", "p", "i")
to = ("circle", "diamond", "triangle", "square", "plus", "arrowf",
"arrow", "pipe", "smcircle", "smdiamond", "smsquare",
"smtriangle", "smx", "smv", "circle_hollow", "diamond_hollow",
"triangle_hollow", "square_hollow", "smcircle_hollow",
"smdiamond_hollow", "smtriangle_hollow", "smsquare_hollow", "point",
"none")
in = st_local(lname)
if (in=="") return
in = tokens(in)
p = select(to, from:==in[1])
if (length(p)==1) {
for (i=2;i<=length(in);i++) p = p + " " + in[i]
st_local(lname, p)
}
}
end
exit
*! version 1.0.1 10May2020
*! author: Gabriele Rovigatti, Bank of Italy, Rome, Italy. mailto: gabriele.rovigatti@gmail.com | gabriele.rovigatti@bancaditalia.it
/***************************************************************************
** Stata program for Markup Estimation
** Programmed by: Gabriele Rovigatti
**************************************************************************/
/*
Command structure:
- MICRO
- 1) DLW
Estimation Type:
|- PRODEST (CUSTOM VERSION)
|- SAVED PRODEST RESULTS
- PREDICT MARKUPS
- MACRO
- 2.1) HALL (1988,2018)
Class:
|- GROSS OUTPUT
Options and Postestimation:
|- TIME-VARYING
|- MEASUREMENT ERROR CORRECTION
- 2.2) ROEGER (1995)
*/
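/* Hedged usage sketch (the variable names below are illustrative placeholders,
   not defined anywhere in this file):

   * micro (DLW) markups, estimating the production function on the fly:
   . markupest mkup, method(dlw) output(lny) free(lnl) state(lnk) proxy(lnm) inputvar(lnl) replace

   * macro (Roeger 1995) markups from gross output and input cost shares:
   . markupest mkup_r, method(roeger) go(go) inputs(labcost capcost) replace
*/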
// ********************* //
capture program drop markupest
program define markupest, sortpreserve eclass byable(recall)
version 10.0
syntax name [if] [in] [, METhod(name) replace SAVEBeta VERBose id(varlist min=1 max=1) t(varlist min=1 max=1) output(varlist numeric min=1 max=1) free(varlist numeric min=1) /*
*/ state(varlist numeric min=1 max=1) proxy(varlist numeric min=1) VALueadded PRODESTOPTions(string) CORRected ESTimates(name) INPUTvar(varlist numeric min=1) /*
*/ GO(varlist numeric min=1 max=1) deltago(varlist min=1 max=1) pgo(varlist min=1 max=1) INputs(varlist numeric min=1) INSTRuments(varlist numeric min=1) /*
*/ DELTAvars(varlist numeric min=1) PRICEvars(varlist numeric min=1) TIMEVarying HGraph pmethod(name) W(varlist numeric min=1 max=1)]
loc vv: di "version " string(min(max(10,c(stata_version)),14.0)) ", missing:" // require at least version 10 and cap at version 14.0 (not 14.2)
loc markupvar `namelist'
marksample touse
markout `touse' `output' `free' `state' `proxy' `go' `w' `instruments'
**************** COMMAND CHECKS *********************
if !inlist("`method'", "dlw", "hall", "roeger"){ // if the users specifies a wrong method
di as error "Allowed methods are dlw, hall and roeger"
exit 198
}
/* Indicate either Micro or Macro methods */
loc micro = 0
if "`method'" == "dlw" | !mi("`output'") | !mi("`free'") | !mi("`state'") | !mi("`proxy'") | !mi("`estimates'"){
loc micro = 1
}
/* method definition in case of missing */
if mi("`method'") & `micro' == 1{ // in case of micro the only available method is DLW
di as error "'method()' is missing. The default 'dlw' (micro) is used."
loc method dlw
}
else if mi("`method'") & `micro' == 0{ // in case of macro, Roeger is the default
di as error "'method()' is missing. The default 'roeger' (macro) is used."
loc method roeger
}
/* check whether data is xtset. In case it is not, xtset it with the specified variables */
if (mi("`id'") | mi("`t'")) {
capture xtset
if (_rc != 0) { // if not xtset, the user must specify both panel and time variables
di as error "You must either xtset your data or specify both id() and t()"
error _rc
}
else {
loc id = r(panelvar)
loc t = r(timevar)
}
}
else {
qui xtset `id' `t'
}
loc byRep = _byindex() // generate an indicator for the by repetition. If it is not the first, we should take care of that
/* check whether the variable already exists */
cap confirm variable `markupvar', exact // check whether the markup variable exists already
if _rc == 0 & `byRep' == 1{
if !mi("`replace'"){ // if the user specified "replace", drop the current variable
drop `markupvar'
}
else{
di as error "`markupvar' already exists"
exit 198
}
}
else if _rc != 0 & `byRep' == 1 & !mi("`replace'"){ // in case the user specifies "replace", but there is no variable
di as error "`markupvar' does not exist; nothing to replace"
}
/* ROUTINE START */
if `micro' == 1{
if mi("`inputvar'"){
di as error "You must specify the <inputvar> whose elasticity is used to compute markups"
exit 198
}
************************************************************************************
**************** MICRO PART: PRODEST + OUTPUT SHARE CALCULATION ********************
************************************************************************************
/* 1) DLW */
if !mi("`output'") | !mi("`free'") | !mi("`state'") | !mi("`proxy'"){ // if the user specifies at least one of the variables, this is the "micro" run
if mi("`output'"){ // check whether there are missing "parts" in the command launched
di as error "Specify an <output> variable "
exit 198
}
else if mi("`free'"){
di as error "Specify at least one <free> variable "
exit 198
}
else if mi("`state'"){
di as error "Specify at least one <state> variable "
exit 198
}
else if mi("`proxy'"){
di as error "Specify at least one <proxy> variable "
exit 198
}
if mi("`pmethod'"){ // the default method is Levinsohn-Petrin
loc pmethod lp
}
if !mi("`corrected'") & !regexm("`prodestoptions'", "fsres") & !inlist("`pmethod'", "wrdg", "mr", "rob") { // if the user asks for corrected markups but did not request first-stage residuals, store them in a tempvar
tempvar fsresid
loc fscorr fsresiduals(`fsresid')
}
/* run prodest with the specified variables + options */
prodest_m `output' if `touse' == 1, free(`free') state(`state') proxy(`proxy') met("`pmethod'") `valueadded' `fscorr' `prodestoptions'
if !mi("`verbose'"){
_coef_table
}
if !mi("`savebeta'"){
mat beta = e(b)
loc betanum = colsof(beta)
loc betanames: colnames beta
forv v = 1/`betanum'{
loc betaname `: word `v' of `betanames''
qui g _b`betaname' = beta[1, `v']
}
}
/* run the PREDICT */
if !mi("`e(FSres)'"){ // if prodest is run with first-stage residuals, correct for them - otherwise go with the uncorrected
qui predict `markupvar' if `touse' == 1, markups inputvar(`inputvar') corrected rep(`byRep')
}
else{
qui predict `markupvar' if `touse' == 1, markups inputvar(`inputvar') rep(`byRep')
}
}
else if !mi("`estimates'") { // estimates stored
estimates restore `estimates'
loc free `e(free)'
loc state `e(state)'
loc proxy `e(proxy)'
loc output `e(depvar)'
/* run the PREDICT */
if !mi("`e(FSres)'"){ // if prodest is run with first-stage residuals, correct for them - otherwise go with the uncorrected
qui predict `markupvar' if `touse' == 1, markups inputvar(`inputvar') corrected rep(`byRep')
}
else{
qui predict `markupvar' if `touse' == 1, markups inputvar(`inputvar') rep(`byRep')
}
}
}
else{
******************************************************************************************
**************** MACRO PART: HALL (1988), ROEGER (1995) and HALL (2018) ******************
******************************************************************************************
/* Listing macro-specific errors */
if mi("`go'"){
di as error "You must specify <GO> variable"
exit 198
}
loc nInputs: word count `inputs' // check that there are at least capital and labor inputs specified
if mi("`inputs'") | `nInputs' < 2{
di as error "You must specify at least two <INputs> (K and L)"
exit 198
}
if "`method'" == "hall" {
**********************************************************************
/* 2.1) Hall method */
**********************************************************************
*** Sanity checks ***
if mi("`instruments'"){
di as error "hall method requires at least one <INSTRument>"
exit 198
}
if !mi("`valueadded'") & `nInputs' < 3{ // with Value Added models, there must be at least 3 inputs (Labor, Capital, and Materials)
di as error "With VA models, you must specify at least three <INputs> (K, L, and M)"
exit 198
}
if mi("`pricevars'") & mi("`deltavars'"){ // input
di as error "You should specify either <DELTAvars> or <PRICEvars>"
exit 198
}
loc nDeltaPrice: word count `deltavars' `pricevars' // check that the number of deltavars or pricevars matches the number of input variables
if `nDeltaPrice' != `nInputs'{
di as error "The number of <DELTAvars> or <PRICEvars> specified must equal the number of <INputs>"
exit 198
}
// Left-hand side: sum of shares + changes in input
if mi("`deltavars'"){ // generate deltavars --> requires the "pricevars" - i.e., the changes in prices
loc i = 0
foreach var of varlist `inputs'{
tempvar d`var' delta`var'
loc i = `i' + 1
loc pvar: word `i' of `pricevars'
qui g `d`var'' = (`var' - l.`var') / l.`var' `if' // generate the changes in the "gross" variable
qui g `delta`var'' = `d`var'' - `pvar' // subtract the changes in prices to obtain the changes in input
loc deltavars `deltavars' `delta`var''
}
}
// generate the sum of weighted inputs
loc i = 0 //counter for inputs
tempvar lhs
qui g `lhs' = 0
foreach dvar of varlist `deltavars'{
tempvar _alpha`var'
loc i = `i' + 1
loc invar: word `i' of `inputs' // this is the level variable relative to the delta we are considering
qui g `_alpha`var'' = `invar' / `go' // generate the alphas of input variables - that is, the ratio between the level of inputvar cost, and gross output
qui replace `lhs' = `lhs' + `_alpha`var'' * `dvar' // sum of weighted input variables changes
}
// Right-hand side: changes in output
tempvar rhs
if !mi("`deltago'"){
qui g `rhs' = `deltago'
}
else{ // generate the changes in sectoral output / value added
tempvar dGO deltaGO
qui g `dGO' = (`go' - l.`go') / l.`go' // this is the change in Gross Output = P * Y
qui g `deltaGO' = `dGO' - `pgo' // here we generate delta Y
qui g `rhs' = `deltaGO'
}
}
// Time-varying markups
if !mi("`timevarying'"){
tempvar twdeltaGO tweights
qui su `t' `if'
loc avgY = `r(min)' + int( (`r(max)' - `r(min)') / 2 )
g `tweights' = `t' - `avgY' // linear time trend
qui g `twdeltaGO' = `tweights' * `rhs' // `rhs' holds the output change in both branches
foreach var of varlist `instruments'{
tempvar tw`var'
qui g `tw`var'' = `tweights' * `var'
loc twinstruments `twinstruments' `tw`var''
}
}
// Actual estimation
qui ivreg2 `lhs' (`rhs' `twdeltaGO' = `instruments' `twinstruments') if `touse' == 1, nocons
if `byRep' == 1{ // when using "by()", discriminate between the first and the following rounds
if mi("`timevarying'"){
qui g `markupvar' = 1 / _b[`rhs'] if `touse' == 1
}
else{
qui g _psi_ = -_b[`twdeltaGO'] if `touse' == 1
qui g `markupvar' = 1 / _b[`rhs'] - (_b[`twdeltaGO'] * `tweights') if `touse' == 1
}
}
else{ //
tempvar _tmp_
if mi("`timevarying'"){
qui g `_tmp_' = 1 / _b[`rhs'] if `touse' == 1
}
else{
qui replace _psi_ = -_b[`twdeltaGO'] if `touse' == 1
qui g `_tmp_' = 1 / _b[`rhs'] - (_b[`twdeltaGO'] * `tweights') if `touse' == 1
}
qui replace `markupvar' = `_tmp_' if `touse' == 1
}
// post-estimation cleaning
if !mi("`hgraph'") & _bylastcall() == 1{ // plot the resulting, corrected distribution - if it is the last "by" call - or the unique one
preserve
tempvar densT densWerror
collapse (mean) `markupvar', by(`id')
qui drop if `markupvar' < 0 // way to ensure that initial sigma > 0 --> which ensures that the estimation is feasible
qui g muMinus1 = `markupvar' - 1
cap qui mata: opt_hall()
clear
set obs 1000000
qui g `densT' = 1 + exp(rnormal(delta, sigma)) // markup distribution net of sampling error
qui g `densWerror' = rnormal(meanM, sdTotM ) // estimated markup distribution, including sampling error
qui replace `densT' = 0 if _n == 1
tw (kdensity `densT' if `densT' >= 0 & `densT' < 3, lw(thick) lc(maroon)) (kdensity `densWerror' if `densWerror' >= 0 /*
*/ & `densWerror' < 3, lw(thick) lc(ebblue) ), xtitle("Ratio of price to marginal cost {&mu}") xlab(0(0.25)3) /*
*/ legend(order(1 "{&mu} w/o sampling error" 2 "Estimated {&mu}")) ytitle("Probability Density")
restore
}
}
else if "`method'" == "roeger"{
**********************************************************************
/* 2.2) Roeger method */
**********************************************************************
if mi("`go'"){ // check whether there are missing "parts" in the command launched
di as error "Specify the <GO> variable "
exit 198
}
if !mi("`w'"){
loc weights "[aw=`w']"
}
loc nInputs: word count `inputs' // this is the total number of inputs
tempvar dGO
qui g `dGO' = (`go' - l.`go') / l.`go' // changes in gross output
/* generate the deltavars for Roeger: SUBSTITUTE THE PREVIOUS CODE */
foreach var of varlist `inputs'{
tempvar d`var'
qui g `d`var'' = (`var' - l.`var') / l.`var'
loc ddvars `ddvars' `d`var''
}
// generate the sum of weighted inputs
loc i = 0 //counter for inputs
tempvar lhs allalphas
qui g `lhs' = `dGO'
qui g `allalphas' = 0
foreach dvar of varlist `ddvars'{
tempvar _alpha`var'
loc i = `i' + 1
loc invar: word `i' of `inputs' // this is the level variable relative to the delta we are considering
if `i' < `nInputs'{ // generate alphas for each input but capital, which has the "residual" share
qui g `_alpha`var'' = `invar' / `go' // generate the alphas of input variables - that is, the ratio between the level of inputvar cost, and gross output
qui replace `allalphas' = `allalphas' + `_alpha`var''
}
else{ // it should NOT change anything if the user has complete data
qui g `_alpha`var'' = 1 - `allalphas' if `touse' == 1 // this is the residual alpha for capital
}
qui replace `lhs' = `lhs' - `_alpha`var'' * `dvar' // sum of weighted input variables changes
}
// Right-hand side: changes in output
tempvar rhs deltaRtK
loc capitalvar: word `nInputs' of `inputs'
qui g `deltaRtK' = (`capitalvar' - l.`capitalvar') / l.`capitalvar' `if' // this is the gross change in capital
qui g `rhs' = `dGO' - `deltaRtK'
// Actual estimation
qui reg `lhs' `rhs' `weights' if `touse' == 1, nocons
if `byRep' == 1{ // when using "by()", discriminate between the first and the following rounds
qui g `markupvar' = 1 / (1 - _b[`rhs']) if `touse' == 1
}
else{ //
tempvar _tmp_
qui g `_tmp_' = 1 / (1 - _b[`rhs']) if `touse' == 1
qui replace `markupvar' = `_tmp_' if `touse' == 1
}
}
}
/* return the locals in e() */
eret clear
eret loc cmd "markupest"
eret loc markupvar "`markupvar'"
eret loc method "`method'"
eret loc id "`id'"
eret loc t "`t'"
if `micro' == 1 {
eret loc markuptype "micro"
eret loc inputvar "`inputvar'"
if !mi("`estimates'"){
eret loc output "`output'"
eret loc free "`free'"
eret loc state "`state'"
eret loc proxy "`proxy'"
eret loc PFest_method "`pmethod'"
}
else{
eret loc estimate_name "`estimates'"
}
}
else{
eret loc markuptype "macro"
eret loc inputs "`inputs'"
if !mi("`instruments'"){
eret loc instruments "`instruments'"
}
if !mi("`deltavars'"){
eret loc deltavars "`deltavars'"
eret loc deltago "`deltago'"
}
else if !mi("`pricevars'"){
eret loc pricevars "`pricevars'"
eret loc pgo "`pgo'"
}
if !mi("`timevarying'"){
eret loc hall_mkuptype "time-varying"
}
}
if !mi("`prodestoptions'"){
eret loc prodestOPTS "`prodestoptions'"
}
if !mi("`corrected'"){
eret loc corrected "corrected"
}
end
/*---------------------------------------------------------------------*/
capture mata mata drop hallsolver()
capture mata mata drop opt_hall()
mata:
/*------------ MATA ROUTINE FOR HALL (2018) POSTESTIMATION ------------*/
void hallsolver(todo, p, M, lnf, S, H)
{
gamma = p[1]
delta = p[2]
sigma = p[3]
M1 = M[1]
M2 = M[2]
M3 = M[3]
lnf = ( exp(delta + 0.5 * (sigma^2) ) - M1)^2 \
( (gamma^2) + exp( (2 * delta) + (2 * sigma^2) ) - M2 )^2 \
( exp( (3 * delta) + (9/2) * sigma^2 ) + 3 * gamma^2 * exp( delta + (1/2) * sigma^2 ) - M3 )^2
}
/*---------------------------------------------------------------------*/
void opt_hall()
{
st_view(MUminus1=., ., "muMinus1")
MUminus1_sq = MUminus1:^2
MUminus1_cb= MUminus1:^3
M = mean(MUminus1), mean(MUminus1_sq), mean(MUminus1_cb)
S = optimize_init()
optimize_init_argument(S, 1, M)
optimize_init_evaluator(S, &hallsolver())
optimize_init_evaluatortype(S, "v0")
optimize_init_params(S, (1, 1, 1) )
optimize_init_which(S, "min" )
optimize_init_tracelevel(S, "none")
optimize_init_conv_ptol(S, 1e-16)
optimize_init_conv_vtol(S, 1e-16)
p = optimize(S)
// these are the three main elements
gamma = p[1]
delta = p[2]
sigma = p[3]
// compute means and variances - etas, nus and markups
meanV = exp( delta + (sigma^2 / 2) ) // E[ exp( log(v) ) ] = exp( mu + sigma^2 / 2 )
varV = ( exp(sigma^2 ) - 1 ) * exp(2 * delta + sigma^2) // Var[ exp( log(v) ) ] = ( exp(sigma^2 ) - 1 ) exp(2 * mu + sigma^2)
sdV = (varV)^(1/2)
meanM = 1 + meanV
sdM = sdV // the standard deviation of the markup is the standard deviation of nu, i.e., the estimated markups net of the disturbance
sdTotM = (varV + gamma^2)^(1/2) // this is the standard deviation of the estimated markups
st_numscalar("delta", delta)
st_numscalar("sigma", sigma)
st_numscalar("meanM", meanM)
st_numscalar("sdTotM", sdTotM)
}
/*---------------------------------------------------------------------*/
end
*! version 1.0.1 10May2020
*! author: Gabriele Rovigatti, Bank of Italy, Rome, Italy. mailto: gabriele.rovigatti@gmail.com | gabriele.rovigatti@bancaditalia.it
/***************************************************************************
** Stata program for Markup Estimation - custom PRODEST version
** Programmed by: Gabriele Rovigatti
**************************************************************************/
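/* Hedged usage sketch (the variable names below are illustrative placeholders,
   not defined anywhere in this file):
   . prodest_m lny, free(lnl) state(lnk) proxy(lnm) met("lp") valueadded
*/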
capture program drop prodest_m
program define prodest_m, sortpreserve eclass
version 10.0
syntax varlist(numeric min=1 max=1) [if] [in], free(varlist numeric min=1) /*
*/ proxy(varlist numeric min=1 max=2) state(varlist numeric min=1) /*
*/ [control(varlist min=1) ENDOgenous(varlist min=1) id(varlist min=1 max=1) t(varlist min=1 max=1) reps(integer 5) /*
*/ VAlueadded Level(int ${S_level}) OPTimizer(namelist min=1 max=3) MAXiter(integer 10000) /* optimizer: one may also chain optimizers with their iteration counts, e.g. technique(nr 100 nm 1000).
*/ poly(integer 3) METhod(name min=1 max=1) lags(integer 999) TOLerance(real 0.00001) gmm /*
*/ ATTrition ACF INIT(string) FSRESiduals(name min=1 max=1) EVALuator(string) TRANSlog OVERidentification]
loc vv: di "version " string(min(max(10,c(stata_version)),14.0)) ", missing:" // require at least version 10 and cap at version 14.0 (not 14.2)
loc depvar `varlist'
marksample touse
markout `touse' `free' `proxy' `state' `control' `endogenous' `id' `t'
/// checks for same variables in state - free - proxy - controls
loc chk1: list free & state
loc chk2: list free & control
loc chk3: list free & proxy
loc chk4: list state & control
loc chk5: list state & proxy
loc chk6: list control & proxy
forval i = 1/6{
if "`chk`i''" != ""{
di as error "Same variables in free, state, control or proxy."
exit 198
}
}
/// check whether there are more than one state AND more than one proxy
loc pnum: word count `proxy'
loc snum: word count `state'
if `pnum' > 1 & `snum' > 1{
di as error "Cannot specify multiple state AND multiple proxy variables"
exit 198
}
/// check for unavailable choice of models
if (!inlist("`method'", "op", "lp", "wrdg", "mr", "rob") & !mi("`method'")){
di as error "Allowed prodest methods are op, lp, wrdg, mr or rob. Default is lp"
exit 198
}
else if ("`method'" == "mr" & c(stata_version) < 14.2){
di as error "MrEst only available with Stata version 14.2 or higher"
exit 198
}
else if mi("`method'"){
loc method "lp"
}
/// Check for impractical values of the polynomial approximation
if (`poly' >= 7 | `poly' < 2){
di as error "Polynomial degree must lie between 2 and 6"
exit 198
}
/// Syntax check: is data xtset?
if (mi("`id'") | mi("`t'")) {
capture xtset
if (_rc != 0) {
di as error "You must either xtset your data or specify both id() and t()"
error _rc
}
else {
loc id = r(panelvar)
loc t = r(timevar)
}
}
else {
qui xtset `id' `t'
}
/// Check for a valid confidence level
if (`level' < 10 | `level' > 99) {
di as error "confidence level must be between 10 and 99"
error 198
}
/// Value added or gross output?
if mi("`valueadded'"){
loc model "grossoutput"
loc proxyGO `proxy'
loc lproxyGO l_gen`proxy'
}
else {
loc model "valueadded"
}
/// Number of repetitions reasonable?
if (`reps' < 2) & ("`method'" != "wrdg" & "`method'" != "mr"){
di as error "reps() must be at least 2"
exit 198
}
/// optimizer choice
if (!mi("`optimizer'") & !inlist("`optimizer'", "nm", "nr", "dfp", "bfgs", "bhhh", "gn")){
di as error "Allowed optimizers are nm, nr, dfp, bfgs, and bhhh (or gn for wrdg and mr). Default is nm, or gn for wrdg and mr"
exit 198
}
else if inlist("`method'", "wrdg", "mr") & inlist("`optimizer'", "nm", "bhhh"){
di as error "`optimizer' is not allowed with `method' method. Optimizer switched to gn (default)"
loc optimizer "gn"
}
else if mi("`optimizer'") & !inlist("`method'", "wrdg", "mr"){
loc optimizer "nm"
}
else if mi("`optimizer'"){
loc optimizer "gn"
}
/// number of lags in the MR methodology
if ("`method'" == "mr" & mi("`lags'")){
di as error "Method 'mr' requires a specification for lags. Switched to 'all possible lags' (default)"
loc lags = .
}
else if ("`method'" == "mr" & (`lags' < 1)) {
di as error "Minimum lag is 1"
exit 198
}
else if ("`method'" == "mr" & (`lags' == 999)) {
loc lags = .
}
/// check ACF correction in the wrdg and mr cases
if ("`method'" == "wrdg" | "`method'" == "mr") & !mi("`acf'"){
di as error "`method' does not support ACF correction. Estimation run on baseline model"
loc acf ""
}
/// feasible values of tolerance
if mi(`tolerance') | `tolerance' > 0.01{
di as error "maximum value of tolerance is 0.01. Changed to default value of 1e-5"
loc tolerance = 0.00001
}
if ("`method'" == "op") {
loc proxyGO ""
loc lproxyGO ""
}
/// define conv_nrtol for wooldridge and mr - NOT in case of GN
if ("`method'" == "wrdg" | "`method'" == "mr") & "`optimizer'" != "gn"{
loc conv_nrtol "conv_nrtol(`tolerance')"
}
if ("`method'" == "wrdg" | "`method'" == "mr") & !mi("`init'"){
loc init = subinstr("`init'",","," ",.)
foreach var in `free' `state'{
gettoken val init: init
loc init_gmm `init_gmm' xb_`var' `val'
}
loc init_gmm from(`init_gmm')
}
if !mi("`fsresiduals'") & inlist("`method'", "wrdg", "mr", "rob"){
di as error "fsresiduals is available with OP and LP methods only."
exit 198
}
cap confirm var `fsresiduals'
if !_rc{
di as error "`fsresiduals' already exists"
exit 198
}
/// define the evaluator type
if mi("`evaluator'") & "`optimizer'" == "bhhh"{
loc evaluator = "gf0"
}
else if mi("`evaluator'"){
loc evaluator = "d0"
}
/// check the translog - only meaningful for ACF and Wooldridge methods
if !mi("`translog'") & mi("`acf'") & "`method'" != "wrdg"{
di as error "translog is available with Wooldridge or ACF-corrected models only"
exit 198
}
else if !mi("`translog'") & !mi("`acf'"){
loc transVars `free' `state' `proxyGO'
}
/// change the production function type
if !mi("`translog'"){
loc PFtype "translog"
}
else{
loc PFtype "Cobb-Douglas"
}
if !mi("`overidentification'") & mi("`acf'") & "`method'" != "wrdg"{
di as error "overidentification is meaningful for ACF or WRDG methods only. Ignoring the option."
}
if !mi("`gmm'") & "`method'" != "wrdg"{
di as error "gmm works for wrdg only. Ignoring the option."
loc gmm ""
}
if !mi("`acf'") & "`model'" == "grossoutput"{
di as error "Using ACF correction with GO output does not ensure correct parameter identification. See ACF (2015)."
}
if "`method'" == "op" loc strMethod "Olley-Pakes"
if "`method'" == "lp" loc strMethod "Levinsohn-Petrin"
if "`method'" == "wrdg" loc strMethod "Wooldridge"
if "`method'" == "rob" loc strMethod "Wooldridge/Robinson"
if "`method'" == "mr" loc strMethod "Mollisi-Rovigatti"
loc colnum: word count `free' `state' `control' `proxyGO'
/// initialize results matrices
tempname firstb __b __V robV
mat `__b' = J(`reps',`colnum',.)
/// preserve the data before generating tons of variables
preserve
qui su `t' if `touse' == 1
loc maxDate = `r(max)'
/// directly keep only observations in IF and IN --> SAVE A TEMPORARY FILE
tempfile temp
qui keep if `touse' == 1
keep `depvar' `free' `state' `proxy' `control' `id' `t' `touse' `endogenous'
/// generate an "exit" dummy variable equal to one for all firms not present in the last period of the panel
tempvar exit
qui bys `id' (`t'): g `exit' = (_n == _N & `t' < `maxDate')
qui save `temp', replace
/// define all locals to run the command
loc toLagVars `free' `state' `control' `proxyGO'
foreach var of local toLagVars{
qui g l_gen`var' = l.`var'
loc laggedVars `laggedVars' l_gen`var'
}
foreach local in free state proxy control{
loc `local'num: word count ``local''
foreach var of local `local'{
loc lag`local'Vars `lag`local'Vars' l_gen`var'
}
}
loc instrumentVars `state' `lagfreeVars' `control' `lproxyGO'
/// OP LP polyvars
loc polyvars `state' `proxy'
/// ACF requires the free variables to be included in the polynomial
if !mi("`acf'"){
loc polyvars `free' `polyvars'
}
loc varnum: word count `polyvars'
loc controlnum: word count `control'
loc tolagnum: word count `toLagVars'
// poly-th degree polynomial
loc n = 1
foreach x of local polyvars{
qui g var_`n' = `x'
loc interactionvars `interactionvars' var_`n'
loc ++n
}
forv i=1/`varnum'{
forv j=`i'/`varnum'{
qui g var_`i'_`j' = var_`i'*var_`j'
loc interactionvars `interactionvars' var_`i'_`j'
if `poly' > 2{
forv z=`j'/`varnum'{
qui g var_`i'_`j'_`z' = var_`i'*var_`j'*var_`z'
loc interactionvars `interactionvars' var_`i'_`j'_`z'
if `poly' > 3{
forv g = `z'/`varnum'{
qui g var_`i'_`j'_`z'_`g' = var_`i'*var_`j'*var_`z'*var_`g'
loc interactionvars `interactionvars' var_`i'_`j'_`z'_`g'
if `poly' > 4{
forv v = `g'/`varnum'{
qui g var_`i'_`j'_`z'_`g'_`v' = var_`i'*var_`j'*var_`z'*var_`g'*var_`v'
loc interactionvars `interactionvars' var_`i'_`j'_`z'_`g'_`v'
if `poly' > 5{
forv s = `v'/`varnum'{
qui g var_`i'_`j'_`z'_`g'_`v'_`s' = var_`i'*var_`j'*var_`z'*var_`g'*var_`v'*var_`s'
loc interactionvars `interactionvars' var_`i'_`j'_`z'_`g'_`v'_`s'
}
}
}
}
}
}
}
}
}
}
if inlist("`method'","wrdg", "mr", "rob"){
/// generate GMM fit - for each state and free variables need to initiate the GMM
foreach element in `free' `state' `proxyGO'{
local gmmfit `gmmfit'-{`element'}*`element'
}
/// generate lag interactionvars
foreach var of local interactionvars{
cap g l_gen`var' = l.`var'
loc lagInteractionvars `lagInteractionvars' l_gen`var'
}
if "`method'" == "wrdg"{ /* WRDG */
if !mi("`overidentification'") | !mi("`gmm'") { // in case of overidentification, or in case the user specifies a preference for GMM, estimate a system GMM model, else go with the linear IV model
if !mi("`overidentification'"){
loc overidentification `lagfreeVars' `lagInteractionvars'
}
loc eq1counter = 0
foreach var of varlist `interactionvars'{
loc ++eq1counter
loc eq1vars `eq1vars' -{xb`eq1counter'}*`var'
}
loc eq2counter = 0
foreach var of varlist `lagInteractionvars'{
loc ++eq2counter
loc eq2vars `eq2vars' -{xb`eq2counter'}*`var'
}
`vv' qui gmm (eq1: `depvar' `gmmfit' `eq1vars' `wrdg_contr1' - {a0}) /* y - a0 - witB - xitJ - citL
*/ (eq2: `depvar' `gmmfit' `eq2vars' `wrdg_contr2' - {a0} -{e0}), /* y - e0 - witB - xitJ - p(cit-1L) - ... - pg(cit-1L)^g
*/ instruments(eq1: `free' `interactionvars' `control' `translogVars' `overidentification') /* Zit1 = (1,wit,xit,c0it,cit-1)
*/ instruments(eq2: `state' `lagfreeVars' `lagInteractionvars' `control' `translogVars') /* Zit2 = (1,xit,wit-1,cit-1)
*/ winitial(unadjusted, independent) technique(`optimizer') conv_maxiter(`maxiter') `conv_nrtol' `init_gmm'
/// save locals for Hansen's J and p-value
qui estat overid
loc hans_j: di %3.2f `r(J)'
loc hans_p: di %3.2f `r(J_p)'
}
else{ /* WRDG - plain */
tempfile wrdg
qui save `wrdg'
qui reg `depvar' `state' `proxyGO' `interactionvars' `free' `lagfreeVars' `control' // just to take the number of observations used in the estimation
loc realObs = `e(N)'
qui expand 2, gen(cons2) // double the dataset in order to stack dependent and regressors
foreach var of local interactionvars{
qui replace `var' = l_gen`var' if cons2 == 1 // change interactionvars to lagged values
}
qui ivregress gmm `depvar' `state' `proxyGO' `interactionvars' `control' (`free' = `lagfreeVars') cons2, wmatrix(unadjusted) c // run the IV regression
/// save locals for Hansen's J and its p-value
loc hans_j: di %3.2f `e(J)'
mat cV = colsof(e(V)) // find the number of instruments
scalar cV = cV[1,1]
loc jdf = cV - `e(rank)' + 1
loc hans_p: di %3.2f chi2tail(`jdf', `hans_j') // this is the chi2 p-value of the Hansen statistic
qui use `wrdg', clear
eret loc N `realObs' // post the real number of observations used in the estimation
}
}
else if "`method'" == "mr"{ /* MrEst */
if !mi("`overidentification'"){ // interactions are valid instruments, too
loc overidentification `lagInteractionvars'
}
/// workaround: to overcome the difference equation in system GMM we launch both equations in levels and take the initial weighting matrix
loc instnum = (`freenum' + `statenum')
forv i = 1/`instnum'{
loc intVars `intVars' var_`i'
}
loc interinstr: list interactionvars - intVars
qui gmm (`depvar' `gmmfit' - {xh: `interactionvars'} `wrdg_contr1' - {a0}), quickd instruments(1: `interinstr') /*
*/ xtinstruments(`free' `state', l(0/`lags')) winitial(xt L) onestep conv_maxiter(1)
qui mat W1 = e(W)
qui gmm (`depvar' `gmmfit' - {xj: `lagInteractionvars'} `wrdg_contr2' - {a0} - {e0}), quickd /*
*/ instruments(`state' `lagInteractionvars') xtinstruments(`free' `state', l(2/`lags'))/*
*/ /*xtinstruments(`state', l(0/`lags'))*/ winitial(xt L) onestep conv_maxiter(1)
qui mat W2 = e(W)
mata W_hat = st_matrix("W1"),J(rows(st_matrix("W1")),cols(st_matrix("W2")),0) \ J(rows(st_matrix("W2")),cols(st_matrix("W1")),0), st_matrix("W2")
mata st_matrix("W_hat", W_hat)
/* generate the model for both equations - same parameter for c_{i,t} and c_{i,t-1} in WRDG terms */
loc eq1counter = 0
foreach var of varlist `interactionvars' `control'{
loc ++eq1counter
loc eq1vars `eq1vars' "-{xb`eq1counter'}*`var'"
}
loc eq2counter = 0
foreach var of varlist `lagInteractionvars' `control'{
loc ++eq2counter
loc eq2vars `eq2vars' "-{xb`eq2counter'}*`var'"
}
/// launch the gmm with the winitial built above
`vv' qui gmm (1: `depvar' `gmmfit' `eq1vars' /*`wrdg_contr1'*/ - {a0}) /* y - a0 - witB - xitJ - citL
*/ (2: `depvar' `gmmfit' `eq2vars' /*`wrdg_contr2'*/ - {a0} - {e0}),/* y - e0 - witB - xitJ - p(cit-1L) - ... - pg(cit-1L)^g, where p = g = 1
*/ instruments(1: `interinstr' `overidentification' `control') xtinstruments(1: `free' `state', l(0/`lags')) /*
*/ xtinstruments(2: `free' `state', l(2/`lags')) instruments(2: `state' `lagInteractionvars' `control') /*
*/ onestep winitial("W_hat") nocommonesample quickd technique(`optimizer') conv_maxiter(`maxiter') `conv_nrtol' `init_gmm'
/// save locals for Hansen's J and p-value
qui estat overid
loc hans_j: di %3.2f `r(J)'
loc hans_p: di %3.2f `r(J_p)'
}
else{ /* Robinson / ACF */
qui ivregress gmm `depvar' `state' `proxyGO' `control' `lagInteractionvars' (`free' = `lagfreeVars'), vce(cluster `id') /* this is working! */
/// save locals for Hansen's J and its p-value
loc hans_j: di %3.2f `e(J)'
mat cV = colsof(e(V)) // find the number of instruments
scalar cV = cV[1,1]
loc jdf = cV - `e(rank)' + 1
loc hans_p: di %3.2f chi2tail(`jdf', `hans_j') // this is the chi2 p-value of the Hansen statistic
}
/// save elements for result posting
loc nObs = `e(N)'
loc numInstr1: word count `e(inst_1)'
loc numInstr2: word count `e(inst_2)'
mat `__b' = e(b)
mat `__b' = `__b'[1...,1..`colnum']
mat `__V' = e(V)
mat `__V' = `__V'[1..`colnum',1..`colnum']
/// save locals for Hansen's J and its p-value
loc hans_j: di %3.2f `e(J)'
mat cV = colsof(e(V)) // find the number of instruments
scalar cV = cV[1,1]
loc jdf = cV - `e(rank)' + 1
loc hans_p: di %3.2f chi2tail(`jdf', `hans_j') // this is the chi2 p-value of the Hansen statistic
continue, break
}
else{ /* if it's not WRDG or MrEst */
if !mi("`attrition'"){
foreach var in `interactionvars'{
qui g l_gen`var' = l.`var'
loc lagInterVars `lagInterVars' l_gen`var'
}
qui cap logit `exit' `lagInterVars'
if _rc == 0{
qui predict Pr_hat if e(sample), pr
}
else{
di as error "No ID exits the sample. Running the estimation with no attrition"
loc attrition ""
qui g Pr_hat = .
}
}
else{
qui g Pr_hat = .
}
/// in case of ACF we don't want free variables to appear twice in the regression
if !mi("`acf'"){
forv i = 1/`freenum'{
loc freeVars `freeVars' var_`i'
}
loc interactionvars: list interactionvars - freeVars
/// generate the needed variables for translog: a local with all power of 2 + interactions and instruments - lag of free/proxy interacted with state
if !mi("`translog'"){
loc transNum: word count `transVars'
forv i=1/`transNum'{
forv j=`i'/`transNum'{
loc ivar `: word `i' of `transVars''
loc jvar `: word `j' of `transVars''
loc tvarname "`ivar'X`jvar'"
loc transNames `transNames' `tvarname'
loc interactionTransVars `interactionTransVars' var_`i'_`j'
qui g lagTrans_`i'_`j' = l.var_`i'_`j'
loc lagInteractionTransVars `lagInteractionTransVars' lagTrans_`i'_`j'
}
}
foreach fvar in `free' `proxyGO'{
tempvar d`fvar'
qui g `d`fvar'' = `fvar'^2
qui g lagInstr`fvar' = l.`d`fvar''
loc instrumentTransVars `instrumentTransVars' lagInstr`fvar'
foreach svar in `state'{
qui g instr_`fvar'`svar' = lagInstr`fvar'*`svar'
loc instrumentTransVars `instrumentTransVars' instr_`fvar'`svar'
}
}
foreach svar in `state'{
tempvar d`svar'
qui g `d`svar'' = `svar'^2
qui g lagInstr`svar' = l.`d`svar''
loc instrumentTransVars `instrumentTransVars' lagInstr`svar'
}
loc colnum: word count `free' `state' `control' `proxyGO' `interactionTransVars'
mat `__b' = J(`reps',`colnum',.)
}
}
loc regvars `free' `control' `interactionvars'
loc firstRegNum: word count `regvars'
loc regNum: word count `free' `state' `control' `proxyGO' `transVars'
/// first stage
qui _xt, trequired
qui reg `depvar' `regvars'
/// generating "freeFit" as the fitted value of the free variables, to be subtracted from Y
tempvar freeFit
qui g `freeFit' = 0
foreach var in `free'{
scalar b_`var' = _b[`var']
qui replace `freeFit' = `freeFit' + (b_`var'*`var')
}
mat `firstb' = e(b)
mat `robV' = e(V)
qui predict phihat if e(sample), xb
qui g phihat_lag = l.phihat
/// retrieve starting points through an OLS estimation --> first round only, not during bootstrap
qui reg `depvar' `toLagVars' `interactionTransVars' if `touse' == 1
if !mi("`init'"){
mat ols_s = `init' // THIS PART IS JUST A TRYOUT IN ORDER TO MAKE THE COMMAND WORK FOR OUR PURPOSES
}
else{
mat tmp = e(b)
mata: st_matrix("ols_s",st_matrix("tmp"):+ rnormal(1,1,0,.01)) // we add some "noise" to OLS results in order not to block the optimizer
}
/// save the first stage results
if !mi("`fsresiduals'"){
tempfile fsres
tempvar FSfit
mat score `FSfit' = `firstb'
qui g `fsresiduals' = `depvar' - `FSfit'
qui save `fsres'
}
qui g res = 0
if mi("`acf'"){ /* OP and LP (non-corrected) second stage */
/// here we generate a tempvar with the fitted value of all the free variables
qui replace phihat = phihat - `freeFit'
qui replace res = `depvar' - `freeFit'
qui replace phihat_lag = l.phihat
mat init = ols_s[1...,(`freenum'+1)..`regNum']'
loc toLagVars `state' `control' `proxyGO'
loc laggedVars: list laggedVars - lagfreeVars
loc instrumentVars `state' `control' `lproxyGO'
/// here we launch the mata routine for OP or LP
foreach var of varlist `toLagVars' `laggedVars' /*`lagfreeVars'*/ phihat_lag phihat{
qui drop if mi(`var')
}
/// the routine must not fail on the first estimation - otherwise there would be no point estimates. Failures during bootstrap repetitions are captured instead
qui mata: opt_mata(st_matrix("init"),&foplp(),"`optimizer'","phihat","phihat_lag",/*
*/"`toLagVars'","`laggedVars'","`touse'",`maxiter',`tolerance',"`evaluator'","Pr_hat","res","`instrumentVars'","`endogenous'")
mat `__b'[1,1] = `firstb'[1...,1..(`freenum')],r(betas)
}
else{ /* ACF second stage */
if !mi("`overidentification'"){ // lag interactions are valid instruments for the first equation, too
loc overidentification `lagInteractionvars'
}
loc toLagVars `free' `state' `control' `proxyGO' `interactionTransVars'
loc laggedVars `lagfreeVars' `lagstateVars' `lagcontrolVars' `lproxyGO' `lagInteractionTransVars'
loc instrumentVars `lagfreeVars' `state' `control' `lproxyGO' `instrumentTransVars' `overidentification'
/// here we launch the mata routine for ACF
foreach var of varlist `laggedVars' `lagfreeVars' phihat_lag{
qui drop if mi(`var')
}
loc betaNum: word count `free' `state' `proxyGO' `control' `interactionTransVars'
mat init = ols_s[1...,1..(`betaNum')]'
qui mata: opt_mata(st_matrix("init"),&facf(),"`optimizer'","phihat","phihat_lag",/*
*/ "`toLagVars'","`laggedVars'","`touse'",`maxiter',`tolerance',"`evaluator'","Pr_hat","res","`instrumentVars'","`endogenous'")
mat `__b'[1,1] = r(betas)
}
}
if !inlist("`method'","wrdg","mr","rob"){
clear
/// generate the varCovar matrix for the bootstrapped estimates
qui svmat `__b'
mat `__b' = `__b'[1,1...]
}
restore
/// merge the first stage residuals
if !mi("`fsresiduals'"){
qui merge 1:1 `id' `t' using `fsres', nogen keepusing(`fsresiduals')
}
mat coleq `__b' = ""
mat colnames `__b' = `free' `state' `control' `proxyGO' `transNames'
/// Display results - ereturn
eret clear
eret post `__b'
if !mi("`acf'"){
loc correction "ACF corrected"
}
if !mi("`fsresiduals'"){
eret loc FSres "`fsresiduals'"
}
eret loc cmd "prodest"
eret loc depvar "`depvar'"
eret loc free "`free'"
eret loc state "`state'"
eret loc proxy "`proxy'"
eret loc controls "`control'"
eret loc endogenous "`endogenous'"
eret loc method "`strMethod'"
eret loc model "`model'"
eret loc technique "`optimizer'"
eret loc idvar "`id'"
eret loc timevar "`t'"
eret loc correction "`correction'"
eret loc predict "prodest_p_m"
eret loc PFtype "`PFtype'"
eret loc gmm "`gmm'"
end
/*---------------------------------------------------------------------*/
/// defining mata routines for optimization
capture mata mata drop opt_mata()
capture mata mata drop facf()
capture mata mata drop foplp()
*version 7
mata:
/*---------------------------------------------------------------------*/
void foplp(todo,betas,X,lX,PHI,LPHI,RES,Z,PR_HAT,ENDO,crit,g,H)
{
OMEGA = PHI-X*betas'
OMEGA_lag = LPHI-lX*betas'
OMEGA_lag2 = OMEGA_lag:*OMEGA_lag
OMEGA_lag3 = OMEGA_lag2:*OMEGA_lag
/* IF clause in order to see whether we have to use the "exit" variable */
if (!missing(PR_HAT)){
PR_HAT2 = PR_HAT:*PR_HAT
PR_HAT3 = PR_HAT2:*PR_HAT
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,PR_HAT,PR_HAT2,PR_HAT3,PR_HAT:*OMEGA_lag,PR_HAT2:*OMEGA_lag,PR_HAT:*OMEGA_lag2,ENDO)
}
else{
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,ENDO)
}
g_b = invsym(OMEGA_lag_pol'OMEGA_lag_pol)*OMEGA_lag_pol'OMEGA
XI = RES-X*betas'-OMEGA_lag_pol*g_b
crit = (XI)'*(XI)
}
/*---------------------------------------------------------------------*/
void facf(todo,betas,X,lX,PHI,LPHI,RES,Z,PR_HAT,ENDO,crit,g,H)
{
W = invsym(Z'Z)/(rows(Z))
OMEGA = PHI-X*betas'
OMEGA_lag = LPHI-lX*betas'
OMEGA_lag2 = OMEGA_lag:*OMEGA_lag
OMEGA_lag3 = OMEGA_lag2:*OMEGA_lag
/* IF clause in order to see whether we have to use the "exit" variable */
if (!missing(PR_HAT)){
PR_HAT2 = PR_HAT:*PR_HAT
PR_HAT3 = PR_HAT2:*PR_HAT
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,PR_HAT,PR_HAT2,PR_HAT3,PR_HAT:*OMEGA_lag,PR_HAT2:*OMEGA_lag,PR_HAT:*OMEGA_lag2,ENDO)
}
else{
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,ENDO)
}
g_b = invsym(OMEGA_lag_pol'OMEGA_lag_pol)*OMEGA_lag_pol'OMEGA
XI = OMEGA-OMEGA_lag_pol*g_b
crit = (Z'XI)'*W*(Z'XI)
}
/*---------------------------------------------------------------------*/
void opt_mata(init, f, opt, phi, lphi, tolag, lagged, touse, maxiter, tol, eval, | Pr_hat, res, instr, endogenous)
{
st_view(RES=.,.,st_tsrevar(tokens(res)), touse)
st_view(PHI=.,.,st_tsrevar(tokens(phi)), touse)
st_view(LPHI=.,.,st_tsrevar(tokens(lphi)), touse)
st_view(Z=.,.,st_tsrevar(tokens(instr)), touse)
st_view(X=.,.,st_tsrevar(tokens(tolag)), touse)
st_view(lX=.,.,st_tsrevar(tokens(lagged)), touse)
st_view(PR_HAT=.,.,st_tsrevar(tokens(Pr_hat)), touse)
st_view(ENDO=.,.,st_tsrevar(tokens(endogenous)), touse)
S = optimize_init()
optimize_init_argument(S, 1, X)
optimize_init_argument(S, 2, lX)
optimize_init_argument(S, 3, PHI)
optimize_init_argument(S, 4, LPHI)
optimize_init_argument(S, 5, RES)
optimize_init_argument(S, 6, Z)
optimize_init_argument(S, 7, PR_HAT)
optimize_init_argument(S, 8, ENDO)
optimize_init_evaluator(S, f)
/*optimize_init_evaluatortype(S, "d0")*/
optimize_init_evaluatortype(S, eval)
optimize_init_conv_maxiter(S, maxiter)
optimize_init_conv_nrtol(S, tol)
/*optimize_init_evaluatortype(S, "gf0")*/
optimize_init_technique(S, opt)
optimize_init_nmsimplexdeltas(S, 0.00001)
optimize_init_which(S,"min")
optimize_init_params(S,init')
p = optimize(S)
st_matrix("r(betas)", p)
/* tryout to find Newey-type Cov estimator
st_matrix("r(gradient)", optimize_result_gradient(S))
st_matrix("r(score)", optimize_result_scores(S))
Hl = optimize_result_gradient(S)
V = optimize_result_V(S)
Vhh = invsym(Hl)*V*invsym(Hl)'
st_matrix("r(Vhh)", Vhh)*/
}
/*---------------------------------------------------------------------*/
end
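* Sketch (not part of the package): in standard GMM notation, the criterion
* minimized by facf() above is
```latex
% Z: instrument matrix; xi(beta): the innovation recovered in facf();
% W = (Z'Z)^{-1}/N is the weighting matrix built at the top of the function.
Q(\beta) = \bigl(Z'\xi(\beta)\bigr)'\, W\, \bigl(Z'\xi(\beta)\bigr)
```
* foplp() instead minimizes the unweighted sum of squares xi'xi.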
|
*! version 1.0.1 10May2020
*! author: Gabriele Rovigatti, Bank of Italy, Rome, Italy. mailto: gabriele.rovigatti@gmail.com | gabriele.rovigatti@bancaditalia.it
/***************************************************************************
** Stata program for Markup Estimation - prodest postestimation
** Programmed by: Gabriele Rovigatti
**************************************************************************/
cap program drop prodest_p_m
program define prodest_p_m, sortpreserve eclass
version 10.0
syntax [anything] [if] [in] [, ///
MARKups ///
INPUTvar(varlist numeric min=1 max=1) ///
CORRected ///
REPetition(integer 1) ///
]
marksample touse // this is not e(sample)
tempvar esample
qui gen byte `esample' = e(sample)
loc varlist `anything'
loc mod = "`e(PFtype)'"
loc fsres = "`e(FSres)'"
loc free = "`e(free)'"
tempvar val
if "`e(model)'" == "grossoutput"{
loc proxy = "`e(proxy)'"
loc free "`free' `proxy'"
qui g `val' = log( exp(`e(depvar)') - exp(`proxy') ) // in case of gross output, generate the measure of value added as the difference between gross output and material input costs
}
else{
qui g `val' = `e(depvar)'
}
loc state = "`e(state)'"
************************************************************************************************
********************* PART 1: MARKUPS ESTIMATION ************************
************************************************************************************************
/* check for correct usage of options */
if !mi("`corrected'") & mi("`fsres'"){ // correction à la DLW only available with first-stage residuals
di as error "Markup correction requires launching prodest with 'fsresiduals(<fsvar>)' option"
exit 198
}
if !`:list inputvar in free'{ // check whether the input variable specified is in the list of free variables
di as error "<inputvar> must be a free or proxy variable used in the estimation"
exit 198
}
cap confirm var `varlist', exact // if the variable already exists
if !_rc & `repetition' == 1{
di as error "`varlist' already exists"
exit 198
}
if mi("`varlist'"){ // check the outcome variable: if it is missing, use a pre-specified _mkup[number] variable
loc c = 0
while (1){
loc ++c
loc varlist _mkup`c'
cap confirm var `varlist'
if (_rc != 0) continue, break
}
di as error "No <newvarname> specified to store the estimated markups: they will be stored in `varlist'"
}
********* ROUTINE START **************
tempvar theta alpha
/* generate the input share, either "raw" or corrected by the first-stage residuals as suggested by DLW */
*loc lhs `e(depvar)' // this is the output - in logs
qui g `alpha' = exp(`inputvar') / exp(`val') // share is input cost / value added
if !mi("`corrected'"){
qui replace `alpha' = `alpha' * exp(`fsres')
}
/* Generate the elasticity parameter - either the estimated beta (Cobb-Douglas) or a function of it (Translog) */
if "`mod'" == "Cobb-Douglas"{ /* PART I: COBB-DOUGLAS */
qui g `theta' = _b[`inputvar'] //
}
else { /* PART II: TRANSLOG */
tempname beta
mat `beta' = e(b) // extract the estimated betas
loc controls = "`e(controls)'"
loc transvars `free' `state' `controls'
loc translogNum: word count `transvars'
loc n = 1 // regenerate the variables used in the routine in order to fit the values
foreach x of local transvars{
tempvar var_`n' betavar_`n'
qui g `var_`n'' = `x'
loc ++n
}
forv i = 1/`translogNum'{
forv j = `i'/`translogNum'{
tempvar var_`i'`j' beta_`i'`j'
cap g `var_`i'`j'' = (`var_`i'' * `var_`j'')
cap g `beta_`i'`j'' = `beta'[1,`n']
loc ++n
}
}
loc varnum: word count `free' `state'
loc inputpos: list posof "`inputvar'" in free // this is the number of the input variable within the free vars
forv j = 1/`varnum'{
if `inputpos' != `j'{ /* generate the cross variables part only */
cap confirm variable `beta_`inputpos'`j''
if !_rc{
loc remainder `remainder' + (`beta_`inputpos'`j'' * `var_`j'')
}
else{
loc remainder `remainder' + (`beta_`j'`inputpos'' * `var_`j'')
}
}
}
qui gen `theta' = `beta'[1,`inputpos'] + (2 * `beta_`inputpos'`inputpos'' * `var_`inputpos'') `remainder' // the elasticity for translog is defined as beta_Wtranslog = beta_w + 2*beta_ww * W + beta_wx * X, and here we use the previously generated variables and weight them by the ith variable
}
/* Compute the Markups */
if `repetition' > 1{
tempvar _foo_
g `_foo_' = `theta' / `alpha' `if'
replace `varlist' = `_foo_' `if'
}
else{
g `varlist' = `theta' / `alpha' `if' // compute the markups, and save the relative variable
}
end
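* Sketch (not part of the package): the markup identity computed above,
* following the DLW-style correction referenced in the comments, is
```latex
% theta: output elasticity of the flexible input (the estimated beta under
% Cobb-Douglas, a function of the betas and inputs under translog);
% alpha: the input's share of value added, rescaled by the exponentiated
% first-stage residual when the 'corrected' option is set.
\mu_{it} = \frac{\theta_{it}}{\alpha_{it}}, \qquad
\alpha_{it} = \frac{\exp(x_{it})}{\exp(va_{it})}\cdot\exp(\hat{\varepsilon}_{it})
```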
|
use "${datadir}/f_DGP3_replica.dta", clear
/* run the markup estimation */
forv m = 1/4{
markupest mkupD3_m`m'_t_cor, method(dlw) output(lny) inputvar(lnl) free(lnl) state(lnk) proxy(lnm`m') prodestopt("poly(3) acf trans va") corrected verbose
}
/* Run the graph */
tw (kdensity mkupD3_m1_t_cor if mkupD3_m1_t_cor < 4, lw(medthick) lc(ebblue)) /*
*/ (kdensity mkupD3_m2_t_cor if mkupD3_m2_t_cor < 4, lw(medthick) lp(_) lc(maroon)) /*
*/ (kdensity mkupD3_m3_t_cor if mkupD3_m3_t_cor < 4, lw(medthick) lp(-) lc(forest_green)) /*
*/ (kdensity mkupD3_m4_t_cor if mkupD3_m4_t_cor < 4, lw(medthick) lp(dot) lc(sand)) /*
*/ , legend(order( 1 "No Meas Error" 2 "{&sigma}{superscript:2}{subscript:m}=0.1" 3 "{&sigma}{superscript:2}{subscript:m}=0.2" 4 "{&sigma}{superscript:2}{subscript:m}=0.5")) /*
*/ xtitle("Markup")
|
use "${datadir}/data_hall18", clear
egen nnaics = group(naics)
xtset nnaics year
g naics_2d = substr(naics, 1, 2)
// time-invariant Hall version
bys naics_2d: markupest mkup_h18, inputs( K L M S E ) deltavars( deltaK deltaL deltaM deltaS deltaE) method(hall) /*
*/ go(Y) deltago(deltaY) instruments( deltainstEq deltainstRD deltainstSh deltainstSo deltainstOP)
// Roeger
bys naics_2d: markupest mkup_roeg, inputs(L M S E K) method(roeger) go(Y)
// last 10 years
bys naics_2d: markupest mkup_h18_10y if year > 2007, inputs( K L M S E ) deltavars( deltaK deltaL deltaM deltaS deltaE) method(hall) /*
*/ go(Y) deltago(deltaY) instruments( deltainstEq deltainstRD deltainstSh deltainstSo deltainstOP)
bys naics_2d: markupest mkup_roeg_10y if year > 2007, inputs(L M S E K) method(roeger) go(Y)
collapse (mean) mkup_h* mkup_r*, by(naics_2d)
replace naics_2d = "Agriculture, Forestry, Fishing, Hunting" if naics_2d == "11"
replace naics_2d = "Mining" if naics_2d == "21"
replace naics_2d = "Utilities" if naics_2d == "22"
replace naics_2d = "Construction" if naics_2d == "23"
replace naics_2d = "Manufacturing" if naics_2d == "31"
replace naics_2d = "Manufacturing" if naics_2d == "32"
replace naics_2d = "Manufacturing" if naics_2d == "33"
replace naics_2d = "Wholesale Trade" if naics_2d == "42"
replace naics_2d = "Retail Trade" if naics_2d == "44"
replace naics_2d = "Transportation and Warehousing" if naics_2d == "48"
replace naics_2d = "Transportation and Warehousing" if naics_2d == "49"
replace naics_2d = "Information" if naics_2d == "51"
replace naics_2d = "Finance and Insurance" if naics_2d == "52"
replace naics_2d = "Real Estate Rental and Leasing" if naics_2d == "53"
replace naics_2d = "Professional, Scientific, and Technical Services" if naics_2d == "54"
replace naics_2d = "Management of Companies and Enterprises" if naics_2d == "55"
replace naics_2d = "Administrative and Support and Waste Manag" if naics_2d == "56"
replace naics_2d = "Educational Services" if naics_2d == "61"
replace naics_2d = "Health Care and Social Assistance" if naics_2d == "62"
replace naics_2d = "Arts, Entertainment, and Recreation" if naics_2d == "71"
replace naics_2d = "Accommodation and Food Services" if naics_2d == "72"
replace naics_2d = "Other Services (No Public Admin)" if naics_2d == "81"
eststo fhall: estpost tabstat mkup_h18, by(naics_2d) stat(mean) nototal listwise elabels
eststo froeger: estpost tabstat mkup_roeg, by(naics_2d) stat(mean) nototal listwise elabels
eststo fhall_10: estpost tabstat mkup_h18_10y, by(naics_2d) stat(mean) nototal listwise elabels
eststo froeger_10: estpost tabstat mkup_roeg_10y, by(naics_2d) stat(mean) nototal listwise elabels
esttab fhall froeger fhall_10 froeger_10 using "${tabledir}/table1.tex", f tex main(mean) wide nostar label replace coeflabels(`e(labels)') noobs nonum /*
*/ mti("$\hat{\mu}^{hall}$" "$\hat{\mu}^{roeger}$" "$\hat{\mu}^{hall}$" "$\hat{\mu}^{roeger}$") /*
*/ mgroups("1987-2017" "2008-2017" , pattern(1 0 1 0) prefix(\multicolumn{@span}{c}{) suffix(}) span erepeat(\cline{@span}))
|
/* Replica of Figure 1 - Markup estimation using Stata: Micro and Macro approaches with markupest by G Rovigatti */
use "${datadir}/mkup_example", clear
/* xtset the data - firms and years */
xtset f_id year
/* set the seed and run the estimation: value added with ACF translog - print results on-screen and correct for first-stage residuals */
set seed 12356
bys nnace: markupest mkupACF_translog, method(dlw) output(ln_va) inputvar(ln_l) free(ln_l) state(ln_k) proxy(ln_m) valueadded prodestopt("poly(3) acf trans") verbose corr
/* plot the graph */
tw (kdensity mkupACF_translog if nnace == 1, lw(medthick) lp(_) lc(ebblue)) (kdensity mkupACF_translog if nnace == 2, lw(medthick) lp(-) lc(maroon)) /*
*/ (kdensity mkupACF_translog if nnace == 3, lw(medthick) lp(.-.) lc(forest_green)) (kdensity mkupACF_translog if nnace == 4, lw(medthick) lp(-.-) lc(sand)) /*
*/ (kdensity mkupACF_translog if nnace == 5, lw(medthick) lp(l) lc(navy)) (kdensity mkupACF_translog if nnace == 6, lw(medthick) lp(dot) lc(purple)) /*
*/ (kdensity mkupACF_translog if nnace == 7, lw(medthick) lp(_) lc(olive_teal)) (kdensity mkupACF_translog if nnace == 8, lw(medthick) lp(-) lc(cyan)) /*
*/ (kdensity mkupACF_translog if nnace == 9, lw(medthick) lp(l) lc(ltblue)) (kdensity mkupACF_translog if nnace == 10, lw(medthick) lp(dot) lc(mint)) /*
*/ (kdensity mkupACF_translog if nnace == 11, lw(medthick) lp(_) lc(erose)) /*
*/ if mkupACF_translog > 0 & mkupACF_translog < 3, ytitle("Density") xtitle("Markup") legend(order( 1 "25" 2 "41" 3 "43" 4 "45" 5 "49" 6 "55" 7 "56" 8 "62" 9 "63" 10 "68" 11 "82") cols(4))
|
/* Graphs on simulated data */
/* In-body + Appendix graphs */
forv DGP = 1(1)3{
use "${datadir}/f_DGP`DGP'", clear
tw (kdensity mkupD`DGP'_m1_t if mkupD`DGP'_m1_t < 4, lw(medthick) lc(ebblue)) (kdensity mkupD`DGP'_m2_t if mkupD`DGP'_m2_t < 4, lw(medthick) lp(_) lc(maroon)) /*
*/ (kdensity mkupD`DGP'_m3_t if mkupD`DGP'_m3_t < 4, lw(medthick) lp(-) lc(forest_green)) (kdensity mkupD`DGP'_m4_t if mkupD`DGP'_m4_t < 4, lw(medthick) lp(dot) lc(sand)) /*
*/ , legend(order( 1 "No Meas Error" 2 "{&sigma}{superscript:2}{subscript:m}=0.1" 3 "{&sigma}{superscript:2}{subscript:m}=0.2" 4 "{&sigma}{superscript:2}{subscript:m}=0.5")) /*
*/ xtitle("Markup")
gr export "${plotdir}/DGP`DGP'_t_d.eps", replace
tw (kdensity mkupD`DGP'_m1_cb if mkupD`DGP'_m1_cb < 4, lw(medthick) lc(ebblue)) (kdensity mkupD`DGP'_m2_cb if mkupD`DGP'_m2_cb < 4, lw(medthick) lp(_) lc(maroon)) /*
*/ (kdensity mkupD`DGP'_m3_cb if mkupD`DGP'_m3_cb < 4, lw(medthick) lp(-) lc(forest_green)) (kdensity mkupD`DGP'_m4_cb if mkupD`DGP'_m4_cb < 4, lw(medthick) lp(dot) lc(sand)) /*
*/ , legend(order( 1 "No Meas Error" 2 "{&sigma}{superscript:2}{subscript:m}=0.1" 3 "{&sigma}{superscript:2}{subscript:m}=0.2" 4 "{&sigma}{superscript:2}{subscript:m}=0.5")) /*
*/ xtitle("Markup")
gr export "${plotdir}/DGP`DGP'_cb_d.eps", replace
tw (kdensity mkupD`DGP'_m1_t_cor if mkupD`DGP'_m1_t_cor < 4, lw(medthick) lc(ebblue)) (kdensity mkupD`DGP'_m2_t_cor if mkupD`DGP'_m2_t_cor < 4, lw(medthick) lp(_) lc(maroon)) /*
*/ (kdensity mkupD`DGP'_m3_t_cor if mkupD`DGP'_m3_t_cor < 4, lw(medthick) lp(-) lc(forest_green)) (kdensity mkupD`DGP'_m4_t_cor if mkupD`DGP'_m4_t_cor < 4, lw(medthick) lp(dot) lc(sand)) /*
*/ , legend(order( 1 "No Meas Error" 2 "{&sigma}{superscript:2}{subscript:m}=0.1" 3 "{&sigma}{superscript:2}{subscript:m}=0.2" 4 "{&sigma}{superscript:2}{subscript:m}=0.5")) /*
*/ xtitle("Markup")
gr export "${plotdir}/DGP`DGP'_t_cor_d.eps", replace
tw (kdensity mkupD`DGP'_m1_cb_cor if mkupD`DGP'_m1_cb_cor < 4, lw(medthick) lc(ebblue)) (kdensity mkupD`DGP'_m2_cb_cor if mkupD`DGP'_m2_cb_cor < 4, lw(medthick) lp(_) lc(maroon)) /*
*/ (kdensity mkupD`DGP'_m3_cb_cor if mkupD`DGP'_m3_cb_cor < 4, lw(medthick) lp(-) lc(forest_green)) (kdensity mkupD`DGP'_m4_cb_cor if mkupD`DGP'_m4_cb_cor < 4, lw(medthick) lp(dot) lc(sand)) /*
*/ , legend(order( 1 "No Meas Error" 2 "{&sigma}{superscript:2}{subscript:m}=0.1" 3 "{&sigma}{superscript:2}{subscript:m}=0.2" 4 "{&sigma}{superscript:2}{subscript:m}=0.5")) /*
*/ xtitle("Markup")
gr export "${plotdir}/DGP`DGP'_cb_cor_d.eps", replace
}
/* helpfile example dataset building and code */
use "${datadir}/f_DGP3", clear
keep if _n < 50001
replace firm = firm + ((rep -1) * 1000)
drop mkup* inv exit lnw lnp rep
xtset firm period
compress
save "${datadir}/f_DGP3_replica", replace
/* run the markup estimation */
forv m = 1/4{
markupest mkupD3_m`m'_t_cor, method(dlw) output(lny) inputvar(lnl) free(lnl) state(lnk) proxy(lnm`m') prodestopt("poly(3) acf trans va") corrected verbose
}
/* Plot the graph */
tw (kdensity mkupD3_m1_t_cor if mkupD3_m1_t_cor < 4, lw(medthick) lc(ebblue)) /*
*/ (kdensity mkupD3_m2_t_cor if mkupD3_m2_t_cor < 4, lw(medthick) lp(_) lc(maroon)) /*
*/ (kdensity mkupD3_m3_t_cor if mkupD3_m3_t_cor < 4, lw(medthick) lp(-) lc(forest_green)) /*
*/ (kdensity mkupD3_m4_t_cor if mkupD3_m4_t_cor < 4, lw(medthick) lp(dot) lc(sand)) /*
*/ , legend(order( 1 "No Meas Error" 2 "{&sigma}{superscript:2}{subscript:m}=0.1" 3 "{&sigma}{superscript:2}{subscript:m}=0.2" 4 "{&sigma}{superscript:2}{subscript:m}=0.5")) /*
*/ xtitle("Markup")
|
// Hall method //
use "${datadir}/data_hall18", clear
egen nnaics = group(naics)
xtset nnaics year
// time-invariant version with graphical representation of correction
bys naics: markupest mkup_h18_ado, inputs( K L M S E ) deltavars( deltaK deltaL deltaM deltaS deltaE) method(hall) /*
*/ go(Y) deltago(deltaY) instruments( deltainstEq deltainstRD deltainstSh deltainstSo deltainstOP) hgraph
gr export "${plotdir}/figure_h18.eps", replace
// time-varying version
bys naics: markupest mkup_h18_tv, inputs( K L M S E ) deltavars( deltaK deltaL deltaM deltaS deltaE) method(hall) /*
*/ go(Y) deltago(deltaY) instruments( deltainstEq deltainstRD deltainstSh deltainstSo deltainstOP) timevarying
collapse (mean) mkup_h18_tv _psi_ , by(year)
g tweights = year - 2002
g psi_t = _psi_ * tweights
tsset year
tsline mkup_h18_tv if year >= 1988 & year <= 2017, lw(thick) ylab(1.28(.03)1.38) xlab(1988(5)2018, angle(45)) lc(maroon) xtitle("Year") ytitle("Implied values of {&mu}")
gr export "${plotdir}/figure_h18_panelb.eps", replace
|
// Hall and Roeger methods //
use "${datadir}/data_hall18", clear
egen nnaics = group(naics)
xtset nnaics year
g naics_2d = substr(naics, 1, 2)
// time-invariant Hall version
bys naics_2d: markupest mkup_h18, inputs( K L M S E ) deltavars( deltaK deltaL deltaM deltaS deltaE) method(hall) /*
*/ go(Y) deltago(deltaY) instruments( deltainstEq deltainstRD deltainstSh deltainstSo deltainstOP)
// Roeger
bys naics_2d: markupest mkup_roeg, inputs(L M S E K) method(roeger) go(Y)
// last 10 years
bys naics_2d: markupest mkup_h18_10y if year > 2007, inputs( K L M S E ) deltavars( deltaK deltaL deltaM deltaS deltaE) method(hall) /*
*/ go(Y) deltago(deltaY) instruments( deltainstEq deltainstRD deltainstSh deltainstSo deltainstOP)
bys naics_2d: markupest mkup_roeg_10y if year > 2007, inputs(L M S E K) method(roeger) go(Y)
collapse (mean) mkup_h* mkup_r*, by(naics_2d)
replace naics_2d = "Agriculture, Forestry, Fishing, Hunting" if naics_2d == "11"
replace naics_2d = "Mining" if naics_2d == "21"
replace naics_2d = "Utilities" if naics_2d == "22"
replace naics_2d = "Construction" if naics_2d == "23"
replace naics_2d = "Manufacturing" if naics_2d == "31"
replace naics_2d = "Manufacturing" if naics_2d == "32"
replace naics_2d = "Manufacturing" if naics_2d == "33"
replace naics_2d = "Wholesale Trade" if naics_2d == "42"
replace naics_2d = "Retail Trade" if naics_2d == "44"
replace naics_2d = "Transportation and Warehousing" if naics_2d == "48"
replace naics_2d = "Transportation and Warehousing" if naics_2d == "49"
replace naics_2d = "Information" if naics_2d == "51"
replace naics_2d = "Finance and Insurance" if naics_2d == "52"
replace naics_2d = "Real Estate Rental and Leasing" if naics_2d == "53"
replace naics_2d = "Professional, Scientific, and Technical Services" if naics_2d == "54"
replace naics_2d = "Management of Companies and Enterprises" if naics_2d == "55"
replace naics_2d = "Administrative and Support and Waste Manag" if naics_2d == "56"
replace naics_2d = "Educational Services" if naics_2d == "61"
replace naics_2d = "Health Care and Social Assistance" if naics_2d == "62"
replace naics_2d = "Arts, Entertainment, and Recreation" if naics_2d == "71"
replace naics_2d = "Accommodation and Food Services" if naics_2d == "72"
replace naics_2d = "Other Services (No Public Admin)" if naics_2d == "81"
eststo fhall: estpost tabstat mkup_h18, by(naics_2d) stat(mean) nototal listwise elabels
eststo froeger: estpost tabstat mkup_roeg, by(naics_2d) stat(mean) nototal listwise elabels
eststo fhall_10: estpost tabstat mkup_h18_10, by(naics_2d) stat(mean) nototal listwise elabels
eststo froeger_10: estpost tabstat mkup_roeg_10, by(naics_2d) stat(mean) nototal listwise elabels
esttab fhall froeger fhall_10 froeger_10 using "${tabledir}/table1.tex", f tex main(mean) wide nostar label replace coeflabels(`e(labels)') noobs nonum /*
*/ mti("$\hat{\mu}^{hall}$" "$\hat{\mu}^{roeger}$" "$\hat{\mu}^{hall}$" "$\hat{\mu}^{roeger}$") /*
*/ mgroups("1987-2017" "2008-2017" , pattern(1 0 1 0) prefix(\multicolumn{@span}{c}{) suffix(}) span erepeat(\cline{@span}))
|
*! version 1.0.1 15Sep2016
*! version 1.0.2 22Sep2016
*! version 1.0.3 30Sep2016 Fixed a major bug, added first-stage residuals option, seed option, evaluator option and postestimation
*! version 1.0.4 10Feb2017 Fixed a major bug in the winitial() matrix in MrEst, fixed minor bugs in Wrdg controls and names,
*! added translog production function, added starting points option for estimation, added check for multiple state and proxy
*! version 1.0.5 06Jun2017 Fixed minor bugs in control variable management and in error management of the translog production function, added a new feature
*! in table reporting (prod function CB / Translog), fixed major bugs in predict
*! authors: Gabriele Rovigatti, University of Chicago Booth, Chicago, IL & EIEF, Rome, Italy. mailto: gabriele.rovigatti@gmail.com
*! Vincenzo Mollisi, Bolzano University, Bolzano, Italy & Tor Vergata University, Rome, Italy. mailto: vincenzo.mollisi@gmail.com
/***************************************************************************
** Stata program for Production Function Estimation using the control function approach.
**
** Programmed by: Gabriele Rovigatti
** Parts of the code are based on the xsmle module by Belotti, F., Hughes, G. and Piano Mortari, A.
** and on the levpet module by Petrin, A., Poi, B. and Levinsohn, J.
**************************************************************************/
capture program drop prodest
program define prodest, sortpreserve eclass
version 10.0
syntax varlist(numeric min=1 max=1) [if] [in], free(varlist numeric min=1) /*
*/ proxy(varlist numeric min=1 max=2) state(varlist numeric min=1) /*
*/ [control(varlist min=1) ENDOgenous(varlist min=1) id(varlist min=1 max=1) t(varlist min=1 max=1) reps(integer 5) /*
*/ VAlueadded Level(int ${S_level}) OPTimizer(namelist min=1 max=3) MAXiter(integer 10000) /* OPTIMIZER: THERE IS THE POSSIBILITY TO WRITE technique(nr 100 nm 1000) with different optimizers after the number of iterations.
*/ poly(integer 3) METhod(name min=1 max=1) lags(integer 999) TOLerance(real 0.00001) /*
*/ VERbose ATTrition ACF INIT(string) FSRESiduals(name min=1 max=1) seed(int 12345) EVALuator(string) TRANSlog]
loc vv: di "version " string(min(max(10,c(stata_version)),14.0)) ", missing:" // we want a version 10 min, while a version 14.0 max (no version 14.2)
loc depvar `varlist'
marksample touse
markout `touse' `free' `proxy' `state' `control' `endogenous' `id' `t'
/// checks for same variables in state - free - proxy - controls
loc chk1: list free & state
loc chk2: list free & control
loc chk3: list free & proxy
loc chk4: list state & control
loc chk5: list state & proxy
loc chk6: list control & proxy
forval i = 1/6{
if "`chk`i''" != ""{
di as error "Same variables in free, state, control or proxy."
exit 198
}
}
/// check whether there are more than one state AND more than one proxy
loc pnum: word count `proxy'
loc snum: word count `state'
if `pnum' > 1 & `snum' > 1{
di as error "Cannot specify multiple state AND multiple proxy variables"
exit 198
}
/// check for unavailable choice of models
if (!inlist("`method'","op","lp","wrdg","mr") & !mi("`method'")){
di as error "Allowed methods are op, lp, wrdg or mr. Default is lp"
exit 198
}
else if ("`method'" == "mr" & c(stata_version) < 14.2){
di as error "MrEst only available with Stata version 14.2 or higher"
exit 198
}
else if mi("`method'"){
loc method "lp"
}
/// Check for impractical value of polynomial approximation
if (`poly' >= 7 | `poly' < 2){
di as error "Polynomial degree must lie between 2 and 6"
exit 198
}
/// Syntax check: is data xtset?
if (mi("`id'") | mi("`t'")) {
capture xtset
if (_rc != 0) {
di as error "You must either xtset your data or specify both id() and t()"
error _rc
}
else {
loc id = r(panelvar)
loc t = r(timevar)
}
}
else {
qui xtset `id' `t'
}
/// Check for a valid confidence level
if (`level' < 10 | `level' > 99) {
di as error "confidence level must be between 10 and 99"
error 198
}
/// Value added or gross output?
if mi("`valueadded'"){
loc model "grossoutput"
loc proxyGO `proxy'
loc lproxyGO l_gen`proxy'
}
else {
loc model "valueadded"
}
/// Number of repetitions reasonable?
if (`reps' < 2) & ("`method'" != "wrdg" & "`method'" != "mr"){
di as error "reps() must be at least 2"
exit 198
}
/// optimizer choice
if (!mi("`optimizer'") & !inlist("`optimizer'","nm","nr","dfp","bfgs","bhhh")){
di as error "Allowed optimizers are nm, nr, dfp, bfgs and bhhh or gn for wrdg and mr. Default is nm or gn for wrdg and mr"
exit 198
}
else if inlist("`method'","wrdg","mr") & inlist("`optimizer'","nm","bhhh"){
di as error "`optimizer' is not allowed with `method' method. Optimizer switched to gn (default)"
loc optimizer "gn"
}
else if mi("`optimizer'") & !inlist("`method'","wrdg","mr"){
loc optimizer "nm"
}
else if mi("`optimizer'"){
loc optimizer "gn"
}
/// number of lags in MR methodology
if ("`method'" == "mr" & mi("`lags'")){
di as error "Method 'mr' requires a specification for lags. Switched to 'all possible lags' (default)"
loc lags = .
}
else if ("`method'" == "mr" & (`lags' < 1)) {
di as error "Minimum lag is 1"
exit 198
}
else if ("`method'" == "mr" & (`lags' == 999)) {
loc lags = .
}
/// check ACF correction in Wooldridge and MR cases
if ("`method'" == "wrdg" | "`method'" == "mr") & !mi("`acf'"){
di as error "`method' does not support ACF correction. Estimation run on baseline model"
loc acf ""
}
/// feasible values of tolerance
if mi(`tolerance') | `tolerance' > 0.01{
di as error "maximum value of tolerance is 0.01. Changed to the default value of 1e-5"
loc tolerance = 0.00001
}
if ("`method'" == "op") {
loc proxyGO ""
loc lproxyGO ""
}
/// define conv_nrtol for Wooldridge and MR - NOT in case of GN
if ("`method'" == "wrdg" | "`method'" == "mr") & "`optimizer'" != "gn"{
loc conv_nrtol "conv_nrtol(`tolerance')"
}
if ("`method'" == "wrdg" | "`method'" == "mr") & !mi("`init'"){
loc init = subinstr("`init'",","," ",.)
foreach var in `free' `state'{
gettoken val init: init
loc init_gmm `init_gmm' xb_`var' `val'
}
loc init_gmm from(`init_gmm')
}
cap confirm var `fsresiduals'
if !_rc{
di as error "`fsresiduals' already exists"
exit 198
}
/// define the evaluator type
if mi("`evaluator'") & "`optimizer'" == "bhhh"{
loc evaluator = "gf0"
}
else if mi("`evaluator'"){
loc evaluator = "d0"
}
/// check the translog - only meaningful for ACF and Wooldridge methods
if !mi("`translog'") & mi("`acf'"){
di as error "translog is available with ACF-corrected models only"
exit 198
}
else if !mi("`translog'") & !mi("`acf'"){
loc transVars `free' `state' `proxyGO'
}
/// change the production function type
if !mi("`translog'"){
loc PFtype "translog"
}
else{
loc PFtype "Cobb-Douglas"
}
if "`method'" == "op" loc strMethod "Olley-Pakes"
if "`method'" == "lp" loc strMethod "Levinsohn-Petrin"
if "`method'" == "wrdg" loc strMethod "Wooldridge"
if "`method'" == "mr" loc strMethod "Mollisi-Rovigatti"
loc colnum: word count `free' `state' `control' `proxyGO'
/// initialize results matrices
tempname firstb __b __V robV
mat `__b' = J(`reps',`colnum',.)
/// preserve the data before generating tons of variables
preserve
set seed `seed'
/// generate some locals to be used in order to display results
qui xtdes if `touse' == 1, i(`id') t(`t')
loc nObs = `r(sum)'
loc nGroups = `r(N)'
loc minGroup = `r(min)'
loc meanGroup = `r(mean)'
loc maxGroup = `r(max)'
qui su `t' if `touse' == 1
loc maxDate = `r(max)'
/// directly keep only observations in IF and IN --> SAVE A TEMPORARY FILE
tempfile temp
qui keep if `touse' == 1
keep `depvar' `free' `state' `proxy' `control' `id' `t' `touse' `endogenous'
/// generate an "exit" dummy variable equal to one for all firms not present in the last period of panel
tempvar exit
qui bys `id' (`t'): g `exit' = (_n == _N & `t' < `maxDate')
qui save `temp', replace
/// here we start with bootstrap repetitions
forv b = 1/`reps'{
/// print the rep number every 10 reps
if mod(`b'/10,1) == 0 & !inlist("`method'","wrdg","mr"){
noi di "`b'" _continue
}
else if mod(`b'/10,1) != 0 & !inlist("`method'","wrdg","mr"){
noi di "." _continue
}
if `b' > 1{
use `temp', clear
tempvar new_id
qui bsample, cluster(`id') idcluster(`new_id')
qui xtset `new_id' `t'
}
/// define all locals to run the command
loc toLagVars `free' `state' `control' `proxyGO'
foreach var of local toLagVars{
qui g l_gen`var' = l.`var'
loc laggedVars `laggedVars' l_gen`var'
}
foreach local in free state proxy control{
loc `local'num: word count ``local''
foreach var of local `local'{
loc lag`local'Vars `lag`local'Vars' l_gen`var'
}
}
loc instrumentVars `state' `lagfreeVars' `control' `lproxyGO'
/// OP LP polyvars
loc polyvars `state' `proxy'
/// ACF requires free variables to be among the polynomial
if !mi("`acf'"){
loc polyvars `free' `polyvars'
}
loc varnum: word count `polyvars'
loc controlnum: word count `control'
loc tolagnum: word count `toLagVars'
// poly-th degree polynomial
loc n = 1
foreach x of local polyvars{
qui g var_`n' = `x'
loc interactionvars `interactionvars' var_`n'
loc ++n
}
forv i=1/`varnum'{
forv j=`i'/`varnum'{
qui g var_`i'`j' = var_`i'*var_`j'
loc interactionvars `interactionvars' var_`i'`j'
if `poly' > 2{
forv z=`j'/`varnum'{
qui g var_`i'`j'`z' = var_`i'*var_`j'*var_`z'
loc interactionvars `interactionvars' var_`i'`j'`z'
if `poly' > 3{
forv g = `z'/`varnum'{
qui g var_`i'`j'`z'`g' = var_`i'*var_`j'*var_`z'*var_`g'
loc interactionvars `interactionvars' var_`i'`j'`z'`g'
if `poly' > 4{
forv v = `g'/`varnum'{
qui g var_`i'`j'`z'`g'`v' = var_`i'*var_`j'*var_`z'*var_`g'*var_`v'
loc interactionvars `interactionvars' var_`i'`j'`z'`g'`v'
if `poly' > 5{
forv s = `v'/`varnum'{
qui g var_`i'`j'`z'`g'`v'`s' = var_`i'*var_`j'*var_`z'*var_`g'*var_`v'*var_`s'
loc interactionvars `interactionvars' var_`i'`j'`z'`g'`v'`s'
}
}
}
}
}
}
}
}
}
}
if ("`method'" == "wrdg" | "`method'" == "mr"){
/// generate GMM fit - for each state and free variable we need to initialize the GMM
foreach element in `free' `state' `proxyGO'{
local gmmfit `gmmfit'-{`element'}*`element'
}
/// generate lag interactionvars
foreach var of local interactionvars{
cap g l_gen`var' = l.`var'
loc lagInteractionvars `lagInteractionvars' l_gen`var'
}
if !mi("`control'"){
loc wrdg_contr1 "-{xe: `control'}"
loc wrdg_contr2 "-{xf: `control'}"
}
if "`method'" == "wrdg"{ /* WRDG */
`vv' qui gmm (eq1: `depvar' `gmmfit' -{xd: `interactionvars'} `wrdg_contr1' - {a0}) /* y - a0 - witB - xitJ - citL
*/ (eq2: `depvar' `gmmfit' -{xc: `lagInteractionvars'} `wrdg_contr2' - {a0} - {e0}), /* y - e0 - witB - xitJ - p(cit-1L) - ... - pg(cit-1L)^g
*/ instruments(eq1: `free' `interactionvars' `control') /* Zit1 = (1,wit,xit,c0it)
*/ instruments(eq2: `state' `lagfreeVars' `lagInteractionvars' `control') /* Zit2 = (1,xit,wit-1,cit-1)
*/ winitial(unadjusted, independent) nocommonesample /*
*/ technique(`optimizer') conv_maxiter(`maxiter') `conv_nrtol' `init_gmm'
}
else{ /* MrEst */
/// god forgive me: in order to overcome the difference equation in system GMM we launch both equations in level and take the initial weighting matrix
loc instnum = (`freenum' + `statenum')
forv i = 1/`instnum'{
loc intVars `intVars' var_`i'
}
loc interinstr: list interactionvars - intVars
qui gmm (`depvar' `gmmfit' - {xh: `interactionvars'} `wrdg_contr1' - {a0}), quickd instruments(1: `interinstr') /*
*/ xtinstruments(`free' `state', l(0/`lags')) winitial(xt L) onestep conv_maxiter(1)
qui mat W1 = e(W)
qui gmm (`depvar' `gmmfit' - {xj: `lagInteractionvars'} `wrdg_contr2' - {a0} - {e0}), quickd /*
*/ instruments(`state' `lagInteractionvars') xtinstruments(`free' `state', l(2/`lags'))/*
*/ /*xtinstruments(`state', l(0/`lags'))*/ winitial(xt L) onestep conv_maxiter(1)
qui mat W2 = e(W)
mata W_hat = st_matrix("W1"),J(rows(st_matrix("W1")),cols(st_matrix("W2")),0) \ J(rows(st_matrix("W2")),cols(st_matrix("W1")),0), st_matrix("W2")
mata st_matrix("W_hat", W_hat)
/// launch the gmm with the winitial built above
`vv' qui gmm (1: `depvar' `gmmfit' - {xk: `interactionvars'} `wrdg_contr1' - {a0}) /* y - a0 - witB - xitJ - citL
*/ (2: `depvar' `gmmfit' - {xl: `lagInteractionvars'} `wrdg_contr2' - {a0} - {e0}),/* y - e0 - witB - xitJ - p(cit-1L) - ... - pg(cit-1L)^g
*/ instruments(1: `interinstr' `control') xtinstruments(1: `free' `state', l(0/`lags')) /*
*/ xtinstruments(2: `free' `state', l(2/`lags')) instruments(2: `state' `lagInteractionvars' `control') /*
*/ onestep winitial("W_hat") nocommonesample quickd technique(`optimizer') conv_maxiter(`maxiter') `conv_nrtol' `init_gmm'
}
/// save elements for result posting
loc nObs = `e(N)'
loc numInstr1: word count `e(inst_1)'
loc numInstr2: word count `e(inst_2)'
mat `__b' = e(b)
mat `__b' = `__b'[1...,1..`colnum']
mat `__V' = e(V)
mat `__V' = `__V'[1..`colnum',1..`colnum']
/// save locals for Hansen's J and p-value
qui estat overid
loc hans_j: di %3.2f `r(J)'
loc hans_p: di %3.2f `r(J_p)'
continue, break
}
else{ /* if it's not WRDG or MrEst */
if !mi("`attrition'"){
foreach var in `interactionvars'{
qui g l_gen`var' = l.`var'
loc lagInterVars `lagInterVars' l_gen`var'
}
qui cap logit `exit' `lagInterVars'
if _rc == 0{
qui predict Pr_hat if e(sample), pr
}
else{
di as error "No ID exits the sample. Running the estimation with no attrition"
loc attrition ""
qui g Pr_hat = .
}
}
else{
qui g Pr_hat = .
}
/// in case of ACF we don't want free variables to appear twice in the regression
if !mi("`acf'"){
forv i = 1/`freenum'{
loc freeVars `freeVars' var_`i'
}
loc interactionvars: list interactionvars - freeVars
/// generate the needed variables for translog: a local with all power of 2 + interactions and instruments - lag of free/proxy interacted with state
if !mi("`translog'"){
loc transNum: word count `transVars'
forv i=1/`transNum'{
forv j=`i'/`transNum'{
loc interactionTransVars `interactionTransVars' var_`i'`j'
qui g lagTrans_`i'`j' = l.var_`i'`j'
loc lagInteractionTransVars `lagInteractionTransVars' lagTrans_`i'`j'
}
}
foreach fvar in `free' `proxyGO'{
tempvar d`fvar'
qui g `d`fvar'' = `fvar'^2
qui g lagInstr`fvar' = l.`d`fvar''
loc instrumentTransVars `instrumentTransVars' lagInstr`fvar'
foreach svar in `state'{
qui g instr_`fvar'`svar' = lagInstr`fvar'*`svar'
loc instrumentTransVars `instrumentTransVars' instr_`fvar'`svar'
}
}
foreach svar in `state'{
tempvar d`svar'
qui g `d`svar'' = `svar'^2
qui g lagInstr`svar' = l.`d`svar''
loc instrumentTransVars `instrumentTransVars' lagInstr`svar'
}
if `b' == 1{
loc colnum: word count `free' `state' `control' `proxyGO' `interactionTransVars'
mat `__b' = J(`reps',`colnum',.)
}
}
}
loc regvars `free' `control' `interactionvars'
loc firstRegNum: word count `regvars'
loc regNum: word count `free' `state' `control' `proxyGO' `transVars'
/// first stage
qui _xt, trequired
qui reg `depvar' `regvars'
/// generating "freeFit" as the fitted value of the free variables, to be subtracted from Y
tempvar freeFit
qui g `freeFit' = 0
foreach var in `free'{
scalar b_`var' = _b[`var']
qui replace `freeFit' = `freeFit' + (b_`var'*`var')
}
mat `firstb' = e(b)
mat `robV' = e(V)
qui predict phihat if e(sample), xb
qui g phihat_lag = l.phihat
/// retrieve starting points through an OLS estimation --> first round only, not during bootstrap
if `b' == 1{
qui reg `depvar' `toLagVars' `interactionTransVars' if `touse' == 1
if !mi("`init'"){
mat ols_s = `init' // give the starting points to the optimization routine
}
else{
mat tmp = e(b)
mata: st_matrix("ols_s",st_matrix("tmp"):+ rnormal(1,1,0,.01)) // we add some "noise" to OLS results in order not to block the optimizer
*mat ols_s = e(b)
}
/// save the first stage results
if !mi("`fsresiduals'"){
tempfile fsres
tempvar FSfit
mat score `FSfit' = `firstb'
qui g `fsresiduals' = `depvar' - `FSfit'
qui save `fsres'
}
}
qui g res = 0
if mi("`acf'"){ /* OP and LP (non-corrected) second stage */
/// here we generate a tempvar with the fitted value of all the free variables
qui replace phihat = phihat - `freeFit'
qui replace res = `depvar' - `freeFit'
qui replace phihat_lag = l.phihat
mat init = ols_s[1...,(`freenum'+1)..`regNum']'
loc toLagVars `state' `control' `proxyGO'
loc laggedVars: list laggedVars - lagfreeVars
loc instrumentVars `state' `control' `lproxyGO'
/// here we launch the mata routine for OP or LP
foreach var of varlist `toLagVars' `laggedVars' /*`lagfreeVars'*/ phihat_lag phihat{
qui drop if mi(`var')
}
/// the routine must not fail on the first estimation - otherwise the point estimates would be lost. We capture errors in the bootstrap repetitions only
if `b' == 1{
qui mata: opt_mata(st_matrix("init"),&foplp(),"`optimizer'","phihat","phihat_lag",/*
*/"`toLagVars'","`laggedVars'","`touse'",`maxiter',`tolerance',"`evaluator'","Pr_hat","res","`instrumentVars'","`endogenous'")
}
else{
cap qui mata: opt_mata(st_matrix("init"),&foplp(),"`optimizer'","phihat","phihat_lag",/*
*/"`toLagVars'","`laggedVars'","`touse'",`maxiter',`tolerance',"`evaluator'","Pr_hat","res","`instrumentVars'","`endogenous'")
if _rc != 0{
noi di "x" _continue
loc laggedVars ""
loc lagfreeVars ""
loc lagstateVars ""
loc interactionvars ""
loc lagcontrolVars ""
continue
}
}
mat `__b'[`b',1] = `firstb'[1...,1..(`freenum')],r(betas)
}
else{ /* ACF second stage */
loc toLagVars `free' `state' `control' `proxyGO' `interactionTransVars'
loc laggedVars `lagfreeVars' `lagstateVars' `lagcontrolVars' `lproxyGO' `lagInteractionTransVars'
loc instrumentVars `lagfreeVars' `state' `control' `lproxyGO' `instrumentTransVars'
/// here we launch the mata routine for ACF
foreach var of varlist `laggedVars' `lagfreeVars' phihat_lag{
qui drop if mi(`var')
}
loc betaNum: word count `free' `state' `proxyGO' `control' `interactionTransVars'
mat init = ols_s[1...,1..(`betaNum')]'
if `b' == 1{
qui mata: opt_mata(st_matrix("init"),&facf(),"`optimizer'","phihat","phihat_lag",/*
*/ "`toLagVars'","`laggedVars'","`touse'",`maxiter',`tolerance',"`evaluator'","Pr_hat","res","`instrumentVars'","`endogenous'")
}
else{
cap qui mata: opt_mata(st_matrix("init"),&facf(),"`optimizer'","phihat","phihat_lag",/*
*/ "`toLagVars'","`laggedVars'","`touse'",`maxiter',`tolerance',"`evaluator'","Pr_hat","res","`instrumentVars'","`endogenous'")
if _rc != 0{
noi di "x" _continue
loc lagInteractionTransVars ""
loc interactionTransVars ""
loc instrumentTransVars ""
loc laggedVars ""
loc lagfreeVars ""
loc lagstateVars ""
loc interactionvars ""
loc lagcontrolVars ""
continue
}
}
mat `__b'[`b',1] = r(betas)
}
}
loc lagInteractionTransVars ""
loc transNames `interactionTransVars'
loc interactionTransVars ""
loc instrumentTransVars ""
loc laggedVars ""
loc lagfreeVars ""
loc lagstateVars ""
loc interactionvars ""
loc lagcontrolVars ""
}
if ("`method'" != "wrdg" & "`method'" != "mr"){
clear
/// generate the varCovar matrix for the bootstrapped estimates
qui svmat `__b'
qui mat accum `__V' = * in 2/`reps', deviations noconstant
mat `__V' = `__V' / (`reps' - 1)
mat `__b' = `__b'[1,1...]
}
restore
/// merge the first stage residuals
if !mi("`fsresiduals'"){
qui merge 1:1 `id' `t' using `fsres', nogen keepusing(`fsresiduals')
}
mat coleq `__b' = ""
mat coleq `__V' = ""
mat roweq `__V' = ""
mat colnames `__b' = `free' `state' `control' `proxyGO' `transNames'
mat colnames `__V' = `free' `state' `control' `proxyGO' `transNames'
mat rownames `__V' = `free' `state' `control' `proxyGO' `transNames'
/// Display results - ereturn
eret clear
eret post `__b' `__V', e(`touse') obs(`nObs')
if !mi("`acf'"){
loc correction "ACF corrected"
}
eret loc cmd "prodest"
eret loc depvar "`depvar'"
eret loc free "`free'"
eret loc state "`state'"
eret loc proxy "`proxy'"
eret loc controls "`control'"
eret loc endogenous "`endogenous'"
eret loc method "`strMethod'"
eret loc model "`model'"
eret loc technique "`optimizer'"
eret loc idvar "`id'"
eret loc timevar "`t'"
eret loc correction "`correction'"
eret loc predict "prodest_p"
eret loc PFtype "`PFtype'"
eret scalar N_g = `nGroups'
eret scalar tmin = `minGroup'
eret scalar tmean = `meanGroup'
eret scalar tmax = `maxGroup'
di _n _n
di as text "`strMethod' productivity estimator" _continue
di _col(49) "`PFtype' PF"
di as text "`correction'"
if ("`model'" == "grossoutput") {
di as text "Dependent variable: revenue" _continue
}
else {
di as text "Dependent variable: value added" _continue
}
di _col(49) as text "Number of obs = " as result %9.0f `nObs'
di as text "Group variable (id): `id'" _continue
di _col(49) as text "Number of groups = " as result %9.0f `nGroups'
di as text "Time variable (t): `t'"
di _col(49) as text "Obs per group: min = " as result %9.0f `minGroup'
di _col(49) as text " avg = " as result %9.1f `meanGroup'
di _col(49) as text " max = " as result %9.0f `maxGroup'
di
_coef_table, level(`level')
if ("`method'" == "wrdg" | "`method'" == "mr"){
eret loc hans_j "`hans_j'"
eret loc hans_p "`hans_p'"
di "Hansen's J = `hans_j'"
di "Hansen's J p-value = `hans_p'"
}
end
//*---------------------------------------------------------------------*/
/// defining mata routines for optimization
capture mata mata drop opt_mata()
capture mata mata drop facf()
capture mata mata drop foplp()
mata:
/*---------------------------------------------------------------------*/
void foplp(todo,betas,X,lX,PHI,LPHI,RES,Z,PR_HAT,ENDO,crit,g,H)
{
OMEGA = PHI-X*betas'
OMEGA_lag = LPHI-lX*betas'
OMEGA_lag2 = OMEGA_lag:*OMEGA_lag
OMEGA_lag3 = OMEGA_lag2:*OMEGA_lag
/* IF clause in order to see whether we have to use the "exit" variable */
if (!missing(PR_HAT)){
PR_HAT2 = PR_HAT:*PR_HAT
PR_HAT3 = PR_HAT2:*PR_HAT
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,PR_HAT,PR_HAT2,PR_HAT3,PR_HAT:*OMEGA_lag,PR_HAT2:*OMEGA_lag,PR_HAT:*OMEGA_lag2,ENDO)
}
else{
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,ENDO)
}
g_b = invsym(OMEGA_lag_pol'OMEGA_lag_pol)*OMEGA_lag_pol'OMEGA
XI = RES-X*betas'-OMEGA_lag_pol*g_b
crit = (XI)'*(XI)
}
/*---------------------------------------------------------------------*/
void facf(todo,betas,X,lX,PHI,LPHI,RES,Z,PR_HAT,ENDO,crit,g,H)
{
W = invsym(Z'Z)/(rows(Z))
OMEGA = PHI-X*betas'
OMEGA_lag = LPHI-lX*betas'
OMEGA_lag2 = OMEGA_lag:*OMEGA_lag
OMEGA_lag3 = OMEGA_lag2:*OMEGA_lag
/* IF clause in order to see whether we have to use the "exit" variable */
if (!missing(PR_HAT)){
PR_HAT2 = PR_HAT:*PR_HAT
PR_HAT3 = PR_HAT2:*PR_HAT
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,PR_HAT,PR_HAT2,PR_HAT3,PR_HAT:*OMEGA_lag,PR_HAT2:*OMEGA_lag,PR_HAT:*OMEGA_lag2,ENDO)
}
else{
OMEGA_lag_pol = (J(rows(PHI),1,1),OMEGA_lag,OMEGA_lag2,OMEGA_lag3,ENDO)
}
g_b = invsym(OMEGA_lag_pol'OMEGA_lag_pol)*OMEGA_lag_pol'OMEGA
XI = OMEGA-OMEGA_lag_pol*g_b
crit = (Z'XI)'*W*(Z'XI)
}
/*---------------------------------------------------------------------*/
void opt_mata(init, f, opt, phi, lphi, tolag, lagged, touse, maxiter, tol, eval, | Pr_hat, res, instr, endogenous)
{
st_view(RES=.,.,st_tsrevar(tokens(res)), touse)
st_view(PHI=.,.,st_tsrevar(tokens(phi)), touse)
st_view(LPHI=.,.,st_tsrevar(tokens(lphi)), touse)
st_view(Z=.,.,st_tsrevar(tokens(instr)), touse)
st_view(X=.,.,st_tsrevar(tokens(tolag)), touse)
st_view(lX=.,.,st_tsrevar(tokens(lagged)), touse)
st_view(PR_HAT=.,.,st_tsrevar(tokens(Pr_hat)), touse)
st_view(ENDO=.,.,st_tsrevar(tokens(endogenous)), touse)
S = optimize_init()
optimize_init_argument(S, 1, X)
optimize_init_argument(S, 2, lX)
optimize_init_argument(S, 3, PHI)
optimize_init_argument(S, 4, LPHI)
optimize_init_argument(S, 5, RES)
optimize_init_argument(S, 6, Z)
optimize_init_argument(S, 7, PR_HAT)
optimize_init_argument(S, 8, ENDO)
optimize_init_evaluator(S, f)
optimize_init_evaluatortype(S, eval)
optimize_init_conv_maxiter(S, maxiter)
optimize_init_conv_nrtol(S, tol)
optimize_init_technique(S, opt)
optimize_init_nmsimplexdeltas(S, 0.00001)
optimize_init_which(S,"min")
optimize_init_params(S,init')
p = optimize(S)
st_matrix("r(betas)", p)
}
/*---------------------------------------------------------------------*/
end
|