Walk the Talk? The Effect of Voting and Excludability in Public Goods Experiments.

Appendices A-F

Economics Research International

Hans J. Czap, University of Michigan-Dearborn
Natalia V. Czap, University of Michigan-Dearborn (nczap at umich.edu)
Esmail Bonakdarian, Franklin University

* All authors contributed equally to the project.

Appendix A. Summary of the descriptive statistics for Experiment 1

 

Treatment: Excludable public good (with voting)

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of successful provision | Prop. of votes for A | Successful provision if A selected | Successful provision if B selected |
|---|---|---|---|---|---|---|---|
| 1 | 6.09 | 2.63 | 0.04 | 0.75 | 0.9 | 0.72 | 1 |
| 2 | 3.69 | 3.35 | 0.23 | 0.2 | 0.3 | 0.17 | 0.21 |
| 3 | 4.30 | 3.89 | 0.03 | 0.3 | 0.65 | 0.08 | 0.71 |
| 4 | 4.98 | 3.16 | 0.09 | 0.5 | 0.65 | 0.31 | 0.86 |
| 5 | 5.12 | 3.65 | 0.21 | 0.65 | 0.4 | 0.13 | 1 |
| 6 | 4.65 | 2.98 | 0.19 | 0.35 | 1 | 0.35 | NA |
| 7 | 3.32 | 2.83 | 0.13 | 0.1 | 0.45 | 0 | 0.18 |
| 8 | 5.39 | 2.53 | 0.01 | 0.65 | 0.75 | 0.53 | 1 |
| Average | 4.69 | 3.26 | 0.12 | 0.44 | 0.64 | 0.22 | 0.60 |

Treatment: Non-excludable public good (with voting)

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of successful provision | Prop. of votes for A | Successful provision if A selected | Successful provision if B selected |
|---|---|---|---|---|---|---|---|
| 1 | 5.34 | 2.85 | 0.11 | 0.4 | 0.9 | 0.33 | 1 |
| 2 | 5.76 | 3.85 | 0.05 | 0.7 | 1 | 0.70 | NA |
| 3 | 5.75 | 2.36 | 0.05 | 0.65 | 1 | 0.65 | NA |
| 4 | 5.98 | 2.52 | 0.03 | 0.7 | 1 | 0.70 | NA |
| 5 | 6.52 | 2.80 | 0.02 | 0.8 | 1 | 0.80 | NA |
| 6 | 4.58 | 3.28 | 0.22 | 0.25 | 0.7 | 0.14 | NA |
| 7 | 5.74 | 2.60 | 0.02 | 0.65 | 0.85 | 1 | 1 |
| 8 | 5.30 | 2.63 | 0.03 | 0.55 | 0.55 | 0.27 | 0.89 |
| Average | 5.62 | 2.93 | 0.07 | 0.59 | 0.88 | 0.49 | 0.80 |

Treatment: Excludable public good without voting, Project A

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of successful provision |
|---|---|---|---|---|
| 1 | 4.93 | 2.57 | 0.03 | 0.30 |
| 2 | 4.45 | 3.02 | 0.12 | 0.20 |
| 3 | 5.16 | 2.47 | 0.06 | 0.35 |
| 4 | 4.01 | 3.78 | 0.3 | 0.10 |
| 5 | 5.64 | 2.54 | 0.05 | 0.60 |
| 6 | 5.40 | 2.41 | 0.05 | 0.50 |
| 7 | 4.26 | 4.11 | 0.27 | 0.20 |
| Average | 4.84 | 3.09 | 0.13 | 0.32 |

Treatment: Non-excludable public good without voting, Project A

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of successful provision |
|---|---|---|---|---|
| 1 | 4.65 | 2.51 | 0.06 | 0.2 |
| 2 | 4.29 | 3.37 | 0.12 | 0.2 |
| 3 | 5.66 | 3.23 | 0.02 | 0.55 |
| 4 | 5.39 | 2.31 | 0 | 0.45 |
| 5 | 5.79 | 2.37 | 0.04 | 0.7 |
| 6 | 7.44 | 2.87 | 0.01 | 0.95 |
| 7 | 5.75 | 2.84 | 0.04 | 0.55 |
| Average | 5.57 | 2.95 | 0.04 | 0.51 |

Treatment: Excludable public good without voting, Project B

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of successful provision |
|---|---|---|---|---|
| 1 | 4.69 | 2.22 | 0.02 | 0.8 |
| 2 | 4.88 | 1.94 | 0 | 0.7 |
| 3 | 4.66 | 2.51 | 0.01 | 0.7 |
| 4 | 5.21 | 2.34 | 0 | 0.85 |
| 5 | 4.36 | 2.36 | 0.05 | 0.6 |
| 6 | 6.12 | 2.84 | 0.02 | 1 |
| 7 | 4.33 | 2.32 | 0.01 | 0.7 |
| Average | 4.89 | 2.43 | 0.02 | 0.76 |

Treatment: Non-excludable public good without voting, Project B

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of successful provision |
|---|---|---|---|---|
| 1 | 4.55 | 2.93 | 0.04 | 0.75 |
| 2 | 4.40 | 3.18 | 0.2 | 0.65 |
| 3 | 4.03 | 2.90 | 0.28 | 0.55 |
| 4 | 4.66 | 3.42 | 0.13 | 0.65 |
| 5 | 4.96 | 2.34 | 0.01 | 0.85 |
| 6 | 3.30 | 2.60 | 0.04 | 0.2 |
| 7 | 4.38 | 3.11 | 0.06 | 0.6 |
| Average | 4.32 | 2.97 | 0.11 | 0.61 |

Appendix B. Regressions on Gender effect

 

Dependent variable: Individual contribution

| Variable | Model B1: Excludable public good | Model B2: Public good |
|---|---|---|
| Constant | 4.91 (0.27)*** | 5.00 (0.28)*** |
| Round | -0.05 (0.01)*** | -0.06 (0.01)*** |
| Voting (Yes=1) | -0.09 (0.35) | 0.65 (0.35)** |
| Project A (Yes=1) | 0.18 (0.21) | 0.56 (0.26)** |
| Female (Yes=1) | 0.95 (0.16)*** | 0.67 (0.16)*** |
| Akaike Info Criterion | 10930 | 10923 |

Number of obs.: 2200 (22 groups x 20 rounds x 5 players). Standard errors are in parentheses; *** significant at 1%, ** significant at 5%, * significant at 10%.

 

 

Appendix C. Regressions on Economics major effect

 

Dependent variable: Individual contribution

| Variable | Model C1: Excludable public good | Model C2: Public good |
|---|---|---|
| Constant | 5.80 (0.30)*** | 5.46 (0.28)*** |
| Round | -0.05 (0.01)*** | -0.06 (0.01)*** |
| Voting (Yes=1) | -0.15 (0.42) | 0.47 (0.36) |
| Project A (Yes=1) | 0.14 (0.21) | 0.50 (0.26)* |
| Economics (Yes=1) | -1.42 (0.15)*** | -0.28 (0.14)* |
| Akaike Info Criterion | 10889 | 10937 |

Number of obs.: 2200 (22 groups x 20 rounds x 5 players). Standard errors are in parentheses; *** significant at 1%, ** significant at 5%, * significant at 10%.

 

 

Appendix D. Summary of the descriptive statistics for Experiment 2

 

Treatment: Excludable public good in rounds 1-10

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of unfair contributions | Prop. of successful provision |
|---|---|---|---|---|---|
| 1 | 5.98 | 1.92 | 0.00 | 0.48 | 0.60 |
| 2 | 6.80 | 1.36 | 0.00 | 0.20 | 1.00 |
| 3 | 5.34 | 3.08 | 0.14 | 0.44 | 0.30 |
| 4 | 6.39 | 2.96 | 0.00 | 0.40 | 0.90 |
| 5 | 5.48 | 2.51 | 0.00 | 0.54 | 0.30 |
| 6 | 6.14 | 3.62 | 0.00 | 0.38 | 0.80 |
| 7 | 7.17 | 2.19 | 0.00 | 0.30 | 0.90 |
| 8 | 7.01 | 2.76 | 0.04 | 0.20 | 1.00 |
| 9 | 6.98 | 3.20 | 0.02 | 0.38 | 0.80 |
| 10 | 9.17 | 1.13 | 0.00 | 0.02 | 1.00 |
| 11 | 6.50 | 2.48 | 0.00 | 0.38 | 0.90 |
| 12 | 5.42 | 2.72 | 0.12 | 0.38 | 0.20 |
| Total | 6.53 | 2.76 | 0.03 | 0.34 | 0.73 |

Treatment: Excludable public good in rounds 11-20

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of unfair contributions | Prop. of successful provision |
|---|---|---|---|---|---|
| 13 | 3.46 | 3.62 | 0.38 | 0.70 | 0.20 |
| 14 | 3.48 | 4.07 | 0.32 | 0.64 | 0.20 |
| 15 | 6.63 | 1.76 | 0.00 | 0.34 | 0.90 |
| 16 | 7.34 | 2.57 | 0.00 | 0.26 | 1.00 |
| 17 | 6.69 | 2.96 | 0.04 | 0.18 | 0.90 |
| 18 | 2.56 | 3.62 | 0.54 | 0.74 | 0.00 |
| 19 | 6.76 | 1.85 | 0.00 | 0.14 | 1.00 |
| 20 | 5.23 | 2.44 | 0.08 | 0.40 | 0.40 |
| 21 | 5.44 | 2.35 | 0.02 | 0.48 | 0.40 |
| 22 | 3.99 | 4.11 | 0.24 | 0.62 | 0.12 |
| 23 | 6.04 | 2.10 | 0.02 | 0.42 | 0.60 |
| 24 | 7.06 | 2.77 | 0.00 | 0.42 | 0.90 |
| Total | 5.39 | 3.33 | 0.14 | 0.44 | 0.55 |

Treatment: Non-excludable public good in rounds 1-10

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of unfair contributions | Prop. of successful provision |
|---|---|---|---|---|---|
| 1 | 4.68 | 3.36 | 0.24 | 0.52 | 0.50 |
| 2 | 3.76 | 4.13 | 0.48 | 0.54 | 0.10 |
| 3 | 7.46 | 2.33 | 0.00 | 0.20 | 1.00 |
| 4 | 8.21 | 2.83 | 0.02 | 0.18 | 1.00 |
| 5 | 6.52 | 3.07 | 0.04 | 0.24 | 0.80 |
| 6 | 6.02 | 2.32 | 0.02 | 0.38 | 0.50 |
| 7 | 6.70 | 2.22 | 0.00 | 0.20 | 1.00 |
| 8 | 5.84 | 2.63 | 0.02 | 0.54 | 0.40 |
| 9 | 6.14 | 2.16 | 0.02 | 0.28 | 0.60 |
| 10 | 5.69 | 3.71 | 0.20 | 0.46 | 0.40 |
| 11 | 6.16 | 2.74 | 0.00 | 0.42 | 0.60 |
| 12 | 5.99 | 3.18 | 0.12 | 0.48 | 0.40 |
| Total | 6.10 | 3.13 | 0.10 | 0.37 | 0.61 |

Treatment: Non-excludable public good in rounds 11-20

| Group | Mean contribution | Std. dev. | Prop. of zero contributions | Prop. of unfair contributions | Prop. of successful provision |
|---|---|---|---|---|---|
| 13 | 5.99 | 2.02 | 0.00 | 0.44 | 0.50 |
| 14 | 6.63 | 2.31 | 0.00 | 0.32 | 0.90 |
| 15 | 4.11 | 3.83 | 0.38 | 0.60 | 0.10 |
| 16 | 6.16 | 3.28 | 0.00 | 0.40 | 0.80 |
| 17 | 5.98 | 2.20 | 0.00 | 0.48 | 0.40 |
| 18 | 4.84 | 3.53 | 0.24 | 0.46 | 0.20 |
| 19 | 6.88 | 2.33 | 0.00 | 0.30 | 1.00 |
| 20 | 5.92 | 3.12 | 0.14 | 0.30 | 0.30 |
| 21 | 4.51 | 4.05 | 0.26 | 0.58 | 0.20 |
| 22 | 6.98 | 3.90 | 0.22 | 0.26 | 0.90 |
| 23 | 6.06 | 2.68 | 0.00 | 0.44 | 0.60 |
| 24 | 5.94 | 2.23 | 0.00 | 0.42 | 0.60 |
| Total | 5.83 | 3.14 | 0.10 | 0.42 | 0.54 |

Appendix E. Variables used in the Genetic Algorithm runs.

 

Set 1: 190 variables.

| Linear term | Squared term | Interaction terms with |
|---|---|---|
| Playing Excludable public goods first (Yes=1) |  | Round; Individual contribution (-1)[1]; Group contribution (-1); Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Excludable PG (Yes=1) |  | Round; Individual contribution (-1); Group contribution (-1); Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Round | Round squared | Individual contribution (-1); Group contribution (-1); Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Individual contribution (-1) | Individual contribution (-1) squared | Group contribution (-1); Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Rounds 1-10 (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Group contribution (-1) |  | Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Rounds 1-10 (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Success (-1) (Yes=1) |  | Unfair Contribution (-1) (Yes=1); Rounds 1-10 (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Unfair Contribution (-1) (Yes=1) |  | Rounds 1-10 (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Rounds 1-10 (Yes=1) |  | Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Female (Yes=1) |  | Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Economics (Yes=1) |  | Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Social Sciences (Yes=1) |  | Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Freshman (Yes=1) |  | Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Junior (Yes=1) |  | Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Senior (Yes=1) |  | Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |
| Grew up in Urban Area (Yes=1) |  | Parents Education; Empathy; Internal Locus of Control; Trust |
| Parents Education | Parents Education squared | Empathy; Internal Locus of Control; Trust |
| Empathy | Empathy squared | Internal Locus of Control; Trust |
| Internal Locus of Control | Internal Locus of Control squared | Trust |
| Trust | Trust squared |  |

Set 2.1: 20 variables.

| Linear term | Squared term | Interaction terms with |
|---|---|---|
| Individual Earnings (-1) | Individual Earnings (-1) squared | Playing XPG first (Yes=1); Excludable PG (Yes=1); Round; Individual contribution (-1); Group contribution (-1); Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Rounds 1-10 (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |

Set 2.2: 20 variables.

| Linear term | Squared term | Interaction terms with |
|---|---|---|
| Individual Profit from Group Good (-1) | Individual Profit from Group Good (-1) squared | Playing XPG first (Yes=1); Excludable PG (Yes=1); Round; Individual contribution (-1); Group contribution (-1); Success (-1) (Yes=1); Unfair Contribution (-1) (Yes=1); Rounds 1-10 (Yes=1); Female (Yes=1); Economics (Yes=1); Social Sciences (Yes=1); Freshman (Yes=1); Junior (Yes=1); Senior (Yes=1); Grew up in Urban Area (Yes=1); Parents Education; Empathy; Internal Locus of Control; Trust |

 

 

Appendix F. Discussion of the Genetic Algorithm.

A genetic algorithm is a stochastic, population-based global optimization algorithm that borrows heavily from the processes of natural evolution to solve problems, or, in other words, to evolve solutions to problems. One can also view it as a search algorithm that uses evolutionary pressure to steer its search in the desired direction. The most basic form of the genetic algorithm was first described by John Holland (1975) and subsequently by Goldberg (1987). The algorithm has since evolved, and a number of nature-inspired evolutionary algorithms are now in use. We use a variation of the simple genetic algorithm, CHC[2], introduced by Eshelman (1990), which has proven successful and robust in solving problems.

 

F1. The Basic Genetic Algorithm

To solve a problem with the help of a genetic algorithm, two ingredients are needed. The first is the ability to encode a possible solution to the problem we are trying to solve, in many cases as a binary string. We then randomly generate a set of possible solutions using this encoding scheme; this set is referred to as our population. The second is the ability to evaluate a given member of our population for its fitness, i.e., to determine how well that member (a potential solution) solves our problem. This allows us to evaluate the whole population of potential solutions and rank them according to their fitness. The evaluation function receives an encoded potential solution and applies user-defined criteria to compute a numerical score that assesses its fitness for solving the problem at hand. Usually, solutions closer to the optimum receive a higher fitness score than those further away.

The basic idea behind the simple genetic algorithm (sGA) is the following series of steps, motivated by the process of natural selection. Given a population of potential solutions of size N, all members are evaluated and ranked according to their fitness, with members that come closer to solving the problem receiving a higher ranking. Next, two members of the population are probabilistically selected (those with higher fitness values are statistically favored over those with lower values) and occasionally[3] some of their data (encoding values) are exchanged. This operation, known as crossover, simulates mating and results in two new offspring, which are placed into a new child population. As the offspring are moved into the child population, small random perturbations are occasionally[4] introduced into the encoding to emulate the process of mutation.

Over time, just as in natural evolution and selection, the population continues to improve, as better solutions are more likely to be selected for reproduction and to form the basis for new generations of solutions.

The purpose of mutation is to prevent the population from stagnating and failing to explore other directions in the search space. Over time the population tends to become more homogeneous, as the algorithm evolves solutions mostly from the fitter members. Mutation is intended to counter this trend in a "reasonable" manner: we do not want wild mutations that cause an erratic exploration of the search space, just enough perturbation to dislodge the search should it get stuck in a local optimum.

 

F2. Basic Steps for an sGA

1. Create an initial population of size N (the parent population)

2. Evaluate and rank the parent population

3. Create a new child population of size N by

   a. probabilistically selecting 2 parents

   b. mating the parents (crossover: exchange of encoding information)

   c. randomly introducing some changes in the encoding (mutation)

4. Replace the parent population with the current child population

5. Go to step 2, starting a new generation, until a termination condition[5] is met

Mitchell (1998) provides a very readable and approachable introduction to genetic algorithms for those wanting more in-depth knowledge.
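The steps above can be sketched in a few lines of Python. This is a minimal illustration on a toy problem (maximizing the number of 1-bits in a binary string); the population size, crossover and mutation rates, and fitness function are arbitrary choices for the example, not the parameters used in our runs.

```python
import random

random.seed(42)

GENOME_LEN = 20    # bits per candidate solution
POP_SIZE = 30      # N
P_CROSSOVER = 0.7  # probability that a selected pair exchanges encoding values
P_MUTATION = 0.01  # per-bit probability of a random perturbation

def fitness(genome):
    # Toy evaluation function: count of 1-bits (optimum = all ones).
    # In practice this is the user-defined criterion described above.
    return sum(genome)

def select(pop):
    # Probabilistic (fitness-proportionate) selection: fitter members
    # are statistically favored but not guaranteed to be picked.
    weights = [fitness(g) + 1 for g in pop]  # +1 avoids zero weights
    return random.choices(pop, weights=weights, k=2)

def crossover(a, b):
    # Step 3b: occasionally exchange encoding values at a random point.
    if random.random() < P_CROSSOVER:
        point = random.randrange(1, GENOME_LEN)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(genome):
    # Step 3c: small random perturbation of the encoding.
    return [bit ^ 1 if random.random() < P_MUTATION else bit for bit in genome]

def run_sga(generations=50):
    # Step 1: random initial parent population.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):                  # step 5: repeat until done
        children = []
        while len(children) < POP_SIZE:           # step 3: build child population
            p1, p2 = select(pop)                  # step 3a
            c1, c2 = crossover(p1, p2)            # step 3b
            children += [mutate(c1), mutate(c2)]  # step 3c
        pop = children[:POP_SIZE]                 # step 4: replace parents
    return max(pop, key=fitness)

best = run_sga()
```

Here the termination condition is simply a fixed number of generations; in practice one would also stop once the best fitness stops improving.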

 

F3. CHC algorithm

The CHC algorithm is based on the sGA but differs in a number of important ways. The main differences are noted below; readers interested in further details should consult Eshelman (1990) and Bonakdarian et al. (2009).

Unlike the sGA, where two probabilistically selected parents are always mated, CHC mates the selected parents only if they are sufficiently different from each other. An additional difference is that the offspring are combined with the original parent population for ranking (potentially a set of size 2N) before the top N members of this combined population are selected to form the new parent population. Finally, unlike the sGA, where the mutation step is part of each iteration (generation), CHC applies mutation only when the population has become too homogeneous, as a means of generating a new population. Once the lack of diversity in the population reaches a critical point, the best solution found up to that point is used as a template to generate a new child population of size N by mutating the template N-1 times. The algorithm refers to this as a cataclysmic event.

Our implementation of the CHC algorithm is true to the description given by Eshelman (1990), with the exception that we do not favor the parent when there is a "tie" in fitness values between child and parent; in our implementation one of the two is chosen at random.
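To make these differences concrete, the following is a compact sketch of a CHC-style generation loop on the same kind of toy bit-string problem as the sGA sketch. The HUX-style crossover, the distance threshold schedule, and the 35% cataclysmic mutation rate are illustrative simplifications, not the exact parameters of Eshelman's algorithm or of our implementation.

```python
import random

random.seed(7)

GENOME_LEN = 20
POP_SIZE = 20

def fitness(g):
    return sum(g)  # toy criterion, as in the sGA sketch

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def hux(a, b):
    # Half-uniform crossover: swap half of the differing bits.
    diff = [i for i in range(GENOME_LEN) if a[i] != b[i]]
    random.shuffle(diff)
    c1, c2 = a[:], b[:]
    for i in diff[:len(diff) // 2]:
        c1[i], c2[i] = b[i], a[i]
    return [c1, c2]

def cataclysm(best, rate=0.35):
    # Cataclysmic event: keep the best member and generate the rest of
    # the population by heavily mutating it N-1 times.
    def heavy_mutant():
        return [b ^ 1 if random.random() < rate else b for b in best]
    return [best[:]] + [heavy_mutant() for _ in range(POP_SIZE - 1)]

def chc(generations=60):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    threshold = GENOME_LEN // 4  # incest-prevention distance threshold
    for _ in range(generations):
        children = []
        pairs = random.sample(pop, len(pop))  # random pairing of parents
        for a, b in zip(pairs[::2], pairs[1::2]):
            if hamming(a, b) // 2 > threshold:  # mate only if sufficiently different
                children += hux(a, b)
        if children:
            # Elitist merge: rank parents + children together, keep the top N.
            pop = sorted(pop + children, key=fitness, reverse=True)[:POP_SIZE]
        else:
            threshold -= 1       # population is converging
            if threshold <= 0:   # diversity is critically low: restart
                pop = cataclysm(max(pop, key=fitness))
                threshold = GENOME_LEN // 4
    return max(pop, key=fitness)

best = chc()
```

Note that mutation never occurs inside the main loop: it appears only in the cataclysmic restart, which is the key structural difference from the sGA.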

 

F4. Fitness Functions

Our fitness functions were based on running regressions with a given subset of variables and using the resulting Akaike Information Criterion (AIC), Bayes Information Criterion (BIC), and log-likelihood (logLik) values. For each of these "main" statistics we used three approaches to compute the fitness measure: "pure", "percent", and "delta". The best results for the various fitness criteria and subsets are presented in Tables F1-F3.

For the "pure" runs the value of AIC, BIC, and logLik respectively determined the fitness value for the given input set of variables. Lower values indicate higher fitness in each case.

In the "percent" approach we computed the fitness value based on AIC, BIC, and logLik respectively, and added a measure of the degree of significance of the variables. This measure was calculated by assigning "significance scores", based on the p value, to each variable, summing these scores up, and expressing it as a percentage of the best case scenario (all variables at p<.01). Regressions with a larger percentage of significant variables have therefore a higher fitness value. The idea behind this approach is to reward equations that provide a large number of significant variables, because typically those are more interesting for analysis.

Finally, the "delta" runs rewarded equations that consisted of fewer variables. If two different input sets of variables yielded the same main statistic (AIC, BIC, logLik respectively), the one with fewer variables would be favored. The reason for this modification is that in many cases we are interested in finding the most important variables only.

 

Table F1: Main Criterion: AIC

 

|  | Subset 1 + 2.1: Pure | Subset 1 + 2.1: Delta | Subset 1 + 2.1: Percent | Subset 1 + 2.2: Pure | Subset 1 + 2.2: Delta | Subset 1 + 2.2: Percent |
|---|---|---|---|---|---|---|
| Akaike Info Criterion | 10187.73 | 10191.38 | 10186.22 | 10184.69 | 10186.85 | 10183.3 |
| Bayes Info Criterion | 10541.45 | 10562.13 | 10534.26 | 10515.70 | 10580.30 | 10542.76 |
| Log Likelihood | 5031.86 | 5030.69 | 5032.11 | 5034.34 | 5024.43 | 5028.68 |
| Number of variables | 59 | 62 | 58 | 55 | 66 | 60 |
| Percent of significant variables, at 5% | 93.22 | 83.871 | 84.483 | 89.09 | 86.364 | 85.00 |

 

Table F2: Main Criterion: BIC

 

|  | Subset 1 + 2.1: Pure | Subset 1 + 2.1: Delta | Subset 1 + 2.1: Percent | Subset 1 + 2.2: Pure | Subset 1 + 2.2: Delta | Subset 1 + 2.2: Percent |
|---|---|---|---|---|---|---|
| Akaike Info Criterion | 10233.46 | 10239.97 | 10233.46 | 10233.67 | 10233.67 | 10240.83 |
| Bayes Info Criterion | 10347.94 | 10348.73 | 10347.94 | 10348.15 | 10348.15 | 10349.60 |
| Log Likelihood | 5096.73 | 5100.98 | 5096.73 | 5096.84 | 5096.84 | 5101.42 |
| Number of variables | 17 | 16 | 17 | 17 | 17 | 16 |
| Percent of significant variables, at 5% | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |

 

Table F3: Main Criterion: LogLikelihood

 

|  | Subset 1 + 2.1: Pure | Subset 1 + 2.1: Delta | Subset 1 + 2.1: Percent | Subset 1 + 2.2: Pure | Subset 1 + 2.2: Delta | Subset 1 + 2.2: Percent |
|---|---|---|---|---|---|---|
| Akaike Info Criterion | 10237.62 | 10287.75 | 10233.81 | 10234.20 | 10294.02 | 10234.66 |
| Bayes Info Criterion | 10874.06 | 11031.01 | 10858.98 | 10870.64 | 11054.11 | 10871.1 |
| Log Likelihood | 5006.81 | 5012.88 | 5006.90 | 5005.10 | 5013.01 | 5005.33 |
| Number of variables | 109 | 128 | 107 | 109 | 131 | 109 |
| Percent of significant variables, at 5% | 59.633 | 50.781 | 61.682 | 61.468 | 45.802 | 60.550 |

 

 



[1] The symbol (-1) indicates that the variable is taken from the previous round.

[2] CHC stands for Cross-generational elitist selection, Heterogeneous recombination, and Cataclysmic mutation.

[3] This probability is determined by the crossover parameter, which takes on a value between 0 and 1.0.

[4] The mutation parameter governs the probability of this occurring.

[5] Two common termination conditions are (1) a fixed number of iterations (generations) and (2) the improvement of the best solutions leveling off for a certain number of generations.