Dr. Hamidou Jawara
Hi Matida, it would have been good if you had asked the team for clarification on the sampling before making such bold public claims about the internal and external validity of the survey findings. All good; I will respond to each of your major claims while trying to keep things short:
(a) Methodology section under-articulated: given the nature of the subject and the audience we targeted, it made no sense to us to provide detailed technicalities on the methodology beyond what we thought the majority of readers could easily comprehend.
(b) Internal and external validity: you discussed internal and external validity as if the goal of our polling were to make a causal inference about the intention to vote or to forecast the outcome of the upcoming elections. Neither of these was the goal of this poll. Instead, the goal was to provide some insight into the opinions of potential voters on the various issues we addressed in this opinion poll. To that end, I think we have some reliable results that are worth the attention of the general public. Forecasting or predicting the election outcome will be the aim of the next opinion poll, and for us that will be the time to be concerned with internal validity. Besides, the way we reported our findings is very clear in our report, and someone well acquainted with the interpretation of survey results would have seen this. Furthermore, there are opinion polls based on non-probability sampling methods, and the polling literature has shown that such surveys at times do well in predicting election outcomes. This means that even when internal validity is under threat, it is possible to have results with external validity. Therefore, rejecting opinion poll results on the basis of internal validity threats can at times be very myopic.
(c) There are other survey data; why the IHS?: There are other survey datasets, but none is as rich as the IHS. Secondly, the IHS is a random sample of households in The Gambia that used the updated 2013 census as its frame. Therefore, it is a fair representation of the population of all households in The Gambia. In statistics, it is well known that a subsample drawn at random from a random sample is still random. It is on this basis that we used the IHS as our frame. It is understandable that this is not perfect, especially when the target respondents are likely voters. However, it was the best dataset we could use at the time. That is why, in the analysis, we focused only on registered voters in the sample who self-reported that they are likely to vote in the upcoming election.
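The point that a random subsample of a random sample remains random can be illustrated with a small simulation. This is a hypothetical sketch with made-up numbers, not the actual IHS or census data: a "frame" is drawn at random from a synthetic population, and repeated subsamples from that frame still produce unbiased estimates of the population proportion.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 100,000 "households"; 40% hold some opinion.
population = [1] * 40_000 + [0] * 60_000

# Stage 1: a random frame drawn from the population (standing in for a
# frame like the IHS, itself randomly sampled from the census).
frame = random.sample(population, 10_000)

# Stage 2: many subsamples drawn at random from that frame.
estimates = [statistics.mean(random.sample(frame, 500)) for _ in range(2_000)]

# The subsample estimates center on the population proportion (about 0.40),
# illustrating that a random subsample of a random sample is itself random.
print(round(statistics.mean(estimates), 3))
```

Filtering such a subsample further (for example, to self-reported likely voters) narrows the population the estimates describe, which is why the report restricts its claims to that group.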
(d) Basis on which telephone numbers were selected: the frame we used had telephone numbers for households, each household having at least one and at most three numbers. Depending on the GSM operator of the number associated with a selected household, the enumerator called the household using a line from the same provider. For example, if a randomly selected respondent had a GAMCEL number, the enumerator called the respondent using a GAMCEL line, and if it was an AFRICELL or QCELL number, then with an AFRICELL or QCELL line. This is what we tried to explain in the report. So it was not the case that we chose numbers based on GSM operator and therefore missed COMIUM users. Also, your claim that COMIUM is a leading GSM operator in the country is misleading.
(e) Survey dates: we were very clear on this in our report. Our survey did not capture what happened after 27th May 2021. In fact, this is the reason we will conduct another poll closer to the election. We know that intention to vote is very dynamic and changes quickly depending on what happens in the political terrain. Many things have happened since our survey, and it is very likely that these will affect voters' perceptions.
(f) Design effects: design effects matter for sample size determination and for showing how the precision of estimates may have changed due to the sampling design adopted by the researcher. Not every reader is interested in such information, which is why we did not report it. The design effect does not affect internal or external validity; if you are a statistician, you should know that.
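For readers curious what a design effect actually quantifies: under cluster sampling it is commonly approximated by the Kish formula DEFF = 1 + (m - 1) * rho, where m is the average cluster size and rho the intracluster correlation. It deflates the effective sample size, so it is a precision adjustment, not a validity criterion. A minimal sketch with hypothetical numbers (not figures from our survey):

```python
def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Kish approximation of the design effect for equal-sized clusters:
    DEFF = 1 + (m - 1) * rho."""
    return 1 + (avg_cluster_size - 1) * icc


def effective_sample_size(n: int, deff: float) -> float:
    """Nominal sample size deflated by the design effect."""
    return n / deff


# Hypothetical example: 1,200 respondents in clusters of 10, ICC = 0.05.
deff = design_effect(avg_cluster_size=10, icc=0.05)
print(round(deff, 2))                                # 1.45
print(round(effective_sample_size(1200, deff), 1))   # 827.6
```

In other words, clustering makes estimates less precise (wider confidence intervals) but says nothing by itself about whether the estimates are internally or externally valid.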
You concluded that because we did not provide enough detail regarding our sampling method, the results lack internal and external validity. The level of detail in the description of a sampling method is NOT one of the factors that threaten the internal or external validity of survey results. Please let us not politicise the results of the survey; let us look at them with an open mind. Our aim is not to make one party popular and another unpopular. Our aim was to provide some insight into the perceptions of a section of voters. I hope I was able to bring some clarity here. Thanks!
Matida Jallow, you will agree with me that even in an academic paper, depending on the audience, you can shelve some details of your methodology. Anyway, this was a learning point for us as well, and we will consider some of these things in our next poll. But given what was available at the time, this was the best approach we could use. We even considered a random-digit-dialling (RDD) design, but because it was impractical for our context we decided against it. We could not sample from registered voters because, during the sampling design phase of our project, voter registration had not yet started, and we were not sure the IEC would grant us access to the data on registered voters. So we had to rely on a reliable frame from which to select the sample, and this was the IHS. Just so you know, we take every genuine criticism on board, as this helps us improve what we do. We are a centre that is out to provide a quality service to the masses. That is the mantra that keeps us moving.