Thursday, March 31, 2016

PCA/ACA 2016 and Fan Phenomena: Mermaids

Just coming back from the 46th Annual PCA/ACA Conference in Seattle, WA (well, technically I came back a few days ago, but we had some chocolate egg hunting in the meantime somehow). As always when the meeting is held on the West Coast, there were slightly fewer people than when it is held on the East Coast or in the central parts of the country (even worse, last year it was in New Orleans, and that is a difficult place to compete with). Anyway, although there is always quite a lot of variability in PCA/ACA talks (but that is due to the very nature of the topics of this conference, I guess), the quality was once again there. I went to a few really interesting talks, coming back with new ideas and new contacts with some great people.

While individual presentations are of course of interest, it seems I am more and more drawn to the group discussions. Besides the open presentation I made in the Professional Development Area on “Publishing in the digital era: Presenting Computers in Human Behavior”, which was a mix of thoughts and advice on publishing in the fields of Internet studies, Internet culture, and digital life, and of some specifics on Computers in Human Behavior (do I really need to say again all the good things I think about this journal? Obviously I am biased, but it is nonetheless a great venue to publish all things related to cyberpsychology), I went to listen to what people had to say in three roundtables. While all three of them were surprisingly excellent, one really deserves a few words. The “Game Studies XX – Roundtable on Teaching, Ethics, and the Future of Game Design: What are our Responsibilities?” from the always extremely vivid Game Studies Area was quite something. It was chaired by Dr. Josh Call, who is Professor at Grand View University, and while the attendance was not that numerous (sadly, it was in the evening, competing with quite a few other popular sessions such as the famous SFF screening), the discussions there were tremendous and thought-provoking (at least in my opinion). It was way too short to tap into the full potential of the issues raised there, but definitely, it was good to have some serious group thinking about these topics.

Oh, and yes, my talk on merfolk (“From fantasy, to virtual spaces, to reality, and back: Structuring the (online) merfolk community”, presented in the “Identity in Online Communities” session of the Virtual Identities and Self-Promoting Area) went really well! And related to that, the awesome book “Fan Phenomena: Mermaids” from Intellect is finally out!

It is available directly from UChicago Press; for your convenience, the link is even just here:

Oh My! Mermaids in movies, in popular culture, the online merfolk community, and much more…
You KNOW you want to get it!
As with all books in Intellect's Fan Phenomena series, this opus combines scholars’ essays with the points of view of some leaders of the fan community (in this case, three amazing people: the mermaid Hannah Fraser, the (virtual and real) Gold Mermaid Cynthia, and the mermaid instructor Marielle Chartier). And if you are into Internet studies (there is a lot of material related to the formation of online communities, and to how online communities can have real-life impacts), film studies (obviously), fan studies (with three amazing non-scholar contributors, as well as in-depth analyses of the structure of the merfolk fan community), or cultural studies in general, I believe you might find it of interest.

Thursday, February 18, 2016

Can qualitative research help us to avoid hitting the wall in biomedical research?

Like many people, I assume, I have been following the recent exchanges between Prof. Trisha Greenhalgh (Oxford, UK, see her letter here) and the editors of the BMJ (their answer can be found here).

Obviously, this debate appeals greatly to me, as I am both a researcher relying heavily on mixed (quantitative/qualitative) approaches, and an Associate Editor for a journal ( Computers in Human Behavior ) covering a field in which both quantitative and qualitative studies are to be found (as a disclaimer, the opinions stated below are mine and not those of the whole editorial board of the journal, although I am quite sure that most would share my views on this matter). We have actually had quite a few discussions about this exact topic with the students in the lab over the last few months.

Anyway, if we take biomedical research as a whole, I think we would all agree that we are currently “hitting a wall”. Despite the number of research groups probably never having been so high, most of the research currently published sadly does not translate into major advances at patients’ bedsides. We are accumulating study after study without being able to see a truly measurable impact on treatment outcomes, on the efficiency of public health systems, and, ultimately, on patients’ health. Not that we are not progressing; rather, progress is slower than what we should be expecting.

In order not to hit this wall, we might need a paradigm shift. Could this paradigm shift be related to qualitative methods? Indeed, an increasing number of researchers are turning back to qualitative methods to document various health-related phenomena. Qualitative research has always been around, and sadly, it has always been opposed to quantitative research. Who is to blame? I am actually not sure. Of course, “hardcore” scientists will all claim that qualitative research has no value, but I have heard quite a few times qualitative researchers saying exactly the same about quantitative research. In both cases, such statements are (sorry to say, but it is true) stupid. What gives a methodology its value is its adequacy to the question we ask. If answering a question requires quantitative analyses, fine. If it requires another perspective, one that could be brought using qualitative methods, then so be it.

As I just mentioned, I think that for a few years now, more and more people have been seeing that we were missing something by focusing only on large-scale measurable data. Nonetheless, the perception of qualitative research by a lot of journal editors (not to mention funding agencies) is still rather negative.

Given that as a researcher I regularly conduct studies using mixed approaches, as an Associate Editor I am obviously highly sympathetic to studies relying on such methodologies. In other words, I do not immediately turn down a study because it is solely a qualitative study. Does that mean that I will consider any qualitative study inherently good? Certainly not. Sadly, here come some of the issues that we are facing with qualitative research. Some people believe that qualitative research is a good way to do research when one does not want to deal with statistics or complex tools. But in qualitative research, there is the word “research”. And indeed, qualitative research is a form of research, with its own methods, limits, potential biases, and so on. Furthermore, it is a field which is, like any field, evolving. Unfortunately, not all qualitative researchers understand that: a common response I get from authors once their papers have been rejected by reviewers for methodological concerns (and here, I want to state again that I am familiar with such methods, so I make a lot of effort to find reviewers who are competent in qualitative methods when such a paper is submitted) typically goes like this: “in his paper, X said in 1980 that this approach is appropriate”. Sure, but we are in 2016… let me count: 1980 was (my, I am getting old) 36 years ago. Is your field so stagnant that it did not evolve at all in 36 years? Probably not, so update your methods. Second point: not all qualitative approaches are appropriate for all situations. Sometimes, a qualitative strategy was appropriate for a given situation, population, whatever, but is not for another one. Another point to consider regarding methods (and the necessary updates in qualitative methods): back in the day, when I was younger, we were able to publish papers on mouse behavior with just one behavioral test and one experimental drug.
And we were able to publish that in high impact factor journals at the time. Now, just to get into a moderate impact factor journal, you would need a dozen behavioral tests, the same number of drugs, and a bit of molecular biology on top to get some “mechanistic explanation”. Well, the methodological quality requirements of quantitative research have jumped up (I know it is not the case to the same magnitude in all fields, but still). So, it is legitimate to assume that people expect qualitative research methodology to be updated too. And some qualitative research does just that. Those papers should indeed be published, and receive the attention they deserve. Which leads me to the next point.

Not all research is good. Of course, there is a lot of bad quantitative research too. But bad quantitative research can create a (transitory) illusion: it does not necessarily look instantaneously bad. If you read it quickly, you might think it is plausible research. This is not the case for qualitative research: badly done qualitative research usually looks bad. So, from an editor’s point of view, when a paper comes in, a bad quantitative study might pass the first quick screening and be sent out to review, while a bad qualitative study will probably be immediately blocked by the editor and not sent to the reviewers. That does not mean that the bad quantitative research will pass (usually it does not). That might leave a bad feeling among some qualitative researchers. But trust me, my friends, the proportion of quantitative research getting rejected is very high too! When I say that, I am CERTAINLY NOT saying that all qualitative research papers which get rejected right away by the editor are bad. Indeed, I fully agree with the statement in the letter of Prof. Greenhalgh et al. that editors need to be educated in qualitative methods (if they are, then they will select appropriate reviewers). But with the workload we all have, I understand an editor unfamiliar with qualitative research systematically rejecting qualitative papers after having had a bunch of them rejected by the two reviewers, with rough comments sent to the editor (“why the hell are you sending me that to review?”).
Rather than forcing journals to accept one qualitative paper a month (quotas are not always the best way to do things: a study should be published because it is good, not because we need one per month), a way to change this perspective could be to include specialists in qualitative or mixed methods on the editorial boards of major journals, so that GOOD qualitative research papers are given a fair chance and can undergo a fair peer-review process.

I have seen the argument that “qualitative papers don’t get cited”. Well, that is obviously not true. GOOD qualitative papers get cited. As with any paper, it is impossible to predict its fate in terms of citations just by defining its type. If, decades ago, a review was sure to attract a fair number of citations, with the considerable increase in the number of papers published on a daily basis, reviews are not necessarily more cited than experimental papers nowadays. While, of course, I understand that some editors might consider citation potential in the decision process, it is not and should not be the main point to consider. The main point is the quality of the study and its value for the field. And with that in mind, qualitative studies, when well done, can be of considerable value.

Finally, a question that I ask myself quite regularly: is this qualitative/quantitative debate a real debate? I am actually not sure. I believe that we should be able to go beyond those old ghosts, and that all of us (editors, reviewers, but also authors) should become mature enough to understand that if we want to make real changes in patients’ lives, the “truth” needs to be approached from different, yet complementary, angles. Depending on what we want to explore, we might sometimes need quantitative tools, sometimes qualitative tools, and, I believe, most often both at the same time. The insights of qualitative approaches are incredibly valuable. But so are the data obtained through quantitative analyses. In my opinion, only by combining both will we be able to crush this damn wall, and to go further.

Tuesday, October 6, 2015

Book Review: Handbook of Research on Holistic Perspectives in Gamification for Clinical Practice

Obviously I am biased.
Nonetheless, this book is quite an interesting opus.

Handbook of Research on Holistic Perspectives in Gamification for Clinical Practice

It can be found here.

Health games are becoming quite a popular trend now, and given their potential, that is indeed legitimate. We hear about them regularly in the media, and we see more and more of them at specialized conferences. As for myself, as one of the associate editors of Computers in Human Behavior, I can see that more and more papers are submitted - and get published - on this topic. However, I can also see that reviewers are getting more and more familiar with health games, and, therefore, that they are getting stricter about what is good and should be published ... and what is weaker and should be rejected.

Therefore, it is critical for people working in this field, or willing to work in this field (and that is obviously true for researchers interested in getting their work published, but also, and maybe more importantly, for clinical specialists willing to develop optimal applications based on the most up-to-date knowledge in the field), to know what is going on, and what the state of the art in health games is. That being said, finding a single source covering topics as diverse as those one could expect to encounter when developing a health game paradigm is almost impossible.

This book answers this need. What is interesting about this handbook is the heuristic approach it takes. Thus, its various chapters cover a wide variety of topics, from angles and perspectives not commonly seen in the field (for instance, my own chapter (Chapter 1: Ethical Challenges in Online Health Games) specifically deals with some ethical aspects of health games, a topic which is obviously extremely important, but which would classically not be covered in a conventional book on health games focusing more on technical approaches or applications to specific pathologies). Don't get me wrong here, this book also covers some of the technical issues and provides examples of applications to specific pathological conditions. But it goes further, and thus offers a more globalized ("heuristic", huh!) view of this emerging field.

For those not familiar with the format of IGI books, there are also a few characteristics which make this handbook extremely useful and convenient:
- First, the book is made in a truly didactic way. For each chapter, in addition to the main text, there are summarizing tables, key definitions, etc., making this book a pretty good pedagogical tool in the lab if you have, let's say, graduate students beginning on this topic and coming from fields not related to gamification or health games (such as psychology or conventional health sciences), or even coming from fields related to Internet studies but lacking a specialization in online health.
- Second - and I would say that this is of major interest for researchers willing to save time - all the chapters have, in addition to the references mentioned in the main text, a list of supplementary references, specifically selected for their relevance to the topic of the chapter. Given that all the chapters have been peer reviewed by experts in health games, this makes the book a fantastic resource to locate the "must-read" basic references in the field ... hence to get some fundamental information before stating anything on this topic.

In conclusion, this book would definitely find its place - and a handy, easy-to-access place at that, since it is going to be accessed quite regularly - on the shelves of the personal library of any researcher or health specialist interested in health games.

And if you are interested in this topic in general, I would also recommend reading our "My avatar is pregnant" paper by Dr Anna Lomanowska and myself:

Lomanowska AM, Guitton MJ (2014) My avatar is pregnant! Representation of pregnancy, birth, and maternity in a virtual world. Computers in Human Behavior, 31:322-331. [PDF]

Thursday, July 2, 2015

Literature Review

An excellent article recently published in Computers in Human Behavior is getting some well-deserved recognition: Lepp A, Barkley JE, Karpinski AC (2014) The relationship between cell phone use, academic performance, anxiety, and satisfaction with life in college students. Computers in Human Behavior, 31:343-350 (see here), by Andrew Lepp and colleagues, was nominated in Computing Reviews' 19th Annual Best of Computing list as a "Notable Article in Computing - 2014". Once again, well-deserved recognition for an excellent paper. It also points to the excellence of Computers in Human Behavior, and its potential as a medium to publish top research in this field. This paper is an excellent example of what modern cyberpsychology is: interdisciplinary, multidimensional, and quite insightful. In any case, a highly recommended read.

Wednesday, June 24, 2015

Computers in Human Behavior enters the top 20 of psychology (multidisciplinary) journals!

The 2014 Impact Factors have been released, and once more Computers in Human Behavior is growing! Impact Factor 2013: 2.273 (5-year Impact Factor: 3.047), and now Impact Factor 2014: 2.694 (5-year Impact Factor: 3.624). Not only is the Impact Factor getting closer to the symbolic line of 3, but the 5-year Impact Factor is almost reaching 4!

But Impact Factors are not what matters most in terms of metrics. Indeed, Impact Factors are just numbers, and they depend heavily on the size of the field. An Impact Factor of 2 does not mean the same thing for a journal of molecular biology and for a journal of computer-mediated communication.
So, while the growth in Impact Factor is great, what matters even more is the relative position of a journal among the other journals in the same field. And here comes the really good news: Computers in Human Behavior entered the Top 20 of the journals in the "Psychology (Multidisciplinary)" category this year! The ranking of Computers in Human Behavior in this category was 24 out of 127 in 2013, and it is 20 out of 129 in 2014.
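To make the point about relative position concrete, a ranking can be expressed as the fraction of the category sitting at or above the journal. Here is a minimal sketch in Python, using the rankings quoted above (the computation and the `top_fraction` helper are mine, not part of the official JCR metrics):

```python
def top_fraction(rank, total):
    """Fraction of journals in a category at or above a given rank
    (smaller is better)."""
    return rank / total

# Rankings quoted above for the "Psychology (Multidisciplinary)" category
f_2013 = top_fraction(24, 127)  # 2013: rank 24 out of 127
f_2014 = top_fraction(20, 129)  # 2014: rank 20 out of 129

print(f"2013: top {f_2013:.1%} of the category")  # top 18.9%
print(f"2014: top {f_2014:.1%} of the category")  # top 15.5%
```

In other words, even with the category growing from 127 to 129 journals, the journal moved from roughly the top 19% to the top 15.5%, which is a cleaner signal of progress than the raw Impact Factor.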

The cherry on top of the cake? Well, Computers in Human Behavior also improved its ranking in the (arguably more competitive) category of "Experimental Psychology", reaching rank 24 out of 85 (instead of 30 out of 83 in 2013), thus getting closer to entering the Top 20 there too.

Why is that so? What can explain the success of the journal? As a scientist, I have the flaw of looking for rational explanations for various phenomena ... So, one reason is probably that the journal is pretty awesome: the Editors, the Publisher, and all the people involved in it are doing an amazing job, and we can only be grateful to our authors and reviewers, who all contribute to the success of Computers in Human Behavior.
But there is probably more than that. First, the journal is indeed of great quality. Although the review process is never instantaneous, and although it can take several months in some cases, we have managed to keep the average time between submission and first decision relatively reasonable (for a journal receiving circa 2,000 submissions a year, still). Second, the field has evolved considerably in the last few years. The methodologies have strengthened, the theories and the knowledge underlying cyberpsychology have expanded, and as a result, the overall quality of the papers submitted (and thus, published) has greatly increased too. Better papers, more citations. Third, with the exponential advances in information technologies, virtual spaces are becoming prominent in our everyday life. What was once perceived by some as, at best, an ectopic research subject is now becoming a topic of central interest. And with Computers in Human Behavior positioned as a leader in this field, its ranking in more general psychology-related categories is likely to keep improving. So, in conclusion, while this was good news, I hope to receive even better news next year!

Friday, March 13, 2015

Mermaids, online intra-community communication and social density

Virtual communities are not necessarily independent from each other. Rather, it is quite the opposite. Indeed, people are usually members of several virtual communities, overlapping or not, and sometimes embedded within larger virtual communities. Therefore, a key question in understanding how virtual communities work is to understand how the members of a community can maintain contact in the ocean of possible virtual contacts. In other words, how is it possible for members of a community to maintain coherence in a diluted virtual environment? In a previous post, I discussed how fantasy-based virtual communities can represent good models to approach general cyberpsychology questions. Well, once again, this proves to be true, as the merfolk community of Second Life is a perfect model to study this phenomenon.

The merfolk community of Second Life is a community which spontaneously emerged in the virtual seas of Second Life. However, compared to the number of land-dwelling avatars, the merfolk are ridiculously few. Thus, the question of how to maintain communication within the merfolk community was central in order for this community to exist. In our latest paper, we studied the communication processes of the merfolk community, demonstrating how an optimal communication strategy making heavy use of redundancy allowed the community to maintain high levels of social density (you can download the paper for free here, but only until the end of April).

Guitton MJ (2015) Swimming with mermaids: Communication and social density in the Second Life merfolk community. Computers in Human Behavior, 48:226-235.
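The intuition behind redundancy can be illustrated with a toy model (my simplification for this post, not the actual analysis from the paper): if an announcement is relayed over several partially overlapping channels (group chat, notice boards, word of mouth), each reaching a given member independently with some probability, then the chance that the member is reached at all grows quickly with the number of channels.

```python
def reach_probability(p, k):
    """Probability that a message reaches a member when it is relayed over
    k redundant channels, each delivering independently with probability p."""
    return 1 - (1 - p) ** k

# Even quite unreliable individual channels (p = 0.3) become dependable
# once the same message is repeated across a few of them:
for k in (1, 2, 4, 8):
    print(f"{k} channel(s): {reach_probability(0.3, k):.3f}")
```

A handful of redundant, low-cost channels is thus enough to keep a small, scattered community reliably connected, which is consistent with the heavy use of redundancy described in the paper.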

But actually, there is even more to learn from the merfolk. Today, we had a visit from Dr Louise Arseneault, Professor of Developmental Psychology at King's College London. After her thought-provoking talk, she spent the day at our Institute to meet and talk with the PIs. During the fascinating discussions we had, we got to talking about cyberbullying (as she is a world-famous expert on the psychology of bullying), and I gave the example of various virtual communities and how conflicts are handled there, particularly in the merfolk community, which has a very efficient way of dealing with such events, notably due to the communication strategies used and to the clear identification of mentors and advisers within the community. So, as was the case with the Star Wars role-play community a few years ago, there is quite a bit more than meets the eye beneath the surface of the virtual sea.

A mermaid in the virtual seas of Second Life.

In case you get interested in the merfolk option in Second Life, you can find all the information you would want at the Safe Waters Foundation Headquarters (just type "Safe Waters Foundation" in the place search engine of Second Life and it will bring you there), or look at their site: .

Friday, November 7, 2014

Comment: Providing better training and better career perspectives to graduate students

The fact that the way research is currently conducted in universities is not optimal in terms of cost-efficiency or of student training and career perspectives is rather obvious, and it has been bothering me for some time. Thus, it was pretty interesting for me to read this opinion paper published in PNAS.

Alberts B, Kirschner MW, Tilghman S, Varmus H (2014) Rescuing US biomedical research from its systemic flaws. Proc Natl Acad Sci USA, 111:5773-5777.

It is part of a larger debate (it is even the target of some discussion in LinkedIn groups, for instance the AAAS group). I would like to comment on two specific points of the paper.

First, the need to downsize academic research labs. I have believed for some time now that the current model, which promotes the existence of very big labs, is not the best. First, while a senior scientist can direct numerous projects, especially in the same area of expertise, our time is still limited. This means that we spend more time doing administrative things than actual science. It also means that the time devoted to each trainee is far smaller. When you have 10 students and 5 postdocs in your lab simultaneously, it is not reasonable to believe that you will give one hour of individual attention per day to each of them. Well, sadly, even one hour of individual meetings would probably be difficult, since we have quite a lot of other tasks to do as well. Universities want more and more students graduating, so in terms of career we have some interest in managing large labs. But what about the quality of the training we provide? And of the mentoring we do? Not all areas of research are the same. In molecular biology, it is very common to see “hives” of students working in the labs, since a lot of the experiments follow very strict protocols, and maybe (and the maybe is important, as I am actually not that sure of it) require less guidance from the supervisor. However, in the field I know best, which is behavior (or cyberbehavior, but that is just a specific case of behavior in general), while the protocols are usually simple, what makes the difference is exactly how you carry them out. How you design the protocol, how you quantify the actual behavior, how you control for any bias not only when developing the experiment but also during the process (since behavior is a dynamic display). And more importantly, we need time and intense contact to really “transmit” not only knowledge, but also our own experience to our students, so they can really learn and benefit from it. That is particularly true for a PhD student or a post-doctoral fellow.
This type of relation is a one-on-one relation, where the word “mentor” takes on its full meaning. However, there is a second element favoring the growth of big labs. And that is an economic argument. A student costs considerably less than a regular employee. So it is easier to have a lot of students in the lab rather than a lot of workers. But are students only cheap workers? I would think not: first, they are still learning, so their productivity is not at its best (and it shows, since the larger labs obviously have more publications than smaller labs, but the number of publications per capita is often lower). And second, and more importantly, they are learning skills to get a job later on. And here comes the issue that, in some disciplines, we are perhaps training more students than the actual needs require. A lot of public funding is thrown at research on Alzheimer’s disease, which is rightly a major public health problem in Western countries. However, these funds are used to train a lot of students on this topic. Most of them would love to have an academic career. Would it be reasonable to believe that universities will recruit dozens of newly trained researchers every year just on this topic? Of course not. On the other hand, we are struggling to find research specialists in the field of rhinology, while we do have teaching needs there, and while diseases related to the respiratory system and upper airways are critical as well. Downsizing the labs to more reasonable and manageable research units, truly centered on the professor’s research interests, would allow a better, deeper form of training for graduate students.

Which leads to the second point. We still need people to do the research: nowadays, research is getting more and more complex and complicated, and interdisciplinarity is a key word. Modern research relies on varied expertise. Staff Scientists could be a solution, as suggested in the paper. It would increase the quality of the research done. It would provide some new positions for all the students we are training. It would also improve the training of the students, by having more senior people around them. But there are still a few issues with that. First, such a solution would have a cost. We cannot expect to recruit highly qualified people without decent salaries. This would have to come from the funding we get ... while the actual tendency for research grants is oriented more toward decline, and current success rates are falling. Second, the nature of the employer might be a problem. Would it be the researcher? Meaning that, if a researcher loses his grants, then the staff scientist would be fired? It would be very difficult to secure these key people in the long term. Would these people be hired directly by universities? While that might be an acceptable solution for research institutes, it is hardly something which can be defended in the context of universities. Also, the researcher not being the actual employer would mean that the university would decide with whom (with which researcher) the staff scientist works, with all the problems which can arise from that, and with the issue that staff scientists would be considered mainly as super-technicians and would potentially see their intellectual contribution decrease. A possibility could be to develop new types of grants, which would be accessible only to staff scientists, under the direction and projects of university researchers (but independent from the professor’s own grants), with a system of renewal akin to the system we have for career grants in biomedical research in Quebec province.
It would of course cost some money, but the results for the efficiency of both academic research and, more importantly, student training might be worth the deal.