User talk:Marshallcam

Wiki Exercise #2: Annotated Bibliography
The mundane realities of the everyday lay use of the internet for health, and their consequences for media convergence Authors: Sarah Nettleton, Roger Burrows, Lisa O'Malley (2005) Marshallcam (discuss • contribs) 12:38, 12 March 2018 (UTC)

In this review article, the authors analyse the impact of media convergence on everyday use of the internet for health. As someone who frequently diagnoses myself from medical articles on the internet, I found this article extremely interesting, as it sets out the consequences of doing so from the perspective of actual medical professionals. With media convergence making the internet more accessible than ever before, it is becoming increasingly common for people to research their symptoms and self-prescribe remedies for their health. The article argues that this is having a negative impact on society's health: as media convergence spreads, people are more likely to diagnose themselves from information on the internet than to visit a medical professional. This is becoming increasingly dangerous because, according to the article, a "power law" exists when searching for information on the internet, meaning the most used sources are ‘virtually distanced’ from other, less well-used sources. The article is reliable because its findings come from medical professionals. We can also see the consequences for ourselves simply by searching for the best remedies for common illnesses: the sheer variety of results makes it hard to deny that there is too much information on these subjects, and therefore near impossible to determine which information is correct. I will be using this article as part of my collaborative essay, as it gives a very powerful insight into one of the many negative impacts of media convergence. Marshallcam (discuss • contribs) 12:38, 12 March 2018 (UTC)

Comments
This is a well-realised and well-presented annotated bibliography. In terms of structure, you combine your own experiences with the text's conclusions on the effects of internet self-diagnosis effectively. I find it interesting, and slightly surprising, that medical professionals would conclude that the health of the nation has been negatively affected by people wrongly diagnosing themselves and being lulled into a false sense of security. From received wisdom, I had always been led to believe that the wide variety of self-diagnosis information on the internet is harmful to the health system because more people become convinced they are seriously ill by false internet diagnoses, and therefore clog up and unnecessarily use the resources of the healthcare system. You do make a substantial point based on the findings of the medical professionals responsible for the text, as the idea of the "power law" does account for the obscure and potentially dangerous way in which results about diagnosis are presented. Furthermore, the point made about convergence creating an information overload is in this case again vital to explaining internet-provided medical assistance and its effect on the public. Given the immensely powerful and encompassing nature of media convergence in modern society, do you believe that search engines and technology companies have a duty to ensure the public is not led astray by results to the extent it currently is? RossTheSnake (discuss • contribs) 18:29, 23 March 2018 (UTC)

Wiki Exercise 3:
- Hey @RossTheSnake, glad you enjoyed reading my post! To answer your question about whether major tech companies have a role in ensuring internet users are not self-diagnosing the wrong medical conditions: I believe that major companies such as Google, Yahoo! and Bing have a major responsibility to ensure that at least the first page of results returned for a symptom search contains accurate and useful medical information. The current way these search engines display results has many consequences, not just for the user attempting to self-diagnose, but for hospitals and medical practices in general. Many people who search for their symptoms online will not be comfortable browsing through pages until they come across a trusted source; they will simply read the first thing that comes up on the screen and accept that information as fact. I can only imagine how many times this has caused people to go to hospital thinking they are suffering from something much more serious than they actually are, and I am sure medical practices see hundreds of appointments every week from people who, lacking correct information about their condition, seek professional advice because they think they are dealing with something much worse than it actually is. This is a difficult topic to discuss, however, as these tech companies can do only so much to reduce the amount of non-factual information out there. With so many pages of information on almost any symptom, there is no way these search engines can check every individual page.
I do feel that a small number of websites could be automatically pinned to the top of the page when a user asks for medical advice, with those pages checked periodically by medical professionals to ensure that the information is both up to date and correct. Marshallcam (discuss • contribs) 20:53, 25 March 2018 (UTC)

- Thanks @Marshallcam, I think you make a valid point. Considering the popularity and influence of these search engines, it would indeed be beneficial to have some sort of automated system by which the most reliable medical sources rise to the top. I do agree that this would be a difficult issue to raise with corporations or to implement in practice, but I believe it is not out of the question. All search engines have a plethora of algorithms that determine which information and which websites reach the top of each search. Interestingly enough, Google's algorithm sets it apart from its competitors and, although the full details of the search engine's methods have never been published, the main ranking system is known to be called PageRank. PageRank determines what appears first in a search based on a number of factors: the frequency of keywords on the resulting web page, how long the web page has existed, and how many other web pages link to it as a reference. From this, it can be inferred that certain medical websites run by trained professionals, such as the NHS's website, are in fact far more likely to appear first under this algorithm, because the site has been around for a long time and because many other websites use it as a reference. However, despite this, I do not believe that this method goes far enough. Google goes to further lengths to instantly display results such as shopping, advertisements, or even its own features such as the calculator. It is also the case that the NHS website is a product of the government and the national medical association, making it very distinguishable from other, less official websites.
The fact that misinformation in this sphere leads to so many wrong diagnoses and a strain on the national health service prompts me to believe that it is in the government's best interests to act on this, and possibly even to communicate with Google and other search engines in order to create a way for the most reliable and official websites to appear first in such cases. This may seem slightly far-fetched in some ways, but as media convergence allows the web to become as great and as powerful as it is, many media scholars, among them Christian Fuchs, believe that we have to move away from this ultra-capitalist laissez-faire ("let them be") rationale and take action and responsibility for the effects of the internet. RossTheSnake (discuss • contribs) 12:47, 26 March 2018 (UTC)
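The link-based ranking idea described above (pages weighted by how many other pages cite them) can be sketched with a short power-iteration loop. This is a minimal illustration of the PageRank concept only, not Google's actual implementation; the link graph, page names, and damping value below are hypothetical toy assumptions.

```python
# Minimal power-iteration sketch of link-based (PageRank-style) ranking.
# The toy link graph and the page names are invented for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling page: share evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # split rank among outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: most pages cite "nhs", so it ends up ranked highest.
graph = {
    "nhs": ["blog_a"],
    "blog_a": ["nhs", "blog_b"],
    "blog_b": ["nhs"],
    "forum": ["nhs", "blog_b"],
}
ranking = sorted(pagerank(graph).items(), key=lambda kv: -kv[1])
```

Under these assumptions, the heavily cited "nhs" page rises to the top of the ranking, which is the same mechanism the comment above describes for why long-established, widely referenced official sites tend to appear first.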

INSTRUCTOR FEEDBACK: DISCUSSION, ENGAGEMENT, CONTRIBS

 * Engagement on discussion pages of this standard attains the following grade descriptor for contribs. While not all of the elements here will be directly relevant to your particular response to the brief, this should give you a clearer idea of how the grade you have been given relates to the standards and quality expected of work at this level:
 * Clear Fail. Assignment responses receiving marks below 30% tend not to contain any merit or relevance to the module. Contributions are one-liners, sometimes made up of text-speak, if there are any contributions at all. Often they are indicative of a failure to comment on other students’ ideas, and therefore do not engage with the crucial peer-review element. Entries at this grade may have been subject to admin warnings or take-down notices for copyright infringement, or the user has been blocked for vandalism or other contraventions of wiki T&C. The wiki markup formatting will be more or less non-existent.

Students should be engaging at least once a day, for the duration of the project. The following points illustrate how this engagement is evaluated.


 * This was clearly not the case here – only 3 days registered as having a contrib logged, and only 7 contribs logged in total for the entire project period.

Evidence from contribs to both editing and discussion of content (i.e. volume and breadth of editorial activity as evidenced through ‘contribs’). These are primarily considered for quality rather than quantity, but as a broad guideline:
 * Each item on a contribs list of 3000+ characters is deemed “considerable”
 * Each item on a contribs list of 2000+ characters is deemed “significant”
 * Each item on a contribs list of 1000+ characters is deemed “substantial”
 * Items on a contribs list of under 1000 characters are important, and are considered in the round when evaluating contribs as a whole because of their aggregate value


 * All 7 contribs registered as under 1000 characters; this simply does not constitute several weeks’ worth of evidence of discussion. Simply too little, too late.

 * Engagement with and learning from the community on Discussion Pages:
    o Evidence of peer-assisted learning and collaboration
    o Evidence of reading, sharing, and application of research to the essay
    o Evidence of peer-review of others’ work


 * Very weak.

 * Reflexive, creative and well-managed use of Discussion Pages:
    o Clear delegation of tasks
    o Clearly labelled sections and subsections
    o Contributions are all signed


 * Not much in the way of this at all.

 * Civility. Your conduct is a key component of any collaboration, especially in the context of an online knowledge-building community. Please respect others, and observe the rules for civility on wiki projects. All contribs are moderated.


 * Too little material in evidence to make an assessment.

GregXenon01 (discuss • contribs) 12:50, 23 April 2018 (UTC)

Instructor Feedback on Wiki Exercise Portfolio
Posts and comments on other people’s work of this standard roughly correspond to the following grade descriptor. Depending on where your actual mark sits in relation to the marking criteria as outlined in the relevant documentation, this should give you an idea of strengths and weaknesses within the achieved grade band overall:


 * Posts of this standard do not address the assignment requirements. They offer little to no engagement with the concerns of the module. They are poorly written and comments are often extremely brief or missing. Entries of this grade may have been subject to admin warnings or take-down notices for copyright infringement. The wiki markup formatting will be more or less non-existent.


 * A particular issue with respect to the above descriptor is that some posts and responses to the exercises appear to be missing (e.g. your reflective account of the project – ex #4). Other than that, there is clearly room for improvement here. You really need to engage beyond the minimum requirement and push yourself in order to learn more about the subject matter. This is done by reading independently, widely and in depth around your subject, drawing on peer-reviewed literature.


 * Perhaps less generally: in order to engage with the wiki exercises a bit more, it might be useful for you to look at the Grade Descriptors (and especially, perhaps, the Understanding criterion) in the module handbook to get more of an idea of how to hit those targets.


 * In addition, making more use of the wiki functionality and markup would have gone a long way towards improving the fluidity and functionality of your posts. I suspect that, had you become more familiar and proficient with the platform, this would have made a considerable difference.


 * Re: responses to other people’s posts – these are fairly good where they appear, if a little perfunctory. Remember that comments are "worth" as much as the posts themselves. The reason for this is not only to help encourage discussion (a key element of wiki collaboration!) but also to get you to reflect upon your own work. All of this can, of course, be used to fuel ideas that might form part of your project work.

General:
 * Reading and research: more evidence is needed of critical engagement with the set materials, and with appropriate academic and peer-reviewed material generally.


 * Argument and analysis: this could have been improved through more engagement with some of the key debates relating to online communities, reflecting on them, and then applying them to the subject matter.


 * Presentation: see above point on use of wiki markup and organisational skills.

GregXenon01 (discuss • contribs) 11:10, 9 May 2018 (UTC)