You have begun to explore how users think about the tools they use, by reading Chapter 1 of The Design of Everyday Things and by doing some class exercises. You have read the article "'Of Course It's True; I Saw it on the Internet!' Critical Thinking in the Internet Era".
Work in teams of three or four people based on the number in the upper right-hand corner of this page.
Problem: The Internet is an unmonitored source of information with a low barrier for entry. People must use extra care to distinguish legitimate information from misleading information and to corroborate information with other reputable sources.
Correctness versus verification: was the authors' equal treatment of the two reasonable?
Is it ever reasonable to trust a single source from the library? If you say yes, what makes the situation different from using the Internet? Perhaps the library, serving as an authenticated scholar's portal, can offer a higher degree of trust.
Search engines: Text-based techniques, not idea-based. Use of "number of links in" as an indicator of value. Use of page contents and other author-provided information for indexing. Selling "top billing" rights to advertisers.
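A minimal sketch of the "number of links in" idea, assuming a toy set of pages and hand-made link data (all names hypothetical): a page is ranked by how many other pages link to it.

```python
# Toy sketch of ranking pages by "number of links in".
# All page names and link data are made up for illustration.
links_out = {
    "pageA": ["pageB", "pageC"],
    "pageB": ["pageC"],
    "pageC": ["pageA"],
    "pageD": ["pageA", "pageC"],
}

# Count, for each page, how many other pages link to it.
links_in = {}
for source, targets in links_out.items():
    for target in targets:
        links_in[target] = links_in.get(target, 0) + 1

# Pages with more incoming links rank higher.
ranking = sorted(links_in.items(), key=lambda item: item[1], reverse=True)
print(ranking)  # [('pageC', 3), ('pageA', 2), ('pageB', 1)]
```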
Student vulnerability to advertising claims, government claims, propaganda, scams. Inability to distinguish the differing intents of, say, an HGH site and the Microsoft site.
This paper operates in a mode similar to Norman's: investigate the thinking, beliefs, and expectations of users in order to understand how well they use a tool. The authors present survey results that seem to capture the way users think about the Internet.
Addressing the problem: Is it feasible to create and use scholar's portals or other "certified" or vetted sites as entry points? What about education? How? In what courses?
Interesting result on confidence level versus quality of answer. Don't let a lack of confidence be the only factor that causes you to devalue your ability... (Women versus men in a CS course of this sort...)
The discussion then turned to search engines. Many people, myself included, had no clue how a search engine worked. Search engines use several methods to come up with their list of web sites. The first one we discussed in class had to do with how many times a word appeared on a website: the more times a word appeared, the more important that site was judged to be, so it got placed higher on the list. Web site developers can trick this system by placing keywords many times inside a comment in the HTML code; the comment is read by the search engine but does not actually appear on the web site itself. The second method discussed in class was rating how important a website is by how many other sites in the world link to it.
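A rough sketch of the word-count method, with made-up page names and text, shows how keywords hidden in an HTML comment can inflate a page's rank for a naive engine that does not strip comments before counting:

```python
import re

# Made-up pages; the second one stuffs keywords into an HTML comment
# that a browser never displays but a naive indexer still reads.
pages = {
    "honest.html": "<p>Growth hormone research: a short summary with references.</p>",
    "stuffed.html": "<p>Buy now!</p><!-- hormone hormone hormone hormone hormone -->",
}

def count_term(html, term):
    # Naive count of every occurrence, including text inside comments.
    return len(re.findall(term, html.lower()))

query = "hormone"
ranking = sorted(pages, key=lambda name: count_term(pages[name], query), reverse=True)
print(ranking)  # ['stuffed.html', 'honest.html'] -- the stuffed page wins
```

An engine that stripped out comments and other hidden markup before counting would not be fooled in this particular way.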
My group, as was also mentioned in class, had the idea that there is a definite possibility that companies can pay the search engine to ensure that their site appears at the top of the list. Some suggestions were made in class as to how we can avoid the problem of quoting inaccurate information from the web. One student suggested that we "burn the internet"; this was determined to be impossible, but it is still my favorite suggestion. The other was to create reliable search engines that have no concern with advertising or making money. The problem is that when these companies go public on the stock market and their price starts to fall, they may have to resort to tactics they would not previously have considered in order to raise the price of their stock.
There was also discussion of how internet search engines work. Few people had any clue, but Professor Wallingford gave us a general outline of how they find matches for a search. Thought was also given to how search engines could affect the quality of the answers that come up in a search.
Professor Wallingford talked about how most people select only one source when they research a topic. He also said that a library is a more scholarly arena for finding resources than the internet, because the information found in books goes through many checks and double checks.
The internet is the total opposite. There are no monitors of what gets placed on the web, which makes the internet much less reliable.
Search engines are programs that look at the words in a web page, compare those words to their dictionary of words, and place a check next to each one they find. This gives the search engine a listing of which pages contain which words, and pages are typically ranked by the number of times a word appears on the page. But there are ways to cheat this system: page builders will put hidden fields on a page with a keyword listed over and over again to ensure that the page appears high on the list of pages that are returned.
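One way to picture this "dictionary of words" is an inverted index: for each word, the engine records which pages contain it and how many times. A minimal sketch, with made-up page text standing in for crawled pages:

```python
from collections import defaultdict

# Made-up page text standing in for crawled web pages.
pages = {
    "norman.html": "design of everyday things design users",
    "search.html": "search engines rank pages by word counts",
}

# Build the inverted index: word -> {page: number of occurrences}.
index = defaultdict(dict)
for name, text in pages.items():
    for word in text.split():
        index[word][name] = index[word].get(name, 0) + 1

def lookup(word):
    # Pages containing the word, ordered by how often it appears.
    matches = index.get(word, {})
    return sorted(matches.items(), key=lambda item: item[1], reverse=True)

print(lookup("design"))  # [('norman.html', 2)]
```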
In class we talked about how the internet is not a reliable source of information: it is an unmonitored medium. Students use the internet daily to form arguments, find research, and look up information about specific topics. What they don't realize, or don't always remember, is that the internet contains a large amount of false information.
Why is there so much garbage and so many fallacies on the internet? The main reason is that there is no internet librarian, so it is easy for anyone to put whatever they want on the web. It is a lot easier to put information on the internet for people to see and read than it is to publish a book; it may cost $5 or less to put cheap information in the sight of millions of people.
So, what can we do? Many college professors now require that information found on the web be valid and credible, for example information that comes from government websites or well-known newspapers. Professors are also asking for journal, newspaper, or book sources to be included in papers. It's scary to think about how much false information we may already have used.