
design-methods's People

Contributors

amyjko, aoleson, carlinmack, fbanados, gordonje, greglnelson, michaelhilton, rlfranz


design-methods's Issues

Cite "The meaning of interactivity"

Janlert, L. E., & Stolterman, E. (2017). The Meaning of Interactivity—Some Proposals for Definitions and Measures. Human–Computer Interaction, 32(3), 103-138.

Clarify confusion on A/B tests

Here's the student comment:

The paragraph about how not all thinking aloud is valid got kind of confusing, so it would be nice if examples were given or the topic was elaborated a bit further. Also, elaborate more on what exactly an A/B test is and what characteristics make a test an A/B test. Talk about the steps a bit more and some ways to do it (e.g., give an example) so it's easier to understand the concept.

How to understand problems

Two small issues:

No one on the design team had any clue about the the challenges of finding small holes headphone jack holes without sight.

Remove the first "holes"

|Fresh Air host Terry Gross|http://www.npr.org/programs/fresh-air/archive]

I think the last bracket character should be a pipe so the link renders properly

Many thanks for contributing to open knowledge!

Justice-centered overhaul

The book, as currently written, is mostly missing any mention of the racism, sexism, and ableism underlying most design methods.

  • "What designers do" needs to cite Design Justice and discuss the tension between designers as deciders and designers as community facilitators, and the role of designers including marginalized voices.
  • "How to design" needs to cite Design Justice and discuss paradigms that center community voices.
  • "How to understand problems" needs to mention participatory design and Design Justice methods, including partnerships with communities.
  • "How to be creative" needs to mention that communities often know what they need, but need designers to facilitate ideation and expression of those ideas.
  • "How to design user interfaces" needs to discuss the importance of accessibility, cultural inclusion
  • "How to be critical" needs to discuss the importance of partnering with communities to understand the unintended consequences of designs
  • "How to evaluate empirically" needs to discuss the inherent limitations of evaluation methods that de-center the voices of stakeholders.
  • "How to evaluate analytically" needs to discuss the inherent limitations of using models of people that de-center the voices of diverse stakeholders.

Cite "rethinking think aloud"

Mayhew, P., & Alhadreti, O. (2018, April). Rethinking Thinking Aloud: A Comparison of Three Think-Aloud Protocols. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM.

Typographical quotes are cool quotes

I noticed a couple of days ago that this page http://faculty.washington.edu/ajko/books/design-methods/what-designers-do.html has mojibake on some quotation marks that appear in the quoted sections.

Since then, you “fixed” it with 499e5e6

I think a better fix would be to keep the typographical quotation marks (U+201C and U+201D, by the way) and put

<meta charset="UTF-8">

in the HTML HEAD element. It seems that if your webpage doesn't say it's UTF-8 and your web server doesn't say it in the HTTP header, then the browser is still obliged to interpret it as ISO-8859-1.

Safer to put it in the HTML, it seems to me.

Word of warning: testing using the browser on the local filesystem won't show the problem with typographical quotes, you have to look at it through a (local) web server.
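For anyone who wants to see the failure mode without a web server, the mojibake can be reproduced in a few lines of Python (this snippet is illustrative, not from the original report):

```python
# Encode the typographic quotes U+201C/U+201D as UTF-8, then decode the
# bytes as ISO-8859-1, which is what a browser falls back to when neither
# the HTTP header nor the page declares a charset.
text = "\u201cfixed\u201d"
garbled = text.encode("utf-8").decode("iso-8859-1")
# Each one-character quote becomes a three-character sequence: "â"
# followed by two C1 control characters, which render as junk.
assert len(text) == 7 and len(garbled) == 11
```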

Improve explanation of user interface fundamentals

Feedback from a student:

  • Maybe instead of talking about inputs, algorithms, and outputs before states and event handlers separately, integrate them into one example. I think it was kind of weird to transition from Google to alarm clocks; it might be better to introduce all of the terms in one example, and then reiterate them in the second example, rather than introducing some in one example and the rest in the other.
  • I don't think the distinction between state and mode was clearly explained
  • Hidden affordances are not really explained--I still cannot even guess what they are since it seems like a bit of an oxymoron (but I looked them up)

How to design: grammar issue

In the sentence "This involves simply taking some object in the world and using..." the grammar does not seem correct, since the adverb "simply" is placed between the verb "involves" and its object. This could be changed to "simply involves".

Incorporate a list of six common accessibility flaws

* Keyboard navigation
* Screen reader compatibility
* Captioning
* Dynamic text resizing
* High contrast (at least 4.5:1)
* Encoding meaning in multiple channels, not just color

If we can, we should find a reference for this. Not sure where I found these.
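For reference, the 4.5:1 figure comes from WCAG's contrast-ratio formula. Here is a minimal sketch of the computation in Python (the constants come from the WCAG definition of relative luminance; the function names are ours):

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255."""
    def channel(c):
        # sRGB linearization, per the WCAG 2.x definition
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; ranges from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```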

How to design: video summary

  1. It would be helpful to include a short summary of the video that encompasses the main takeaways, in case people are still confused after watching it. Since the speaker uses a lot of jokes, they may distract viewers from the main point.

Clarify difference between appropriation and bricolage

I do not feel that the appropriation vs. bricolage explanation is very clear; the line of demarcation seems very slim to me. The article describes appropriation as borrowing an idea from someone and creating something new. Bricolage is pretty similar in that it is creating something from multiple objects, and the thing created is also technically new. It is hard to identify what separates the two.

Incorporate explicit discussion of ability-based design

Discuss this paper and add to further reading:

Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). Ability-based design: Concept, principles and examples. ACM Transactions on Accessible Computing (TACCESS), 3(3), 9.
 

Cite "I can do everything but see"

Thieme, A., Bennett, C. L., Morrison, C., Cutrell, E., & Taylor, A. S. (2018, April). I can do everything but see! How People with Vision Impairments Negotiate their Abilities in Social Contexts. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 203). ACM.

Clarify that there is no one best design paradigm, and how to choose

There are a few different design paradigms that contribute to the overall design process, and it is not likely that designers can go through each one individually. I would like to see an explanation of how to apply these different design paradigms and how to decide when one is more important than another.

Restructure references

Some handy find-and-replace regex pairs to help restructure (find pattern first, replacement second):

Find:    "([a-z0-9]+)":(.+)",
Replace: "$1": [$2"]

Find:    \[ "(.+)\[(.+)\|(.+)\]\.\s?(.+)\]
Replace: ["$1, "$2", "$4", $3],

Find:    \(?([0-9]{4})\)?\. 
Replace: ", $1

Find:    "", 
Replace: ", "

Find:    ],
Replace: "],

How to evaluate empirically: student critiques

  1. This might depend on the reading style of the reader, but I think that if there was a bullet list in the beginning that provided quick definitions of the 3 methods, it would add to the clarity of the article. To me, it was difficult to see how the title of "how to evaluate empirically" was connected with the 3 different methods. It wasn't clear until after I reread the introduction a few times that evaluating empirically refers to testing/observing a design in a real-life situation.
  2. Break it down into smaller paragraphs! For an article like this, breaking the content down into maybe just 3 sentence chunks really helps get the point across--almost like writing in blog-style. For example, breaking apart the "Technology Probes and Experience Sampling" paragraph at "It's also possible to use experience sampling…". Also, adding a paragraph break before "Note that a 'representative user' is….". Overall this will break down the content better and also make it less visually overwhelming at first glance.
  3. Use more numbered lists and bullet points! I think the core content is overall a lot of steps/requirements that would work very well in list form. For example, the steps to running a usability test could be broken down in this format:

Typo

Chapter 4 (problems):

"because you’re imagining will be fantasy." --> "because what you’re imagining will be fantasy." (Emphasis added to show the suggested change.)

How to design: missing summary

The 2nd to last paragraph summary that contrasts the different types of designs is very helpful but leaves out 2 of the design paradigms mentioned in the reading (ability based, participatory). I think that sentence is great since it provides a very concise summary of why the designs are different but should be updated to include all the design types mentioned.

Low versus high fidelity citation

From James Landay:

You may want to cite our paper from HFES on high vs low fidelity usability here. It is by Walker, Takayama, and Landay.

Address Chapter 7 issues from student

Here are the critiques:

  1. This one is nit-picky, I apologize. When talking about the alarm clock example you use "It's state" when it should be "Its state" to show possession.

  2. The reading briefly touches on default values. While I think that your definition is fantastic, I think expanding on the point would make it even more powerful. In an informal study (https://www.uie.com/brainsparks/2011/09/14/do-users-change-their-settings/), they found that less than 5% of users changed the default settings for MS Word. I find this in my own life as well - I don't think I've touched the settings on my washing machine since I got it. Even in video games I play where customization is a huge part of your character, I've rarely, if ever, touched the controls. I think emphasizing that designing your defaults to reflect the most common use cases (while risking venturing into average user territory) is important. Many people often assume those are the best settings and don't touch them.

  3. Finally, I discussed this with you in class, but I think it's a major point to bring up. When you describe affordances, you define them using an older definition that Norman coined when talking about the relation between physical objects and people. In the newest edition of The Design of Everyday Things, he defines signifiers as "where action should take place" and as "any mark or sound...that communicates appropriate behavior to a person." (Norman, 14) Signifiers fit the "perceptual cues" definition that you provide in the reading. I agree with your definition of hidden affordances, but I think "false signifier" would be a better term than "false affordance". After all, there's nothing about a screen that says touch me, but a false signifier like blue text follows a convention associated with web links while offering no action when clicked.

I understand this isn't a huge issue, but I think it does warrant discussion as Norman, the one who brought the term affordance to design and information processing, explicitly redefines it in his revised book. I think you made a really good point in how the industry still uses the terms interchangeably, but if the term is conflicting and doesn't accurately convey what it means, shouldn't we make an effort to change that?

Anyway, thank you very much. I'm looking forward to the rest of the classes and readings.

Typo

In Chapter 4, "How to define problems", ~4th paragraph:

"One simple form of knowledge is to derive goals and alues from your data" --> "alues" should be "values".

Cite response bias paper

Incorporate this paper into the text of an appropriate chapter and add to further reading.

Dell, N., Vaidyanathan, V., Medhi, I., Cutrell, E., & Thies, W. (2012, May). Yours is better!: participant response bias in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1321-1330). ACM.

Improve definition of affordances

Here's an insightful critique from a student:

I did want to pass along one quick observation though as I had a look at your How to Design UI’s reading (http://faculty.washington.edu/ajko/info360/readings/how-to-design-user-interfaces.html). For us this is review from a previous semester so it wasn’t assigned, but I still took a look for giggles. I found the way you introduced all the vocab to be really helpful and it also reinforces what I’ve told students about how the terminology is important as a means to communicate ideas. However, I found your definition of affordances to be a bit fuzzy and I wonder if you’ve seen Norman’s clarification in the new edition of DOET. He separates the concept of affordances (the set of actions which emerge as a result of the relationship between the properties of the artifact and the capabilities of the user) and the concept of signifiers (the clues that we as designers leave in interfaces to hint about the presence of affordance). For example, Norman would likely now refer to a blinking cursor as a signifier for the affordance of text entry. Having taught using both editions of DOET, I’ve found this distinction really helps clarify things for students.

How to evaluate analytically

Critique #1
This is a small point, but I would edit the following sentence: "Claims analysis is a method where you define a collection of scenarios that a design is supposed to support and for each scenario, you generate a set of claims about how the design does and does not support the claims." Change "…does and does not support the claims" to "does and does not support the scenarios". Because "claim" was used twice to mean two different things in this sentence, it is a bit confusing.

Critique #2
Another small revision point, but heuristic evaluation was listed before walkthroughs in the initial paragraph, but then in "In this chapter, we'll discuss two of the most widely used methods: walkthroughs and heuristics." the order is switched. You also address walkthroughs first before heuristics, so keeping the order consistent would just add to clarity and reading smoothness.

Critique #3
I, as well as multiple other students, was quite confused as to what GenderMag actually was. I know that it has 4 personas, but I don't know if it's more than that? In the paragraph where you introduce GenderMag it would be great if it was explained a bit more, even if it's just "GenderMag is a set of four personas that were extensively researched." Also, why is it called GenderMag? Just a bit more information on that would be really helpful to a reader who is completely new to these ideas.

Analytical evaluation: explain rationale for choice of two methods

"I was confused as to why two specific methods of design were chosen to be discussed over the rest of the methods. Heuristics and walkthroughs, described as widely used analytical evaluation methods, lead me to ask why these methods were chosen over the rest. The benefits of these methods are later described in the article, but without having a knowledge of the other methods available it is confusing to me as to why these methods in particular hold more weight than the others."

Resolve chapter 9 critiques

"There are less certain and less robust forms of empirical evidence. We'll focus on the most common, a user test, also known as a usability test."
Is it the evidence that is less robust, or is it the way evidence is collected? The way I understand it, a user test is a method of gaining empirical evidence, not a form of evidence in and of itself. This sentence makes it sound like the opposite is true.

"Once you've found the breakdowns that occur in your design, you can go back and redesign your interface to prevent breakdowns, running more user tests to see if those breakdowns still occur."
Ultimately, I understand that more user tests are to be conducted after iterating a design with knowledge gained from previous testing, but the way it is worded, someone could get slipped up if they read it quickly. I had to do a double take on my initial skim of the article. Perhaps splitting up the sentence or adding a more specific phrase like "running more user tests after redesign..." would help. This is a really subtle issue, and for the most part, it probably isn't much of an issue at all, just something I noticed.

"For example, you might have a script like this:

Today we're interested in seeing how people use this new copier design... We're here to test this system, not you, so anything that goes wrong is our fault, not yours. I'm going to give you some tasks to perform. I won't be able to answer your questions, because the goal is to see where people have difficulty, so we can make it easier. Do you have any questions before we begin?"

I realize that this is just a very brief example of what a dialogue could look like, but it seems unclear to say "I won't be able to answer your questions" just to end by asking "Do you have any questions?" I think more clarification of what type of questions the participant can ask would be beneficial. The topic is touched on briefly after the example, but maybe a concrete example would be helpful.


This reading provided us a well-rounded understanding of empirical evaluation. However, I wish Andy could talk more about why real-life contexts make the evaluation more valid and useful, as well as some drawbacks of that. In the meantime, I think it would be more helpful if he could also include an example of a written task description. I like how the reading emphasizes the importance of debriefing with the users, because that is very important for establishing potential future users. Lastly, there is a spelling mistake in the first sentence; it should be spelled “judgment” instead of “judgement”.

—————————————————————————

  1. I think the way A/B tests and usability tests are presented is a bit confusing as it stands now. They are introduced one after another in a way that made me feel like they were exclusive, but that isn’t really the case. You might conduct usability testing on a variety of different options with a variety of different subjects to determine what design to go with. I think a sentence or two describing how they are related and can be used together would clear up any further confusion.

  2. You did talk about finding subjects that are representative of the group you’re designing for and why it’s important, but not too much into how to do that. I have a feeling this will be one of the hardest parts of the design process for our final project and would have liked to have seen a little more guidance on how to choose the right people to test with.

  3. Perhaps this could just be another entry in the “further reading” section, but I would have liked to have seen how companies that do this type of design (Google, Facebook, etc.) conduct tests and determine which design to move forward with. Being able to read about or see how these companies get around these issues would have been helpful for me.


• Maybe include some illustrations/pictures between the first few paragraphs so that the entire block of text is segmented, which makes reading easier and less stressful.
• You could give more examples of A/B tests and their usefulness based on a designer’s definition of success. If the measure of success is profit, does this mean that the designer should choose the design that made the most profit?
• Also, this sentence in the A/B test paragraph seems a little bit confusing to me, “If your definition of success is that users are less confused, that might be harder to measure, because it can be harder to observe, classify, and count, especially automatically.” (What was automatic in this case?)


  1. In the User Test section about finding people who are representative of an audience, it was explained how it might be difficult to go out and recruit them, but it wasn’t explained in depth how to determine who is truly representative. I think this should be expanded a little more, because it’s more important to first understand who should be picked before understanding how to invite them.
  2. In the section where the article discusses breakdowns again (starting right above the image of the breakdowns graph), it says that we should define a set “path” that we expect them to follow so that we can see where the deviations occur. I think it is worth noting that deviations from a path might not always be an error in design, like with video games.
  3. The last blue text linked article “Not all think aloud is valid” by Mark Fox, Anders Ericsson, and Ryan Best, is not cited at the bottom of the article

———————————————————————————

  1. I think this article should talk about the difference between an “average” user and a “representative” user. We’ve been taught that designing for an average user can be harmful, but testing on a user who represents the average seems logical. Some clarity would be great here.
  2. Perhaps, when talking about thinking aloud and how it cannot help the researcher find out what the user noticed first, mention something about eye-tracking software. I assume that this is considered an analytical tool as well.
  3. In the second paragraph, I wish the idea of ecological validity was elaborated on more. I’d like to learn about an example of this in order to grasp the concept better.

———————————————————————————————————
