Windows 8.1 forces the user to interact in a way that doesn’t work

Problem

Wacom is world famous for digital pens, and its professional Intuos series is used by millions of artists around the world. It looks something like this and connects to the computer through a USB port:

Wacom Intuos 3

After connecting a Wacom Intuos 3 pen, Windows took a couple of minutes to download and install some drivers. But the absolute positioning system, which is one of the main features of digital pens, didn’t work, so I headed to Wacom’s support site and installed the latest drivers for Windows 8. So far so good. Now Windows 8.1 behaves as if it’s a tablet PC with a touch interface. This is smart. It even shows this little keyboard on the taskbar that allows me to type with the pen:

Windows 8.1 touch keyboard

However, Windows 8.1 apparently fails to detect touch gestures with this pen (I can live with that). The problem is that Windows then tries to teach me touch gestures, and the only way to get rid of this “compulsory education” is to perform the gesture, which is impossible with this device! Windows shows this black popover at the left side of the screen asking the user to swipe from the edge:

Windows 8.1 teaches the user to swipe the edge to switch between applications

It reads: “Swipe in from the edge to go back to the last app you were using. Tip: begin with your finger outside of your screen”. So I tried swiping left, right, top, and bottom, clicking, and double-clicking with both pen and mouse, but nothing helped. I guess the key message is to use my “finger”. This is not OK. Even a user who forgave the misdetection of the device will regret it after this annoying education. As I’m writing this, the popover is still there and there’s no way to get rid of it. So after finishing this post I’m going to try the old-time Windows recipe: restart.

Solution

P.S. This section is about interaction design solutions, but if you are a user facing this issue, the quickest fix is to move your mouse cursor to the top-left corner of the screen.

  1. Don’t force the user into any sort of education. Sometimes they can’t do it. Sometimes they have other important things to do and want to skip it. Sometimes they already know how it works. There are tons of reasons why people don’t want to be forced into learning something, so don’t force them if you want them to like your products.
  2. Do more usability testing with popular devices like Wacom tablets. This device is used professionally all around the world, and when things don’t work smoothly, companies lose money. I would expect Windows to download the original driver or provide equally good functionality. The new Windows behaves the way Linux did a few years ago: many devices didn’t have good support in Linux, and when they did, some of their features often didn’t work. I haven’t used Linux lately, but Mac OS X set up my Wacom automatically as soon as I connected it, and it worked like a charm. Windows should catch up if it wants to reclaim its dominant position in the market.
  3. As one dear reader mentioned, it is possible to disable these “mandatory educations” altogether. However, that may require a little extra searching. Here is a good explanation of how to turn them off (note: the Group Policy editor only exists in Windows 8 Pro; see the registry sketch after this list for a scripted alternative). As of writing this post, that question has been viewed 3704 times, and this post has been visited 2541 times in less than 3 hours, so I assume this is quite a hot topic. I wonder if the Windows designers will fix it. I’ll contact Microsoft and share their insight on this page. Meanwhile, you can head to Hacker News and see what other people say about this issue.
  4. Add a close button that lets the user dismiss this modal, persistent, sticky popover:
Windows 8.1 educational popover with close button
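For readers who would rather script this than dig through the Group Policy editor, the same result can reportedly be reached with a single registry value. Here is a minimal sketch in Python; it assumes (please verify against the explanation linked in point 3) that the “Disable help tips” policy maps to a DWORD named DisableHelpSticker under HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\EdgeUI:

    # Minimal sketch: disable the Windows 8.1 "help tips" popovers via the registry.
    # Assumption: the "Disable help tips" Group Policy maps to the DWORD value
    # DisableHelpSticker under HKCU\Software\Policies\Microsoft\Windows\EdgeUI.
    import winreg

    KEY_PATH = r"Software\Policies\Microsoft\Windows\EdgeUI"

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, "DisableHelpSticker", 0, winreg.REG_DWORD, 1)

    print("Policy value written; sign out and back in for it to take effect.")

Sign out and back in (or restart, the old recipe) for the change to apply.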


Windows 8.1 doesn’t run apps in the corresponding screen

Problem

Windows 8.1, just like its predecessors, has good support for multiple screens. With the return of the Start button, you can “pin” apps to your taskbar. This creates an icon that lets you quickly launch an app. Even though the taskbar is duplicated exactly on every screen, clicking an app icon on one screen doesn’t guarantee that the app will open on that screen.

This behavior can be quite annoying, especially if you’re using a projector. Here is a little image to illustrate the issue:

Trying to run an app in the left screen may show the app in another screen

Mac OS X has the same issue, so when you run an app you really can’t predict where it will show up. It probably appears wherever it ran last time, but that introduces an element of surprise, since the user expects the action (the click) and the effect (the app) to appear close to each other.

Solution

There are a few solutions to this issue.

  1. The easiest one is to run the app on the screen where the user clicked the app button (see the sketch after this list for how a launcher could detect that screen).
  2. When the user clicks the button, use an animation to guide their eye to the screen that contains the app. Mac OS X uses this technique, though the first solution is still more predictable.
  3. If the designers can’t agree or confirm the design decision with usability tests, add a setting in the Control Panel that lets the user choose whether new apps open where they last appeared or close to where the launch action was initiated.
  4. Allow the user to run the app in the current screen with some sort of right-click menu on the taskbar. Something like this:

Mockup of a taskbar right-click menu with an option to open the app on the current screen
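To make the first suggestion a bit more concrete, here is a rough sketch of how a launcher could find the monitor under the mouse cursor and place a newly created window there. It uses the Win32 calls GetCursorPos, MonitorFromPoint, GetMonitorInfoW and SetWindowPos through Python’s ctypes; the hwnd argument is whatever top-level window handle the launcher just created. This is purely illustrative and not how the Windows shell itself is implemented:

    # Rough sketch: place a newly launched window on the monitor where the click happened.
    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32
    user32.MonitorFromPoint.restype = ctypes.c_void_p            # returns an HMONITOR
    user32.MonitorFromPoint.argtypes = [wintypes.POINT, wintypes.DWORD]

    MONITOR_DEFAULTTONEAREST = 2
    SWP_NOSIZE, SWP_NOZORDER = 0x0001, 0x0004

    class MONITORINFO(ctypes.Structure):
        _fields_ = [("cbSize", wintypes.DWORD),
                    ("rcMonitor", wintypes.RECT),
                    ("rcWork", wintypes.RECT),
                    ("dwFlags", wintypes.DWORD)]

    def work_area_under_cursor():
        """Return the work-area rectangle of the monitor the cursor is currently on."""
        pt = wintypes.POINT()
        user32.GetCursorPos(ctypes.byref(pt))
        hmon = user32.MonitorFromPoint(pt, MONITOR_DEFAULTTONEAREST)
        info = MONITORINFO(cbSize=ctypes.sizeof(MONITORINFO))
        user32.GetMonitorInfoW(ctypes.c_void_p(hmon), ctypes.byref(info))
        return info.rcWork

    def move_to_clicked_monitor(hwnd):
        """Move a top-level window to the top-left of the clicked monitor's work area."""
        rect = work_area_under_cursor()
        user32.SetWindowPos(hwnd, None, rect.left, rect.top, 0, 0,
                            SWP_NOSIZE | SWP_NOZORDER)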

Windows personalization has confusing duplicate controls for color

Problem

Windows 7 has a cool feature that allows the user to personalize the color of application window frames. This can be done by right-clicking on the desktop, choosing “Personalize” and then “Window Color”:

Choosing window colors in Windows 7

The basic mode is quite easy: you can either choose from a pre-selected set of colors (A in the following picture) or mix your own color by pressing “Show color mixer” (B in the picture):

Two ways to choose the window color

However, “Color intensity” and “Saturation” seem to do the same thing:

Color intensity and saturation

The color mixer is in fact a simple HSB color selector, which lets the user adjust a color with the following parameters (a small code sketch after the list shows how they map to an RGB color):

  • Hue: where the color sits on the spectrum (red, blue, etc.)
  • Saturation: how strong the color is, from washed-out white/gray to fully saturated
  • Brightness: how light or dark the color is, from black to the full color
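To make these three parameters concrete, here is a tiny sketch using Python’s standard colorsys module (HSB is the model colorsys calls HSV). It shows that turning saturation off washes the color out to white, while turning brightness off turns it black, which is exactly the difference the two sliders should express:

    # Tiny demo of the HSB (a.k.a. HSV) parameters using the standard library.
    import colorsys

    def hsb_to_rgb255(hue, saturation, brightness):
        """Convert HSB values in the 0..1 range to an (R, G, B) tuple in 0..255."""
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, brightness)
        return tuple(round(c * 255) for c in (r, g, b))

    blue = 0.6  # a hue in the blue part of the spectrum
    print(hsb_to_rgb255(blue, 1.0, 1.0))  # fully saturated, fully bright: vivid blue
    print(hsb_to_rgb255(blue, 0.0, 1.0))  # saturation off: white (washed out)
    print(hsb_to_rgb255(blue, 1.0, 0.0))  # brightness off: black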

In practice, “Color intensity” and “Saturation” do almost the same thing. So let’s see what different combinations do (pay attention to the window frame color in the following images):

Color intensity: max, Saturation: max

Color intensity: max, Saturation: off

Color intensity: off, Saturation: max

Color intensity: off, Saturation: off

This experiment shows that “Color intensity” behaves almost the same as “Saturation”: if either one is turned off, the other has no visible effect. This is confusing for the user, because there are two controls that do almost the same thing and the final result depends on the puzzling combination of the two.

Solution

One solution is to show Saturation in the basic mode and, when the user opens the advanced color controls, show Hue and Brightness:

Window color solution 1: show saturation in basic mode

Another solution is to hide Color intensity in the advanced mode. With this solution, the HSB controls appear exactly in order: Hue, Saturation, Brightness:

Window color solution 2: hide color intensity in advanced mode

However, since Color intensity disappears when the HSB controls appear, the user may be confused about what is happening. I recommend renaming “Color intensity” to “Saturation”.

If you want to turn it off, press “start”

Problem

When it comes to interaction design, Windows has a bunch of well-known examples of how not to do it. Many of us are so used to these problems that we no longer remember how much trouble we went through to find our way around. One classic example is that you have to press the Start button if you want to shut down the computer.

Shutting down Microsoft Windows 7

It is a subtle problem, but it breaks the grammar of the visual interaction language. In the grammar of visual interfaces, a button is a verb (an action), and so is a menu item.

When the user presses Start and then “Shut Down”, they are effectively saying “start to shut down”. This is Microsoft’s documentation on how to shut down a Windows machine:

Microsoft documentation about shutting down Windows 7

It refers to “the Power button”, which no longer exists in that shape with that icon. So the newbie user (who is probably the audience of that documentation) keeps looking for that button and ends up frustrated. Even with the power button, though, the sequence would read as “start power”. Little things like this make users dislike a product, or at least not become emotionally attached to it.

Technically it’s not wrong, but wouldn’t it be more elegant to say “shut down the computer”? Apple, for example, has better visual grammar: first click the Apple logo (which is a representation of your computer), then choose “Shut Down” from the menu:

Shutting down Apple Mac OS X

It reads as “shut down my Apple computer”. Similar, but more meaningful. Read on to see why.

Solution

This problem was more obvious from Windows 95 to XP, where the button still carried an English label saying “Start”. Today, that button can safely be called the “Windows button”. At least I hope that is the idea, because it gives many other things under that menu the right visual grammar, for example “Help and Support of Windows”.

If the “Start button” is called the “Windows button”, everything works just fine. The click sequence for shutting down then means “shut down the Windows” rather than “start shutting down” (the article “the” comes from the fact that it is a logo, not a generic noun or verb).

The documentation needs to be updated accordingly. There is no power button in that shape.

PuTTY has its options and main functions in one window

Problems

PuTTY is a great little utility that comes in handy when connecting to Unix computers from Windows machines. The GUI hasn’t changed in many years. Even though it doesn’t have many functions (basically just connecting), it has a confusing user interface, although once you get used to it you no longer think about the initially steep learning curve. The first screen, shown below, is pretty much all of the application’s functionality:

  • Connect using different protocols (Raw, Telnet, Rlogin, SSH, Serial)
  • Load/Save the connection

Unfortunately, a lot of options are placed in the interface in a confusing manner. The big tree structure on the left lists the options available for configuring various connection parameters. When users first run PuTTY, they wander around trying to figure out how to connect.

Also, some of these options turn out to be critical, so if you are one of those users who learn by trying things out, you don’t want to mess with PuTTY’s options.

Another issue is the load/save function, which is a little hard to grasp. I recommend downloading PuTTY and trying it; it doesn’t need installation. This is basically how it works (a small sketch after this list shows where the saved sessions actually end up):

  • Save: type a session name into the “Saved Sessions” box (it is not saved yet, despite the name) and press the “Save” button
  • Load: select an item from the list and press the “Load” button (just clicking and selecting an item does not load it!)
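As a side note for the curious, what the “Save” button actually persists is a set of registry values, one subkey per session under HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions. The short sketch below just lists them; it assumes a standard PuTTY setup and is only meant to illustrate what the Load/Save buttons operate on:

    # Sketch: list the sessions that PuTTY's "Save" button has stored in the registry.
    # Assumes a standard setup where sessions live under
    # HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions.
    import urllib.parse
    import winreg

    SESSIONS_KEY = r"Software\SimonTatham\PuTTY\Sessions"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, SESSIONS_KEY) as key:
        subkey_count = winreg.QueryInfoKey(key)[0]   # number of saved sessions
        for i in range(subkey_count):
            raw_name = winreg.EnumKey(key, i)
            # PuTTY percent-encodes special characters in session names.
            print(urllib.parse.unquote(raw_name))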

Solutions

There is a lot to learn from a native Windows application that can roughly be considered the graphical counterpart of PuTTY: Windows Remote Desktop. Let’s take a look:

The first page is pretty simple and focused on the core functionality of the application: connecting. If you just want to connect, you type the destination address and press the “Connect” button. Of course, if it fails or you want to change some settings, you can press the “Options” button. There is also some help available if you are interested. This is what happens when “Options” is clicked:

I suggest that PuTTY have the same kind of first page as Windows Remote Desktop, with all the options placed in a separate dialogue that appears when the user clicks an “Options” button. The application’s main functionality and its options dialogue should be separated.

I also suggest adding a help button (it can lead to online help). There is no lack of space in PuTTY’s GUI: there is room for “About”, so there can be room for “Help” as well. In the help, the user can learn the basics of PuTTY and how to take advantage of the various connection options.

For the load/save functionality, Windows Remote Desktop again has lessons to offer. The “Save” and “Open” buttons on the first page tell the user what can be saved (it would be better if those buttons were placed right under the Logon settings to show that this is what they save or open). Note that these buttons are not visible by default in Windows Remote Desktop; the user has to press the “Options” button to see them. That is not logical, since they are not part of the options.

This is the wireframe of my suggestion for the function dialogue (first screen when running PuTTY):

And this is a suggestion for the options dialogue:

The tabs are sorted in order of importance. Note that the options dialogue has the standard OK, Cancel and Apply buttons, just like any other standard Windows application. And finally, here is a suggestion for allowing the configuration to be loaded and saved from the main screen. The Open and Save actions would look nice with icons. If the verbs “Open” and “Save” are written on the button captions, they should end with “…”, following the Windows naming convention for actions that open another dialogue.

Note

I contacted the PuTTY development team and here is what one of them replied:

…PuTTY is generally used by technical people for a wide variety of purposes and they often need to be able to get at a wide variety of options quickly.  I’m sure there would be complaints if we hid the options away in a sub-dialogue box; it would certainly make it more awkward for _me_ to use…

Another developer added:

…the fact that all the _most commonly used_ controls are on the front page is a feature, not a bug. I don’t disagree that this somewhat hurts usability concerns such as discoverability and conceptual coherence, but there is a compensatory element of convenience to the experienced user…