Monday, December 22, 2014

Task 8: Reflection on project work

After all's said and done, my ethnographic study of a team prior to toolset migration is finished. It took quite a few hours of work, many of them spent simply sitting quietly while furiously writing down everything people were doing. Interpreting hours of scribbling proved challenging, and the fact that I already knew what most of those people were doing made the notes a lot more biased than I'd hoped for an academic document, but in the end it's fine for its pragmatic application.
Shadowing alone didn't do the trick, which was an unexpected result. Since I had prior knowledge of how the team is structured and how its parts theoretically relate - I was even partially responsible for training some of the newer members of the team - I was able to build a good theoretical map of the relationships between sub-teams and the toolset, but shadowing revealed behavioral quirks, technical misconceptions, work vices and other characteristics that needed clarification. Therefore, a couple of short interviews were conducted to better understand what was actually going on.
The result was a much more complex relationship map than previously anticipated. That was a sobering experience, and I'll try to remember not to assume I understand how a team works from superficial observation alone. This is the most important lesson I expect to carry into future field research projects. The good news is that I left the endeavor with a great understanding of how each sub-team uses the tool differently, how previous assumptions shaped current work vices and how to prevent these during the migration to the new tool.
The project also taught me an important lesson that I tried to convey in the final document - that while people are fixated on what functions tool A possesses and how to achieve the exact same function in tool B, the really important thing is that both tools allow for the same workflow, even if the functions are not exactly analogous. Less focus on functions and a more holistic view of how work revolves around the tool will help us shape not only the migration effort, but also training for the new tool.

Thursday, December 11, 2014

Different People, Digital World

The task at hand is a rather touchy one for me - discussing how a minority group can use, or does use, the internet to reduce alienation and prejudice. It's touchy in itself, but also touchy personally, because I'm a huge advocate for equality. I'd say I'm a feminist, but with my girlfriend being a radical feminist, I know this is anathema, as radfem declare that the oppressor cannot be a feminist - a view that I respect (but, of course, politely disagree with; then again, that could be my white male heterosexual privilege talking. Enough of that).

That said, I'd like to hijack this post not to talk about how women use the internet to reduce alienation and prejudice, but instead how they use it to mobilize and support each other. I'm a great admirer of the often extremely ad-hoc but always fantastic mobilization power of Brazilian radfem: how they are massacred from every side, including by liberal feminists, how they're demonized and called feminazis - and still, when their sisters are in trouble, be they other radfem or libfem or not really feminists at all, it's always a huge support network of radfem, organizing through social networks and mobile chats, who get shit done.

While libfem are worried about including men in the debate or organizing polls or whatnot, radfem go and flashmob wherever there's trouble brewing, clash with authority and give their blood (sometimes quite literally) to keep sorority alive and kicking. I admire not only their spirit, but their non-tech-savvy, patched-together approach that gets great results even if most of them have no idea of the underlying tech that's enabling them to fight the good fight.

And maybe that's one of the beauties of living in the times we live in today - you don't need to be a techie, you don't really need to read an RFC on how to implement a protocol or a big manual on how to configure a server to run a top-notch support network for your social movement. You just need a need, and the will to set things right.

Fun times.

Browsing the Jargon File

It brings a smile to my face every time a link begins with http://www.catb.org/~esr/jargon/
So much of how we communicated in Internet Relay Chat could be traced back to the Jargon File, though we didn't know it back in the day. I'm still a great user of language hacks like soundalikes - no one gets past me running Microsoft's browser without me exclaiming how I hate Internet Exploder, and no day goes by without me making fun of my girlfriend being disclexy.

But my favorite story in the Jargon File is A Story About ‘Magic’. In the best tradition of laziness or respect for the original content (you'll never know which), I'll quote it in full instead of discussing it:

Some years ago, I (GLS) was snooping around in the cabinets that housed the MIT AI Lab's PDP-10, and noticed a little switch glued to the frame of one cabinet. It was obviously a homebrew job, added by one of the lab's hardware hackers (no one knows who).

You don't touch an unknown switch on a computer without knowing what it does, because you might crash the computer. The switch was labeled in a most unhelpful way. It had two positions, and scrawled in pencil on the metal switch body were the words ‘magic’ and ‘more magic’. The switch was in the ‘more magic’ position.

I called another hacker over to look at it. He had never seen the switch before either. Closer examination revealed that the switch had only one wire running to it! The other end of the wire did disappear into the maze of wires inside the computer, but it's a basic fact of electricity that a switch can't do anything unless there are two wires connected to it. This switch had a wire connected on one side and no wire on its other side.

It was clear that this switch was someone's idea of a silly joke. Convinced by our reasoning that the switch was inoperative, we flipped it. The computer instantly crashed.

Imagine our utter astonishment. We wrote it off as coincidence, but nevertheless restored the switch to the ‘more magic’ position before reviving the computer.

A year later, I told this story to yet another hacker, David Moon as I recall. He clearly doubted my sanity, or suspected me of a supernatural belief in the power of this switch, or perhaps thought I was fooling him with a bogus saga. To prove it to him, I showed him the very switch, still glued to the cabinet frame with only one wire connected to it, still in the ‘more magic’ position. We scrutinized the switch and its lone connection, and found that the other end of the wire, though connected to the computer wiring, was connected to a ground pin. That clearly made the switch doubly useless: not only was it electrically nonoperative, but it was connected to a place that couldn't affect anything anyway. So we flipped the switch.

The computer promptly crashed.

This time we ran for Richard Greenblatt, a long-time MIT hacker, who was close at hand. He had never noticed the switch before, either. He inspected it, concluded it was useless, got some diagonal cutters and diked it out. We then revived the computer and it has run fine ever since.

We still don't know how the switch crashed the machine. There is a theory that some circuit near the ground pin was marginal, and flipping the switch changed the electrical capacitance enough to upset the circuit as millionth-of-a-second pulses went through it. But we'll never know for sure; all we can really say is that the switch was magic.

I still have that switch in my basement. Maybe I'm silly, but I usually keep it set on ‘more magic’.

1994: Another explanation of this story has since been offered. Note that the switch body was metal. Suppose that the non-connected side of the switch was connected to the switch body (usually the body is connected to a separate earth lug, but there are exceptions). The body is connected to the computer case, which is, presumably, grounded. Now the circuit ground within the machine isn't necessarily at the same potential as the case ground, so flipping the switch connected the circuit ground to the case ground, causing a voltage drop/jump which reset the machine. This was probably discovered by someone who found out the hard way that there was a potential difference between the two, and who then wired in the switch as a joke.

The values of hacker ethic in the new century

Hacker ethics have arguably had as much influence on the early 21st century as the Protestant work ethic had on the spirit of Capitalism. Many an entrepreneur, inventor or maintainer of some of the cornerstones of modern society was greatly influenced, or completely guided, by hacker ethics.

Freedom, for instance, is one such value. It can be found in many of the technologies we use today, and it defines several famous dichotomies for which there are still no clear winners. One such dichotomy is the iOS versus Android battle, which once seemed lost to the hordes of proprietary software married to Digital Rights Restriction Management and wrapped in Apple Inc.'s walled garden of curated content; nowadays, though, it seems more and more like the open-source (if not entirely Free Software), free-for-all and laissez-faire Android alternative is winning this battle, which is still far from over.

Hackers' approach to money and profit has had a tremendous influence on the current world - from the open standards that enable the web to be what it is today, to Free/Libre software, to producing content for free for projects like Wikipedia. Money is good and no hacker (ok, few hackers) is preaching Communism, but a challenge is much more important to a hacker - the paycheck he gets for cracking those challenges is just the icing on the cake. Also, "the right way" is not something that appeals to hackers much - "burn the manual, let's have an intellectually stimulating debate about this" has a lot more value. It's this habit of painting outside the lines that has given us some amazingly weird stuff like location sharing services or ephemeral chat tools.

Caring is an interesting quirk of hackers which has been migrating to the mainstream through one of the least predictable of sources - hipsters. Those cappuccino-sipping, Apple-loving, fashion-oriented skinny people who are otherwise the antithesis of hackers are great at caring and passion. While Gen-Xers at this age were fighting for BMWs and high-rise apartments and cocaine, hipsters buy local, ride their bikes to work, integrate into the community, subvert capitalism, and work smart rather than more. I almost feel disgusted with myself for saying this, but hipsters are direct descendants of hacker culture.

Last but not least, network ethic is now a basic tenet of society. You're not an island anymore, what with your constantly connected smartphone and smartwatch and who-knows-how-many social network accounts. People no longer expect news to come from the media (and the media is suffering dearly for its nearsightedness in recognizing this trend) and now trust the free flow of information between peers more than anything. This distrust of established top-down hierarchies is shaping society and will keep shaping it for years to come.

Stay tuned.

Wednesday, December 10, 2014

Public Wifi: Security and Privacy - a review

This is a review of the wiki article on Public Wifi: Security and Privacy located at http://ethandlawpubwifi.wikidot.com/

Phishing over the air
Interesting section, but the rhythm is kind of weird - it spends a huge amount of time exploring what email phishing is, its history and estimated social costs, but it then flies over what phishing over the air is (which admittedly has little to do with email phishing) and sprinkles it with technicalities (OpenWRT, PHP, HTTPD) while explaining very little. Special bogus points for "Fig. 1" with no accompanying image ;)

Viruses over the air
More to-the-point than the previous section, but it confused me - if Chameleon doesn't change the router's firmware, how does it infect it? So I Googled Chameleon and ended up on the Malwarebytes blog, where they were perfectly non-informative as well, so maybe those University of Liverpool researchers have some bad publicists. I still don't know what Chameleon does.

Wifi Sniffing
Interesting section with no major flaws, but it rubbed me the wrong way - do the authors think public Wi-Fi is intrinsically good or bad? Or do they avoid going the black-and-white route on purpose? My two cents is that people should encrypt any sensitive data anyway, so the fact that the Wi-Fi network they are accessing is unencrypted becomes immaterial. And unencrypted Wi-Fi makes for more universal access, so I'm all for it.

Packet Sniffing technology
Very well written, albeit overly long, section - I'll avoid breaking down every single packet sniffer they've listed because a) there are so many sniffers around that you either cover every single one of them or write generically about all of them, and they went for a middle-ground solution, and b) TL;DR.

What can be done to protect yourself and your network?
Last but not least, a very interesting section on practical recommendations for a safe Wi-Fi network, though it starts with a bullshit suggestion - setting up a readable set of rules people have to agree to before accessing the network is a waste of time because, TL;DR, people won't read it and will do whatever they please. If you don't want people to do something, enforce it with good network configuration (to their credit, they do recommend that anyway). All in all it's a very sane list of recommendations, but it lacked what in my opinion is the sanest option for a safe work environment over wireless networks: have two separate Wi-Fi networks - one free-for-all open SSID so people can BYOD (Bring Your Own Device, which they'll do anyway), and another restricted, encrypted, password-protected SSID, if possible with a MAC address filter, so only devices previously approved by the IT department can access this second network where all the sensitive information resides.
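To make that last suggestion concrete, here's a minimal sketch of what such a setup could look like in hostapd, the daemon many Linux-based access points use under the hood. Every name here (interface, SSIDs, passphrase, file path) is made up for illustration, and your router's firmware may expose the same options through a web UI instead:

  # /etc/hostapd/hostapd.conf - hypothetical example, adapt to your hardware
  interface=wlan0
  driver=nl80211
  hw_mode=g
  channel=6

  # Open guest network: anyone can bring their own device, nothing sensitive reachable from here
  ssid=CompanyGuest
  auth_algs=1

  # Second, restricted network on a virtual BSS
  bss=wlan0_corp
  ssid=CompanyInternal
  wpa=2
  wpa_key_mgmt=WPA-PSK
  rsn_pairwise=CCMP
  wpa_passphrase=ChangeMeToSomethingLong
  # Only devices whose MAC addresses IT has approved get in
  macaddr_acl=1
  accept_mac_file=/etc/hostapd/approved_macs

(And yes, MAC filtering is trivially spoofable, so treat it as housekeeping on top of WPA2 and proper network segregation, not as security in itself.)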

My (rather long) two cents, given ;)

Monday, December 8, 2014

A Constructive Proposal For Copyright Reform - The Pirate Party's approach

It's no secret that copyright is a mess: from impossibly long copyright terms to draconian rules that treat corporations and natural persons the same, the laws that rule over our rights to intellectual property are outdated, skewed and sometimes just plain wrong. The Pirate Party, which was born out of the PiratbyrĂ„n of Pirate Bay fame, has some very interesting proposals on copyright reform that I'd like to discuss here.

First, they'd like all non-commercial sharing to be free, meaning that if you're not basing your business around making copies of other people's intellectual property, your copies are not illegal. That seems a bit broad at first glance, but it's actually a very sane proposal - we were making personal copies before copyright was even defined and have since been protected by fair use rules. But 1976 was a greedy year (remember Bill Gates' Open Letter to Hobbyists?) - Walt Disney and friends managed to get US law to protect copyright for the life of the holder + 50 years (to be fair, the Berne Convention had already done so in 1886), and Universal tried to kill fair use by suing Sony, arguing that manufacturing VCRs amounted to copyright violation. From then on, corporations have been working very hard to reduce the reach of fair use and to restrict it through non-legal means, like Digital Rights Restrictions Management. My take? Fair use should get proper legislation - the fact that there's no properly written definition of the limits of fair use makes it more of a nuisance than a right.

Which brings us to the next proposal: reducing the commercial monopoly to 20 years. Again, life + 50 years has been the standard copyright term for most of the world for almost 130 years, so this sounds harsh at first glance. But corporations have been working very hard to extend copyright indefinitely, not coincidentally passing new laws whenever Mickey Mouse's copyright protection is about to expire - 50 became 70, and in some cases protection can go to 120 years or more(!) The Pirate Party has a very fair point - no investor in their right mind expects a return on investment over 120 years, so why are we giving copyright protection of over a century? (In fact, it goes against the spirit of copyright, which is to protect the creator, to give rights that extend after their death.) So here's my personal spin on their proposal - copyright protection should last the life of the creator OR 20 years, whichever ends later.

The next issue is orphan works. Copyright starts counting at the moment of creation, but the fact that it counts automatically creates a legal issue: since the creator doesn't need to register a creation for it to be protected, some never will, so you have a whole universe of orphaned works whose copyright status is fuzzy - no one knows if the author is dead or alive, or even who (s)he is, so you never know if the work has reached the public domain or if you're in violation of copyright. Their proposal is very sane: copyright counts automatically from the moment of creation, and if you have any commercial interest in protecting it, you have five whole years - not from the moment of creation but from the first publication, which can come many years later - to register it. If at the end of this period you have not tried to protect your work by registering it with the proper authorities, it goes into the public domain. Sounds fair enough, and I have nothing to add to it.

Then comes free sampling, which is the right to make derivative works, to cite existing works and to parody them. I point to my earlier take that the limits of fair use need to be properly codified and should contain these rights. Which brings us to the last important part: the banning of Digital Rights Restrictions Management. I say good riddance, as the law should be all the protection that copyright needs, and any external restriction of hard-coded legal rights should be illegal in the first place. While they're at it, they should also ban clickwrap/shrinkwrap agreements (where you enter into a licensing contract simply by installing a piece of software) and End User License Agreements that restrict user rights further than what's in the law, like the newish trend among commercial software developers of stating that you don't own the software you paid for, but merely hold a license to use it which can't be transferred, sold or even moved between two of your own machines.
So yes, the Pirate Party has my vote, definitely ;)

Monday, December 1, 2014

The Uneasy Alliance: Free Software vs Open Source

To most people, Free and Open Source Software are the same thing - some out of over-simplification, some out of ignorance. But Free Software is a philosophical approach as old as software itself: most of the software of the 50s and 60s was written within academia and shared freely, like all proper scientific discoveries should be. Even as early as 1953, UNIVAC's A-2 system was distributed together with its source code - essentially free and open source software. It was only in the late 60s that the cost of development became high enough that software began to be seen as a market in itself, and the first proprietary software came to be. In 1976 Bill Gates wrote the first treatise in defense of proprietary software (the Open Letter to Hobbyists), in which he argued that copying Micro-Soft's Altair BASIC without a license was stealing.

The fact is people kept sharing, and proprietary software never reached 100% acceptance, but it became a huge business model nonetheless. So in 1985 Richard Stallman published his own treatise, the GNU Manifesto, with a view to breaking AT&T's grip on UNIX and creating a new operating system that was free to use and modify (which of course implied that the source code should also be fully open). The Free Software Foundation was created later that year and, with it, came the Free Software Definition - software that ensures that end users have the freedom to use, study, share and modify it.

1992 would see the realization of Stallman's dream, when Linus Torvalds placed his Linux kernel - whose source he had published from the very start - under the GNU General Public License. For the first time GNU was a complete software stack, as it finally had a free, open kernel to run on. GNU/Linux (simply Linux to most of us) was born.

This is when things became confusing.

GNU/Linux was too interesting to pass up and would soon attract commercial interest. But Free Software was "tainted" by Stallman's license, which, by insisting that derived works remain free, restricted commercial usage of it (or so went the argument around 1997/1998). When Netscape decided to publish their Communicator openly, a few members of the Free Software movement saw this as a decisive moment to jump in and come up with a more pragmatic, less broad definition that could appeal to commercial software while still making the source code fully available. This Open Source Definition would be accompanied by the Netscape Public License, effectively the first high-profile Open Source license. Stallman objected that the focus on Open Source meant most of the philosophical debate was being ignored (and he was right), but an uneasy alliance would soon form: 1998 was also the year of the infamous Halloween Documents, in which Microsoft (no dash by now) once again seriously attacked the Free/Open Source scene.

In the end, both the pragmatic and the idealistic approaches have their own merits - Open Source attracted many players to the field and Free Software has kept it honest - but both have their shortcomings. The debate will not go away anytime soon - GPL 3 and its push against DRM has made some interesting enemies, like Linus Torvalds himself - but everyone stands to gain from Free/Libre Open Source Software, however narrow or broad a definition you want to give it.

Monday, November 24, 2014

The Digital Enforcement - The Trouble with DRM and Other Such ̶c̶r̶a̶p̶

As I've discussed already when analyzing the future of licensing, corporate interest has been running the show, with disastrous results. It might be time to make licensing and other rights-restriction schemes outright illegal. One such system is Digital Rights Restriction Management, a "technology" created by digital publishers in the 90s to make sure that, if governments didn't play by their rules (which they mostly did, either by extending copyright again and again or by letting them play their shenanigans with End User License Agreements), they would simply take matters into their own hands and enforce their own definition of the rules, by force.

DRM comes in many different forms. CSS, one of the first such schemes, was used to encrypt DVDs: it uses a cipher that makes reading a movie stream impossible without the key. Obtaining the key is a jump-through-hoops process where the player needs to have a key (paid for in the form of a license) to decrypt a key (physically present on the disc), which is then used to decrypt yet another key, which finally allows the system to read the actual bits and bytes of the movie stream.

If it sounds overly complicated and stupid, then you've understood DRM and my job is done.
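For the curious, here's roughly the shape of that key ladder as a conceptual Python sketch. The function names are mine, and the real CSS cipher is a (weak, 40-bit, LFSR-based) proprietary stream cipher rather than the toy XOR below - the point is the layering, not the math:

  # Conceptual sketch of the CSS key ladder - NOT the real cipher.
  def decrypt(key: bytes, blob: bytes) -> bytes:
      # stand-in for CSS's proprietary stream cipher
      return bytes(k ^ b for k, b in zip(key * (len(blob) // len(key) + 1), blob))

  def play_dvd(player_key, encrypted_disc_key, encrypted_title_key, encrypted_sectors):
      # 1. The licensed player uses its player key (paid for via a CSS license)
      #    to recover the disc key stored on the disc.
      disc_key = decrypt(player_key, encrypted_disc_key)
      # 2. The disc key unlocks the title key for the movie stream.
      title_key = decrypt(disc_key, encrypted_title_key)
      # 3. Only now can the actual movie sectors be read.
      return [decrypt(title_key, sector) for sector in encrypted_sectors]

Three keys deep just to watch a movie you bought - with a licensing fee collected at step one.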

One of the many things that CSS ignores is a tiny little thing called Inalienable Human Rights - like the right to Free Speech and the right to make Fair Use of copyrighted works - which is fine for evil corporations because fuck Human Rights, right? And as governments let corporations play their dirty game, they get more and more daring and DRM gets more brazen. Nowadays it's common practice to keep part of the DRM process physically separate from the rest, so you need internet access (either prior or constant) to have access to content you paid for and rightfully own (at least as far as the law goes).

Nowadays such shenanigans have been drawing a lot of backlash, especially from consumers buying music from online stores (most if not all have migrated away from DRM because of negative customer feedback) and from players being locked out of games because of draconian DRM (as #GamerGate has taught us, Hell hath no fury like a gamer scorned), so hopefully sometime soon there will be a shift in legislation that either seriously restricts DRM and EULAs or outright eliminates them.

One can dream, right?

The World of Proprietary Software, or How the software licensing landscape could look in 2020

It's undeniable that proprietary software has a place in the market - we're long past the age when all software came free as part of a hardware sale, way too early in the evolution of the medium to have all software be free/libre for good and, most of all, there are billions (maybe trillions) of dollars behind proprietary software models right now that won't easily shift to better formats anytime soon - so we'd better learn to deal with it. But if current trends in proprietary software aren't curbed soon, the licensing landscape could look very ugly in a few years, as I wish to explore in this text.

First comes the "pig in a bag" issue ("gato por lebre" in my native Portuguese), where what you buy and what you get can be completely different things. End User License Agreements have rarely been properly tested in court and are so far regarded as valid from the moment you click "Install", and that has been breeding constant contractions of user rights, some of which would be outright illegal if analysed through any sane lens. Buying software used to mean you owned it, and therefore could resell it or move it to a new computer once the one you're using to read this blog post on goes south. A cursory glance at an EULA these days will show you otherwise - you're merely a licensee, and that copy of Windows or Photoshop you paid for still belongs to the publisher. Not only that, but it's probably restricted to one install, on the machine you currently own. Not only that, but it's perfectly legal for the publisher to test your hardware and, should it ever change, accuse you of having moved the software to a different machine and disable your license. Not only that, but you might have to ping a licensing server every few days (during which visit they'll probably pull some usage information "for marketing purposes") so you can remain licensed - just ask Adobe Creative Suite users what it feels like when Adobe goes offline and they can't use Photoshop. Not only that, but it might be simply impossible to use your software without an internet connection, even if there's no need for you to be online at all - many videogame publishers have seen the online gaming experience as a nice way to curb piracy and now think it's fair to stop you from playing offline, even in a single-player game.

So what's next? Subscription is already a thing - Microsoft Office and Creative Suite are now available as yearly subscriptions alongside the "bought" versions (which, with every restriction they add, get shittier until paying for a subscription no longer feels like such a stupid deal). Digital Rights Restriction Management already makes pirating games easier than playing the version you paid for, and for every new restriction they get away with, they'll come up with a more outlandish one, so soon you'll be jumping through extra hoops just to use proprietary software you paid for. But there's a silver lining, which is mobile software. Mobile users (especially Android users) are profoundly less tolerant of bullshit pricing schemes, and they expect the software they bought for their phone to run on their tablet or their next phone and so on. So it's entirely possible that, as mobile becomes more important than workstations, proprietary software vendors get pushed back harder and come up with saner licensing schemes - and with Chromebooks and convertibles and the like, the mobile category as a whole may well cease to exist and everything we now call mobile may simply be the personal computers of the future.

Time will tell.

Saturday, November 15, 2014

helpmefixit - Poster and Demo

Working on the feedback from the last session, here's the new poster we've come up with (right-click and select "Open link in a new tab" to view it in its original size):


Also, please visit the following link for a demo: http://marvl.in/1jbe3j

Sunday, November 9, 2014

Field research project outline - Ethnographic study of a team prior to toolset migration

The objective of this study is to understand Company A’s usage of their current Customer Relationship Management (CRM) tool to better prepare for the transition to a new web-based CRM tool. This study is necessary because Company A has put a lot of Engineering effort into integrating the current CRM tool into their own systems, as well as built years of workflow around the tool's feature set.

Of all the departments in Company A, three have direct contact with the CRM tool: Customer Support (CS for short), with over 80 employees whose workflow is entirely centered around the tool and its integration with the internal system (known internally as Ninjas); Operations, who have their own toolset (mostly developed in-house) but use the CRM tool to communicate with CS, who then act as a buffer between Operations and the customers themselves; and Compliance, who, like Operations, have their own toolset but use the CRM tool to communicate with CS and, sometimes, with customers directly.

As the new CRM tool possesses all the features of the current one (either a direct equivalent or an analogous feature that achieves the same results) plus lots of new features, this ethnographic investigation’s short-term goals are:
  1. To understand how each team relates to the CRM tool
  2. To catalog all features used by each team
  3. To understand if the same workflows are approached differently by each team
To achieve this goal we intend to employ shadowing: the researcher will shadow one member of each concerned team during one hour of work. No information will be shared with the subjects on what the goals of the study are and they'll be asked to work normally and pay no regard to the researcher. The researcher will make notes during this time of every single action taken during the completion of their tasks, what tools are used and how information is conveyed between different tools.

The longer-term goals are to use this information to:
  1. Draft a migration plan that focuses Engineering resources on critical features first
  2. Ensure that the new CRM tool is configured to make workflow migration (as) seamless (as possible)
  3. Understand how to make better use of features not yet available in the current CRM tool to improve the current workflow once the new tool is available
  4. Draft training material for current and new team members on using the new tool

Saturday, October 4, 2014

Sustainability in design, sustainability through design


by Eduardo Mercer, Katya Ostamatiy and Jinesh Parekh

Solene le Goff and Christophe Gouache built a solar-powered radio made of reforestation oak, a sturdy material that's fully renewable. 80% of the materials used are natural and most of the rest is recyclable. It's a great example of sustainability in design, though the use of a faster-renewing wood such as bamboo would be even more desirable.


Steve Haslip designed a clothing packaging system called HangerPak. It's made of recycled and recyclable paper, but instead of simply discarding it back into recycling, you can actually disassemble and re-assemble it in the form of a clothes hanger. It's a great example of both sustainability in and through design (and earned him a D&AD Student Award in 2007).


Last, but not least, Portland, OR has replaced all of its downtown trash cans with pairs of new cans: one for regular non-recyclable trash and another with two divisions - the upper part for paper and the lower part for plastic and aluminum packaging. The recyclable can has a folded-newspaper-shaped slit for the paper division, inviting people to discard their reading materials there, while the lower part has circular holes to admit cans and bottles. Also, these cans have no locks, allowing poor citizens who live off foraging recyclables to access the paper and/or container divisions. This shaping of, and being shaped by, sustainable behaviors is a great example of sustainability through design.


Saturday, September 20, 2014

Sustainable Interaction Design - Rethink, Phonebloks, Ara and Beyond

Rethink was an example scenario of a sustainable phone presented by Jain & Wullert in 2002. It was a phone built with recycled and recyclable materials, using modular components to promote reuse and longevity of use, and it used modular software to achieve the same goals. Limited by the physical longevity of the components, it would hardly be able to achieve heirloom status, but not impossibly so. It also aimed to replace many different devices, which in a sense was aiming for wholesome use, and it made the user a source of energy, replacing unsustainable use (burning fossil fuel as an energy source) with sustainable use. It's interesting to note that many of the software (and some of the hardware) characteristics described by Jain & Wullert are present in Android smartphones, such as being sufficiently integrated, programmable and convenient that they eliminate several other devices, possessing software radios for adaptability to new environments and technology advancement, and offering an open, standard API so that new applications can be downloaded on demand, extending the device's function as well as its life. The availability of these characteristics varies from one phone to another, but they are all present in the platform as a whole.

Phonebloks was a concept conceived by Dutch designer Dave Hakkens as his graduation thesis for the Design Academy Eindhoven. It extends the idea of modular components presented in the Rethink scenario by describing how to achieve modularity: a base provides mechanical and communication infrastructure for modules responsible for processing, data and energy storage, sensor arrays and user interface hardware. It addresses most of the same SID questions Rethink does and extends on some of them. Motorola based its Project Ara on Phonebloks and, through acquisition, Ara ended up under Google. Ironically enough, a major stumbling block for Project Ara is Android itself. Although Android covers many aspects of reusable software, Google says adapting Android to a modular hardware platform is one of the main challenges of building Ara right now (probably because most members of the Open Handset Alliance are manufacturers and providers who rely on planned obsolescence for sales).

Sustainability and HCI - Concept Map


Wednesday, September 17, 2014

Game Design - Mission IV - Idea - Grease Joint


Remember Pressure Cooker?
Of course you don't. It was a game for the Atari 2600, released in 1983, where you, the 'chef' at a burger place, had to juggle the take-away orders of three demanding clients. A screen showed you the combinations of cheese, onion, lettuce and tomato that each customer wanted while a conveyor belt dropped patties on top of buns. Ingredients were literally hurtled at you and you had to either bounce them back or take them to a patty. If you matched a desired combination, the top half of the bun would be thrown at you so you could close the burger and take it to one of the three bags. When you filled any of the bags, your points were counted:


Even though it was a fine concept (at least I loved it dearly), it has seen very few remakes in the last 31 years. So I suggest a reboot of the concept: you're a burger-flipper at a grease joint and a waitress yells orders at you non-stop. You must assemble burgers at the fastest rate possible while making as few mistakes as you can - maybe throw in an extra ingredient and get creativity bonuses, but if you add an ingredient that doesn't "match" the combination, you lose points for poor taste instead. In the beginning you don't know what matches, so you will probably lose many creativity points, but trial and error will get you places (after all, practice makes perfect). We can throw in a few extra concepts, like cooking levels (rare, medium or well done) to make combinations harder to master, and maybe the waitress speaks in code like in old American diners. It's a simple mechanic with lots of potential for fun (and frustration and trial and error, which are all part of engaging gameplay).
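Just to make the scoring mechanic concrete, here's a tiny Python sketch of how matching and creativity bonuses could work - every ingredient and point value below is hypothetical, purely to show the idea:

  # Hypothetical scoring sketch for the Grease Joint idea
  ORDER = {"patty", "cheese", "lettuce"}            # what the waitress yelled
  MATCHING_EXTRAS = {"bacon", "onion"}              # "creative" matches, learned by trial and error

  def score_burger(assembled):
      score = 10 * len(ORDER & assembled)           # points for every requested ingredient
      score -= 15 * len(ORDER - assembled)          # penalty for forgetting one
      extras = assembled - ORDER
      score += 5 * len(extras & MATCHING_EXTRAS)    # creativity bonus for extras that "match"
      score -= 5 * len(extras - MATCHING_EXTRAS)    # poor-taste penalty for extras that don't
      return score

  print(score_burger({"patty", "cheese", "lettuce", "bacon"}))   # 35: perfect order plus a tasteful extra
  print(score_burger({"patty", "pineapple"}))                    # -25: incomplete and in questionable taste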



Friday, September 5, 2014

Game Design - Mission II - Game Analysis

My favorite computer game is turning 20 years old this November. I know, that makes me a very old man. But it also means I'm a man of refined tastes, so bugger off.
The game I'm talking about was published by Maxis as SimTower, but it started its life as ザ・ă‚żăƒŻăƒŒ (Za Tawā, or The Tower) in 1994. As is normal with Maxis games, it is part Construction and Management, part Simulation, part Artificial Life and part Strategy.


It's a single-player 2D game with a god's-eye view of sorts - instead of a bird's-eye view, you see through the walls into the houses, offices and other structures.
In SimTower, you build and run a mixed-use tower filled with offices, condos, restaurants, hotel rooms, cinemas, temples and even subway stations. Each level unlocks new structure types and reaching the following level requires using these structures while growing your tower's population. You must also balance your budget by picking rent prices and choosing when to build what.


It evolves from the original SimCity model in several ways, the first being that each inhabitant is an individual - you can name them and follow them around the tower throughout the day. This is not just a gimmick: knowing your tenants lets you see how changes to the tower affect its inhabitants. Many of the features introduced in SimTower went on to become staples of the Sim series.


The proximity of different types of structure affects the satisfaction their occupants feel: condos placed too close to offices lead to tenants complaining about noise, restaurants too far away from the lobby lead to poor business during weekends, elevators get busy during rush hour, which leads to busy corridors, which lead to lower real estate values, etc.


Some structures are only used sporadically: security lies dormant most of the time, but you'll regret not having a good security center placed near a staircase the next time there's a fire or a bomb threat. Housekeeping doesn't share elevators with tenants, so they need their own service elevators, with different rush hours and usage dynamics.


Speaking of which, I probably learned more about programming logic and about logistics by tweaking the various elevator settings in SimTower than during mechatronics engineering school, which, believe me, means a lot.
Experimenting with the new structure types unlocked at each level is a reward in itself: earning enough money and finding the right place to build a subway station or even a cathedral inside your tower is quite a personal achievement, believe me. It also allows for a very varied game where you can express yourself creatively - there are strict rules, but finding your way around them to create real masterpieces is a true joy.


Since the game is extremely old and feels uncomfortable on newer processors and operating systems, it's a great excuse to dust off a very old computer or to install DOSBox.
I truly recommend this amazing time-waster which, although old, has yet to be surpassed in its own genre.

Sunday, May 25, 2014

Evaluating Trust when Sending Money Online via Web vs. Mobile




Abstract. This research used self-assessments of trust and emotional reactions (hedonics) as well as pragmatic measurements to determine whether the UX of two different access methods to the same money transfer service also affected the perception of trustworthiness of said service. It confirms Chu & Yuan’s (2013) observations on how interactivity can affect trust, but further observation of how trust is affected by completing an actual money transfer, instead of simply simulating a transfer order, is recommended.


1.     Introduction

TransferWise is an Estonian-British start-up that intermediates currency conversions between peers. Users never interact directly with the other users with whom they are converting their currency – the price is set at the interbank rate at the moment of the transaction, and the user simply puts in money in the sending currency and receives the converted money at the other end of the transaction, without having to deal with third parties. This approach eliminates many of the trust issues of converting money without involving a bank, while retaining the economic advantages.
The service has been operating successfully for three years but only recently launched mobile applications – first for iOS and then for Android in early 2014. The main object of this study was to evaluate trust issues with both approaches to the same platform and, through a comparison of pragmatic and hedonic measurements, try to determine whether any differences can be attributed to user interface issues or stem from the different modes of access.

2.     The object of evaluation

TransferWise shifts all of the money-conversion operations to the web, eliminating the human factor in most (or all) steps except for the users themselves. This eliminates some trust issues but also creates new ones – people are used to relying on banks for these operations, and banks are typical last-century human-centered institutions. Eliminating the human operator on the other side can be jarring for first-time users, and this can lead to mistrust.
According to Chu & Yuan (2013), perceived user-control, interactivity, responsiveness and connectedness affect trust and consumer behaviour online. The main object of this study was to determine if completing the same tasks on the same service using different access methods would lead to different levels of trust.

2.1       Design procedure

2.1.1    Procedure

Participants were invited to fill in a background information form containing questions modeled on Yamagishi & Yamagishi’s General Trust Scale (GTS). They then proceeded to visit the company’s website and perform a short heuristic evaluation, followed by a series of three tasks – sending money to a saved recipient, locating a previous transaction and modifying personal settings – on the TransferWise platform, both in the web browser and on a mobile phone. Video was captured of all on-screen interaction as well as of the participants themselves. After completing each task, participants were invited to select an emotion and its intensity on the Geneva Emotion Wheel (GEW). At the end of all three tasks, participants took a post-mortem questionnaire containing more GTS-like questions to assess any shift in trust. A post-mortem interview was arranged individually to better understand the emotional reactions and the shifts in trust indicated.

Fig. 1. The Geneva Emotion Wheel

2.1.2    Apparatus and Materials

Two different access points were used: a common Windows PC running Google Chrome for web access, and an Android phone running the native TransferWise application downloaded from the Play Store. A second mobile phone was used to film the interaction, recording to an external memory card. Printed copies of the consent form, the background questionnaire, the heuristic evaluation form, three Geneva Emotion Wheels and the post-mortem questionnaire were provided to all participants.

2.1.3    Tools and Methods

Koyote Software’s Free Screen to Video was used on the Windows machine to capture on-screen interaction, while on the mobile phone the native screen video capture capability of Android 4.4 KitKat was used for the same purpose. Yamagishi & Yamagishi’s GTS evaluation and BĂ€nziger, Tran and Scherer’s GEW evaluation methods were applied to the information provided by the participants. The post-mortem interviews were used to make sense of this hedonic layer of information.

 
Fig. 2. Screen Capture of Mobile App vs. Website.

2.1.4    Participants

Participants were nine people with an almost even gender distribution (five men, four women), aged 21–35 years old. Participants were screened beforehand for a reason to send money abroad – only those with family in other countries or another reasonable expectation of converting money to or from another currency in the next 12 months were invited to participate.

3.     Results and Discussion

In general, there was a shift in trust for the worse after using both the web and mobile versions of the system, with a more pronounced negative shift among women and when using the mobile application. Post-mortem interviews revealed that the generalized shift in trust was mostly related to questions of whether the money would be delivered to the intended recipient, in the time frame promised and in the amount expected. This points to the need for a more detailed study with “live” money transfers that follows participants through the whole money transfer process, which can take several days.
GEW emotions also showed a negative shift when comparing web and mobile, with less intense positive reactions and more frequent and more intense negative reactions when using the mobile application. These were also reflected in the pragmatic evaluations, which showed greater difficulty in completing tasks, made worse (according to the post-mortem interviews) by some user interface bugs and glitches. This closely mirrors Chu & Yuan’s results relating E-Trust and interactivity.

3.1       Recommendations

Pragmatic UX evaluation was invaluable in interpreting the hedonic results, especially metrics like number of clicks and time to complete tasks, which explained shifts in trust related to poor user experience.
The GEW is a wonderful but confusing tool, both for participants and for evaluators afterwards. After the pilot test showed the Wheel itself had some poor UX aspects, it was decided that evaluators would ask participants to pick the single emotion that best represented the task being assessed and mark its intensity. This makes comparing emotional reactions difficult, because there are fewer overlapping data points between participants. Once again, the post-mortem interview allowed us to better categorize these reactions.
The main recommendation to TransferWise is to take Chu & Yuan’s conclusions on interactivity to heart and try to make the mobile experience as complete and fluid as the web experience, thereby sparing the service the negative hedonic UX aspects observed.
It is also recommended to follow this study up with a more in-depth one that follows the participants through the entire process of an actual transfer, as many of the participants’ questions on trust were left unanswered because this was a non-live test where no money transfer actually took place. A follow-up study could also gather more statistically significant numbers, as the sample of participants for this study was rather limited.

4.     Conclusion

The main conclusion we can take away from this study is that the same platform can produce different trust responses if accessed by different methods. In a user base that’s constantly shifting from web to mobile and back again, this is an important observation as designers must carefully craft the mobile experience to mitigate or eliminate perceptual differences that can lead to perceived untrustworthiness.  

5.     References


1.  Chu, Kuo-Ming, and Benjamin JC Yuan. "The Effects of Perceived Interactivity on E-Trust and E-Consumer Behaviors: The Application of Fuzzy Linguistic Scale." Journal of Electronic Commerce Research 14.1 (2013).
2.  Yamagishi, Toshio, and Midori Yamagishi. "Trust and commitment in the United States and Japan." Motivation and emotion 18.2 (1994): 129-166.
3.  Yamagishi, Toshio. "The provision of a sanctioning system as a public good." Journal of Personality and Social Psychology 51.1 (1986): 110.
4.  BĂ€nziger, Tanja, VĂ©ronique Tran, and Klaus R. Scherer. "The Geneva Emotion Wheel: A tool for the verbal report of emotional reactions." Poster presented at ISRE (2005).

Philosophy of HCI - Post-mortem

So, I broke the rules - I decided to write about the tools before writing my final impressions of the course. Why, you ask? First, because I find this post a better closing chapter than that one would be. I wouldn't want to sound like a band that plays a nice epic final song at a concert and then comes back for an encore and completely breaks the experience. Not that I think this is some epic Queen song, quality-wise, but please, let's not spoil the metaphor with technicalities.
Second, because I wanted to speak of how my use of tools during this course influenced my view of my idle times. That's because the most important consequence of this course is that I'll never see idle time the same way again.
When I (and most of us) think of human-computer interaction, we think of humans doing something to computers (action) and computers responding to this (reaction) in a cycle of communication until a task is completed (interaction). But not all interaction is two-way, not all interaction is initiated with a purpose, and not all interaction expects reciprocation.
Computers try to communicate much more than we request of them - they're always telling us what time it is, how much battery is left, how many emails in God knows how many inboxes are begging for our attention. And this could be passive information, but more often than not it is not - computers are needy, greedy beasts - "Hey, look at me, there's a very important email from your mom you must give your undivided attention to, right now!"
And we also tell computers the most stupid stuff sometimes - why do we spin the mouse around when we are bored, bringing an idle screen back to life and spinning up the hard drive? Are we now begging for their attention? Asking for an interruption? Or is this just payback for all those notifications at improper times - "Hey, look, I moved your mouse for no reason whatsoever - who's needy now?"
Maybe I fixated a little too much on our first experiencement, but I'd like to think otherwise - I think it opened my eyes to a whole bunch of interactions that we, interaction designers, neglect - incidental interactions, accidental interactions, non-reciprocated interactions.
All of these are communication and should be studied and improved upon. And now I have more than a fixation - I have a purpose. I don't think I can pretend my thesis will go in the same direction it was going only six months ago. I have you to thank and to blame, Emanuele, for killing any chance of me writing about anything other than these tiny neglected interactions I had paid so little attention to before!

Saturday, May 24, 2014

What tools did I use during Philosophy of HCI?

What kind of question is that, I ponder? The interesting thing is that studying philosophy requires not much more than a brain (one would joke that a brain with a liking for knowledge, but I digress). I would argue that, before anything, I used my brain a lot.
Ha ha, funny, you'd answer, we all use our brains for learning; but do we? If you're in simple listen-and-repeat mode, your brain is being taken for a ride - you go to a lecture, you listen (or you waste away playing silly games on your laptop in the hope that whatever seeps in through the cracks in your concentration accounts for learning) and you then repeat ready-made concepts in a test. We've all been there - able to answer some questions on a given subject, but unable to think critically about it. But if you don't allow yourself to philosophize during a philosophy course, what else are you going to do? I have seen philosophy courses taught that way - the teacher blabbers philosophers' names and their schools of thought, students "learn" them, never think about what they had to say, game over. Not this course, though - we heard of movies, football matches, idleness, but not of Spinoza, Kant, Deleuze, Nietzsche. Not much space for easily digested factoids on philosophy there.
So we were not taken for a ride - or maybe we were, but we had to pedal our own bikes this time. Like one of those nice GPS-enhanced countryside explorations Emanuele talked so much about, we were powering our own little exploration of the field and, in the end, our 'brain muscles' hurt, but did we see some beautiful stuff along the way!
Another good friend of mine that I took along on these rides was my trusty smartphone. Nowadays, saying "I brought a smartphone" is like saying "I wasn't naked" - we all carry these little buggers everywhere. But just like our brains, it's not having one that counts, but how you use it. From the very first "experiencement", I used mine as a chamber of isolation, pumping tunes into my head, as a source of easily searchable information, or simply as a nice escape for my idle times. Oh, what an effect this course had on what I think of idle times - but that I'll leave for another post.
Just as when riding a bike, I could definitely finish this course only using my 'brain muscles', but my trusty sidekick was there. When biking through the marshes and hills and woods of the subject, I could find solace - and distraction, and sometimes some cheap satisfaction during times of doubt by quickly Googling away some question - thanks to my phone. My sidekick. My side-brain. 

Wednesday, May 7, 2014

Social Computing Data Analysis - Will Facebook die out by 2017?

The paper Epidemiological modeling of online social network dynamics by John Cannarella & Joshua A. Spechler (2014) uses an infection-recovery epidemic model (irSIR, where S = number susceptible, I = number infectious and R = number recovered) to predict the rise and fall of Online Social Networks, using Google Trends search data in place of actual usage reports. It first fits the model to the rise and fall of MySpace usage, then uses the adapted model to check whether the same effects apply to Facebook data and, therefore, when Facebook would see a similar decline.
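For reference, this is my reading of what the irSIR dynamics look like, sketched in Python - the twist versus plain SIR is that 'recovery' (abandoning the network) spreads by contact with people who have already left, and the parameter values below are invented just to show the adoption-then-abandonment shape:

  # Sketch of irSIR dynamics as I read them from the paper; parameters are made up
  import numpy as np
  from scipy.integrate import odeint

  def irSIR(y, t, beta, nu, N):
      S, I, R = y
      dS = -beta * S * I / N                   # susceptibles join by contact with members
      dI = beta * S * I / N - nu * I * R / N   # members leave by contact with ex-members
      dR = nu * I * R / N
      return [dS, dI, dR]

  N = 1000.0
  y0 = [N - 11.0, 1.0, 10.0]                   # seeds of members and of ex/non-users, summing to N
  t = np.linspace(0, 50, 500)
  S, I, R = odeint(irSIR, y0, t, args=(0.8, 0.6, N)).T
  print("peak membership:", round(I.max()))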



The use of search terms to predict the spread of disease has been demonstrated before and is the basis of Google Flu Trends, which attempts to predict when and where flu epidemics will hit next. But the model does not exactly fit OSN adoption, and the paper discusses some of the shortcomings of the model, but not all of them. The first shortcoming discussed in the paper is that, differently from diseases, people do not join an OSN expecting, or making a conscious effort, to leave it. People will remain members for as long as it is interesting to them, meaning that as long as there are enough friends using an OSN to justify being a member, they'll stay.


Another shortcoming (one that is not discussed satisfactorily in the paper) is that R, the number of people who have recovered from the disease, is replaced by people opposed to joining the OSN, both those who never joined in the first place and those who joined and left, deciding never to come back. This adaptation is necessary because the model assumes S + I + R = N, i.e. that the population remains constant during the study and that adding up susceptible, infected and recovered members gives you the full population. This carries two shortcomings:

First, you cannot consciously decide to resist or accept infection by a disease, but people make conscious decisions about joining or not joining an OSN all the time, decisions that can change with time (natural immunity can fluctuate, but it cannot be switched on and off at will);
Second, internet usage numbers in the period have not remained constant, instead growing exponentially. The assumption that S + I + R = N is feeble at best.



The third (or fourth?) shortcoming is that the data includes a highly skewed input: Google Trends data shows a circa 20% jump around October 2012 that never recovered. This data was 'corrected' by multiplying all input after that date by a correction factor derived from the researchers' own projections of where the data points should be, without feedback from Google on what exactly the nature of the change in the data was. The turning point in the search data occurs only after the correction factor is applied, calling into question how much of the observed decline is actually bias generated by the researchers' own 'correction' of the input data.
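To make that objection concrete, here's essentially what such a correction amounts to (all numbers and the jump position are invented): you derive a factor from where you think the series should be and rescale everything after the jump, so any later downturn is partly baked in by the factor you chose:

  # Toy illustration of the kind of 'correction' being questioned - data and factor are invented
  import numpy as np

  weekly_searches = np.array([100, 102, 105, 103, 125, 127, 126, 128], dtype=float)
  jump_index = 4                                       # the ~20% step change starts here

  expected = weekly_searches[jump_index - 1]           # where the series "should" be, per the researchers' own projection
  factor = expected / weekly_searches[jump_index]      # ~0.82
  corrected = weekly_searches.copy()
  corrected[jump_index:] *= factor                     # every later data point is scaled down

  print(corrected)                                     # post-jump points now trend flat, partly by construction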

All in all, the paper draws an interesting parallel between the decline of MySpace and historical Facebook usage data, but the predictions derived from that parallel must be taken with a big grain of salt.

Reference

John Cannarella & Joshua A. Spechler, Epidemiological modeling of online social network dynamics (2014)

Images: John Cannarella & Joshua A. Spechler, Epidemiological modeling of online social network dynamics (2014)


Saturday, April 12, 2014

Last 'Experiencement' - Describing the world in pictures, and back again in words.

For a week (I know, it was supposed to be two weeks, but I have a semi-good excuse) I collected pictures that, to me, should synthesize each day. It is becoming more common than ever to express yourself in pictures - what was once the domain of the semi-professional photographer or the tourist, that entity whose eyes and hands sometimes seem fused to a camera, is now an everyday happenstance thanks to the likes of Instagram and its super-easy way of d̶e̶s̶t̶r̶o̶y̶i̶n̶g̶ applying cheesy effects to otherwise good mundane pictures.


Instagram is most definitely not the only culprit in this renewed culture of communicating by pictures. There is 4chan, the western equivalent of the Japanese 2chan forums. It's an anonymous forum where people feel free to express themselves in s̶l̶u̶r̶s̶ words and porn pictures, and it's home to the meme phenomenon. Below, a rare example of a rather innocent 4chan picture thread (they are usually a lot less suitable for a blog post you want your professor and colleagues to see):


From there were born subcultures such as demotivationals and memes - pictures that express an opinion in a zany way and can completely replace a reply in an internet conversation:


The pinnacle of this type of textless communication is the recent rise of Relay, an instant messaging application where, instead of writing, you can search a collection of community-curated animated GIF images to express that hard-to-put-into-words feeling you want to convey. It's especially addictive when talking to close friends, with whom you already have strong non-verbal communication skills:


So back to my week. It was a very stressful one: I had visitors (something I love, but can't cope with very well), shitloads of work, schoolwork and, on the weekend, a birthday party to organize. I see the narrative rather well in my sequence of images, but your mileage may vary (even after this light description of my week, those pictures probably won't point you to the same feelings they express to me). These pictures carry a lot of PrÀgnanz, in the sense that they carry a lot more significance than their concise nature seems to imply.

As the culture of non-verbal expression sees a rebirth and more and more people use memes outside the internet context, it's fairly clear that these multi-dimensional forms of expression will only become more commonplace, as they allow us to express things where verbal communication simply falls short. Walk around the streets of Tallinn and you'll see meme faces pasted to light posts, announcing parties and sometimes even replacing spokespeople in advertising. The meme culture is here to stay (which only confirms Richard Dawkins' idea that the meme is the cultural equivalent of the gene, spreading our cultural phenomena through the zeitgeist the same way sex spreads our genetic material through the gene pool).