The key to understanding multi-touch touch panels is to realize that a touch is not the same thing as a mouse click.

With a multi-touch projected capacitive touch panel, the user interface of an embedded application can be enhanced with gestures such as pinch, zoom, and rotate. True multi-touch panels, that is, panels that return actual coordinates for each individual touch, can support even more advanced features like multiple-person collaboration and gestures made of a combination of touches (for example, one finger touching while another swipes). The different combinations of gestures are limited only by the designer’s imagination and the amount of code space. As multi-touch projected capacitive touch panels continue to replace single-touch resistive touch panels in embedded systems, designers of those systems must develop expertise on how to interface to these new panels and how to use multi-touch features to enhance applications.

When implementing a touch interface, the most important thing is to keep in mind the way the user will interact with the application. The fastest, most elegant gesture-recognition system will not be appreciated if the user finds the application difficult to understand. The biggest mistake made in designing a touch interface is using the same techniques you would use for a mouse. While a touch panel and a mouse have some similarities, they’re very different kinds of input devices. For example, you can move a mouse around the screen and track its position before taking any action. With tracking, it’s possible to position the mouse pointer precisely before clicking a button. With a touch interface, the touch itself causes the action.

The touch location isn’t as precise as a mouse click. One complication with touch is that it can be difficult to tell exactly where the touch is being reported, since the finger obscures the screen during the touch. Another difference is the spacing of adjacent active areas: because a mouse is precise, its click targets can be fairly small and immediately adjacent to each other.

With a touch interface, it’s helpful to leave space between touch areas to allow for the ambiguousness of the touch position. Figure 1 shows some recommended minimum sizes and distances.


Feedback mechanisms need to be tailored to a touch interface to help the user understand what action was taken and why. For example, if the user is trying to touch a button on the screen and the touch position is reported at a location just outside the active button area, the user won’t know why the expected action did not occur. In the book Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, Daniel Wigdor and Dennis Wixon suggest several ways to provide feedback so the user can adjust the position and generate the expected action.1 One example is a translucent ring that appears around the user’s finger. When the finger is over an active touch area, the ring might contract, wiggle, change color, or indicate in some other way that the reported finger position is over an active element (Figure 2a). Another option is that the element itself changes when the finger is over it (Figure 2b).

The authors describe several other strategies for designing touch interfaces, including adaptive positioning (which activates the nearest active area to the touch), various feedback mechanisms, and modeling the algorithmic flow of a gesture.

You’ll need to consider the capabilities of the touch controller when designing the gestures that will be recognized by the user interface. Some multi-touch controllers report gesture information without coordinates. For example, the controller might send a message saying that a rotation gesture is in progress and the current angle of the rotation is 48°, but it won’t reveal the center of the rotation or the location of the touches that are generating the gesture. Other controllers provide gesture messages as well as the actual coordinates, and some controllers provide only the touch coordinates without any gesture information. These last two types are considered “true” multi-touch because they provide the physical coordinates of every touch on the panel regardless of whether a gesture is occurring or not.
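To make the distinction concrete, here is one hypothetical way the two report styles might be represented in C. Real controllers define their own registers and packet formats, so every field and identifier below is an assumption made purely for illustration.

```c
/* Hypothetical controller report formats illustrating the two
   styles described above: gesture-only messages versus raw
   per-touch coordinate reports. */
typedef enum { REPORT_GESTURE, REPORT_TOUCH } report_kind;

typedef struct {
    report_kind kind;
    union {
        struct {              /* gesture-only controllers */
            int gesture_id;   /* e.g. rotate, zoom, swipe  */
            int angle_deg;    /* current rotation angle    */
        } gesture;
        struct {              /* "true" multi-touch controllers */
            int touch_id;     /* which finger is reporting */
            int x, y;         /* panel coordinates         */
        } touch;
    } u;
} controller_report;
```

A controller of the first type would emit only `REPORT_GESTURE` messages; a true multi-touch controller emits a stream of `REPORT_TOUCH` messages, one per active finger per scan.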

Even if the controller provides gesture information, its interpretation of the gestures may not match the requirements of the user interface. The controller might support only one gesture at a time while the application requires support for three or four simultaneous gestures; or it may define the center of rotation differently from the way you want it defined. Of course no controller is going to automatically recognize gestures that have been invented for an application such as the “one finger touching while another swipes” example given above. As a result, you will often need to implement your own gesture-recognition engine.

A gesture-recognition engine can be a collection of fairly simple algorithms that generates events for touches, drags, and flicks, or it can be a complicated processing system that uses predictive analysis to identify gestures in real time. Gesture engines have been implemented using straight algorithmic processing, fuzzy logic, and even neural networks. The type of gesture-recognition engine is driven by the user interface requirements, available code space, processor speed, and real-time responsiveness. For example, the Canonical Multitouch library for Linux analyzes multiple gesture frames to determine what kinds of gesture patterns are being executed.2 In the rest of this article I’ll focus on a few simple gesture-recognition algorithms that can be implemented with limited resources.

Common gestures
The simplest and most common gestures are touch (and double touch), drag, flick, rotate, and zoom. A single touch, analogous to a click event with a mouse, is defined by the amount of time a touch is active and the amount of movement during the touch. Typical values might be that the touch-down and touch-up events must be less than a half second apart, and the finger cannot move by more than five pixels.
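The timing and movement test above can be sketched in a few lines of C. The half-second and five-pixel thresholds are the illustrative values just mentioned, not constants mandated by any particular controller.

```c
/* Sketch of single-touch (tap) classification using the typical
   thresholds discussed above. */
#include <stdbool.h>
#include <stdlib.h>

#define TAP_MAX_MS      500  /* touch-down to touch-up must be shorter */
#define TAP_MAX_MOVE_PX 5    /* finger may not move farther than this  */

typedef struct {
    int x, y;            /* panel coordinates in pixels */
    unsigned long t_ms;  /* timestamp in milliseconds   */
} touch_event;

/* Returns true when a down/up pair qualifies as a single tap. */
bool is_tap(touch_event down, touch_event up)
{
    unsigned long elapsed = up.t_ms - down.t_ms;
    int dx = abs(up.x - down.x);
    int dy = abs(up.y - down.y);
    return elapsed < TAP_MAX_MS && dx <= TAP_MAX_MOVE_PX
                                && dy <= TAP_MAX_MOVE_PX;
}
```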

A double touch is a simple extension of the single touch where the second touch must occur within a certain amount of time after the first touch, and the second touch must also follow the same timing and positional requirements as the first touch. Keep in mind that if you are implementing both a single touch and a double touch, the single touch will need an additional timeout to ensure that the user isn’t executing a double touch.
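That deferred single-touch decision is naturally expressed as a tiny state machine. The 300 ms double-tap window below is an assumed value, and the event names are invented for this sketch.

```c
/* Sketch: distinguishing single from double touch with a timeout.
   After a first qualifying tap, the recognizer waits up to
   DOUBLE_TAP_WINDOW_MS before committing to a single-touch event. */
#define DOUBLE_TAP_WINDOW_MS 300

typedef enum { IDLE, FIRST_TAP_SEEN } tap_state;
typedef enum { EV_NONE, EV_SINGLE_TAP, EV_DOUBLE_TAP } tap_result;

typedef struct {
    tap_state state;
    unsigned long first_up_ms;  /* when the first tap ended */
} tap_recognizer;

/* Call when a qualifying tap completes (at time now_ms). */
tap_result on_tap(tap_recognizer *r, unsigned long now_ms)
{
    if (r->state == FIRST_TAP_SEEN &&
        now_ms - r->first_up_ms <= DOUBLE_TAP_WINDOW_MS) {
        r->state = IDLE;
        return EV_DOUBLE_TAP;
    }
    r->state = FIRST_TAP_SEEN;
    r->first_up_ms = now_ms;
    return EV_NONE;  /* may become a single tap after the timeout */
}

/* Call periodically; emits the deferred single tap once the
   double-tap window has expired with no second tap. */
tap_result on_timer(tap_recognizer *r, unsigned long now_ms)
{
    if (r->state == FIRST_TAP_SEEN &&
        now_ms - r->first_up_ms > DOUBLE_TAP_WINDOW_MS) {
        r->state = IDLE;
        return EV_SINGLE_TAP;
    }
    return EV_NONE;
}
```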

While a drag gesture is fairly simple to implement, it’s often not needed at the gesture-recognition level. Since the touch controller only reports coordinates when a finger is touching the panel, the application can treat those coordinate reports as a drag. Implementing this at the application level has the added benefit of knowing if the drag occurred over an element that can be dragged. If not, then the touch reports can be ignored or continually analyzed for other events (for example, passing over an element may result in some specific behavior).
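The application-level hit test that decides whether a coordinate report should be treated as a drag might look like the following. The element structure and flat array are assumptions for illustration; a real UI toolkit would have its own scene representation.

```c
/* Sketch: hit-testing raw coordinate reports against draggable
   elements at the application level. */
#include <stdbool.h>

typedef struct {
    int x, y, w, h;   /* element bounds in panel pixels */
    bool draggable;
} element;

/* Returns the index of the draggable element under (x, y), or -1
   when the touch should be ignored or analyzed for other events. */
int hit_test(const element *els, int count, int x, int y)
{
    for (int i = 0; i < count; i++) {
        if (els[i].draggable &&
            x >= els[i].x && x < els[i].x + els[i].w &&
            y >= els[i].y && y < els[i].y + els[i].h)
            return i;
    }
    return -1;
}
```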

A flick is similar to a drag but with a different purpose. A drag event begins when the finger touches the panel and ends when the finger is removed. A flick can continue to generate events after the finger is removed. This can be used to implement the kinds of fast scrolling features common on many cell phones where a list continues to scroll even after the finger is lifted. A flick can be implemented in several ways, with the responsibilities divided between the gesture-recognition layer and the application layer. Before we discuss the different ways to implement flick gestures, let’s first focus on how to define a flick.

A flick is generally a fast swipe of the finger across the surface of the touch panel in a single direction. The actual point locations during the flick do not typically matter to the application; the relevant parameters are velocity and direction. To identify a flick, the gesture-recognition layer first needs to determine the velocity of the finger movement. This can be as simple as dividing the distance traveled by the amount of time between the finger-down report and the finger-up report. However, this approach can slow the response time, since the velocity is not determined until after the gesture has finished.

Full article by Tony Gray, Ocular LCD, Inc., in EE|Times.

Using a stereoscopic projector and the Kinect camera, real objects are rendered digitally in a 3-D space.

What humans can accomplish with a gesture is amazing. By holding out a hand, palm forward, we can stop a group of people from approaching a dangerous situation; by waving an arm, we can invite people into a room. Without a touch, we can direct the actions of others, simply through gestures. Soon, with those same types of gestures, we’ll be directing the operations of heavy pieces of machinery and entire assembly lines.

Manufacturing workers are on the verge of replacing the mouse-and-keyboard-based graphical user interface (GUI) with newer options. Already, touchscreens are making great inroads into manufacturing. And in many locations, the adoption of other natural user interfaces (NUIs) is expanding to incorporate eye scans, fingerprint scans and gesture recognition. These interfaces are natural and relevant spinoffs of the type of technology we find today in video games, such as those using Microsoft’s Kinect.

In the gaming world, gestures and voices are recognized by Kinect through an orchestrated set of technologies: a color video camera, a depth sensor that establishes a 3-D perspective and a microphone array that picks out individual players’ voices from the background room noise. In addition, Kinect has special software that tracks a player’s skeleton to recognize the difference between motion of the limbs and movement of the entire body.

The combined technologies can accurately perceive the room’s layout and determine each player’s body shape and position so that the game responds accordingly. One can expect to see NUI applications working in every industry imaginable—from health care to education, retail to travel—extending user interactions in multiple ways.

NUI technology is of particular interest to the manufacturing industry. For instance, when a worker logs on to a machine, instead of clicking a mouse and entering a personal ID and password on a computer screen, the user will look into a sensing device that will perform a retinal scan for identification. Then, just by using hand gestures, the identified worker can start a machine or, with an outstretched hand, stop it. The machine may ask for the employee to confirm the requested action verbally, and a simple “yes” response will execute the command.

Avatar Kinect replicates a user’s speech, head movements and facial expressions on an Xbox avatar, and lets users hang out with friends in virtual environments and shoot animated videos to share online.

NUI technologies can improve ways to move products across assembly lines, as well as to build them on an individual line. For example, if a batch of partially assembled components must be transferred to a pallet or another machine, the worker can use a gesture to designate the subassemblies to be moved and the location of their destination.

Safeguards can be built into the NUI system so that unrelated movements or conversations in the plant do not accidentally initiate a command. Each machine will know who is logged in to it and will respond exclusively to that individual’s motions and voice. The computer could even be set to shut down automatically if its “commander” is away from the station for more than a selected period of time.

The benefits of NUI technology specific to manufacturing will be extensive. Many of these examples are already in development:

• Employees who must wear gloves on the job no longer need to remove them to operate a keyboard, so they can carry out their work and react to situations more speedily, resulting in higher productivity, faster throughput and higher safety in the workplace.

• Those who work in areas that contain considerable dirt, dust and grease know that touchscreens quickly can become smudged and difficult to view. With gestures, the screen can remain clean. Using the gesture-based NUI in these situations also reduces the spread of contagion and therefore improves health and productivity on the job.

• When computers remain cleaner, because they are touched only infrequently, the manufacturer can cut costs significantly. The screen and other computer components require less maintenance and repair, and elements such as a keyboard are no longer required investments.

Microsoft Dynamics is taking a lead in incorporating NUI technologies into its offerings. The Microsoft Dynamics AX 2012 enterprise resource planning solution offers a touch-based user interface for the shop floor, and independent software developers are working on gesture-based interfaces to provide touchless commands.

The first generation of gesture-based equipment will soon be installed in plants that manufacture heavy equipment, such as cars and large machine tools. Dirt and grease in such facilities can cause substantial problems for conventional computer control units.

NUIs also are expected to become popular in such difficult environments as cold rooms, where workers must wear heavy gloves, and pharmaceutical and food-processing plants, which require exceptional levels of cleanliness.

In the near future, we might see systems that can track the eyes of workers to anticipate the next command. And, soon, NUIs will enter the office environment, where the productivity and cost-effectiveness they offer will be just as important as they are on the plant floor. With such widespread applications, voice- and gesture-based interfaces are certain to usher in an era in which interacting with technology becomes easier, faster and less costly.

by Rakesh Kumar in EE|Times

eYeka just released a white paper on the Future of Shopping. The company conducted interviews with retail experts and also asked its community of “creative consumers” to imagine shopping in 2020.

Five consumer-generated trends that are shaping tomorrow’s shopping have been identified:

Responsible shopping: consumers want a socially and environmentally responsible retail ecosystem

Augmented shopping: consumers value rich and interactive shopping experiences

Informed shopping: consumers look for relevant, personal information about brands and products

Facilitated shopping: consumers expect technology to help them choose

Experience shopping: consumers hope that shopping will become more entertaining

In a separate article, Christopher Hall turned to a variety of industry figures for their predictions on what to expect when you’re expecting to work in digital signage in 2012:

  1. Continued emphasis on reaching the on-the-go consumer and shopper with digital signage
  2. Built-in features like QR readers make interaction with screens more user-friendly
  3. Media strategists’ focus on uniting consumers and shoppers through a completely digital experience will coincide with an acceleration in accountability, resulting in escalating advertising budgets for digital out-of-home
  4. Rapid rise in the use of more affordable video walls and higher brightness flat panels up to 3,500 nits
  5. Autostereoscopic 3-D without glasses will begin a slow but steady increase as the technology improves
  6. Interactivity will become the norm, as several unique configurations, such as the touch table, move into the market.
  7. Portable devices will also come into their own, with interactivity between installed networks and both cellphones and tablets
  8. Audience analytics will become more commonplace to evaluate the performance of a system.
  9. Places where people gather or pass by in numbers will monetize their displays through various advertising options.
  10. Consolidation and partnerships taking shape in 2012 to help mature the industry into more of a mainstream media.
  11. Retailers will continue to refine their one-on-one relationships with consumers through kiosks using digital signage.
  12. Special deals through Facebook or Twitter — or using a smartphone to scan a digital sign for coupons or other promotions.
  13. Use of digital signage to eliminate printing costs related to in-store advertising and sales, and time loss reacting to market pressures in minutes instead of weeks.
  14. Advertising and marketing will adopt the medium to see how it fits into their plans for clients. But they will choose what they need, not the industry as a solution. Many organizations simply don’t need all the bells and whistles that the industry can provide.
  15. Forcing buyers and end users to question the need and seek alternative forms of engagement that may be cheaper and outside the realm of the industry.
  16. Customers encouraging advertisers to fully utilize the power of digital billboards.
  17. More use of conditional content (based on triggers like the weather, sports scores, stock quotes, etc.), ads integrating RSS feeds and campaigns that truly take advantage of the technology.
  18. Network deals, particularly large networks, should step up a notch in this year.
  19. Many more deals will come together, several of which may be surprising. Some combinations will be strategic, while others will be born of desperation or convenience.
  20. Transformation in the media planning/aggregation/DSP arena, as agencies, networks and advertisers sort out how to face off against the DOOH space.
  21. Entry of new players into the digital signage space, even as established players are absorbed or simply disappear.
  22. Brands and their agencies are beginning to recognize that the last mile of a multichannel promotional campaign might best be relegated to digital signage networks.
  23. Scale attracts big players, media companies, private equity, brands and online powerhouses turn some attention to DOOH.
  24. Rapid drive into broader-range environments that are finding that they must “bid, compete” for people’s time and attention: waiting areas, call centres, support organizations, manufacturing operations, offices.
  25. The beginning of industry consolidation in every segment including channel partners.  Too many companies — in every segment — losing money because they have poor process control or the signage area is simply a sideline activity and not part of the company’s main business venture.
  26. The weak, undernourished, under committed organizations will be displaced or absorbed by the industry’s most refined and focused companies.
  27. The digital signage ecosystem will become even more prevalent as a theme for the industry.
  28. Signage overall is becoming much more sophisticated and intelligent. It’s tying into the marketing array more closely and delivering better data to allow more intelligent choices for companies when planning their digital marketing.

See the full article by Christopher Hall.


The Special Secretariat for Digital Planning of the Greek Ministry of Development, Competitiveness and Shipping, together with the company Ψηφιακές Ενισχύσεις ΑΕ (Digital Aid SA), has announced the inclusion of 403 new investment plans in the «digi-retail» action, which concerns the use of new technologies in the retail sector.

The 403 new investment plans have a total budget of €18.93 million; the total approved grant (public expenditure) amounts to €11.23 million and comes from NSRF 2007-2013 funds. The average budget for new-technology investments per approved application is €46,976, with the grant covering about 60% of the investment. Approved businesses may, if they wish, receive an advance of 35% of their public grant when their investment plan begins, as provided for in the action’s Guides.

The investment plans included in the «digi-retail» action from the beginning of July to date number 667, with a total budget of €30.7 million. Evaluation of investment plans began with the Regions of the Operational Programme “Digital Convergence” (409 inclusions to date) and continued with the Region of Attica (258 inclusions). Evaluation of investment plans from all of Greece is in full progress under the «digi-retail» action, and additional approvals from the remaining Regions will follow in the coming weeks.

The «digi-retail» action subsidizes “digital investments” by retail businesses in order to: (a) improve internal management and reduce operating costs by automating warehouse, sales and purchasing processes; and (b) strengthen outward orientation and provide access to new consumer audiences by exploiting technology for online promotion and marketing.

A total of €100 million in NSRF 2007-2013 funds is available under the «digi-retail» action to subsidize, at a rate of 50%, technology investments totaling €200 million by retail businesses. The action falls within the policy framework of the Operational Programme “Digital Convergence”, with contributions from all the Regional Operational Programmes, and is co-financed 85% by the European Regional Development Fund and 15% by national funds.

Operational Programme “Digital Convergence”

You can make your own Predator effect with Kinect.

A Japanese coder by the name of Takayuki Fukatsu has exploited the versatile openFrameworks to give Kinect a mode where it tracks your movement and position, but turns the dull details of your visage into an almost perfectly transparent outline. Of course, you’re not actually transparent, it looks to be just the system skinning an image of the background onto the contours of your body in real time, but man, it sure is cool to look at.

It’s a very cool effect…

The winners of the Digital Signage Best Practice Award 2010 in each of the five categories (Retail Signage, Guiding Signage, Information Signage, Content for Digital Signage and Interactive Signage) are as follows:

Retail Signage

  • ~sedna GmbH
    Project: “Mac-based Digital Signage solution in use at Gravis”
    Client: GRAVIS EDV Vertriebs GmbH

Guiding Signage/Wegeleitung

  • macnetix
    Project: “Digital route guiding system for the Düsseldorf Exhibition Centre”
    Client: Messe Düsseldorf

Information Signage

  • GnosySoft Ltd. and Minicom Digital Signage
    Project: “New Digital Signage Network for the Larnaca Airport in Cyprus”
    Client: Ad Airport Media

Content for Digital Signage


Interactive Signage JOINT WINNERS

  • Sedley Place
    Project: “Coca-Cola Piccadilly Sign London; Interactive World Cup 2010 Campaign”
    Client: Coca-Cola Great Britain
  • people interactive GmbH
    Project: “Telekom Shop 2010”
    Client: Deutsche Telekom AG

Special prize

  • xplace GmbH
    Project: “xplace Instore-TV bei Media -Saturn”
    Client: Media-Saturn Holding GmbH


You can see more in the full article.