The key to understanding multi-touch touch panels is to realize that a touch is not the same thing as a mouse click.

With a multi-touch projected capacitive touch panel, the user interface of an embedded application can be enhanced with gestures such as pinch, zoom, and rotate. True multi-touch panels, that is, panels that return actual coordinates for each individual touch, can support even more advanced features like multiple-person collaboration and gestures made of a combination of touches (for example, one finger touching while another swipes). The different combinations of gestures are limited only by the designer’s imagination and the amount of code space. As multi-touch projected capacitive touch panels continue to replace single-touch resistive touch panels in embedded systems, designers of those systems must develop expertise on how to interface to these new panels and how to use multi-touch features to enhance applications.

When implementing a touch interface, the most important thing is to keep in mind the way the user will interact with the application. The fastest, most elegant gesture-recognition system will not be appreciated if the user finds the application difficult to understand. The biggest mistake made in designing a touch interface is using the same techniques you would use for a mouse. While a touch panel and a mouse have some similarities, they’re very different kinds of input devices. For example, you can move a mouse around the screen and track its position before taking any action. With tracking, it’s possible to position the mouse pointer precisely before clicking a button. With a touch interface, the touch itself causes the action.

The touch location isn’t as precise as a mouse click. One complication with touch is that it can be difficult to tell exactly where the touch is being reported, since the finger obscures the screen during the touch. Another difference is in the adjacency of touch areas: because a mouse is so precise, mouse-click targets can be fairly small and immediately adjacent to each other.

With a touch interface, it’s helpful to leave space between touch areas to allow for the ambiguity of the touch position. Figure 1 shows some recommended minimum sizes and distances.



Feedback mechanisms need to be tailored to a touch interface to help the user understand what action was taken and why. For example, if the user is trying to touch a button on the screen and the touch position is reported at a location just outside the active button area, the user won’t know why the expected action did not occur. In the book Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, Daniel Wigdor and Dennis Wixon suggest several ways to provide feedback so the user can adjust the position and generate the expected action.1 One example is a translucent ring that appears around the user’s finger. When the finger is over an active touch area, the ring might contract, wiggle, change color, or indicate in some other way that the reported finger position is over an active element (Figure 2a). Another option is that the element itself changes when the finger is over it (Figure 2b).


The authors describe several other strategies for designing touch interfaces, including adaptive positioning (which activates the nearest active area to the touch), various feedback mechanisms, and modeling the algorithmic flow of a gesture.

You’ll need to consider the capabilities of the touch controller when designing the gestures that will be recognized by the user interface. Some multi-touch controllers report gesture information without coordinates. For example, the controller might send a message saying that a rotation gesture is in progress and the current angle of the rotation is 48°, but it won’t reveal the center of the rotation or the location of the touches that are generating the gesture. Other controllers provide gesture messages as well as the actual coordinates, and some controllers provide only the touch coordinates without any gesture information. These last two types are considered “true” multi-touch because they provide the physical coordinates of every touch on the panel, regardless of whether a gesture is occurring.
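
As a rough sketch of what a per-touch report contains (the field names and sizes here are assumptions for illustration, not any particular controller’s register map), a true multi-touch controller delivers at least a tracking ID and a coordinate pair for each finger:

#include <stdint.h>

typedef struct {
    uint8_t  id;     /* tracking ID the controller assigns to this finger */
    uint8_t  state;  /* down, move, or up */
    uint16_t x;      /* panel X coordinate */
    uint16_t y;      /* panel Y coordinate */
} touch_report_t;

The tracking ID is what lets a gesture-recognition layer tell which finger is which from one report to the next.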

Even if the controller provides gesture information, its interpretation of the gestures may not match the requirements of the user interface. The controller might support only one gesture at a time while the application requires support for three or four simultaneous gestures; or it may define the center of rotation differently from the way you want it defined. Of course no controller is going to automatically recognize gestures that have been invented for an application such as the “one finger touching while another swipes” example given above. As a result, you will often need to implement your own gesture-recognition engine.

A gesture-recognition engine can be a collection of fairly simple algorithms that generates events for touches, drags, and flicks, or it can be a complicated processing system that uses predictive analysis to identify gestures in real time. Gesture engines have been implemented using straight algorithmic processing, fuzzy logic, and even neural networks. The type of gesture-recognition engine is driven by the user interface requirements, available code space, processor speed, and real-time responsiveness. For example, the Canonical Multitouch library for Linux analyzes multiple gesture frames to determine what kinds of gesture patterns are being executed.2 In the rest of this article I’ll focus on a few simple gesture-recognition algorithms that can be implemented with limited resources.

Common gestures
The simplest and most common gestures are touch (and double touch), drag, flick, rotate, and zoom. A single touch, analogous to a click event with a mouse, is defined by the amount of time a touch is active and the amount of movement during the touch. Typical values might be that the touch-down and touch-up events must be less than a half second apart, and the finger cannot move by more than five pixels.

A double touch is a simple extension of the single touch where the second touch must occur within a certain amount of time after the first touch, and the second touch must also follow the same timing and positional requirements as the first touch. Keep in mind that if you are implementing both a single touch and a double touch, the single touch will need an additional timeout to ensure that the user isn’t executing a double touch.
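
As a minimal sketch of those rules (the thresholds and the report callbacks are illustrative assumptions, not values from the article), the qualification tests might look like this in C:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define TOUCH_MAX_MS    500  /* touch-down to touch-up, in milliseconds */
#define TOUCH_MAX_MOVE  5    /* maximum finger movement, in pixels */
#define DOUBLE_GAP_MS   300  /* maximum gap between the two touches */

extern void report_double_touch(void);
extern void report_single_touch(void);  /* fired by a timer, not shown */

/* True if a completed press qualifies as a touch. */
static bool is_touch(uint32_t down_ms, uint32_t up_ms, int dx, int dy)
{
    return (up_ms - down_ms) < TOUCH_MAX_MS &&
           abs(dx) <= TOUCH_MAX_MOVE && abs(dy) <= TOUCH_MAX_MOVE;
}

/* Called on every touch-up. A qualifying touch is held as pending
   rather than reported immediately; if a second qualifying touch
   begins within DOUBLE_GAP_MS it becomes a double touch, otherwise
   a timeout (not shown) fires the pending single touch. */
void on_touch_up(uint32_t down_ms, uint32_t up_ms, int dx, int dy)
{
    static uint32_t last_up_ms;
    static bool     pending;

    if (!is_touch(down_ms, up_ms, dx, dy)) {
        pending = false;
        return;
    }
    if (pending && (down_ms - last_up_ms) < DOUBLE_GAP_MS) {
        pending = false;
        report_double_touch();
    } else {
        pending = true;
        last_up_ms = up_ms;
    }
}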

While a drag gesture is fairly simple to implement, it’s often not needed at the gesture-recognition level. Since the touch controller only reports coordinates when a finger is touching the panel, the application can treat those coordinate reports as a drag. Implementing this at the application level has the added benefit of knowing if the drag occurred over an element that can be dragged. If not, then the touch reports can be ignored or continually analyzed for other events (for example, passing over an element may result in some specific behavior).
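
A sketch of that application-level approach (element_at() and element_move_to() are hypothetical helpers standing in for whatever hit-testing and redraw code the application already has):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct element element_t;  /* opaque UI element */
extern element_t *element_at(uint16_t x, uint16_t y);
extern void element_move_to(element_t *e, uint16_t x, uint16_t y);

static element_t *dragged;  /* element under an active drag, or NULL */

void on_coordinate_report(uint16_t x, uint16_t y, bool finger_down)
{
    if (!finger_down) {       /* finger lifted: the drag is over */
        dragged = NULL;
        return;
    }
    if (dragged == NULL)
        dragged = element_at(x, y);  /* NULL if nothing draggable here */
    if (dragged != NULL)
        element_move_to(dragged, x, y);
    /* else: ignore, or keep analyzing the reports for other events */
}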

A flick is similar to a drag but with a different purpose. A drag event begins when the finger touches the panel and ends when the finger is removed. A flick can continue to generate events after the finger is removed. This can be used to implement the kinds of fast scrolling features common on many cell phones where a list continues to scroll even after the finger is lifted. A flick can be implemented in several ways, with the responsibilities divided between the gesture-recognition layer and the application layer. Before we discuss the different ways to implement flick gestures, let’s first focus on how to define a flick.

A flick is generally a fast swipe of the finger across the surface of the touch panel in a single direction. The actual point locations during the flick do not typically matter to the application; the relevant parameters are velocity and direction. To identify a flick, the gesture-recognition layer first needs to determine the velocity of the finger movement. This can be as simple as dividing the distance traveled by the amount of time between the finger-down report and the finger-up report. However, this can slow the response time, since the velocity is not determined until after the gesture has finished.
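
As a minimal sketch of that post-hoc calculation (the speed threshold is an illustrative assumption):

#include <math.h>
#include <stdbool.h>
#include <stdint.h>

#define FLICK_MIN_SPEED 0.5f  /* pixels per millisecond, illustrative */

/* Derives speed (px/ms) and direction (radians) from the finger-down
   and finger-up reports; returns true if the motion was fast enough
   to count as a flick. Note the order: distance divided by time. */
bool detect_flick(uint32_t down_ms, int x0, int y0,
                  uint32_t up_ms,   int x1, int y1,
                  float *speed, float *angle)
{
    float dx = (float)(x1 - x0);
    float dy = (float)(y1 - y0);
    float dt = (float)(up_ms - down_ms);

    if (dt <= 0.0f)
        return false;
    *speed = sqrtf(dx * dx + dy * dy) / dt;
    *angle = atan2f(dy, dx);
    return *speed >= FLICK_MIN_SPEED;
}

One way to reduce the lag is to compute a running velocity over the last few coordinate reports instead of waiting for the finger-up event.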

Full article by Tony Gray, Ocular LCD, Inc., in EE|Times

Using a stereoscopic projector and the Kinect camera, real objects are rendered digitally in a 3-D space.

What humans can accomplish with a gesture is amazing. By holding out a hand, palm forward, we can stop a group of people from approaching a dangerous situation; by waving an arm, we can invite people into a room. Without a touch, we can direct the actions of others, simply through gestures. Soon, with those same types of gestures, we’ll be directing the operations of heavy pieces of machinery and entire assembly lines.

Manufacturing workers are on the verge of replacing the mouse-and-keyboard-based graphical user interface (GUI) with newer options. Already, touchscreens are making great inroads into manufacturing. And in many locations, the adoption of other natural user interfaces (NUIs) is expanding to incorporate eye scans, fingerprint scans and gesture recognition. These interfaces are natural and relevant spinoffs of the type of technology we find today in video games, such as those using Microsoft’s Kinect.

In the gaming world, gestures and voices are recognized by Kinect through an orchestrated set of technologies: a color video camera, a depth sensor that establishes a 3-D perspective and a microphone array that picks out individual players’ voices from the background room noise. In addition, Kinect has special software that tracks a player’s skeleton to recognize the difference between motion of the limbs and movement of the entire body.

The combined technologies can accurately perceive the room’s layout and determine each player’s body shape and position so that the game responds accordingly. One can expect to see NUI applications working in every industry imaginable—from health care to education, retail to travel—extending user interactions in multiple ways.

NUI technology is of particular interest to the manufacturing industry. For instance, when a worker logs on to a machine, instead of clicking a mouse and entering a personal ID and password on a computer screen, the user will look into a sensing device that will perform a retinal scan for identification. Then, just by using hand gestures, the identified worker can start a machine or, with an outstretched hand, stop it. The machine may ask for the employee to confirm the requested action verbally, and a simple “yes” response will execute the command.

Avatar Kinect replicates a user’s speech, head movements and facial expressions on an Xbox avatar, and lets users hang out with friends in virtual environments and shoot animated videos to share online.

NUI technologies can improve ways to move products across assembly lines, as well as to build them on an individual line. For example, if a batch of partially assembled components must be transferred to a pallet or another machine, the worker can use a gesture to designate the subassemblies to be moved and the location of their destination.

Safeguards can be built into the NUI system so that unrelated movements or conversations in the plant do not accidentally initiate a command. Each machine will know who is logged in to it and will respond exclusively to that individual’s motions and voice. The computer could even be set to shut down automatically if its “commander” is away from the station for more than a selected period of time.

The benefits of NUI technology specific to manufacturing will be extensive. Many of these examples are already in development:

• Employees who must wear gloves on the job no longer need to remove them to operate a keyboard, so they can carry out their work and react to situations more speedily, resulting in higher productivity, faster throughput and higher safety in the workplace.

• Those who work in areas that contain considerable dirt, dust and grease know that touchscreens can quickly become smudged and difficult to view. With gestures, the screen can remain clean. Using the gesture-based NUI in these situations also reduces the spread of contagion and therefore improves health and productivity on the job.

• When computers remain cleaner, because they are touched only infrequently, the manufacturer can cut costs significantly. The screen and other computer components require less maintenance and repair, and elements such as a keyboard are no longer required investments.

Microsoft Dynamics is taking a lead in incorporating NUI technologies into its offerings. The Microsoft Dynamics AX 2012 enterprise resource planning solution offers a touch-based user interface for the shop floor, and independent software developers are working on gesture-based interfaces to provide touchless commands.

The first generation of gesture-based equipment will soon be installed in plants that manufacture heavy equipment, such as cars and large machine tools. Dirt and grease in such facilities can cause substantial problems for conventional computer control units.

NUIs also are expected to become popular in such difficult environments as cold rooms, where workers must wear heavy gloves, and pharmaceutical and food-processing plants, which require exceptional levels of cleanliness.

In the near future, we might see systems that can track the eyes of workers to anticipate the next command. And, soon, NUIs will enter the office environment, where the productivity and cost-effectiveness they offer will be just as important as they are on the plant floor. With such widespread applications, voice- and gesture-based interfaces are certain to usher in an era in which interacting with technology becomes easier, faster and less costly.

by Rakesh Kumar in EE|Times

EYeka just released a white paper on the Future of Shopping. The company conducted interviews with retail experts and also asked its community of “creative consumers” to imagine shopping in 2020.

5 Consumer-Generated Trends That Are Shaping Tomorrow’s Shopping have been identified:

Responsible shopping: consumers want a socially and environmentally responsible retail ecosystem

Augmented shopping: consumers value rich and interactive shopping experiences

Informed shopping: consumers look for relevant, personal information about brands and products

Facilitated shopping: consumers expect technology to help them choose

Experience shopping: consumers hope that shopping will become more entertaining

If you represent a digital signage software company, implementing a software-as-a-service approach to your offerings can be very beneficial. Utilizing a hosted, “cloud” approach in any business scenario can be an excellent source of recurring and residual revenue. Interestingly, such an approach holds other benefits as well, including overall cost reduction, easy scalability, increased opportunity for innovation, easier case-by-case implementation, greater capacity, and better security.

Cost Reduction

Simply put, cloud computing is paid for in increments, in an as-you-use fashion. As a result, the need for up-front cash to purchase expensive servers is eliminated. For most digital signage networks, “the cloud” method has some overwhelming advantages. Using a SaaS (software as a service) model for digital signage allows for lower information technology costs, increased economies of scale, and payment on an as-used basis only.

In the absence of a SaaS solution, you and you alone are responsible for purchasing and maintaining servers, housing them securely, and installing and maintaining the software. This alone would often require the full-time efforts of reliable IT personnel–a cost that most would rather expend on the “core competencies” of their business. In addition, when service fees are charged on a metered basis, you only pay for what you use, saving you valuable resources in the long run. Finally, cloud computing means you benefit from multi-tenancy: many different clients share the same instance of the software, so costs are spread across all of them. Accordingly, efficiency is the name of the game when it comes to cloud computing.

Capacity and Scalability

Similarly, cloud computing allows for scalable network growth: as your network grows, your space on “the cloud” can grow accordingly. This matters most for smaller advertising networks that need to expand on an as-needed basis; for “bootstrap” digital sign networks, a SaaS solution is often crucial. As hardware is incrementally added to the cloud, server capacity can be readily enlarged so that throughput does not slow or otherwise degrade under an increased load.

Signage companies can store and schedule much more data on “the cloud” than most individual server networks could otherwise. This may seem somewhat obvious, but it is vastly important when we start talking copious (I love that word) amounts of HD video that could be housed for deployment to your network. With helpful tools such as file-recognition software and role-based administration, you can also more easily ensure that redundant data is avoided. This again increases efficiency and frees up even more server capacity.

Increase in Innovation

When server updates are not a chief concern of network operations personnel, innovation in other areas can become a primary focus. It’s a simple issue of core-competency focus, akin to Adam Smith’s “invisible hand”: if I focus my time on my core competencies and you focus your time on those things you do best, the community as a whole benefits. No longer does the organization have to spend time working on a server, scrambling for updates and working on maintenance. That is taken care of by the cloud managers, allowing for more innovation within your own organization. Keep in mind, this applies not only to innovation but also to the other workings of your enterprise. Freeing time otherwise spent on digital sign server maintenance to focus on what you do best will increase your productivity and profits across the board.

Ease of Implementation

Without the need to purchase hardware, software licences or implementation services, a company can get its cloud-computing arrangement off the ground in record time — and for a fraction of the cost of an on-premise solution. A majority of the cost of digital signage servers is the initial purchase and implementation fees, and such fees can often be substantial: servers, cabling, and the implementation of a large server installation all add to that up-front bill.

Security

It may seem obvious that general network security is important in digital signage, but it is worth pointing out how SaaS can be beneficial here. For starters, having all the information on the cloud reduces the need for annoying redundant security testing at multiple sites, substantially cutting the overall cost of security testing for passwords and cracking. Second, having all your data in one central location reduces data leaks and losses. Instead of caching data on various smaller devices, where data control and disk encryption standards often aren’t enough, you can rest more easily with the data on the cloud. Boy, does that sound “big brother” or what? But, in reality, it’s very true.

Redundant server farms for hosting large and small digital signage networks, complete with role-based administrative access, will truly provide ease of use for huge and minuscule networks alike, giving more opportunities for growth and expansion to those wishing to start their own digital signage advertising network. Otherwise, they will be consigned to purchasing, managing, and monitoring their own servers–a task that can take time away from other important matters, like expanding the size of the network in general. Who wants to be blamed for slowing network expansion?

article from Deploid

A lightweight, flexible display technology, which InAVate reported on last year, has emerged into the marketplace. Its creator, Georgia-based NanoLumens, will launch a 112” model at Las Vegas’ Digital Signage Expo, February 23 – 25. The display weighs approximately 40kg, is only one inch thick, and is extremely energy efficient. It is also so flexible it can be rolled up for transportation.

The NanoLumens product, mounted high above the exhibition hall entrance, will greet visitors as they enter the Digital Signage Expo (DSE) at the Las Vegas Convention Centre.

Richard Cope, CEO of NanoLumens, said the screen, which can be used to wrap columns or follow the contours of a bend, was “as thin as a candy bar, used less energy than a coffeemaker and could be hung from the ceiling like a work of art.”

The company’s president, John Wilson, said that although NanoLumens was launching the 112” product, the company could build the screen in practically any shape or size. He told InAVate that NanoLumens had provided quotes for many custom solutions, including a 16ft x 60ft screen.

“We will be shipping in the states by the end of Q1,” he continued. “We have already been approached by around 30 companies across the globe wanting to work with us on the distribution of our product and expect to have availability in other regions by the end of the summer.”

He added that the company had been overwhelmed by the demand for the product. “We didn’t appreciate fully the depth of the market that wanted a product that was both lightweight and energy efficient,” he mused.

The display operates at between 500 and 1,000 nits, so it is suited to shop window applications where daylight would render some other technologies unusable. Wilson says the company is already talking to advertising agencies regarding this application.

Here are some videos for your viewing pleasure!

While I was searching YouTube for a projection system, I found something very interesting.

A company named Obscura has created for Google the world’s largest video dome.

It is the largest multi-channel video environment ever created. They used 11 Christie 20,000-lumen projectors to create seamless video on over 12,000 square feet of surface area.

This dome was 90 feet across and stood over 50 feet high. They shot stills and video that were played back through their 4K playback decks. Take a look at the video. It’s amazing…