Search Results

Search found 18 results on 1 page for 'touchscreens'.

Page 1/1

  • Dual Touchscreens in Different Rooms

    - by Ash
    I'm planning on having one dual-core system running two touchscreens, each in its own room in the house. I'd like to be able to use the internet on one, while someone else uses the other to record music - each of us interacting with the screen as we would separate computers. I was also thinking that running each screen, or certain programs, on its own core might make this work more smoothly. Will this setup work in Windows 7 Home on a mini-tower or do I need to invest in a server to get this sort of workstation/terminal setup to work?

    Read the article

  • How Does a Touch Screen Phone Work? [Chart]

    - by Asian Angel
    There are three types of touch screen technologies available in today’s touch screen phones: resistive, capacitive, and infra-red. Learn about the different benefits and capabilities of each and make a more informed decision about your next mobile phone selection with this helpful chart. How Does a Touch Screen Phone Work? [via GraphJam]

    Read the article

  • Awesome 10 Meter Curved Touchscreen at the University of Groningen [Video]

    - by Asian Angel
    Think that you have seen awesome touchscreen setups before? Then think again because the University of Groningen has put together the ultimate version with a super-sized 10 meter curved screen setup housed at their reality center. To learn more about the assorted hardware and software used in the creation of this touchscreen wonder see the detailed information section on the YouTube page (link provided below). Note: The video has approximately 1 minute of “blank” airplay at the end. Reality touchscreen University of Groningen [via Geeks are Sexy]

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
    I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification. To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE, touched the document, and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document, touchstart and mousedown, in that order. Only on my Android phone does just the touchstart event fire when I touch the document.

    I did a Google search and ended up on two interesting pages. First, I found the CanIUse page for touch events: http://caniuse.com/#feat=touch Can I Use clearly indicates that IE does not support touch events as of this writing, and that Firefox only supports touch events if they are manually enabled. Furthermore, all four browsers I mentioned treat a touch in a completely different way. It boils down to this:

    - IE: simulated mouse click
    - Firefox with touch disabled: simulated mouse click
    - Firefox with touch enabled: touch event
    - Chrome: touch event and simulated mouse click
    - Android: touch event

    What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just like CanIUse described: no proper touch support, just simulated mouse clicks. The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface, one that doesn't just simulate mouse clicks but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever match the functionality described above, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?)

    What motivated my research is that I've written an HTML5 application that doesn't work on Android, because Android doesn't fire mouse events. I'm now afraid to implement touch for my application because the browsers all behave so differently. I imagine that at some point in the future the browsers might start handling touch similarly, but how can I tell how they will handle it, short of writing code for the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?
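
    A common defensive pattern for exactly this situation is to register both a touchstart and a mousedown handler on the same element, and cancel the synthesized mouse event that touch browsers fire after a touch. A minimal sketch in TypeScript (the element id and handler are illustrative, not from the question):

        // Handle both touch and mouse input without double-firing.
        // Browsers that synthesize a mouse event after a touch skip the
        // mouse branch, because preventDefault() on touchstart cancels
        // the compatibility mouse events.
        const target = document.getElementById("app"); // hypothetical element

        function handlePress(x: number, y: number): void {
          console.log(`press at ${x}, ${y}`);
        }

        target?.addEventListener("touchstart", (e: TouchEvent) => {
          e.preventDefault();            // suppress the follow-up mouse events
          const t = e.changedTouches[0]; // first contact point of this event
          handlePress(t.clientX, t.clientY);
        });

        target?.addEventListener("mousedown", (e: MouseEvent) => {
          handlePress(e.clientX, e.clientY); // reached only by a real mouse
        });

    This degrades gracefully across the behaviors listed above: where only simulated clicks exist, the mousedown branch runs; where touch events exist, the touch branch runs and the simulated click is suppressed.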

    Read the article

  • Why is implementing copy-paste in a touch screen based smartphone such a big deal?

    - by EpsilonVector
    I'm not entirely sure this is on-topic, but it definitely needs a programmer's understanding to be answered, and it deals with general development (for a specific scenario) as opposed to a specific piece of code. In a way it also translates into "what are the challenges in doing X in a touch screen app", and similar questions have been asked here in the past. So here it is: when Apple shipped the iPhone without copy-paste from version 1 on, I just assumed it was a UI issue - they were waiting until they figured out a good UI for it. But now the idea is out there, and Microsoft still released Windows Phone 7 without copy-paste, promising it'll be ready in a few months. My question is: why does this take a few months to implement? Are there technological challenges unique to programming for a touch screen that I'm not familiar with?

    Read the article

  • Hover alternative for touch devices [migrated]

    - by Joshua Frank
    I'm building a standard infographic where you mouse over a region and the image changes as you move. For instance, imagine a map of the world: when you mouse over a country, that country glows and a panel shows statistics about it. The implementation uses a separate image for the glowing country and a div element with the statistics; the code shows these additional elements on hover over the country. The question is: what should this do on a tablet, where there's no hover event? What's a good alternative navigation metaphor for this kind of situation on touch-only devices?
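
    One widely used substitute on touch-only devices is to let the first tap play the role of hover (show the glow and the stats panel) and a second tap act as the actual activation, with a tap elsewhere dismissing the panel. A minimal sketch in TypeScript, assuming hypothetical .region elements whose "active" CSS class reveals the glow image and stats div:

        // First tap shows the info panel (the "hover"); a second tap on the
        // same region falls through to its normal action; tapping anywhere
        // else dismisses the open panel.
        let active: HTMLElement | null = null;

        document.querySelectorAll<HTMLElement>(".region").forEach((region) => {
          region.addEventListener("click", (e) => {
            if (active !== region) {
              e.preventDefault();               // swallow the first tap
              active?.classList.remove("active");
              region.classList.add("active");   // CSS shows glow + stats
              active = region;
            }
          });
        });

        document.addEventListener("click", (e) => {
          if (active && !active.contains(e.target as Node)) {
            active.classList.remove("active");  // tap outside closes it
            active = null;
          }
        }, true); // capture phase, so this runs before the region handler

    Mouse users are unaffected: hovering can still show the panel via CSS, and their first click simply does double duty.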

    Read the article

  • 3D touch "Minority Report" style interface - what platform gets me there the fastest?

    - by Ross Braden
    I'm working on a project that requires a touch interface, though the use case is desktop more than mobile. I want to start out platform-agnostic, not with a mobile app. There will be gridwork-style 3D objects and diagramming to represent - think AutoCAD or Minority Report. I want to build a prototype that will have hooks into a database to represent the data. Any advice on what tools to use, both for the design and for developing the functionality, is greatly appreciated. Thanks!

    Read the article

  • How to use uTouch on multitouch-enabled touchpads?

    - by Freddi
    I currently have a Synaptics touchpad with only a few classic multitouch features (two-finger scroll, right click). By installing the uTouch testing suite, I saw that it doesn't accept my touchpad as an input device. I want to buy a newer notebook and would like to benefit from uTouch features (window management, swipe, pinch, rotate). Does uTouch only work on touchscreens, or also on touchpads? What requirements should I take into account when choosing a new notebook?

    Read the article

  • Any simple shape recognition libraries for Java?

    - by Phil
    I am working on an on-screen keyboard for Android, and I need to recognize the starting points, turning points, and end points of lines drawn by the user on the keyboard. A simple straightening function would also be nice, as it is difficult to draw a perfectly straight line even with a stylus, let alone on today's finger-only touchscreens. What I am trying to write is something like Swype. Are there any good libraries that I can use or refer to?
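
    For the turning-point and straightening part specifically, one classic technique is the Ramer-Douglas-Peucker algorithm, which collapses a noisy stroke down to its significant corner points (start point, turning points, end point). A language-agnostic sketch, written here in TypeScript rather than Java for brevity:

        interface Point { x: number; y: number; }

        // Perpendicular distance from p to the infinite line through a and b.
        function perpDist(p: Point, a: Point, b: Point): number {
          const dx = b.x - a.x, dy = b.y - a.y;
          const len = Math.hypot(dx, dy);
          if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
          return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
        }

        // Ramer-Douglas-Peucker: keep only points that deviate more than
        // epsilon from the straight line between the endpoints, i.e. the
        // stroke's turning points. Everything else is "straightened" away.
        function simplify(points: Point[], epsilon: number): Point[] {
          if (points.length < 3) return points;
          const first = points[0], last = points[points.length - 1];
          let maxDist = 0, index = 0;
          for (let i = 1; i < points.length - 1; i++) {
            const d = perpDist(points[i], first, last);
            if (d > maxDist) { maxDist = d; index = i; }
          }
          if (maxDist <= epsilon) return [first, last]; // straight enough
          const left = simplify(points.slice(0, index + 1), epsilon);
          const right = simplify(points.slice(index), epsilon);
          return left.slice(0, -1).concat(right); // drop duplicated pivot
        }

    Feeding the raw touch samples through simplify with an epsilon of a few pixels yields roughly the start, turning, and end points a Swype-style matcher needs.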

    Read the article

  • Which GUI toolkit would you use for a touchscreen interface?

    - by Drealmer
    The only experience I have so far with a touchscreen interface was one where everything was custom drawn, and I get the feeling that's not the most efficient way of doing it (even the most basic layout change is hell to make). I know plenty of GUI toolkits intended for keyboard & mouse interfaces, but can you recommend something suited to touchscreens? The target platform is Windows, but cross-platform would be nice.

    Read the article

  • Can I use feature detection to know if css hover works for this client?

    - by user366061
    I've got a website that provides labels when the user hovers over an image. You can see the example at: http://www.185vfx.com/ For touchscreens, I'd like to have those hints on by default (since hover isn't usually available). I'd prefer not to browser-sniff and have to maintain that list as new devices/versions arrive. Is there any reliable way to detect whether a browser can respond to hover, or otherwise recognize a touchscreen user, via JavaScript or CSS?
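
    One approach along these lines is the CSS interaction media features, which can be queried from JavaScript as well as from a stylesheet; an older fallback is to test whether the browser exposes the touch-event API at all. A hedged sketch in TypeScript (note that the hover media feature is newer than much of the browser landscape this question describes, and the touch-API test only proves the API exists, not that a touchscreen is attached):

        // Preferred: ask CSS whether the primary pointer can hover.
        const canHover = window.matchMedia("(hover: hover)").matches;

        // Fallback: does this browser expose the touch-event API at all?
        const hasTouchApi =
          "ontouchstart" in window || navigator.maxTouchPoints > 0;

        if (!canHover || hasTouchApi) {
          // Show the labels by default instead of waiting for :hover.
          document.body.classList.add("show-hints"); // class is illustrative
        }

    The same test can also live entirely in CSS via @media (hover: none), which keeps the hint styling declarative and needs no script at all.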

    Read the article

  • EloTouch touchscreen suspends correctly but does not come out of standby on Ubuntu 10.04

    - by Ton van den Heuvel
    I am using an EloTouch touchscreen on a minimal Ubuntu 10.04 installation. I have a bare X server running without any desktop environment, just an xterm. The touchscreen is working great, but there is still one small problem: once the touchscreen goes into standby, it does not come out of standby when I touch the screen. Using a keyboard or mouse does get it out of standby, though. I am looking for any hints or directions toward the cause of this problem. Is this an X.Org configuration problem? A driver problem? Is this a known problem with touchscreens in general? Any pointers are welcome. The driver I am using is an unreleased EloTouch driver (3.5.0). I received it through a reseller who cannot give me any technical information, unfortunately.

    Read the article

  • Navigation in Win8 Metro Style applications

    - by Dennis Vroegop
    In Windows 8, touch is, as they say, a first-class citizen. Now, to be honest: they also said that about Windows 7. In Win8, however, it is actually true. Applications are meant to be used by touch. Yes, you can still use mouse, keyboard, and pen, and your apps should take that into account, but touch is where you should focus initially. Will all users have touch-enabled devices? No, not at first. I don't think touchscreens will be on every device sold next year. But in 5 years? Who knows? Don't forget: if your app is successful it will be around for a long time, and by that time touchscreens will be everywhere.

    Another reason to embrace touch is that it's easier to develop a touch-oriented app and then make sure that keyboard, mouse, and pen work than to do it the other way around. Porting a mouse-based application to a touch-based application almost never works. The reverse gives you a much better chance of success.

    That being said, there are some things you need to think about. Most people have more than one finger, while most users only use one mouse at a time. Still, most touch developers translate their mouse knowledge to touch and think they did a good job. Martin Tirion from Microsoft said that since touch is a new language, people face the same challenges they do when learning a new spoken language. The first thing people try when learning a new language is to simply replace the words of their native language with the newly learned words. At first they don't care about grammar. To a native speaker of that other language this sounds all wrong, but they will still be able to understand what the intention was. If you don't believe me: try Google Translate on something from your language to another and back, and see what happens.

    The same thing happens with touch. Most developers translate a mouse click into a tap event and think they're done. Well matey, you're not done. Not by far. There are things you can do with a mouse that you cannot do with touch. Think hover. A mouse has the ability to "slide" over UI elements. Touch doesn't (I know: with pen you can do this, but I'm talking about actual fingers here). A touch is either there or it isn't. And right-click? Forget about it. A click is a click. Yes, you have more than one finger, but the machine doesn't know which finger you use… The other way around is also true. Like I said: most users only have one mouse, but they are likely to have more than one finger. So how do we take that into account? Thinking about this is really worth the time: you might come up with some surprisingly good ideas! Still, don't forget that not every user has touch-enabled hardware, so make sure your app is usable for both groups. Keep this in mind: we're going to need it later on!

    Now. Apps should be easy to use. You don't want your user to read through pages and pages of documentation before they can use the app. Imagine that spotter next to an airfield suddenly seeing a prototype of a Concorde 2 landing on the nearby runway. He probably wants to enter that information in our app NOW, not after he's taken a 3-day course. Even if he still has to download the app, install it for the first time, and then run it, he should be on his way immediately. At least fast enough to note down the details of that unique, rare, and possibly exciting sighting he just made.

    So… how do we do this? Well, I am not talking about games here. Games are in a league of their own. They fall outside the scope of the apps I am describing. All the others can roughly be characterized as one of two flavors: the navigation is either flat or hierarchical. That's it. And if it's hierarchical, it's no more than three levels deep. Not more. Your users will get lost otherwise, and we don't want that.

    Flat is simple. Just imagine one screen that is as high as our physical screen and as wide as you need it to be. Don't worry if it doesn't fit on the screen: people can scroll right and left. Don't combine up/down and left/right scrolling: it's confusing. Besides, since most users will hold their device in landscape mode, it's very natural to scroll horizontally. So let's use that when we have a flat model.

    The same applies to the hierarchical model. Try to have at most three levels. If you need more space, find a way to group the items so that you can fit them into three very wide lanes. At the highest level we have the so-called hub level. This is the entry point of the app, and as such it should give the user an immediate feel for what the app is all about. If your app has categories of items, you might show those categories here. And while you're at it, also show 2 or 3 of the items themselves to give the user a taste of what lies beneath. If the user selects a category, you go to the section level. Here you show several sections (again, go as wide as you need), again with some detail examples. After that, the details layer shows each item. By giving some samples of the underlying layer you achieve several things: you make the layer attractive by showing several different things, you show some highlights so the user sees actual content, and you provide a shortcut to the layers underneath. The image below is borrowed from the http://design.windows.com website, which has tons and tons of examples.

    For our app we'll use this layout. So what will we show? Well, let's see what sorts of features our app has to offer. I'll repeat them here:

    - Note planes
    - Add pictures of that plane
    - Notify friends of new spots
    - Share new spots on social media
    - Write down arrival times
    - Write down departure times
    - Write down the runway they take

    I am sure you can think of some more items, but for now we'll use these. In the hub we'll show something that represents "Spots", "Friends", and "Social". Apparently we have an inner list of spotter friends who are in the app, while we also have the whole world in "Social". In the layer below we show something else, depending on what the user chooses. When they choose "Spots" we'll display the latest spots, the latest spots by our friends (so we can actually jump from this category to the one next to it), and so on. When they choose a "spot" (or press the + icon in the app bar, which I'll talk about next time) they go to the lowest and final level, which shows details about that spot, including a picture, the date and time, and the notes belonging to that entry.

    You'd be amazed at how easy it is to organize your app this way. If you don't have enough room in these three layers, you could probably get away with grouping items. Take a look at our hub: we have three completely different things in one place. If you still can't fit everything in a logical and consistent way, chances are you are trying to do too much in this app. Go back to your mission statement and determine whether it is specific enough and whether your feature list supports that statement or muddies it. Go ahead. Give it a go! Next time we'll talk about the look and feel, the charms, and the app bar….
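
    To make the three-level structure concrete, here is a minimal sketch of the hub/section/detail model as plain data (TypeScript is used only for illustration; the names mirror the spotter app described above and are otherwise made up):

        // Three levels, never more: hub -> section -> detail.
        interface Detail  { id: string; title: string; photo?: string; notes: string; }
        interface Section { title: string; items: Detail[]; }
        interface Hub     { title: string; sections: Section[]; }

        const spots: Hub = {
          title: "Spots",
          sections: [
            { title: "Latest spots",     items: [] },
            { title: "Spots by friends", items: [] }, // shortcut into "Friends"
          ],
        };

        // The hub page shows 2 or 3 real items from the layers below as
        // teasers, which double as shortcuts past the section level.
        function teasers(hub: Hub, count = 3): Detail[] {
          return hub.sections.flatMap((s) => s.items).slice(0, count);
        }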

    Read the article

  • Integrating virtual keyboard on an HP TouchSmart with an Adobe AIR app

    - by Alan
    Hi, does anyone know if it's possible to integrate the TouchSmart's virtual keyboard with an Adobe AIR application? In most programs (Internet Explorer, Firefox, etc.), when a user touches a text field, a little keyboard icon automatically pops up which, when pressed, brings up the virtual keyboard. However, this doesn't happen when clicking on text input fields in Adobe AIR applications. Has anyone had any experience working with AIR/Flash and touchscreens? Is there any API that can tell Windows (or the HP virtual keyboard specifically) that the user has clicked in a text field and that the virtual keyboard should be shown? The text fields are the standard kind (fl.controls.TextInput). Any suggestions would be greatly appreciated. Thanks in advance!

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area. Augmented humanity is about helping humans overcome their shortcomings using technology. Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information. But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use.

    When Apple helped us drop the stylus, we took a giant leap forward in simplicity. Using touchscreens with intuitive gestures was part of the iPhone's original appeal. People don't want to know that technology is there -- they just want the benefits. So what's the next leap beyond the touchscreen to make smartphones even easier to use?

    Two natural ways we interact with the world around us are sight and voice. Google and Apple have been using both in their mobile platforms for limited use cases. Nobody actually wants to type a text message, so why not just speak it? And if you want more information about a book, why not just snap a picture of the cover? That's much more accurate than trying to key in the title and/or author.

    So what's Amazon been doing? First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context. Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen. For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can recognize certain logos as well. Download the app and try it out, but don't expect perfection. It's good enough to demonstrate the concept, but far from accurate enough. (MobileBeat did a pretty good review.) Extrapolate to the future and we might just have a heads-up display in our eyeglasses.

    The second interesting area is voice response, for which Siri is getting lots of attention. Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed. But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup.

    I believe that over the next 3-5 years the way we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives. This will, of course, impact the way we shop, making information more readily accessible than it already is. Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • TouchDevelop: The Fast Path to Windows 8 and Phone Apps

    - by Clint Edmonson
    Are you looking for a little extra cash for the upcoming holidays? Then you might be interested in creating some cool apps to sell in the Windows Store. Or maybe you’re simply curious and want to try your hand at developing for Windows 8 and Windows Phone. In either case, the newly released TouchDevelop Web App is for you. TouchDevelop Web App is a development environment to create apps on your tablet or smartphone, without requiring a separate PC. Scripts written by using TouchDevelop can access data, media, and sensors on the phone, tablet, and PC. The script can interact with cloud services, including storage, computing, and social networks. TouchDevelop lets you quickly create fun games and useful tools, turning your scripts into true Windows Phone and Windows 8 apps. A year ago, Microsoft Research released TouchDevelop for Windows Phone, which is being used by enthusiasts, students, and researchers to program their phones in fun, inventive, and interesting ways. These scripts are available at TouchDevelop for anyone to download and use. Ever since we released TouchDevelop, we’ve been eyeing the tablet form factor and working on a version for the browser. Now, with the release of TouchDevelop Web App, the wait is over: the tablet version is ready, so go play around with it. All TouchDevelop scripts that are developed on the smartphone can be downloaded to the tablet and run (if hardware allows). Any script that is developed on the tablet can also be accessed on the phone. And scripts can be converted to Windows Phone or Windows 8 apps and submitted to the Windows Phone Store or Windows Store, respectively. TouchDevelop Web App’s editor and programming language have been designed for tablet devices with touchscreens, but you can also use a keyboard and a mouse. So grab your web-enabled device and give the TouchDevelop Web App a try. It’s fun and easy, and could even put a little cash in your holiday-depleted wallet. Or at least give you bragging rights at family get-togethers. Are you interested in further tips on Windows 8 development?  Sign up for the 30 to launch program which will help you build a Windows Store application in 30 days.  You will receive a tip per day for 30 days, along with potential free design consultations and technical support from a Windows 8 expert. As always, stay tuned to my twitter feed for Windows 8, Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • Incorporating the Windows 7 onscreen keyboard into a WPF app

    - by mmr
    Windows 7 has a really nice onscreen keyboard program/control for touchscreens. I have a touchscreen app that was originally written for, and will be deployed on, XP. Is it possible to incorporate this keyboard directly into my app, rather than using a custom control? I can find no programmatic information about it, so any links would be very helpful. Specifically, I'd need:

    - To be able to use the keyboard on an XP machine that will have .NET 3.5 SP1 installed on it.
    - To be able to hide the native keyboard on Windows 7, because I've already incorporated the touchscreen keyboard in my UI and so I don't need another one cluttering up the UI.

    This native keyboard has two attractive aspects to it. First off, it's automatically localized to the customer's language (though the rest of the app will need modification), and second off, it doesn't seem to suffer from 'touch lag' as the OS tries to figure out whether or not I'm doing a gesture, because I'm clearly typing on a keyboard. The app is WPF-based, which should mean easy integration with Windows 7 based controls.

    EDIT: I'd really like the XP thing, but it's not a requirement. The ability to use the keyboard in Win7, though, seems like it should be possible and even the right way to do it.

    Read the article

  • Can iPad/iPhone Touch Points be Wrong Due to Calibration?

    - by Kristopher Johnson
    I have an iPad application that uses the whole screen (that is, UIStatusBarHidden is set to true in the Info.plist file). The main window's frame is set to (0, 0, 768, 1024), as is the main view in that frame. The main view has multitouch enabled. The view has code to handle touches:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            for (UITouch *touch in touches) {
                CGPoint location = [touch locationInView:nil];
                NSLog(@"touchesMoved at location %@", NSStringFromCGPoint(location));
            }
        }

    When I run the app in the simulator, it works pretty much as expected. As I move the mouse from one edge of the screen to the other, reported X values go from 0 to 767. Reported Y values go from 20 to 1023, but it is a known issue that the simulator doesn't report touches in the top 20 pixels of the screen, even when there is no status bar.

    Here's what's weird: when I run the app on an actual iPad, the X values go from 0 to 767 as expected, but reported Y values go from -6 to 1017. The fact that it seems to work properly on the simulator leads me to suspect that real devices' touchscreens are not perfectly calibrated, and mine is simply reporting values six pixels too low. Can anyone verify that this is the case? Otherwise, is there anything else that could account for the Y values being six pixels off from what I expect? (In a few days, I should have a second iPad, so I can test this with another device and compare the results.)

    Read the article
