Search Results

Search found 4985 results on 200 pages for 'moving'.

Page 42/200

  • Is this possible?

    - by Stud33
    I want to incorporate some accelerometer code into an Android application I'm working on, and want to see if this is possible. Basically what I need is for the code to detect car acceleration motion. I am not trying to determine speed with the code, just to distinguish whether the phone is in a car that has accelerated motion (hence the car is moving for the first time). I have gone through many different accelerometer applications to see if this motion produces a viable profile to go off of, and it appears it does. I'm just looking for something that pops up a "Hello World" dialog when it detects you're in the car and it's moving for the first time down the street. Any help would be appreciated, and a simple yes or no on whether it's possible would work. I would also be interested in compensating anyone that is capable of doing this as well. I need this done like yesterday, so please let me know. Thank You, JTW
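    For what it's worth, the building blocks exist in the platform: register a SensorEventListener for TYPE_ACCELEROMETER and watch for sustained acceleration above a threshold. A minimal sketch follows — the threshold and sample-window values are made up, not a tested car-motion profile:

        import android.app.AlertDialog;
        import android.content.Context;
        import android.hardware.Sensor;
        import android.hardware.SensorEvent;
        import android.hardware.SensorEventListener;
        import android.hardware.SensorManager;

        // Hypothetical sketch: fires once when acceleration stays above a threshold.
        public class CarMotionDetector implements SensorEventListener {
            private static final float THRESHOLD = 1.5f; // m/s^2 over gravity (assumed value)
            private static final int WINDOW = 20;        // consecutive samples required (assumed)
            private int hits = 0;
            private boolean fired = false;
            private final Context context;

            public CarMotionDetector(Context context) {
                this.context = context;
                SensorManager sm = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
                sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                        SensorManager.SENSOR_DELAY_NORMAL);
            }

            @Override
            public void onSensorChanged(SensorEvent event) {
                float x = event.values[0], y = event.values[1], z = event.values[2];
                // Magnitude minus gravity approximates linear acceleration.
                float linear = Math.abs((float) Math.sqrt(x * x + y * y + z * z)
                        - SensorManager.GRAVITY_EARTH);
                hits = (linear > THRESHOLD) ? hits + 1 : 0;
                if (hits >= WINDOW && !fired) {
                    fired = true;
                    new AlertDialog.Builder(context).setMessage("Hello World").show();
                }
            }

            @Override
            public void onAccuracyChanged(Sensor sensor, int accuracy) { }
        }

    Reliably distinguishing "car pulling away" from walking or a bus is the hard part; that takes tuning against recorded traces, as the poster's own profiling suggests.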

    Read the article

  • Friday Fun: Favorite Games to Play in Chrome

    - by Asian Angel
    Online games can provide a perfect break while you are working, and being able to choose from a multitude of games makes it even better. If you are a game addict, then you will definitely want to have a look at the Game Button extension for Chrome. Game Button in Action: Once the extension has finished installing, you are ready to enjoy all that gaming goodness. To get started, just click on the “Toolbar Button” and choose a game category. For our example we chose “Shooting Games”. Once you select a game category, a new window will open. Towards the lower right corner you will be able to access a scrollable drop-down menu and choose the game that you would like to play. Note: Some of these games come with sounds that cannot be turned off, so you may want to have the volume lowered all the way or your speakers temporarily turned off if you are at work. For our first game we chose “Snowball Throw”. Notice that there is a nice variety, ranging from “DinoKids – Archery” to games like “Secret Agent”. You can see that our game was nicely sized…not too small and not too large. Go go snowballs! This is definitely a fun one to try…the best approach is to use one hand for clicking the mouse and the other hand for moving it at the same time. If desired, you can post your score and see other high scores afterwards. For our second game we decided to try “Target Shooter Firing Range”. This one is definitely a little harder because you have to be extremely precise while moving as quickly as possible. Our score was not too impressive, but that is OK. You will certainly be able to have fun finding the games that will become your favorites while enjoying the nice variety. Conclusion: If you love online games and want a good variety to choose from, then the Game Button extension will make a nice addition to your browser. Links: Download the Game Button extension (Google Chrome Extensions)

    Read the article

  • Upgrade Exchange 2003 to Exchange 2010

    In this article, the first of two in which Jaap describes how to move from Exchange Server 2003 straight to Exchange Server 2010, he shows what is required before moving mailboxes from Exchange Server 2003 to Exchange Server 2010. He shows how to upgrade Active Directory, install both Exchange Server 2010 and certificates, and set up the Public Folder replication.

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed: Five Things about Books of Business As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data. Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance. Data Access QA from the Forums I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours! We Need to Talk About Adoption Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on. Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • How Visual WebGui helps ASP.NET Cloud-based apps

    - by Visual WebGui
    Everyone is talking about Cloud computing and moving to the cloud (public or private), but very few have actually done it so far. The reason is that the process of migrating existing applications to the cloud is a lot more complicated than one might think, which is exactly where the Visual WebGui technology comes to the rescue. In the past year the Visual WebGui R&D team has been intensively working on a tool-based solution that gives Microsoft application developers and enterprises a simpler...(read more)

    Read the article

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late.  Innovation in this area will help both consumers and retailers, but probably hurt the banks (at least that's the plan).  Here are four recent news items in this area:

    Dwolla, a start-up in Iowa, is trying to make credit cards obsolete.  Twelve guys in Des Moines are using the $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees.  Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the banks' ACH network.  They are signing up banks and merchants, targeting B2B and C2B as well as P2P payments.  They leverage social networks to notify people they have a money transfer, and they also have a mobile app that uses GPS location. However, all is not rosy.  There have been complaints about unexpected chargebacks, and with debit fees being reduced by the big banks, the need is not as pronounced.  The big banks are working on their own network called clearXchange that could provide stiff competition.

    VeriFone just bought European payment processor Point for around $1B.  By itself this would not have caught my attention except for the fact that VeriFone also announced the acquisition of GlobalBay earlier this month.  In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools and with Point they get a very big payment processing platform.

    MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard's new payment technology.  Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier to consumer adoption.  Everyone is scrambling to get their piece of cash transactions, which still represent 85% of all transactions.

    Apple was awarded another mobile payment patent, further cementing the rumors that the iPhone 5 will support NFC payments.  As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple.  With Apple's vast number of iTunes accounts, they have a ready-made customer base to use the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH.  Gary Schwartz explains the three-step process Apple is taking to become a payment processor.

    Below is a picture I drew representing payments in the retail industry. There's certainly a lot of innovation happening.

    Read the article

  • Bad Screen Flicker from video recording of recordmydesktop

    - by Tarun
    I have Ubuntu 11.10 and I installed recordmydesktop. Video recordings from recordmydesktop always result in screen flicker. In the recording I see half of the screen moving forward while the other half is stuck. I checked the settings and "Frames per Second" is set to 15. One such recording is available here - http://www.youtube.com/watch?v=QafF44m2Ttk&feature=youtu.be I am quite new to Ubuntu and not sure what is wrong.

    Read the article

  • CEO Is the New CRM

    - by andrea.mulder
    Danny Rippon launched his blogging career last week with The Marketer, outlining how CRM has evolved from managing customer data to 'CEM' - Customer Experience Management - and how, for true market leaders, it is moving towards 'CEO' - Customer Experience Optimisation. Or, as we like to say here in the States, Customer Experience Optimization (with a "z"). Click here to hear Danny's thoughts on why CEO Is the New CRM.

    Read the article

  • What's the recommended way to configure a Synaptics touchpad device?

    - by htorque
    I want to increase the scroll area by moving the so-called RightEdge a bit towards the middle. Right now I'm doing this via a one-liner that's called at session start (added via gnome-session-properties):

        xinput --set-prop --type=int --format=32 11 252 1781 5125 1646 4582

    This works fine, but feels like a hack. What's the recommended way to edit/set touchpad device properties like this one? A few years ago I'd have put this into xorg.conf, but that seems to be discouraged nowadays.
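    On current servers the usual answer is an InputClass snippet in xorg.conf.d rather than xorg.conf itself. A sketch, assuming the synaptics driver and that the four values in the xinput call above map to the left/right/top/bottom edges:

        # /etc/X11/xorg.conf.d/50-synaptics-edges.conf (path varies by distro)
        Section "InputClass"
            Identifier "touchpad edge override"
            MatchIsTouchpad "on"
            Driver "synaptics"
            # Pull RightEdge toward the middle to widen the scroll area
            # (5125 is the second value set by the xinput one-liner above).
            Option "RightEdge" "5125"
        EndSection

    For experimenting without restarting X, synclient RightEdge=5125 changes the same setting on the live session.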

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money to rent at a data-center or to build out at a local facility. In some cases, this can be a catalyst for evaluating options to remove this infrastructure requirement entirely by moving to a distributed computing environment. Implementation: There are three main options for moving to a distributed computing environment. Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft’s Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes. Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to purchase not only the software license but the use of it on the vendor’s servers. Microsoft’s Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice. Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side-benefit to using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms. No one solution fits every situation. It’s common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that’s a great advantage to this form of computing - choice. References: 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0  Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • Do I need to match hardware on a Mac to my PC to get the same user experience?

    - by Darth
    I've been playing around with the thought of moving from a PC to a Mac. (If you don't want to read the background, skip to "Upgrade options".)

    My current setup

    I spend most of my time moving back and forth between Linux and Windows. During the last upgrade to Vista, I got myself a PC with a Core 2 Quad, 8GB of RAM and a GeForce 9800GTX+. Currently I'm dual-booting Ubuntu 10.04 and Windows Vista x64. Most of my work, around 80%, I can do on Ubuntu, which is mostly Ruby/Java programming. If that was all I needed, Ubuntu would be really great. However, I also do quite a lot of photography and design, which forces me to use Adobe software (not only Photoshop). I also work with a Wacom Intuos4 tablet, which doesn't really have great support on Linux machines. I've tried virtualization both ways (Linux in Win and Win in Linux), but neither was anywhere near satisfying. These are a few of the many reasons I want to move to OS X.

    Upgrade options

    This is how I see my upgrade options: Mac Mini - cheapest solution, but worst performance; iMac - more expensive, better performing, with a second LCD for free; Mac Pro - could match my current PC performance, but currently outside my price range. When I compare the Mac hardware to my current PC, it will always be worse unless I decide to pump in a lot of money. The question that comes to mind: do I need to match my current PC hardware to get the same user experience with a Mac? If I look at it from the Vista point of view, 2GB RAM is as low as it gets, 4GB is usable ... and 8GB runs very smoothly.

    PC HW != Mac HW?

    If I bought a Mac Mini for roughly the same price I paid for my PC (Core 2 Quad with 8GB RAM), I'd get a Core 2 Duo with 4GB RAM. But I don't want to run Vista on it, so I can't compare the hardware directly. Say that I want to do the same things on the Mac Mini as I do on my PC, e.g. open up 50 tabs in Google Chrome and start working with a large PSD in Photoshop (a couple hundred MB). Would running Mac OS X compensate for the lower hardware performance? My point is that if I'm about to upgrade, I wouldn't like to upgrade to hardware that runs a lot slower. A good analogy for this is Vista vs Ubuntu, where you can run Ubuntu smoothly on a low-end laptop, but in Vista you'd be happy to open a browser. Does the same principle apply to OS X?

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And what if I need the file system, why can't I simply get a virtual filesystem? To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with an accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files to download by customers, very simple. You have a couple of options to host these websites: Buy a server, place it in a rack at an ISP and run the sites on that server. Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as are the files stored, and the databases are hosted on the same server as the other shared databases. Hire a VM, install your OS of choice at an ISP, and host the sites on that VM - basically the same as the first option, except you don't have a physical server. Or, at some cloud vendor, either host the sites 'shared' or in a VM, see above. With all of those options, scalability is a problem, even the cloud-based ones, though not due to the same reasons: The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead. Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions. The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs, however that too introduces the same overhead problems as with the physical servers: suddenly more than 1 instance runs your sites. If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there: when you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition - they can't do that anymore the way they did on a single machine: state is something of the past, you have to store every byte of state in either a DB or in a viewstate or in a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in-memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM so we don't have to store the files on each VM.

    This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be able to be used 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should offer simply one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned), I don't care, as that's a problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, like I bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all: I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about - is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system; however, this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question: why doesn't Windows Azure offer virtual appdomains? Or better: .NET environments which look like one machine but could be physically multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed - it stretches. What it offers to the application running inside the appdomain is simply increasing, but not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through a .NET ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that.

    In other words: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it was a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring needed of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all; it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it: almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here, and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.
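    (As an aside: for readers wondering what the "store every byte of state" refactoring looks like on the ASP.NET side, the session-state part at least is mechanical. A sketch - it assumes a session database has already been provisioned with aspnet_regsql.exe, and the server name is a placeholder:)

        <!-- web.config: move session state out of the instance's memory so any VM
             can serve any request; "dbserver" is a placeholder. -->
        <system.web>
          <sessionState mode="SQLServer"
                        sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI;"
                        cookieless="false"
                        timeout="20" />
        </system.web>

    (Everything kept in static fields, caches and the file system still has to be hunted down by hand, which is exactly the post's point.)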

    Read the article

  • SQL Server v.Next (Denali) : More on contained databases and "contained users"

    - by AaronBertrand
    One of the reasons for contained databases (see my previous post ) is to allow for a more seamless transition when moving a database from one server to another. One of the biggest complications in doing so is making sure that all of the logins are in place on the new server. Contained databases help solve this issue by creating a new type of user: a database-level user with a password. I want to stress that this is not the same concept as a user without a login , which serves a completely different...(read more)
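    For the curious, the syntax involved is short. A sketch against the Denali CTP, with the database and user names hypothetical:

        -- Allow contained authentication at the instance level
        EXEC sp_configure 'contained database authentication', 1;
        RECONFIGURE;
        -- Denali supports partial containment
        ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;
        GO
        USE SalesDB;
        -- A database-level user with a password: no server login required
        CREATE USER AppUser WITH PASSWORD = 'StrongP@ssw0rd!';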

    Read the article

  • SQL Server v.Next (Denali) : OS compatibility & upgrade support

    - by AaronBertrand
    Microsoft's Manageability PPM Dan Jones has asked for our feedback on their proposed list of supported operating systems and upgrade paths for the next version of SQL Server. (See the original post ). This has generated all kinds of spirited debates on twitter, in protected mailing lists, and in private e-mail. If you're going to be involved in moving to Denali, you should be aware of these proposals and stay on top of the discussion until the results are in. (The media are starting to pick up on...(read more)

    Read the article

  • How To Use Regular Expressions for Data Validation and Cleanup

    You need to provide data validation at the server level for complex strings like phone numbers, email addresses, etc. You may also need to do data cleanup / standardization before moving it from source to target. Although SQL Server provides a fair number of string functions, code developed with these built-in functions can become complex and hard to maintain or reuse.
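    For a taste of what the built-in functions can and cannot do: LIKE and PATINDEX support character classes but not quantifiers or alternation, which is why the patterns get verbose. A sketch for a ten-digit US phone number - the variable and pattern are illustrative only:

        -- Exactly ten digits passes; anything longer, shorter or non-numeric fails.
        DECLARE @phone varchar(20) = '5551234567';
        SELECT CASE
                 WHEN @phone LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
                 THEN 'valid' ELSE 'invalid'
               END AS phone_check;

    A real regular expression (typically exposed via a CLR function, which is the approach articles like this one cover) collapses that pattern to ^[0-9]{10}$.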

    Read the article

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
    Overview: I'm currently using a method which, as has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm also now looking into the possibility of using another method which is based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5, this is (visually) how I understand it should affect my sprite's position. This is how I'm obtaining an interpolation value:

        public void onDrawFrame(GL10 gl) {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            interpolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
            render(interpolation);
        }

    I am then applying it like so (in my rendering call):

        render(float interpolation) {
            spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX;
            spritePreviousX = spriteScreenX; // update and store this for next time
        }

    Results: This unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work, and I honestly can't find any decent resources which explain it in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward). And yet this (interpolation) is moving the sprite back, so how can this produce smooth results? Any advice on this would be very much appreciated.

    Edit: I've implemented the code from OriginalDaemon's answer like so:

        @Override
        public void onDrawFrame(GL10 gl) {
            newTime = System.currentTimeMillis() * 0.001;
            frameTime = newTime - currentTime;
            if (frameTime > (dt * 25))
                frameTime = (dt * 25);
            currentTime = newTime;
            accumulator += frameTime;
            while (accumulator >= dt) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                previousState = currentState;
                t += dt;
                accumulator -= dt;
            }
            interpolation = (float) (accumulator / dt);
            render();
        }

    Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop) - however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging; it is logging as I would expect (the interpolated position does appear to be in between the previous and current positions) - however, the sprites are most definitely choppy when the render() skipping happens.
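    For comparison, here is the shape the fix usually takes: the previous position is snapshotted only inside the fixed-step update, and render() blends the two states without ever writing the result back. A minimal sketch - the class, field names and velocity value are mine, not from the question:

        // Sketch: snapshot 'previous' in the update step; render only reads.
        class SpriteInterpolationSketch {
            float spriteCurrentX = 0f;   // authoritative position, advanced in updateLogic()
            float spritePreviousX = 0f;  // position as of the previous update tick
            float velocityX = 120f;      // pixels per second (made-up value)

            void updateLogic(float dt) {
                spritePreviousX = spriteCurrentX; // snapshot BEFORE advancing
                spriteCurrentX += velocityX * dt; // fixed-step simulation
            }

            float renderX(float interpolation) {
                // Blend between the two known states. Storing this result back into
                // spritePreviousX (as the question's render() does) corrupts the
                // history and cancels the smoothing out.
                return spritePreviousX + (spriteCurrentX - spritePreviousX) * interpolation;
            }
        }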

    Read the article

  • Social Targeting: Who Do You Think You’re Talking To?

    - by Mike Stiles
    Are you the kind of person that tries to sell Clay Aiken CDs outside Warped Tour concert venues? Then you don’t think a lot about targeting your messages to the right audience. For your communication to pack the biggest punch it can, you need to know where to throw it. And a recent study on social demographics might help you see social targeting in a whole new light. Pingdom’s annual survey of social network demographics shows us first of all that there is no gender difference between Facebook and Twitter. Both are 40% male, 60% female. If you’re looking for locales that lean heavily male, those would be Slashdot, Hacker News and Stack Overflow. The women are dominating Pinterest, Goodreads and Blogger. So what about age? 55% of tweeters are 35 and up, compared with 63% at Pinterest, 65% at Facebook and 70% at LinkedIn. As you can tell, LinkedIn supports the oldest user base, with the average member being 44. The average age at Facebook is 51, and it’s 37 at Twitter. If you want to aim younger, have you met Orkut yet? 83% of its users are under 35. The next sites in order as great candidates for the young market are deviantART, Hacker News, Hi5, Github, and Reddit. I know, other than Reddit, many of you might be saying “who?” But the list could offer an opportunity to look at the vast social world beyond Facebook, Twitter and Google+ (which Pingdom did not include in the survey at all due to a lack of accessible data). As for the average age of social users overall: 26% are 25-34; 25% are 35-44; 19% are 45-54; 16% are 18-24; 6% are 55-64; 5% are 0-17; and 2% are 65 or older. Now you know where you stand on the “cutting edge” scale for a person your age. You’re welcome. Certainly such demographics are a moving target and need to be watched and reassessed on a regular basis to make sure you’re moving in step with the people you want to talk to. For instance, since Pingdom’s survey last year, the age of the average Facebook user has gone up 2 years, while the age of the average Twitter user has gone down 2 years. With the targeting and analytics tools available on today’s social management platforms, there’s little need to market in the dark. Otherwise, good luck with those Clay CDs.

    Read the article

  • BDD using SpecFlow on ASP.NET MVC Application

    - by Rajesh Pillai
    I usually love doing TDD and am moving towards understanding BDD (Behaviour Driven Development). My learnings are documented in the form of an article at CodeProject. The URL is http://www.codeproject.com/KB/architecture/BddWithSpecFlow.aspx I will keep this updated as and when I learn a couple more things. Hope you like it. Cheers !!!

    Read the article

  • I’ve moved out…

    - by Michael Cummings
    Originally posted on: http://geekswithblogs.net/Mathoms/archive/2013/06/21/irsquove-moved-outhellip.aspx GeeksWithBlogs has been a great property ever since I decided to start blogging; however, I have outgrown it and am moving to a new location. Please visit me at http://michaelcummings.net from now on. The RSS feed has been updated, so that should automatically point to the new address. I’ll miss GWB, but my new property is hosted on Azure using Orchard and I have been really enjoying it so far.

    Read the article

  • export emails from open-xchange webmail and import them to horde webmail

    - by sam
    I'm moving hosting and need to move the emails stored with the old domain/hosting. The hosting I want to transfer from uses Open-Xchange for its webmail, and the hosting I want to move to uses Horde for its webmail. Is there an automated way from inside of the Open-Xchange webmail, or is there a way I can do it from inside of Outlook / Mac Mail? Not sure if this is the right place for this; if not, please advise.
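    One common route, assuming both hosts expose IMAP (most webmail frontends, Open-Xchange and Horde included, sit on top of an IMAP server): sync the mailboxes directly with a tool like imapsync. Hostnames and credentials below are placeholders:

        # Copy all folders from the old IMAP server to the new one
        imapsync --host1 imap.oldhost.example --user1 you@olddomain.com --password1 'oldpass' \
                 --host2 imap.newhost.example --user2 you@newdomain.com --password2 'newpass'

    The manual alternative is exactly the Outlook / Mac Mail idea: add both accounts over IMAP and drag the folders from one account to the other.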

    Read the article

  • Long Running Service Request or COLD CASE?

    - by chris.warticki
    What's going on? Why is it taking so long? Is anyone out there? Resolving Service Requests can seem to take forever. If your Service Request is taking more than a few days, moving into weeks or months, here are a few things to consider.  Details here.  Comments welcome. -Chris Warticki twittering @cwarticki Join one of the Twibes - http://twibes.com/OracleSupport or http://twibes.com/MyOracleSupport

    Read the article
