Search Results

Search found 93603 results on 3745 pages for 'one to many'.

Page 18/3745

  • How to log in as another user in Ubuntu 12.04

    - by murali
    I have upgraded my Ubuntu 11.10 to 12.04. I can no longer see the "Other" login option on the login screen; it shows only a Guest login and a User login. The User login asks only for a password, and since I have never logged in that way, I do not know its password. My problem is: how do I log in as root from the login screen, and how can I get the "Other" option back so I can log in as root or some other user? Before asking this question I tried the following: I tried to add the line greeter-show-manual-login=true at the bottom of /etc/lightdm/lightdm.conf from the Guest login, but I got an access-denied error (and I do not know the User login's password, so I cannot add the line that way either). From a safe-mode login I could log in as root, but I could not edit lightdm.conf: I got a read-only error, so I tried to change the permissions with chmod 777 lightdm.conf (from within /etc/lightdm/), but I got an error saying the file system is mounted read-only. In 11.10 I had created 4 users, and I can see those users still exist in 12.04, so I am sure they were not removed during the upgrade. In short, I need the "Other" login option on my login screen; how do I get it? Please help me. Edited question: I have added the line greeter-show-manual-login=true to /etc/lightdm/lightdm.conf in recovery mode and saved the file with the :wq command. My /etc/lightdm/lightdm.conf now looks like this: [SeatDefaults] greeter-session=unity-greeter user-session=ubuntu greeter-show-manual-login=true. If I have made any mistake, please correct me. This problem has cost me two working days and all my work is pending... please help me.
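
    For reference, this is the full sequence I plan to run from the recovery-mode root shell (a minimal sketch based on the steps above; the remount is needed because recovery mode mounts the root file system read-only, which is what caused my chmod error):

        # Remount the root file system read-write (recovery mode mounts it read-only)
        mount -o remount,rw /
        # Append the option that brings back the manual "Other" login
        echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf
        reboot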

    Read the article

  • How one decision can turn web services to hell

    - by DigiMortal
    In this posting I will show you how one stupid decision can turn developers' lives into hell. There is a project where a bunch of complex applications exchange data frequently, and it is very hard to change anything without additional expense. Well, one analyst thought that strings are the silver bullet of web services. Read what happened. Bad, bad mistake. In the early stages of the integration project there was an analyst who also established the architecture and technical design of the web services. This analyst made one very bad mistake: all data must be converted to strings before exchange! Yes, that's correct, this was the requirement: all integers, decimals and dates come in and go out as strings. There was also an explanation for this requirement: this way we can avoid data type conversion errors! Well, this guy already works somewhere else, and I hope he works in some burger restaurant – far away from computers. Consequences. At first sight this requirement may look like a small annoyance you can easily survive, but let's see the real consequences one stupid decision can cause: a huge load of data conversions must be done by the receiving applications and SSIS packages; the SSIS packages are not protected against errors and depend heavily on the strings they get from the different services; more than one format per type is in use across the different services; for larger amounts of data all these conversion tasks slow down the integration packages; and practically all developers have at some point hurried through an SSIS import task, so some fields that are not used in calculations in the SSAS cube are imported without data conversion (for example, some prices are strings in the format "1.021 $"). The most painful problem for developers is the data conversion part, because they do not expect such a stupid requirement to exist and therefore cannot estimate how long their tasks on these web services will take. Developers must also be prepared for cases when some service suddenly sends data in an unacceptable format and the problem must be solved ASAP. This puts an unexpected load on developers, and they are not very happy with it, because they cannot understand why they have to live with this horror if it is possible to fix. What to do if you see something like this? Explain the problem to the customer and demand that dedicated tasks be added to the project schedule to get this mess solved before going on with new development. It is cheaper to solve the problem now than later.

    Read the article

  • Google+1 button strategy - Combined +1s or separate +1s?

    - by nctrnl
    I have included the Google +1 button on my blog. Each post outputs a +1 button at the bottom. Depending on whether you are viewing the actual post or just the main page, the +1 button will "+1" either the post address or the blog's website address. This made me think for a bit: should the +1 button be configured to +1 the blog section (www.example.org/blog), +1 the main website address (www.example.org), or +1 individual posts?
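
    If it helps frame the question, the button can be pinned to one canonical URL instead of defaulting to whatever page it happens to render on (a sketch assuming the standard +1 tag; the href below is a placeholder):

        <!-- Always +1 the individual post, even when the button is shown on the main page -->
        <g:plusone href="http://www.example.org/blog/my-post/"></g:plusone>
        <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>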

    Read the article

  • U1 music mp3 files not put into albums

    - by david
    Via the web page I can see that my files sync to the U1 cloud servers. For the mp3 files there seems to be a problem that several questions have already addressed, but there does not seem to be a clear answer. If I use EasyTAG 2.1.6, I can see the ID3 tags on the local files, and they seem to correctly define the artist, album title and track name. I expect it is not relevant, but I am using 10.04 and have used several different clients to rip the CDs. However, some mp3 files do not appear in the cloud at all, and some others get assigned to "Various Artists" or an unknown artist. Does the music streaming (e.g. via iPad) use the tags or the directory/file structure to assign the artist or album, and how quickly should it be expected to work? :-) Which version of ID3 tags does U1 music streaming prefer or work best with? Thanks for any help. David
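
    In case the tag version matters, I am considering normalising everything to ID3v2.3 first (a hedged sketch assuming the eyeD3 0.6.x command-line tool available on 10.04; flag names may differ in other versions):

        # Rewrite every tag in the collection as ID3v2.3 (the most widely supported revision)
        find ~/Music -name '*.mp3' -exec eyeD3 --to-v2.3 {} \;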

    Read the article

  • Google author information still hasn't displayed my details in search results

    - by Jayapal Chandran
    I followed the instructions below, but I am still not sure I completely understood them. http://www.google.com/support/webmasters/bin/answer.py?answer=1408986 http://www.labnol.org/internet/author-profile-in-google/19775/ I did the above last week and I still do not find my picture in Google search results. First I added a Google+ link to certain web pages, and in my Google+ profile I added those pages that carry the Google+ anchor link with the rel=author tag. After updating I used the following to verify: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fvikku.info%2Fcodesnippets%2Fphp%2F&view= You can see that my picture appears on the right (here is a screenshot). So, what am I missing? Why is it not in the search results? The author of labnol.org said it would take 3 days for my profile photo to appear...? Google has stated the following: Note that there is no guarantee that a Rich Snippet will be shown for this page on actual search results. For more details, see the FAQ ( http://knol.google.com/k/google-rich-snippets-tips-and-tricks#Frequently_Asked_Questions ). Fingers crossed.
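
    For completeness, this is the linking pattern I used on the article pages (a sketch; the profile ID is a placeholder), paired with the page being listed under "Contributor to" on the Google+ profile:

        <!-- On each article page: link the byline to the Google+ profile -->
        <a rel="author" href="https://plus.google.com/XXXXXXXXXXXXX">Jayapal Chandran</a>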

    Read the article

  • Java EE 7 turns one today!

    - by delabassee
    "Tell me and I forget. Teach me and I remember. Involve me and I learn." (Benjamin Franklin) Today marks the first year anniversary of Java EE 7. The JSR 342 specification was finalised on May 28, 2013 with the official launch taking place on June 12, 2013 (original press release). As of today, there are already 3 Java EE 7 compatible Application Servers, coming from different 'vendors' (Oracle, TmaxSoft and Red Hat). Two of those Java EE 7 Application Servers are free and open source. We expect the list of Java EE 7 compatible Application Servers to grow over the coming months. Source: RebelLabs - 'Java Tools and Technologies Landscape for 2014' According to a recent independent survey, one third of the Java EE users who participated in that survey is already using Java EE 7. This is a good sign but it also means that a lot of people are not yet on Java EE 7. So if you haven't yet embarked on Java EE 7, now is really the time to do so! There are various ways to learn Java EE 7, in no particular order ... Continue to read The Aquarium. Through this blog, we are relaying Java EE news but we are also doing our best to highlight relevant technical contents such as articles, community tutorials, etc. Watch the GlassFish YouTube channel. Amongst others, it contains the different videos of the Java EE 7 launch, those videos will give you good technical update on Java EE and its different components specifications (JMS 2.0, JAX-RS 2.0, EJB 3.2, etc.) Take a formal training. Oracle University is starting to roll-out Java EE 7 trainings like the 'Java EE 7: New Features' class.  Attend conferences and JUGs sessions. On that note, we have spent a lot of time to create a strong JavaOne 'Server-Side Java' track. It's still possible to benefit from the early bird JavaOne pricing but don't wait too much! Read books. There are more than 25 (!) books related to Java EE 7 or to one of the Java EE 7 component specification.  There are many more ways to learn Java EE but if I have to suggest one and only one way, I would recommend the Java EE 7 Tutorial. It's exhaustive and clear, it's free and it continues to evolve. And finally as the introductory quote suggest, participation is key to learning. Participate in JUGs,  participate in Adopt-a-JSR, get involved in the different open source communities evolving around Java EE, participate in the JCP... in one word, participate!

    Read the article

  • HTML5 valid Google+ Button - Bad value publisher for attribute rel

    - by mrtsherman
    I recently migrated my website from XHTML Transitional to HTML5, specifically so that I could make use of valid block-level anchor tags: <a><div /></a>. When running validation I encountered the following error: Bad value publisher for attribute rel on element link: Keyword publisher is not registered. But according to this page, that is exactly what I am supposed to do: https://developers.google.com/+/plugins/badge/#connect My code: <link href="https://plus.google.com/xxxxxxxxxxxxxxxx" rel="publisher" /> <a href="https://plus.google.com/xxxxxxxxxxxxxxx?prsrc=3" style="text-decoration:none;"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" alt="" style="border:0;width:16px;height:16px;"/> </a> I can't figure out how to implement this in an HTML5-compliant way. Can anyone help?
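
    The workaround I am experimenting with (a hedged sketch; rel="publisher" appears to be registered for <a> but not for <link>, which is what trips the validator) is to move the publisher link onto an anchor:

        <!-- Validates as HTML5: publisher on an <a> element instead of <link> -->
        <a href="https://plus.google.com/xxxxxxxxxxxxxxxx" rel="publisher">Google+</a>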

    Read the article

  • magic trackpad - Ubuntu

    - by UbuntuGuy
    I've been using a Mac at my job for a while now. The only feature I like about it more than my Ubuntu machine (an HP) is the trackpad. I love using the multi-finger strokes to move between different files; it really makes things quicker. Is it possible to imitate this feature on my Ubuntu laptop (maybe with something that utilizes the laptop's touchpad and its scroll area)? If that is impossible or doesn't exist, can I set up a Magic Trackpad with Ubuntu on my HP?

    Read the article

  • Evolution has no access to CouchDB

    - by berkes
    Evolution gives the error "Cannot open addressbook": "We were unable to open this addressbook. This either means you have entered an incorrect URI, or the server is unreachable. Details: Operation not permitted." (rough translation from Dutch). Enabling verbose logging in (desktop)couchdb tells me roughly the same: [info] [<0.7875.1>] 127.0.0.1 - - 'PUT' /contacts/ 400 [debug] [<0.7875.1>] httpd 400 error response: {"error":"invalid_consumer","reason":"Invalid consumer (key or signature method)."} It seems that Evolution tries to fetch the contacts, couchdb denies access, and Evolution then fails to do a proper OAuth handshake. This is on Ubuntu 10.10, with its default desktopcouch 1.0.1. Any hints on where to start would be most appreciated :)
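
    In case it helps with debugging, the couch instance can be inspected directly (a hedged sketch, assuming the desktopcouch D-Bus interface of this era; paths and interface names may differ):

        # Ask desktopcouch for the randomized port its CouchDB listens on
        dbus-send --session --dest=org.desktopcouch.CouchDB \
            --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
        # Then browse http://localhost:<port>/_utils to check the contacts database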

    Read the article

  • Can't open Evolution CouchDB address book

    - by Amanda
    Unable to open address book: "This address book cannot be opened. This either means that an incorrect URI was entered, or the server is unreachable." I tried the solution (and suggestions) in "Evolution has no access to CouchDB", but that isn't working for me. I tried stopping desktopcouch-service and deleting my access keys, and now the error says: "Unable to open address book. This address book cannot be opened. This either means that an incorrect URI was entered, or the server is unreachable. Detailed error message: Address Book does not exist." Do I need to create my address book anew?

    Read the article

  • How to fix an error when authenticating in U1

    - by Yann
    Like other users, when authenticating in U1 I get this error: Method "CreateItem" with signature "a{sv}(oayay)b" on interface "org.freedesktop.Secret.Collection" doesn't exist. And in /var/log/auth.log: gnome-keyring-daemon[4793]: egg_symkey_generate_simple: assertion `iterations >= 1' failed gnome-keyring-daemon[4793]: couldn't prepare to write out keyring: /home/yann/.gnome2/keyrings/login_1.keyring So I tried to move "login.keyring". I entered no password for security when logging into U1 and accepted the unsecured way to save passwords, and a message told me the operation completed successfully (in French). But sync did not work, and u1sdtool --status returned this: State: AUTH_FAILED connection: With User With Network description: auth failed is_connected: False is_error: True is_online: False queues: WORKING I'm sad U1 doesn't work anymore; it was really useful to me. Can you help me? Ubuntu 11.10
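
    What I plan to try next (a hedged sketch assuming the keyring paths from the log above; this discards saved passwords, which then have to be re-entered):

        # Stop the sync daemon, move the broken keyrings aside, then log out and back in
        u1sdtool --quit
        mv ~/.gnome2/keyrings/login.keyring ~/.gnome2/keyrings/login.keyring.bak
        mv ~/.gnome2/keyrings/login_1.keyring ~/.gnome2/keyrings/login_1.keyring.bak
        # After logging back in (a fresh keyring is created), reconnect:
        u1sdtool --connect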

    Read the article

  • How do I recover files from the U1 cloud?

    - by MarkWW
    OK, I have just stopped syncing a folder. This appears to have been a huge mistake: I did not realise the folder would disappear from my cloud storage. I assume (and hope) the sub-folders and files it contained still exist in the cloud, because they no longer exist on my computer. I did not delete these files from the cloud (I have tried to recover deleted files; none are being recovered). Where are they, and how do I get them back into My storage?

    Read the article

  • How to access Ubuntu One when it asks for a default keyring that has never been set?

    - by obu-tim
    I am trying to set up Ubuntu One on a new computer, and after I enter the email and password, it asks for the keyring 'default'. I don't know what that is, and I never set it; it makes the setup impossible to complete, so it seems to be a counterproductive security default. I understand that if autologin is enabled, the keyring gets invoked. I tried setting the main user to require a password, but after a reboot it doesn't ask for the password, so it still sort of auto-logs in. So how do I set the 'default' keyring password? If I can't set it, I can't install Ubuntu One.
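
    The route I am trying (a hedged sketch assuming the stock GNOME keyring tools) is to manage the keyring password through Seahorse:

        # Install the keyring GUI and open it
        sudo apt-get install seahorse
        seahorse &
        # In "Passwords and Keys": right-click the "default"/"login" keyring
        # and choose "Change Password" (an empty password unlocks it silently)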

    Read the article

  • I removed my WinXP machine's association to U1 from within the Windows client, and now I can't get it to work anymore

    - by Andrea
    After testing the Windows client for U1 sync, I decided to test the preferences settings and tried to remove the association for the WinXP station from which I was working. Now I can start the client, but if I try to open the preferences settings the application stops. I tried to uninstall and reinstall, but that doesn't change the situation: apparently the old settings are kept even after a total reinstall.

    Read the article

  • About my main project

    - by user207365
    My project is to build an AI (artificial intelligence) system, for which I am planning to use either a Raspberry Pi or an Intel i3/i5 processor. The Raspberry Pi is small and efficient, but I don't know whether it can support a 2 TB or larger external hard disk. With Intel I can have an internal hard disk and an external one at the same time, and it will be faster with 2 GB or 4 GB of RAM. Which is better, the Raspberry Pi or Intel, and is it possible to run my Ubuntu on the Intel processor? The main reason for choosing the processor is to give the system decision-making, understanding and analysis capabilities using different algorithms: my processor should analyse the conditions and take the proper steps to run the appropriate application.

    Read the article

  • Synchronizing (updating) published files

    - by MucMug
    How can I update a published file and keep the same URL? After saving an update to a published file on my desktop, it automatically "synchronises" with the corresponding file at Ubuntu One (and it does). The problem is that the "new file", actually the updated file with the same name, is no longer published, and pressing the publish button results in a new URL. I now have to mail new URLs and change embedded links, as the old URL will fail to find the updated file (or indeed any file). I am not sure if this is a bug or a design flaw (maybe intentional?), but it seems strange to me.

    Read the article

  • Android Nexus One - Can I save energy with a color scheme?

    - by Max Gontar
    Hi! I'm wondering which color scheme is more energy-saving for an AMOLED display. I've already decided to manage the color scheme according to ambient light, thanks to this post: Somewhat-proof, the link posted by nickf: Ironic Sans: Ow My Eyes. If you read that in a well-lit room, the black-on-white will be the most pleasant to read. If you read it in a dark room, the white-on-black will be nicer. But if I want to save battery power, should I use bright content on a light background or vice versa? And is it possible at all (they say it's not)? Thanks!

    Read the article

  • Spring-Hibernate: How to submit a form when the object has one-to-many relations?

    - by Czar
    Hi, I have a form that changes the properties of my CUSTOMER object. Each customer has related ORDERS. The ORDERS table has a column customer_id which is used for the mapping. All works so far; I can read customers without any problem. But when I now change, for example, the name of the CUSTOMER in the form (which does NOT show the orders), the name is updated after saving, but all relations in the ORDERS table are set to NULL (the customer_id for the items is set to NULL). How can I keep the relationship intact? THX
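
    For context, the mapping shape I am trying to get to (a hedged annotation-style sketch; class and column names are assumed from the description above) keeps ORDERS.customer_id owned by the Order side, so that saving a Customer does not rewrite it:

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.*;

        @Entity
        public class Customer {
            @Id @GeneratedValue
            private Long id;
            private String name;

            // Inverse side: the foreign key lives on Order.customer, so updating
            // only the customer's name never touches ORDERS.customer_id
            @OneToMany(mappedBy = "customer", cascade = CascadeType.ALL)
            private Set<Order> orders = new HashSet<Order>();

            public void setName(String name) { this.name = name; }
        }

        @Entity
        @Table(name = "ORDERS")
        class Order {
            @Id @GeneratedValue
            private Long id;

            @ManyToOne
            @JoinColumn(name = "customer_id")
            private Customer customer; // owning side of the relation
        }

    In the controller, I understand the usual advice is to load the existing (attached) Customer and copy only the form fields onto it, instead of saving a detached Customer whose orders collection is empty, which is reportedly what causes the NULLed foreign keys.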

    Read the article

  • PowerShell One-Liner: Duplicating a folder structure in a SharePoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day whether it was possible in SharePoint to create a set of top-level folders in one document library based on the set of folders in another library. One document library has a set of top-level folders that is basically a client list, and we needed to create the same top-level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents with them, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :) dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"} I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders, and then do a foreach (using the % alias) and call "mkdir".
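
    If you need to re-run it as new clients are added, a slightly safer variant (an untested sketch of the same pipeline, same UNC paths) skips folders that already exist in the target:

        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} |
            % { $t = "\\sharepoint\admin documents\$($_.Name)"; if (!(Test-Path $t)) { mkdir $t } }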

    Read the article

  • Merge two different API calls into one

    - by dhilipsiva
    I have two different apps in my Django project: one is "comment" and the other is "files". A comment might have some files attached to it. The current way of creating a comment with attachments is by making two API calls: the first creates the actual comment and replies with the comment ID, which serves as the foreign key for the files; then, for each file, a new request is made with the comment ID. Please note that "files" is a generic app that can be used with other apps too. What is the cleanest way of turning this into one API call? I want this to be a single API call because I need to send the user an email with all the files as attachments when a comment is made. I know queueing is the ideal way to do it, but I don't have the liberty to add queueing to our stack now, so this was the only way I could think of.
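
    The shape I have in mind (a hedged Django sketch assuming Django 1.6+ for transaction.atomic; the model and field names are placeholders, since the real "files" app is generic) is a single multipart POST handled atomically:

        # views.py - combined create-comment-with-attachments endpoint (assumed names)
        import json

        from django.db import transaction
        from django.http import HttpResponse
        from django.views.decorators.http import require_POST

        from comment.models import Comment
        from files.models import File

        @require_POST
        def create_comment(request):
            with transaction.atomic():  # comment and files succeed or fail together
                comment = Comment.objects.create(
                    author=request.user, body=request.POST["body"])
                attachments = [File.objects.create(comment=comment, data=f)
                               for f in request.FILES.getlist("attachments")]
            # the notification email with attachments can be sent here, in one request
            return HttpResponse(json.dumps({"id": comment.id, "files": len(attachments)}),
                                content_type="application/json")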

    Read the article

  • Dual monitors with one above the other?

    - by Felix
    I'm using GNOME 3 and the proprietary NVIDIA drivers. I have tried to set my external monitor in nvidia-settings to be "above" my main one (it's a laptop). However, when I try to drag a window up from the main display to the external one, it gets stuck and can't move past a certain point. Trying to maximize it changes its decoration so it looks maximized (i.e. no borders, etc.), but its size and position don't change. Now, if I set my external monitor to be "to the left" of the main one, it works, which is why I suspect this is a GNOME issue, not an NVIDIA one. Does anyone know how to fix this? Update: some versions: GNOME: 3.2.2.1, NVIDIA: 280.13. Update 2: I can see that GNOME 3.4 is out, and among the release notes is better external monitor support. However, they only mention a small fix that is unrelated to my problem. Can anyone with GNOME 3.4 and access to an external monitor please test this out and tell me if it works? I don't want to go through the hassle of upgrading my Ubuntu installation unless I know for certain it's going to fix the problem.
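
    As a cross-check outside nvidia-settings (a hedged sketch; output names vary, and older TwinView-style NVIDIA drivers may not expose RandR outputs at all):

        # List connected outputs, then place the external monitor above the laptop panel
        xrandr
        xrandr --output HDMI-0 --above LVDS-0 --auto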

    Read the article

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK showing how one can attain beautiful results using just one (rather tiny) texture sent through the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene, and to top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to store a half-space normal map using only the RG channels of an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details)? Here's the texture being used.

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual file system? To illustrate my point, let's use a real example: a product website with a customer system/database and, next to it, a support site with an accompanying database. Both are written in .NET, use ASP.NET, and each uses its own SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites: buy a server, place it in a rack at an ISP and run the sites on that server; use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine as others, the files are stored there too, and the databases are hosted on the same server as the other shared databases; hire a VM, install your OS of choice at an ISP, and host the sites on that VM, basically the same as the first option except you don't have a physical server; or host the sites at some cloud vendor, either 'shared' or in a VM, see above. With all of those options, scalability is a problem, even the cloud-based ones, though not for the same reasons. The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead. Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions. The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as the physical servers: suddenly more than one instance runs your sites. If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment causes a problem: all the in-memory state they use, all the multi-page transitions that keep state across the transition, none of that works anymore the way it did on a single machine. State is something of the past; you have to store every byte of state in a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory. Our example uses a bunch of files in a file system. Using multiple VMs requires that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM.
    This might require different file paths, but that change should be minor. What's perhaps less minor is the maintenance procedure on the new type of cloud storage: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website, written for an environment based around a VM (namely .NET with its CLR), overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', and that is caused by the limited way e.g. Windows Azure offers its cloud services: in blocks of VMs. Offer a scalable, flexible VM which extends with my needs. Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how that machine is actually run on physical hardware (e.g. partitioned) is not my concern; that's the problem for the cloud vendor to solve. If I need more resources, e.g. I get more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem that comes with multiple VMs: I have no refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM, and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change is needed to make what they offer look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble on whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble on whether they should invest time in code / architecture they might never need. (YAGNI, anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is it a different kind of VM? The flexible VM: .NET's CLR? My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, and that too is accessed through .NET. In short: all the websites see is what .NET allows them to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically is the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to a website to migrate it from a local machine or its own server to the cloud and get proper scaling: the .NET VM simply scales with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This puts the problem of how to scale back where it belongs: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not the appdomain's problem: the ADO.NET provider has to solve that.
    In other words: we can host the databases in an environment which presents itself as a single resource, accessible through one connection string, without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code; upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet I think that if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all; it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of my having to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one VM to two. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step is the problem here, and as it sits right at the beginning of scaling a website, it's particularly strange that cloud vendors refuse to solve it and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Nexus One Guys…Android 2.3 update coming your way

    - by Boonei
    Good news! If you are a Nexus One customer, Google said in a tweet: "The Gingerbread OTA for Nexus One will happen in the coming weeks. Just hang tight!" Non-Nexus owners have to wait much, much longer; no one knows when their phone maker and operator will roll out the same. This article, titled "Nexus One Guys…Android 2.3 update coming your way", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article
