Search Results

Search found 7706 results on 309 pages for 'checked'.


  • Startup/Shutdown time in Xubuntu is increasing!

    - by Ankit
    I am a novice Xubuntu user on a dual-boot machine; the other OS is Windows 7. When I first began using Xubuntu, startup and shutdown were really fast (much, much faster than Windows 7 :) ). However, as I started using it more and more for my work, these times started rising. I have no problems with the execution speed of running applications; my main concern is the shutdown time, which has now gone past Windows' shutdown time (startup time has increased only partially compared to shutdown). I checked some similar questions, but they do not seem to answer my concern: the users there experience a long wait before the screen goes blue, whereas in my case the screen goes blue pretty fast (the desktop session ends and a blue screen with a moving slider appears), but then it stays blue for a long time. Another answer I saw on Google was to use dmesg and then stop some services that I do not want; however, being a novice, I could not completely understand what that meant.
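
    For reference, a minimal sketch of the dmesg/service route mentioned above, assuming a sysvinit/Upstart-era Ubuntu; the bluetooth service is purely an illustrative example, not a recommendation:

        # Look through kernel/boot messages for services that stall
        dmesg | less
        # List the services currently registered to run
        service --status-all
        # Example only: stop a service you know you don't need...
        sudo service bluetooth stop
        # ...and keep it from starting at boot
        sudo update-rc.d bluetooth disable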

    Read the article

  • Intel Atom getting too hot

    - by user59565
    I own an Asus 1215N which is getting very hot:

        Intel Atom D525 dual core
        ION 2 (GeForce 220 + GMA 3150)
        4 GB RAM
        Ubuntu 12.04

    It hits 86 °C at idle, and sometimes (under load, or after being on for an hour) it shuts down due to heat; under Windows 7 it idles at 49 °C. I tried an acpi_call to shut down the NVIDIA chip, which shares a cooler with the Atom chip, but that didn't solve the problem. To check whether it had really turned off, I measured how much power the laptop consumed: it only went from 1400 mW to 860 mW, with no change in heat. I also tried reapplying the thermal paste; with the old paste it ran at 97 °C (it couldn't even get through a usable Ubuntu install). This really annoys me, as Ubuntu is my OS of choice. Should I try compiling the kernel? Is it true that compiling for a Pentium 4 is the better choice for the Atom when picking a processor architecture for the kernel? I have now tried compiling the kernel for the Atom, and the temperature is 83 °C (I think the drop has more to do with ambient temperature than with the customized kernel). Help appreciated.
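
    For context, a hedged sketch of how the temperatures discussed above can be read on Ubuntu 12.04 (assumes the lm-sensors package; sensor names and thermal zone numbering vary by machine):

        sudo apt-get install lm-sensors
        sudo sensors-detect    # answer the prompts to probe for sensors
        sensors                # report CPU core temperatures
        # Raw kernel reading, in millidegrees Celsius
        cat /sys/class/thermal/thermal_zone0/temp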

    Read the article

  • Getting MTP to work with a Galaxy tab 2 7.0?

    - by Wouter
    I'm trying to get MTP working with the Galaxy Tab 2 7.0 on my Ubuntu installation, so that I can access its files. I tried to follow what is described here: http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access However, I fail at executing these commands from the tutorial:

        mtp-detect | grep idVendor
        mtp-detect | grep idProduct

    They fail like this:

        [20:42|0] $ mtp-detect | grep idVender
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0
        [20:44|0] $ mtp-detect | grep idProduct
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0

    My guess was that idVendor is the same as the VID (04e8) and idProduct is the same as the PID (6860), so I continued with those values and completed the tutorial. When finished, I tried android-connect, which returned:

        fuse: bad mount point `/media/GalaxyTab': Transport endpoint is not connected

    Does anybody have a clue what to do? I should also note that when I connect my Galaxy Tab 2 7.0, Ubuntu still pops up a notification that a device was connected, and I can still see the folder structure; the problem is that all the folders show 0 bytes and have no subfolders (I can only see the folders in the root). P.S. I also checked a similar question and tried what is described in this answer: http://askubuntu.com/a/88630/27480
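
    For reference, the mtpfs route in the linked tutorial boils down to something like the following sketch (assumes the mtpfs and mtp-tools packages; the mount point is the one from the question, and allow_other requires user_allow_other to be uncommented in /etc/fuse.conf):

        sudo apt-get install mtpfs mtp-tools
        sudo mkdir -p /media/GalaxyTab
        sudo mtpfs -o allow_other /media/GalaxyTab   # mount the device
        sudo umount /media/GalaxyTab                 # unmount when done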

    Read the article

  • Xfce gets really confused about session saving, etc

    - by Pointy
    I'm setting up a new laptop running Ubuntu 11.04. I've got the xfce4 packages installed, something I've had no problems with on any of my other machines. On this new laptop, however, though I can log in and use an Xfce session without any problems, logging out of a session is problematic: I click the "Log out" widget on the panel and then "Log out" in its option dialog, and the thing just sits there, not logging out. Subsequent attempts to open the "Log out" widget fail with an error about the session manager being busy. After a minute or so, it finally logs out. Though I've got the "Save session" option checked in the log-out dialog, Xfce makes a complete hash of the business: it does remember the applications I had running, but it seems to forget about the window manager (!!) and the workspace configuration. I don't log in/out that often, and generally I don't care much about restarting applications, but the window manager going missing is of course pretty annoying. I like Xfce because it's simple and unobtrusive and usually works pretty well; I've never experienced this before, and I've got two other machines also running 11.04 with pretty much the same setup (a straight Ubuntu install with the xfce4 packages added). Is there some good way to diagnose stuff like this?

    Edit: well, I nuked my session cache, did an explicit save from the session widget, and now it works. It still doesn't save the workspace location for each client and instead opens them all on the first workspace, but I think that may be because, in the saved session, xfwm4 is the last thing in the "Client" list, so before it's started all the other clients just pile up in the first (and only) workspace. I'm still curious how exactly it got so messed up; I certainly wasn't knowingly attempting anything fancy or unorthodox, though I may have done something fishy inadvertently.
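
    For anyone hitting the same thing: "nuking the session cache" amounts to clearing xfce4-session's saved state, which lives under ~/.cache/sessions. A sketch, run after logging out of Xfce (e.g. from a console):

        # Remove saved Xfce sessions so the next login starts clean
        rm -rf ~/.cache/sessions/*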

    Read the article

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status, several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new one (one partition with rsync, the other with parted's cp) and then gently swapped the old HDD for the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure the Linux box could not detect my partitions; I recovered them with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

        Read Error Rate
        ---------------
        Normalized   108
        Worst         99
        Threshold      6
        Value   16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status five minutes later. This time I got the following:

        Read Error Rate
        ---------------
        Normalized   109
        Worst         99
        Threshold      6
        Value   24792504

    As you see, the error rate has increased. I am unable to interpret these numbers. Is my new hard disk already dying? What are acceptable values in these fields for Seagate hard disks, and why is the overall assessment still good? Also, while I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks
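
    As a side note, the raw Seek/Read Error Rate values on Seagate drives are widely reported to encode total operation counts rather than error counts, which is why the normalized value versus the threshold is the number to watch. For a fuller readout than Disk Utility gives, smartmontools can be used; a sketch, where the device node /dev/sdb is an assumption:

        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sdb           # full attribute table, raw and normalized
        sudo smartctl -t short /dev/sdb     # start a short self-test
        sudo smartctl -l selftest /dev/sdb  # read the self-test results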

    Read the article

  • JPEG images not loading on PlayBook (Marmalade + iwgame)

    - by Vexille
    I'm using iwgame on a test project and was trying to render different resolutions of JPG and PNG images. Everything works fine in the Marmalade simulator, but once I deploy the game to our PlayBook and run it, only the PNG images are shown. I have declared the images in the MKB file and in the XML file iwgame uses to load them, and I've checked the deployments folder: all images are present in intermediatefiles/native. We're currently using a BlackBerry-only license, so we can only test on the PlayBook, but we do intend to get a Community license and deploy to iOS and Android devices eventually (I'm not sure whether this problem is exclusive to the PlayBook). I really don't know if this is a Marmalade or an iwgame issue. I have a different test project without iwgame and it simply won't run with JPG images (I get the error 'Could not find handler for extension "jpg"'). While searching for a solution I've seen people talking about using libjpg, but I've also read that Marmalade supposedly has integrated native JPEG support (and that, because of this, iwgame dropped its own JPEG loading support as of v0.340), so I don't know what to think. I believe I'm using the most recent versions of both: Marmalade 6.1.2 and iwgame 0.400. Also, please let me know if there's an easier or better way to do this, such as linking libjpg or something (I'm not exactly sure how to do that). I really would appreciate some help with this; there's a huge difference in size for the images we're planning to use, from a ~500 KB JPG file to a ~3.5 MB PNG file. Thanks, guys.

    Read the article

  • Google Webmaster tools Incorrect rel-alternate-hreflang implementation warning message

    - by Noam
    I'm getting this warning message in Google Webmaster Tools:

        Incorrect rel-alternate-hreflang implementation
        In particular, there seems to be a problem with missing or incorrect bi-directional linking (when page A links with hreflang to page B, there must be a link back from B to A as well).

    The message seems pretty straightforward, but when checking their example pages I can't find anything wrong. I'm using alternate for translation of the main site menu, titles, etc. Each page contains this:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" />
        <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" />
        <link rel="alternate" hreflang="th" href="http://th.mydomain.com/page" />
        <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" />
        <link rel="alternate" hreflang="pt" href="http://pt.mydomain.com/page" />

    I've double-checked that this exists on all six pages. This is the first time I've seen this message, although I implemented this at least six months ago and the implementation hasn't changed. Is there any way to check a specific set of pages for these things? Am I missing something in my implementation? We auto-redirect people from a location to their specific language and give them an option to change it manually. I've also just found out about the suggestion for a Vary HTTP header: is that relevant and important here?
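
    For what it's worth, a quick hedged way to spot-check the bidirectional linking Google complains about, page by page (URLs taken from the question):

        # Does the default page point at the Japanese one...
        curl -s http://mydomain.com/page | grep hreflang
        # ...and does the Japanese page link back?
        curl -s http://ja.mydomain.com/page | grep hreflang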

    Read the article

  • Partial upgrade on 12.04, how to stop nagging after locking to a working NVIDIA & xorg

    - by alsk
    How do I stop the update manager from offering updates and upgrades that would potentially break my working 2D and 3D graphics? I finally got 12.04 working as it should with the nvidia-173 drivers, by downgrading xorg and locking the versions: on my 32-bit Athlon64 system with an (Albatron) NVIDIA GeForce FX5700XT, I locked (pinned) xorg 1:7.6-7ubuntu7, xserver-xorg-core 2:1.11.1-0ubuntu10.07 and nvidia-173 173.14.35-0ubuntu0.2. The annoying thing left is that every time updates are checked, I get a warning about a partial upgrade, with the ambiguous options "Partial upgrade" and "Close". Ambiguous in the sense that if I click Close, I get the option to update a few packages, which has been fine, while "Partial upgrade" would update my kernel to 3.2, alter xorg, remove nvidia-173, update mesa, and so on. This is not what I call appropriate after locking xorg and the NVIDIA drivers to working versions. One may say it is correct according to package-management logic, but to me as a user it makes little sense. The last Ubuntu that worked without a big mess for me was 10.10, so I will not put 12.10 on my "production" system until I can be sure it will not trash the system again. P.S. Is there a recommended way to keep the NVIDIA GeForce FX working with 3D on Ubuntu in the future?
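
    For reference, the usual way to make such a lock stick is an apt pin in /etc/apt/preferences; a sketch, with package names from the question and version strings as illustrative assumptions (a priority above 1000 holds the version even against upgrades):

        Package: xserver-xorg-core
        Pin: version 2:1.11.1-0ubuntu10.07
        Pin-Priority: 1001

        Package: nvidia-173
        Pin: version 173.14.35-0ubuntu0.2
        Pin-Priority: 1001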

    Read the article

  • Migrating BizTalk 2006 R2 to BizTalk 2010 XLANGs Issue

    - by SURESH GIRIRAJAN
    When we migrated some BizTalk apps from BizTalk 2006 R2 to BizTalk 2010, we ran into an issue when a .NET component is called inside an orchestration. In the .NET component we try to retrieve a promoted property, and we checked in the BizTalk group hub that it was indeed promoted; no issues there. Only when we try to access the data inside the .NET component do we hit the problem. We had moved all the assemblies from BizTalk 2006 R2 to BizTalk 2010 without recompiling anything in the BizTalk 2010 environment. Looking further, a couple of new namespaces were added to Microsoft.XLANGs in BizTalk 2010 compared to BizTalk 2006 R2, and that caused the issue. So all we did to fix it was recompile the project in the 2010 environment, and it worked fine. It looks like a backward-compatibility issue.

        public static void Load(XLANGMessage msg)
        {
            try
            {
                // get the process id from context
                object ctxVal = msg.GetPropertyValue(typeof(ProcessID));
                …
            }
        }

    BizTalk 2010 error message in the event viewer:

        The service instance will remain suspended until administratively resumed or terminated. If resumed the instance will continue from its last persisted state and may re-throw the same unexpected exception.
        InstanceId: 441d73d3-2e84-49d2-b6bd-7218065b5e1d
        Shape name: Bulk Load
        ShapeId: bb959e56-9221-48be-a80f-24051196617d
        Exception thrown from: segment 1, progress 65
        Inner exception: A property cannot be associated with the type 'Tellago.Common.Schemas.ProcessId'.
        Exception type: InvalidPropertyTypeException
        Source: Microsoft.XLANGs.Engine
        Target Site: Microsoft.XLANGs.RuntimeTypes.MessagePropertyDefinition _getMessagePropertyDefinition(System.Type)
        The following is a stack trace that identifies the location where the exception occurred:
        at Microsoft.XLANGs.Core.XMessage._getMessagePropertyDefinition(Type propType)
        at Microsoft.XLANGs.Core.XMessage.GetContentProperty(Type propType)
        at Microsoft.XLANGs.Core.XMessage.GetPropertyValue(Type propType)
        at Microsoft.BizTalk.XLANGs.BTXEngine.BTXMessage.GetPropertyValue(Type propType)
        at Microsoft.XLANGs.Core.MessageWrapperForUserCode.GetPropertyValue(Type propType)
        at Tellago.Common.Components.Load(XLANGMessage msg)
        at Tellago.SuspensionProcess.segment1(StopConditions stopOn)
        at Microsoft.XLANGs.Core.SegmentScheduler.RunASegment(Segment s, StopConditions stopCond, Exception& exp)

    Read the article

  • Configuring Transmission for faster download

    - by Luis Alvarado
    I have tested the following torrent clients on the same PC with the same torrent/magnet links:

        Transmission
        KTorrent
        Deluge
        qBittorrent
        Vuze

    After 7 days of testing, I noticed that the only one that took longer to start downloading and to reach and keep an optimal download speed was Transmission. Across the 8 torrents and 4 magnet links from different sites that I tested, it was the slowest to download the same content, and the one that took longest to start downloading or to resume after a pause/resume event. The other four took less than 2 seconds, for example, to start downloading, and downloaded the same content in 50% to 80% less time. I think Transmission has the same downloading/resuming capabilities as the other clients, and that it may just need some configuration to reach the same speed. In my tests all clients ran with their default configurations; no changes were made. They were tested on the same PC, with the same network connection, in the same time periods. I also set all of them to use the same port, and checked the router for any blocking or anything else network-related. So I am thinking that Transmission just needs a little configuration tuning. What options can I change so that Transmission resumes a download faster (grabs the seeds faster) and keeps a fast download going (stays with the seeds that offer the best connection, for example)? By the look of it, the rest of the torrent clients do both already.
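
    For experimenting, Transmission's tunables live in ~/.config/transmission/settings.json (edit it while Transmission is closed, or it will overwrite the changes). A hedged sketch of the peer/discovery keys one might check first; the key names are standard Transmission settings, the values are illustrative:

        {
          "dht-enabled": true,
          "lpd-enabled": true,
          "pex-enabled": true,
          "peer-limit-global": 240,
          "peer-limit-per-torrent": 60
        }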

    Read the article

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This was an upgrade, not a format-and-reinstall; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (call them A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everybody is connected over wired RJ-45 Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see that machine's name on the network, and its name is no longer in the machine list in the D-Link router; we can only see its IP address. I can't access A-Ubuntu from the other two by name, but I can ping it by address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC, but from B-Ubuntu and C-MAC I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me Samba itself should be fine, and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link doesn't have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of Avahi and mDNS. Those packages are running, and I've checked multiple config files for samba, avahi and mdns against the examples on the web and against the working B-Ubuntu machine; they are the same. I restarted the samba and avahi services multiple times, removed the firewall to make sure it wasn't blocking the hostname broadcast, and rebooted multiple times to make sure my changes took effect. Still no A-Ubuntu name on the network. Any idea what it can be, or where to look next?
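
    For diagnosis, a few hedged checks (hostnames and the IP are the ones from the question; assumes the avahi-utils and smbclient packages are installed):

        avahi-resolve -n A-Ubuntu.local   # is the mDNS name resolvable at all?
        smbclient -L //192.168.0.109 -N   # does Samba answer by IP?
        grep ^hosts /etc/nsswitch.conf    # should list mdns4_minimal on the hosts line
        testparm -s | head                # sanity-check smb.conf on A-Ubuntu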

    Read the article

  • Should my dropdown of recently used items show items I no longer have access to?

    - by Dan Hibbert
    We are implementing a client for our document management system. Part of this is the check-in screen, where one of the fields a user chooses is the folder the document should be checked into. In our original system this was a combobox where a user could hand-type a folder path or select one from a list of the 5 folders they'd recently used for check-ins. It is possible that between the time they used a folder and the time they do a new check-in, the user will no longer have access to that folder. At present we still show the folder as an option and then, if the user chooses it, display an error message when they submit the check-in. We are thinking of removing the recently used folders the user no longer has access to (we'd run a check when the form is instantiated), because why show an option we know will cause a failure (and another dialog the user has to OK)? The opposite opinion is that if we remove those folders, users will think the system has "forgotten" their recent choices and will lose trust in what they are using. I'd like to get some opinions on the better user experience for this problem.

    Read the article

  • How to open the JavaScript console in different browsers?

    - by Šime Vidas
    Updated on October 7th, 2012.

    Chrome:

        1. Press CTRL + SHIFT + J to open the "Console" tab of the Developer Tools.
        Alternative method:
        1. Press CTRL + SHIFT + I or F12 to open the Developer Tools.
        2. Press ESC (or click "Show console" in the bottom right corner) to slide the console up.
        Note: in Chrome's dev tools there is a "Console" tab, but a smaller "slide-up" console can also be opened while any of the other tabs is active.

    Safari:

        1. Press CTRL + ALT + I to open the Web Inspector.
        2. See Chrome's step 2 (Chrome and Safari have pretty much identical dev tools).
        Note: step 1 only works if the "Show Develop menu in menu bar" check box in the Advanced tab of the Preferences menu is checked!

    IE9:

        1. Press F12 to open the developer tools.
        2. Click the "Console" tab.

    Firefox:

        1. Press CTRL + SHIFT + K to open the Web Console.
        Or, if Firebug is installed (recommended):
        1. Press F12 to open Firebug.
        2. Click the "Console" tab.

    Opera:

        1. Press CTRL + SHIFT + I to open Dragonfly.
        2. Click the "Console" tab.

    Read the article

  • Where or what are the instructions for installing FMOD Ex for Linux to use in g++?

    - by Andrey
    I'm looking for instructions on how to install FMOD. I want to do extra credit for my computer graphics assignment: sound effects. A teammate wants me to go with something simple, and he suggested FMOD Ex. (If you can think of something better, do suggest it, but so far FMOD looks more promising than SDL, OpenAL, etc.) Right now I'm having a really hard time finding instructions for installing the latest version of FMOD on Ubuntu 12.04 LTS (32-bit) so that I can use it in g++ with OpenGL. I checked out this YouTube video: http://www.youtube.com/watch?v=avGxNkiAS9g, but it's for Windows. Then there is an Ubuntu Forums thread which redirected me to this page: https://wiki.debian.org/FMOD, which has some dated instructions. I've downloaded FMOD Ex v4.44.24, which I believe is the latest version. Now I'm looking at eight files, not knowing what to do:

        libfmodex.so
        libfmodex64.so
        libfmodex64-4.44.24.so
        libfmodex-4.44.24.so
        libfmodexL.so
        libfmodexL64.so
        libfmodexL64-4.44.24.so
        libfmodexL-4.44.24.so

    I've looked everywhere I could think of (Stack Overflow, here, YouTube, Google) and came up with zilch. Please help. Thanks in advance.
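
    For reference, a common pattern for a vendor .so like this is sketched below; it assumes the non-"64" files are the 32-bit builds and the "L" variants are logging builds (FMOD's usual naming), and the header path is an assumption based on FMOD's api/inc layout:

        # Install the versioned library and give the linker its unversioned name
        sudo cp libfmodex-4.44.24.so /usr/local/lib/
        sudo ln -s /usr/local/lib/libfmodex-4.44.24.so /usr/local/lib/libfmodex.so
        sudo ldconfig
        # Build against it
        g++ main.cpp -I/path/to/fmodex/api/inc -L/usr/local/lib -lfmodex -o myapp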

    Read the article

  • apt-get does not work with proxy

    - by tommyk
    For the command:

        sudo apt-get update

    I get the following error:

        W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz  407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )

    I am running Ubuntu 10.10 installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set proxy settings system-wide; after that, /etc/environment was updated with an http_proxy variable in the format http://my_proxy:port/, with no authentication data. I checked this with Firefox: the browser asked me for a login and password, and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here, but it does not work either. Might it somehow be related to the fact that the proxy is in a Windows domain? Any ideas?

    Edit: My proxy name is http-proxy. Is '-' a special character here?
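
    For reference, apt does not read the browser's proxy settings; it wants its own entry, which can carry credentials. A hedged sketch for a file such as /etc/apt/apt.conf.d/95proxy, with the username, password and port as placeholders (and a hyphen is legal in a hostname, so http-proxy itself is not the problem):

        Acquire::http::Proxy "http://username:password@http-proxy:8080/";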

    Read the article

  • "Serious errors found HF checking the drive for /home" After Moving /home to external HFSplus partition

    - by Arctic Shadow
    I just installed Mac OS X 10.7 "Lion" and Ubuntu 11.10 on my MacBook Pro, using these instructions: tuxation.com/creating-home-partition-mac-linux.html. After changing the location of my home folder to the new location, it gives me the error in the title, and my username no longer appears on the login screen. Using the "Other" option with my username makes it try to log in, but the screen quickly flashes between blank and a shell before kicking me back to the login screen without notice. I'm trying to share my home folder between Mac OS X and Ubuntu, using an HFS+ partition (unjournaled). The home partition seems to mount fine as /home, and I am able to modify it under Ubuntu. This is the line I've added to fstab:

        /dev/sda3    /home    hfsplus    defaults    0    1

    I should also note that I changed my account's username and home directory location to match this, though I've double-checked that and everything seems in order there... Thank you in advance for any assistance.

    Edit: It seems that /etc/passwd didn't have my new home directory's location in it. After changing that, I am able to log into my account, although I am still not listed on the login screen, and my username in the menu at the top right shows up as "[Invalid UTF-8]"...
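
    In the same vein, the supported way to make that /etc/passwd change (rather than hand-editing) is usermod; a sketch with a placeholder username, run from a console while the account is logged out:

        sudo usermod -d /home/yourname yourname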

    Read the article

  • Unavailable packages repository

    - by bitmask
    I'm running Ubuntu 11.10 (Oneiric) on this machine, and suddenly apt is unable to update properly. If I ask it to update its package information by running apt-get update (or telling the update manager to "check"), it succeeds for about 120 packages (more precisely, I get about 120 Ign/Hit notes) and then says it cannot find the universe Sources and restricted amd64 indexes:

        Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
        Err http://de.archive.ubuntu.com oneiric/universe Sources
          404 Not Found [IP: 141.30.13.20 80]
        Err http://de.archive.ubuntu.com oneiric/restricted amd64 Packages
          404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources  404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages  404 Not Found [IP: 141.30.13.20 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I manually checked the de server and cannot find anything wrong with the paths it's complaining about; it looks pretty much like, say, the us mirror. But oddly enough, the IP it lists seems to point to a Debian package server, which obviously does not contain Ubuntu packages. So, is this a local problem that I can fix somehow (and if so, how?), or is some server actually down right now?
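
    If it does turn out to be a broken mirror, a hedged way to point this box at the main archive and re-test:

        sudo sed -i 's/de\.archive\.ubuntu\.com/archive.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update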

    Read the article

  • How do I Series: Connecting an Expression Blend Project to Team Foundation Server

    - by Enrique Lima
    I have heard of people wanting and needing to add projects created in Expression Blend to Team Foundation Server. Here is the recipe:

    1) Create your project in Expression Blend and click OK.
    2) Select the option to open your newly created project in Visual Studio. Once that option is selected, your solution will open in Visual Studio; close Expression Blend at this point.
    3) Add the project to Source Control: connect to the TFS environment and pick the location to save the project.
    4) Once the project is added, a status window shows the pending changes for the project; all that is left to do is check in those changes.
    5) Since we have checked in our project, we can now close Visual Studio, and we will proceed to open Expression Blend again. And select our project we will! We notice some differences from before, just by opening it. What differences, you say?!? Notice the lock to the right of the item name, and the extra source-control entries on right-click.

    And there we have it. It is a combination of tools to achieve this, but it is well worth it.

    Read the article

  • Duplicate content appearing for multi lingual sites

    - by Rocky Singh
    I have a site with a default URL, say http://www.blahblah.com/ (which defaults to English). The site supports multiple languages: there are links on the home page, say "English", "French", "Spanish", etc., and on clicking these links the user is redirected to:

        http://www.blahblah.com/en-us/ (English)
        http://www.blahblah.com/fr-ca/ (French)
        http://www.blahblah.com/spanish-culture/ (Spanish)

    Based on the culture in the URL, I show end users the content in their desired language. Now, the issue I am getting is with SEO. I noticed (via Google Webmaster Tools) that Google is considering my site's pages as duplicates, for example:

        1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
        2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news/

    Similarly, all the pages are considered duplicate content in Google Webmaster Tools. I am worried that my site is being penalized in ranking because of this. Could you suggest some ideas on how to overcome this situation?
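
    For reference, the usual fix for this pattern is to declare one canonical URL per page; a sketch using URLs from the question, assuming the language-prefixed version is the preferred one:

        <!-- served on both /documents/ and /en-us/documents/ -->
        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />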

    Read the article

  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations which occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending, so we couldn't wait for him to return. One of the other developers logged on as him to check and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one).

    Obviously this is a pain in the neck; however, the alternative (strict standards for how each developer works on their own machine, to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency on an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control.

    So how do you handle situations like this? Is this one of those things that just happens and you have to deal with, the price you pay for letting developers work in the way that best suits them? Or do you ask developers to adhere to standards in this area: use of specific directories, naming standards, notes on a wiki, or whatever? If so, what do your standards cover, how strict are they, and how do you police them? Or is there another solution I'm missing?

    [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing. Even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted; I'd like a solution which covers all eventualities.]

    Read the article

  • Strange behavior in GNOME after updating from 13.04 to 13.10

    - by WayneBrady
    I ran an automatic upgrade from 13.04 to 13.10 on my Ubuntu system today, and since then I have been in really big trouble. I use a VNC server with GNOME Classic; after the upgrade, my GNOME was gone. I tried everything: checked the xstartup file of vncserver, and so on. Right now I've reached a point where I can't find the answer. The log file says that gnome-session-fallback is missing, even directly after I install it with apt-get (I've tried this several times: installing, uninstalling, and so on). I have no way to use it, as you can see in this terminal session:

        root@ip-xxx:~/.vnc# apt-get install gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          gnome-session-fallback
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/2,914 B of archives.
        After this operation, 247 kB of additional disk space will be used.
        Selecting previously unselected package gnome-session-fallback.
        (Reading database ... 210977 files and directories currently installed.)
        Unpacking gnome-session-fallback (from .../gnome-session-fallback_1%3a3.6.2-0ubuntu15_all.deb) ...
        Setting up gnome-session-fallback (1:3.6.2-0ubuntu15) ...
        root@ip-xxx:~/.vnc# gnome-session-fallback
        The program 'gnome-session-fallback' is currently not installed. You can install it by typing:
        apt-get install gnome-session-fallback

    If you have some idea, please give me a hint... Thank you!
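
    One hedged sanity check worth running here: list what the package actually ships, since it may provide a session definition rather than a command with that exact name:

        dpkg -L gnome-session-fallback | grep -E 'bin|session'
        ls /usr/share/gnome-session/sessions/   # sessions available to the login manager / vnc xstartup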

    Read the article

  • Differentiating between Hard and Soft Dependencies - Fedora Yum [closed]

    - by Sujit
    I will ask this with an example. I installed gnash-plugin on 64-bit Fedora with yum, and it pulled in the following packages:

        Installing : agg-2.5-9.fc13.x86_64                     1/6
        Installing : gtkglext-libs-1.2.0-10.fc12.x86_64        2/6
        Installing : boost-thread-1.44.0-7.fc14.x86_64         3/6
        Installing : boost-date-time-1.44.0-7.fc14.x86_64      4/6
        Installing : 1:gnash-0.8.8-4.fc14.x86_64               5/6
        Installing : 1:gnash-plugin-0.8.8-4.fc14.x86_64        6/6

    Now, I tested the plugin and I didn't like it, so I want to remove the packages above that were installed along with it, since I no longer need them. How can I do this? I looked at yum's remove-with-leaves plugin, but it pulls in all the packages which currently depend on the ones being removed. I understand the thought process behind showing which packages get affected, but I am wondering if there is any way of looking at the history of what packages got installed when I installed a certain package. Before gnash-plugin was there, Firefox was running fine, but after the installation Firefox now depends on this new plugin. Has anyone worked on differentiating hard dependencies (hard meaning the program will break if the package is missing) and soft dependencies (soft meaning the program is not fatally affected)?
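
    On the history point specifically: recent yum versions do record per-transaction history, which answers "what came in with this install" and can undo exactly that set. A sketch, with the transaction ID 42 as a placeholder:

        yum history list gnash-plugin   # find the transaction that installed it
        yum history info 42             # list every package that transaction pulled in
        sudo yum history undo 42        # remove exactly that set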

    Read the article

  • How to recover from terminal custom setup

    - by linq
    I installed Ubuntu 12.04 and added "Terminal" to the launcher bar. The default size of the terminal is a bit small, and after some googling, here is how I made the disastrous change: at the HUD menu I typed "preference" and saw the option Edit > Preferences, as I expected. After I clicked the option (I forget the exact steps), I somehow came to a configuration panel with a "custom command" checkbox for the terminal. I checked it, which enabled an input box, and typed:

        gnome-terminal --geometry 160x50

    Now the problem: whenever I click the terminal button on the launcher bar, new terminal windows pop up endlessly from everywhere, and I cannot do anything any more except log out or shut down. The weird thing is, after I come back, I cannot get that Edit > Preferences any more when I type "preference" in the HUD; I tried "Edit" and "preference". To be clear, I can still use the computer as long as I do not click that terminal button, but I really want to use the terminal to run commands. Help is appreciated greatly!

    Update: OK, figured it out. Go to /home/jibin/.gconf/apps/gnome-terminal/profiles/Default and open the XML file there; you can see the options changed in the GUI. Remove them, and the terminal runs fine again. The Preferences entry in the HUD is also back: it is actually "Profile Preferences", and it is only available while a terminal is open. Continuing my ubuntu-ing...
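
    For reference, what happened here is that the profile's "custom command" replaces the shell inside each new terminal, so a command of gnome-terminal --geometry 160x50 makes every terminal spawn another one, endlessly. The same fix can be made without hand-editing the XML, via gconf (a sketch using gnome-terminal's standard key):

        gconftool-2 --type bool --set /apps/gnome-terminal/profiles/Default/use_custom_command false

    The geometry belongs in the launcher's command line instead, where it launches a single terminal of the desired size.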

    Read the article

  • How to create or recover Windows Bootloader after deleting Ubuntu boot drive

    - by Kincaid
    I have a computer that dual-boots (or tri-boots) Windows 8 Release Preview, Windows 7, and Ubuntu 12.04. Grub boots between Windows 8 and Ubuntu, which is what I primarily used. Recently I decided to remove Ubuntu, as I hardly used it, and I accidentally deleted the Ubuntu partition before replacing the Grub bootloader. Now, whenever I boot the machine, it gives me the "grub rescue" prompt, and I am unable to boot into either Windows (8 or 7) or Ubuntu (except via USB, of course). I do not have any Windows 7/8 recovery media, so that isn't an option. Please note that after I deleted the Ubuntu partition, I put the PC into hibernate and then turned it on again, which means the C:\ [Windows 8] drive cannot be mounted. I don't know if that matters, but it definitely doesn't make things better. I am currently booting Ubuntu via USB in an effort to restore the Windows bootloader. I have looked into using boot-repair following the instructions here, but after attempting to apply the changes it gave the error: "Please install the [mbr] packages. Then try again." I don't know why I'm getting this error. Is there a way to install these 'mbr packages'? I honestly don't know what exactly they are or how to install them. I want to set the bootloader to boot into Windows 8, but booting into either Windows 7 or 8 is fine (I can use EasyBCD from there). Are there any other options I have not yet exhausted, in case there is a better way? I've checked the BIOS, and I haven't found a way to boot into Windows.
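
    On the 'mbr packages' error specifically: there is a Debian/Ubuntu package literally named mbr, whose install-mbr writes a generic MBR that chainloads the active partition. A hedged sketch from the Ubuntu live session; the device node /dev/sda is an assumption, so double-check with sudo fdisk -l first:

        sudo apt-get install mbr
        sudo install-mbr /dev/sda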

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision, I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for actually adjusting the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot: how can I scan the slope texture data to find the Y position at which to place the sprite's foot so it is no longer inside the slope? The way the data is stored as a 1D array in the example is a bit confusing; should I try to store it as a 2D array instead?

    For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right-foot collision point and set that as the new height of the sprite.

    I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.Update(), so should I quickly change the position of the sprite before the next update is called? I've also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES-Mario-style pure box platforming!

    Read the article
