Search Results

Search found 13697 results on 548 pages for 'linking errors'.


  • Added resolution not working after upgrading to 12.04

    - by David
    After upgrading, my screen resolution (added by xrandr commands at startup) no longer works like it did before. Messages appear showing errors that I never had in 11.10: "No se pudo aplicar la configuración almacenada para los monitores" ("Can't apply the stored configuration for the monitors"). This script didn't work either:

        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA1 1280x1024_60.00
        xrandr --output VGA1 --mode 1280x1024_60.00

    I also tried deleting monitors.xml, but nothing; that only got rid of the message window. It's been said that this is a normal, well-known problem for PCs with Intel integrated video cards: the new version of gnome-settings-daemon stores its configuration information in dconf rather than gconf. I tried something, but the problem persists. This is what I did: install the dconf-tools package, then run dconf-editor; in the tree on the left, navigate to org > gnome > settings-daemon > plugins > xrandr; uncheck the "active" checkbox; restart your X server (Ctrl+Alt+Backspace). (It didn't work out for me, but it may be helpful to someone.)
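
    A minimal workaround sketch, not a confirmed fix: re-apply the mode from a login script so it runs after gnome-settings-daemon has finished. The modeline and the VGA1 output come from the question above; ~/.xprofile being read at login is an assumption (it holds for LightDM in 12.04).

        # Hedged sketch: put these lines in ~/.xprofile so they run at every login.
        # VGA1 and the 1280x1024 modeline are taken from the question; adjust for your hardware.
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA1 "1280x1024_60.00"
        xrandr --output VGA1 --mode "1280x1024_60.00"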


  • Perl script rendered in browser as code through symlink - fine when accessed directly

    - by John Dittmar
    I have a Rails 4 app that has some views that post to Perl CGI scripts. The Perl scripts are accessed via a symbolic link to a folder called "cgi-bin". When I navigate to a Perl script through the symbolic link, it is rendered as text instead of executed (i.e. localhost:3000/cgi-bin/test.cgi); however, when I access it directly it executes without issue (i.e. localhost/path/to/cgi-bin/test.cgi). I am using apache2 on OS X. In the directory localhost/path/to/ I have an .htaccess file that contains the following:

        # General Apache options
        AddHandler fastcgi-script .fcgi
        AddHandler cgi-script .cgi
        Options +FollowSymLinks +ExecCGI

    I have the exact same lines in the .htaccess file under localhost:3000/. I have also uncommented AllowOverride All in httpd.conf. There are no errors in Apache's error log. When I access the direct link to test.cgi, a new line is appended to Apache's access log; when I access the script through the symbolic link (and it is rendered as text), no line is appended to the access log. Any idea why this error occurs? This setup worked fine with a previous version of Rails and OS X, but recently I upgraded to Mavericks and figured I should update the Rails application to v4.0 as well.
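
    One possibility consistent with the missing access-log line (an educated guess, not confirmed from the question): requests to port 3000 are answered by the Rails development server rather than Apache, and Rails happily serves the symlinked .cgi as a static text file. A hedged way to check who answers each URL, and whether Apache can traverse the symlink at all:

        # Compare the Server: header on both URLs (paths taken from the question):
        curl -sI http://localhost:3000/cgi-bin/test.cgi | head -n 5
        curl -sI http://localhost/path/to/cgi-bin/test.cgi | head -n 5
        # Every component of the symlinked path needs execute permission for the Apache user:
        namei -l /path/to/cgi-bin/test.cgi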


  • Is goto to improve DRY-ness OK?

    - by Marco Scannadinari
    My code has many checks to detect errors in various cases (many conditions would result in the same error), inside a function returning an error struct. Instead of looking like this:

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if (something) {
                error.error = true;
                error.description = "invalid input";
                return error;
            }
            ...
            case 1024:
                error.error = true;
                error.description = "invalid input"; // same error, but different detection scenario
                return error;
                break; // don't comment on this break please (EDIT: pun unintended)
            ...

    Is use of goto in the following context considered better than the previous example?

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if (something) goto invalid_input;
            ...
            case 1024:
                goto invalid_input;
                break;
            ...
            return error;

        invalid_input:
            error.error = true;
            error.description = "invalid input";
            return error;


  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems that the right way to do that is via tuxboot. Tuxboot is not compilable on Ubuntu: I used git to get it from the repository, but building it is apparently not allowed (the build script just tries to install Windows things), and when I run the 'install' script, qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down; let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the prebuilt Linux file, the most recent of which is tuxboot-linux-25. I try to run it and get a failure that libpng12.so.0 isn't found. OK, so I installed that via instructions I found on the web, which Firefox seems to have already deleted from my history (yay!). Then I added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848. I still get the error that libpng12.so.0 cannot be opened because of 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless: download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
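
    One thing worth ruling out (an assumption, since the question doesn't say whether this is 64-bit Ubuntu): if tuxboot-linux-25 is a 32-bit binary, it cannot use a 64-bit libpng12, so ldconfig -p listing the library proves nothing when the architectures differ. A hedged check:

        # A 32-bit binary silently ignores 64-bit copies of libpng12:
        file ./tuxboot-linux-25                      # look for "ELF 32-bit" vs "ELF 64-bit"
        ldd ./tuxboot-linux-25 | grep 'not found'    # lists libraries the loader cannot resolve
        # If it is 32-bit on a 64-bit system, install the i386 build of the library:
        sudo apt-get install libpng12-0:i386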


  • Ensure Payroll Success with PeopleSoft Year-End Training for U.S. and Canada

    - by Breanne Cooley
    Year-end payroll processing and reporting is a requirement for your business. If you're responsible for completing these processes in either Canada or the United States using the PeopleSoft Payroll application, and if you're new to PeopleSoft Payroll or to performing these processes, consider enrolling in Oracle University's expert training. Our PeopleSoft Payroll specialists will guide you through the necessary steps to ensure you can smoothly and successfully perform your job. Training is specific to the country for which you are performing the processing and reporting. Training lasts one day and is delivered in our Live Virtual Class format, which helps you avoid travel during this busy season. Here's the training we recommend:

    PeopleSoft Year-End Payroll - U.S. This course teaches you, step by step, how to complete U.S. year-end processing and reporting using PeopleSoft Payroll for North America: update tax reporting setup tables and employees' income and tax records; load each employee's year-end data into a single year-end record for processing and reporting; identify the reports needed to reconcile the year-end data; correct tax balances and other data as necessary; generate final print and online W-2 forms and prepare the electronic file for the Social Security Administration; enter corrected W-2 information and print a W-2c form; and report periodic retirement distributions and related tax withholding amounts on form 1099-R. Please note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    PeopleSoft Year-End Payroll - Canada This course covers the steps necessary to perform Canadian year-end processing using Oracle's PeopleSoft Payroll for North America: explore adjustments, balances, year-end slip processing, common pitfalls and errors, and balancing reports, and produce accurate year-end reporting results such as T4, T4A, RL-1, and RL-2. Please note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    See you in class! -Oracle University Marketing Team


  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. Now we have established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything that we want to explore is on the screen (bounded visualizations)? Whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know of classic design patterns for visualizations (except the obvious ones like Pub-Sub)?


  • When I type apt-get -f install, I get an error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. Also, I cannot upgrade my software; it says that the package system is broken, with this detail information:

        The following packages have unmet dependencies:
        xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed

    When I issue sudo apt-get update, the output seems fine (sorry, the output has too many links for me to post them); the source is http://archive.ubuntu.com:

        Reading package lists... Done

    When I issue sudo apt-get dist-upgrade, the output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.

    When I issue sudo apt-get -f install, the output is:

        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)
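
    A hedged sketch of one way out, not a confirmed fix: the deadlock is between the new xserver-xorg-core and the old radeon driver package, so forcibly removing the old driver, letting apt repair itself, and reinstalling the current radeon package often clears it. Run from a console and expect X to be without its driver until the last step:

        sudo dpkg --remove --force-depends xserver-xorg-video-radeon
        sudo apt-get -f install
        sudo apt-get install xserver-xorg-video-radeon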


  • Implementing the NetBeans Project API on Maven in IntelliJ IDEA

    - by Geertjan
    James McGivern, one of the speakers I met at JAX London, is creating media software on the NetBeans Platform. However, he's using Maven and IntelliJ IDEA, and one of the features he needs is project support, i.e., the project infrastructure that's part of NetBeans IDE. The two documents that describe the NetBeans Project API are these: http://platform.netbeans.org/tutorials/nbm-projecttype.html and http://netbeans.dzone.com/how-create-maven-nb-project-type. By combining the two, you'll understand how to create a project infrastructure on top of the NetBeans Platform with Maven. However, an additional level of complexity is added when IntelliJ IDEA is brought into the mix, and therefore I created the following screencast which, in 15 minutes, puts all the pieces together. Be aware that I'm probably not using IntelliJ IDEA and Maven as optimally as I could, and I'm publishing this at least partly so that the errors of my ways can be pointed out to me. But, first and foremost, this is especially for you, James. Note: intentionally no sound, only callouts explaining what I'm doing. You'll probably need to pause the movie here and there to absorb the text; for details on the text, see the two links referred to above.


  • Can't mount USB devices, shut down etc. as a user

    - by Alok
    I tried the gnome3 and gnome3-staging PPAs to test running GNOME 3.8. After a while I decided that GNOME 3.8 wasn't for me, so I did a ppa-purge of both PPAs. As described on the gnome3-staging PPA page, I also did:

        sudo apt-get purge libpam-systemd
        sudo apt-get install libpam-xdg-support

    The trouble is, I can't mount my external USB device anymore. When I try to mount it as a user, it fails:

        $ udisks --mount /dev/sdc1
        Mount failed: Not Authorized

    I am logged in to an XFCE session, but the same thing happens in a fallback GNOME session or a Unity session. Also, in XFCE, the "suspend" and "shut down" menus are grayed out, and I can't open the Synaptic package manager from the XFCE menus (sudo synaptic works). After a lot of searching, it seems like it is a PolicyKit issue. I see the following in my ~/.xsession-errors:

        (polkit-gnome-authentication-agent-1:5805): polkit-gnome-1-WARNING **: Unable to determine the session we are in: No session for pid 5805

    PID 5805 doesn't exist. If I try to start polkit-gnome-authentication-agent-1 from an xterm, I get the same error (different PID):

        $ /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
        ...
        (polkit-gnome-authentication-agent-1:15971): polkit-gnome-1-WARNING **: Unable to determine the session we are in: No session for pid 15971

    (The ... lines are warnings from GTK about missing CSS files etc.) polkitd is running:

        $ pidof polkitd
        1495

    Is there something I am missing?
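
    A hedged reading of those warnings (an assumption that follows from the purge described above, not a confirmed diagnosis): with libpam-systemd removed, logins may no longer be registered as sessions, so polkit cannot tie your processes to an active session, and anything session-gated (mounting, suspend, shutdown) gets denied. A sketch of putting the previous plumbing back:

        # Restore the PAM session module the purge removed (hedged; package names from this Ubuntu era):
        sudo apt-get install libpam-systemd
        sudo apt-get remove libpam-xdg-support
        # Then log out and back in (or reboot) so the PAM session is registered again.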


  • Restore Failure from Ubuntu One

    - by Qawi Robinson
    I had to do a reinstall of Ubuntu 12 after 13.10 failed. I lost all my data, but I remembered that I had data backed up to Ubuntu One. It recognized my previous backups, but I got errors and a restore failure when I went to restore the data. This is what I got; can anyone make heads or tails of it? I still don't have my data. Thanks.

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1412, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1405, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1339, in main
            restore(col_stats)
          File "/usr/bin/duplicity", line 630, in restore
            restore_get_patched_rop_iter(col_stats)):
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 522, in Write_ROPaths
            for ropath in rop_iter:
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 495, in integrate_patch_iters
            final_ropath = patch_seq2ropath( normalize_ps( patch_seq ) )
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 475, in patch_seq2ropath
            misc.copyfileobj( current_file, tempfp )
          File "/usr/lib/python2.7/dist-packages/duplicity/misc.py", line 166, in copyfileobj
            buf = infp.read(blocksize)
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 80, in read
            self._add_to_outbuf_once()
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 94, in _add_to_outbuf_once
            raise librsyncError(str(e))
        librsyncError: librsync error 103 while in patch cycle
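
    For what it's worth: librsync error 103 during a patch cycle usually means one of the incremental difference volumes in the backup chain is corrupted, rather than anything being wrong locally. A hedged sketch of narrowing it down; the backup URL below is a placeholder, not the real Ubuntu One location from this setup:

        # Check the chain's integrity first, then retry the restore with maximum verbosity:
        duplicity verify -v9 u1+http://your-backup-location /restore/target
        duplicity restore -v9 u1+http://your-backup-location /restore/target
        # If only the newest increment is damaged, an earlier restore point may still work:
        duplicity restore --restore-time 2013-10-01 u1+http://your-backup-location /restore/target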


  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside; running ls -1 | wc -l reports something like 160000 files. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an "Argument list too long" error. Some searching revealed that this happens because I have so many files, and that is how I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives:

        xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
        syntax error at (eval 1) line 1, near ":"
        xargs: rename: exited with status 255; aborting

    Adding -0 after xargs turns the error message into:

        xargs: argument line too long

    These files are archive files generated by various PHP scripts. The best solution would be a chance to rename them before they are moved to Windows, but if there is no way to do that, we might have a way to rename the files while they are moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
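
    A hedged sketch of the combination this is reaching for: rename needs a complete Perl substitution (the 's/\:/_' above is missing its closing delimiter), and find/xargs need the matching -print0/-0 pair so 160,000 odd names neither overflow the argument list nor confuse the quoting. This assumes the Perl-based rename that Ubuntu ships:

        # Dry run: -n prints what would be renamed without touching anything.
        find . -maxdepth 1 -type f -name '*[:?]*' -print0 | xargs -0 rename -n 's/[:?]/_/g'
        # Then do it for real, replacing every ':' and '?' with '_':
        find . -maxdepth 1 -type f -name '*[:?]*' -print0 | xargs -0 rename 's/[:?]/_/g'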


  • Dealing with inflexible programmers.

    - by Singleton
    Sometimes programmers who have worked on a project for a long time become inflexible, and it gets difficult to reason with them. Even if we do manage to convince them, they can be unlikely to implement our suggestions. For instance, I recently joined a project where the build and release process is too complicated and has unnecessary roadblocks. I suggested that we get rid of some of the development overhead (like filling in a few spreadsheets) just by integrating the defect-management and version-control tools (both are IBM Rational tools, so integration can be a very easy one-off effort). Also, if we use tools like Maven and Ant (the project involves Java and some COTS products), the build and release can be simplified, which should reduce manual errors and intervention. I managed to convince the others, and I'm ready to put in the effort to develop a proof of concept. But the "senior" developer is not willing, possibly because the current process makes him more valuable. How do we handle this situation without creating friction in the team?


  • E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by kss
    When I run sudo apt-get install acroread, I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          libldap2 libgnome-speech7
        The following NEW packages will be installed:
          acroread
        0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/60.1 MB of archives.
        After this operation, 142 MB of additional disk space will be used.
        (Reading database ... 237901 files and directories currently installed.)
        Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
         failed in write on buffer copy for backend dpkg-deb during `./opt/Adobe/Reader9/Browser/intellinux/nppdf.so': No space left on device
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        /usr/bin/mandb: can't write to /var/cache/man/1645: No space left on device
        Errors were encountered while processing:
         /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
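
    The actual failure in that output is "No space left on device" (it appears twice), so the dpkg error code is a symptom, not the disease. A minimal sketch of checking and reclaiming space before retrying; per the output above, acroread needs roughly 142 MB free:

        # See how full the root filesystem is (the package unpacks under /var and /opt):
        df -h /
        # Drop cached .deb files, which often frees hundreds of MB:
        sudo apt-get clean
        # Then retry:
        sudo apt-get install acroread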


  • Commands don't have permission when using absolute path

    - by Markos
    I have folders set up this way:

        $ getfacl /srv/samba/video
        # file: srv/samba/video
        # owner: root
        # group: nogroup
        user::rwx
        group::---
        group:sambaclients:rwx
        group:deluge:rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::---
        default:group:sambaclients:rwx
        default:group:deluge:rwx
        default:mask::rwx
        default:other::---

    That means user deluge has rwx on the folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in the folder /srv/samba/video:

        sudo -u deluge mkdir foo

    works flawlessly. But when using an absolute path:

        sudo -u deluge mkdir /srv/samba/video/foo

    I am getting permission denied. Running sudo -u deluge id gives:

        uid=113(deluge) gid=124(deluge) skupiny=124(deluge)

    which shows that user deluge is indeed in group deluge ("skupiny" is "groups" in my locale). Also, the behavior was the same when I gave the permissions to user deluge directly, not just to group deluge. When executing as a non-system user, it does work. The reason I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails; touch does the same thing, so I suspect it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanks in advance.
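
    One explanation that fits these symptoms (an educated guess from the output above, not a confirmed diagnosis): with a relative path the process is already inside /srv/samba/video, but an absolute path has to traverse /, /srv, and /srv/samba first, and each of those parents needs execute (x) permission for the deluge user. A system user belonging to no extra groups would fail exactly there. A quick check and a minimal hedged fix:

        # Show the permissions of every component on the path; look for a parent without 'x':
        namei -m /srv/samba/video/foo
        # If, say, /srv/samba blocks traversal, granting execute-only is enough (sketch):
        sudo setfacl -m g:deluge:x /srv/samba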


  • TeamViewer 8 beta won't run

    - by Conner Jones
    I installed TeamViewer 7, and then one of my friends using Windows got version 8, so I installed the beta of version 8 for Linux. When I try to run it from a terminal I get the errors below. I attempted to do as the comment below said, and when trying to run TeamViewer I still got an error:

        conner@DemonicGrace:~$ teamviewer
        Init...
        Checking setup...
        Launching TeamViewer...
        wine: cannot find L"C:\windows\system32\winemenubuilder.exe"
        err:wineboot:ProcessRunKeys Error running cmd L"C:\windows\system32\winemenubuilder.exe -a -r" (2)
        err:winedevice:ServiceMain driver L"MountMgr" failed to load
        err:secur32:SECUR32_initSchannelSP libgnutls not found, SSL connections will fail
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        fixme:ole:CoInitializeSecurity ((nil),-1,(nil),(nil),0,3,(nil),0,(nil)) - stub!
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        fixme:process:SetProcessShutdownParameters (00000100, 00000000): partial stub.
        fixme:resource:GetGuiResources (0xffffffff,0): stub
        fixme:win:EnumDisplayDevicesW ((null),0,0x32df64,0x00000000), stub!
        fixme:win:EnumDisplayDevicesW (L"\\.\DISPLAY1",0,0x32dc1c,0x00000000), stub!
        fixme:win:EnumDisplayDevicesW ((null),1,0x32df64,0x00000000), stub!

    Please help me out; if anyone has ideas, I'm more than willing to listen.
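
    Most of those lines are routine wine "fixme" noise, but one is a real failure: "libgnutls not found, SSL connections will fail", which would stop TeamViewer from ever reaching its servers. A hedged first step, assuming 64-bit Ubuntu (TeamViewer's bundled wine is 32-bit; the package name is from the 12.04/12.10 era):

        # Give 32-bit wine the GnuTLS library it is looking for:
        sudo apt-get install libgnutls26:i386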


  • Adsense: You have rejected ad requests, which will result in lost revenue

    - by Chankey Pathak
    I got an alert on my AdSense account which says:

        You have rejected ad requests, which will result in lost revenue. The following ad units have made ad requests with incorrect site information. This occurs when the URL of the server from which the ad unit has been served differs from the URL of the actual page where the ad will be displayed. Learn how to fix these errors.

    So the solution is that I'll have to use google_page_url = "http://myurl.com/fullpath";. I'm using WordPress; what should the URL for google_page_url be? For example, my website is www.technostall.com. Should I put www.technostall.com there, or should I give the path of each post? That is not good, because I'm using a sidebar widget for the sidebar ad unit; I can't change google_page_url for each page. What should I do? This error is appearing only on my sidebar/navigation ad units. Is using google_page_url = document.location; fine?


  • Stylecop 4.7.37.0 has been released

    - by TATWORTH
    StyleCop 4.7.37.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:

    - Add docs for new SA1650 spelling rule.
    - Fix for 7395. Don't remove parenthesis around await expressions.
    - Insert a returns element into docs within a see element.
    - Update our tools folder StyleCop dll's.
    - Fix for 7392. Insert generic type docs for return types correctly.
    - Fix for 7393. Allow documentation elements with attributes to end the string and still be valid.
    - Make sure the MSBuild task logs the warning id and type of exception; unless the description field holds all this info, VS cannot show the text in the Error List.
    - Load custom dictionaries for multiple cultures. For a culture like en-GB, we load CustomDictionary.xml, then look for CustomDictionary.en-GB.xml and then CustomDictionary.en.xml.
    - Update standard shipping dictionaries.
    - Element documentation spelling fixes.
    - Reduce the standard dictionary.
    - Update our own devbuild StyleCop checks.
    - Don't check spelling of xml documentation attributes or anything inside <c> or <code> elements.
    - Styling updates.
    - Add timestamps for all the dependent files into the StyleCopResults.cache.
    - Add a FileSystemWatcher to all custom dictionary files.
    - Write out the full violation into the StyleCopResults.cache.
    - Change a rule's description text.
    - Styling fixes.
    - NEW RULE: Check Spelling Of Element Documentation. Fix over 2000 spelling errors in our source code. Update the VS addin to show the rule violation in more detail. Add spelling checker to the deployment.
    - Set our own Culture to en-US.
    - Documentation spelling fixes.
    - First draft of the documentation spelling checker.
    - Fix for 7325. Don't throw 1126 in goto statements.
    - Fix for 7090. Add TargetsDir to registry during install.
    - Fix for 7060. Sort usings after moving them inside namespace.
    - Fix FxCop issues.
    - Fix for 7389. Detect CpuCount on Unix/Mac.
    - Fix for 6788. Allow opening curly brackets for scope. Added new tests.
    - Updating constants.
    - Fix for 7167. Show version number of StyleCop in VS Help window.
    - Only output StyleCop excluded files if there are any.


  • Password protect an alias virtual directory

    - by Jason
    I have a main domain being hosted through CPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have http://example.com/ pointing to the main hosted files and http://example.com/mydir pointing to the subdomain files. This is achieved by an httpd.conf include from the main domain section that sets an alias:

        Alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me; I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported against the .htaccess file in the Apache error log, so I am assuming the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
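
    A quick hedged way to see which of the two document roots actually answers /mydir, and whether the auth block is consulted at all, is to compare response headers: a 401 means the alias plus auth are in effect, while a 200 serving main-site content means Apache fell back to the main host.

        # Expect "401 Unauthorized" and a WWW-Authenticate header if the alias and auth apply:
        curl -sI http://example.com/mydir/ | head -n 5
        # Then confirm access works with credentials (hypothetical user/password):
        curl -sI -u someuser:somepass http://example.com/mydir/ | head -n 5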


  • I am looking to create realistic car movement using vectors

    - by bobthemac
    I have googled how to do this and found http://www.helixsoft.nl/articles/circle/sincos.htm. I have had a go at it, but most of the functions that were shown didn't work; I just got errors because they didn't exist. I have looked at the cos and sin functions but don't understand how to use them or how to get the car movement working correctly using vectors. I have no code for that part because I am not sure what to do, sorry. Any help is appreciated. EDIT: I have restrictions: I must use the TL engine for my game, I am not allowed to add any sort of physics engine, and it must be programmed in C++. Here is a sample of what I got from trying to follow what was done in the link I provided:

        if (myEngine->KeyHeld(Key_W)) {
            length += carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_S)) {
            length -= carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_A)) {
            angle -= carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_D)) {
            angle += carSpeedIncrement;
        }

        carVolocityX = cos(angle);
        carVolocityZ = cos(angle);

        car->MoveX(carVolocityX * frameTime);
        car->MoveZ(carVolocityZ * frameTime);


  • Cryptswap boot error - can't mount?

    - by woody
    I believe I have my swap set up, but I'm not sure, because on startup it says something along the lines of "could not mount /dev/mapper/cryptswap1, M for manual, S for skip". Yet it appears to be mounted? When I run free -m the output is:

                     total       used       free     shared    buffers     cached
        Mem:          3887        769       3117          0         54        348
        -/+ buffers/cache:        366       3520
        Swap:         4026          0       4026

    sudo blkid gives:

        /dev/sda1: UUID="9fb3ccd6-3732-4989-bfa4-e943a09f1153" TYPE="ext4"
        /dev/mapper/cryptswap1: UUID="bd9fe154-8621-48b3-95d2-ae5c91f373fd" TYPE="swap"

    cat /etc/crypttab is:

        cryptswap1 /dev/sda5 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

    My /etc/fstab is:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=9fb3ccd6-3732-4989-bfa4-e943a09f1153 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        #UUID=bb0e378e-8742-435a-beda-ae7788a7c1b0 none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    cat /proc/swaps output is:

        Filename     Type       Size     Used  Priority
        /dev/dm-0    partition  4123644  0     -1

    Is my swap not set up correctly, or how can I fix my boot message?
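
    Reading the outputs above, the encrypted swap does come up in the end (/dev/dm-0 is active in /proc/swaps), so the boot message looks like a timing race: the mount check runs before cryptsetup has created /dev/mapper/cryptswap1. A hedged way to confirm everything is healthy after a boot where the message appeared:

        # Is the mapping present and the swap active?
        sudo dmsetup ls          # should list cryptswap1
        cat /proc/swaps          # should show /dev/dm-0 (the mapped cryptswap1)
        # If swap is ever missing, this recreates the mapping and enables it by hand:
        sudo cryptdisks_start cryptswap1
        sudo swapon -a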


  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). So, the timeline of what I've tried, starting on July 29th:

    - Used 301 redirects for all pages (e.g. thenewhive.com/tag/art to newhive.com/tag/art). At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company.

    So on August 5th I:

    - verified the new domain in Webmaster Tools (the old domain was already verified)
    - submitted a change of address request with Webmaster Tools / Configuration / Change of Address

    Then after another week, on August 13th, I:

    - went to Webmaster Tools / Health / Fetch as Google
    - fetched our homepage and a couple of sub-pages, all successfully
    - clicked "Submit to Index" for the homepage

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues; crawl errors: no data available. From Health / Index Status:

        Total indexed:     0
        Ever crawled:      42,490
        Not selected:      12
        Blocked by robots: 0

    I'm really at a loss here; any help would be appreciated.
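
    Since the dashboard is silent, it's worth independently confirming that the redirects look to a crawler the way they're described above; a hedged check with curl (URLs from the question):

        # Each old URL should answer with a single 301 and a Location on the new domain:
        curl -sI http://thenewhive.com/ | head -n 5
        curl -sI http://thenewhive.com/tag/art | head -n 5
        # Follow the chain end to end; multiple hops or a 302 anywhere can stall reindexing:
        curl -sIL -o /dev/null -w '%{http_code} %{url_effective}\n' http://thenewhive.com/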


  • Install on Acer Aspire 4752

    - by user216962
    I am at my wit's end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB and got the error:

        [Errno 5] Input/output error
        This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick: same error. I tried different versions of Ubuntu: same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices. I can boot with the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu; everything was fine, except it said the hard drive was hot. I tried a different hard drive: same error as above. I ran a test with memtest86: everything was fine. No matter what I do, using USB gives me the Errno 5 error. I then switched to using DVDs, and now I keep getting a decompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
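
    Errno 5 during installation (with the live session itself working) very often traces back to a corrupted ISO or corrupted written media rather than the hardware, so a hedged first step is to verify both before the next attempt. The filename below is an assumption; compare the hash against the SHA256SUMS file published on releases.ubuntu.com for your exact image:

        # Verify the downloaded image:
        sha256sum ubuntu-14.04-desktop-amd64.iso
        # After writing the USB/DVD, the boot menu's "Check disc for defects" entry
        # re-verifies the written media itself before you reach the installer.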


  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem with trying to compile a lowlatency flavor of the latest 3.7 kernel. I retrieved the prepatched source from Launchpad using bzr, compiled it with the usual make-kpkg using the current config file plus default options for the rest, installed the kernel, and booted into it. Everything works except the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So I tried recompiling these and succeeded with both (no errors were reported): the wl driver required a minor adjustment to a system.h include, while the latest fglrx 12.11 beta11 (released yesterday, Dec. 3rd, 2012) compiled without a hitch. Yet when I try to modprobe either module (both having in common the fact that they were built after the kernel, fglrx as a deb and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphics driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware", can't remember), and neither fglrx nor wl is loaded. More mysteriously, dmesg shows no attempt on the kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder. How is this possible? Has something fundamentally changed in the 3.7 kernel that would prevent modprobing of these? I know that there is a module-signing option that was merged recently, but as far as I could tell the kernel config file generated by the build process had that disabled. OTOH, while building the wl driver, I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading of those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.
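
    One mundane thing worth ruling out before blaming kernel changes (a guess, but it matches the symptoms): modprobe never scans the directory tree itself, it only consults the modules.dep index for the running kernel, and a "FATAL: module not found" error is exactly what a stale or missing index produces even when the .ko files are in place. A hedged check and fix:

        # Confirm which kernel is running and that the modules landed under it:
        uname -r
        find /lib/modules/$(uname -r) -name 'wl.ko' -o -name 'fglrx.ko'
        # Rebuild the dependency index for the running kernel, then retry:
        sudo depmod -a
        sudo modprobe wl
        sudo modprobe fglrx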


  • Ubuntu update deleted entries from grub

    - by Kevin
    My computer currently has Fedora, Ubuntu, and Windows installed. I just updated Ubuntu 12.04, and on restarting, the Fedora entry was gone from GRUB. Ubuntu and Windows remained, though. I have looked at these threads:

        Fedora login gone after Ubuntu updates on a dual boot
        http://forums.fedoraforum.org/showthread.php?t=279221
        GRUB's menu.lst deleted after a kernel update

    However, I cannot figure out how to mount the drive as suggested. It does not appear in the list on the left side of Nautilus as shown in the links above. I also tried running the following as suggested above:

        sudo grub-install /dev/sdX
        sudo update-grub

    But this gave scary errors:

        /usr/sbin/grub-setup: warn: Attempting to install GRUB to a partitionless disk or to a partition. This is a BAD idea..
        /usr/sbin/grub-setup: warn: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
        /usr/sbin/grub-setup: error: will not proceed with blocklists.

    The highlighted drive below is where Fedora lives. Thanks for any help reversing Ubuntu's decision to delete this from GRUB.
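
    For what it's worth, re-detecting Fedora usually doesn't require grub-install at all (those warnings come from pointing it at a partition rather than a whole disk). A hedged sketch of the usual recovery; /dev/sdXY below is a stand-in for the Fedora root partition, which the question doesn't name:

        # os-prober scans the other partitions for installed systems; update-grub adds what it finds:
        sudo os-prober
        sudo update-grub
        # If Fedora still doesn't appear, check that its partition is intact and readable:
        sudo mkdir -p /mnt/fedora
        sudo mount /dev/sdXY /mnt/fedora
        ls /mnt/fedora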


  • IRQ Conflicts Causing Video Card and Boot Problems?

    - by sanpatricio
    tl;dr - I have 4 devices sharing 1 IRQ. Is this bad, and how do I tell the BIOS to stop it? Background: I have an old Dell GX280 dual Pentium 4 that I (semi) resurrected last weekend with an installation of Ubuntu 12.04. Everything was going fine for the first several hours, until a problem that plagued me when WinXP was on that machine happened -- it froze. Completely froze. None of the myriad ways I have found here on Ask Ubuntu helped me regain control, except a long press of the power button to shut it off. Clearly, this wasn't a software/WinXP issue. After much googling, I found that hardware conflicts can often cause this sort of total lock-up, and with all the odd blocks of yellow and flecks of color showing on my screen (both WinXP and Ubuntu) I figured my old GeForce 7600 was failing and causing these odd issues. (A good canned-air dusting of the entire interior fixed the color-fleck problem.) Again through much googling, and numerous answers found on Ask Ubuntu, I somehow stumbled onto the lshw command. Going through its output line by line, I found that I have four devices sharing IRQ 16: eth0, wlan0, ide0 (DVD-RW), and my video card. In hindsight, I can recall weird instances of my Ethernet connection to another computer not working when I thought it should. I never fully troubleshot those issues, so it could be a coincidence. The other thing that has been plaguing me since installing Ubuntu (it wasn't there during WinXP) is that the monitor periodically gets no signal from Ubuntu during boot. The first couple of days, it would disappear after the Dell boot screen and reappear at the Ubuntu login. Now, it disappears after the Dell boot screen and doesn't return at all -- I have to hit F12, where I can load a safe-mode version of Ubuntu and get more details like dmesg and lsdev. I also ran memtest86 overnight and woke up to zero errors, so failing RAM is out. Where do I go from here?
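
    On the tl;dr question itself: four devices on one line is ordinary for PCI, which was designed around shared, level-triggered interrupts, so sharing IRQ 16 is not by itself a fault; lock-ups are more often a driver or thermal problem. A quick hedged way to watch the sharing and spot a misbehaving line:

        # Each row is an IRQ; devices sharing a line are listed together on the right:
        cat /proc/interrupts
        # Refresh every second and watch for a counter climbing absurdly fast (an interrupt storm):
        watch -n1 cat /proc/interrupts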

