Search Results

Search found 73305 results on 2933 pages for 'copy run start'.


  • How do you handle measuring Code Coverage in JavaScript

    - by Dancrumb
    In order to measure code coverage for JavaScript unit tests, one needs to instrument the code, run the tests, and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question: how do you handle this? One thought I had was to run the unit tests on the production code and use that for my pass/fail result. I would then create a shadow copy of my production code, with instrumentation, and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this?
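
    One way to mechanise the shadow-copy idea is a small wrapper script. A minimal sketch, assuming the JSCoverage instrumenter and a hypothetical run-tests.sh test runner (both names are placeholders for whatever tooling you actually use):

        #!/bin/sh
        # Gate the build on the untouched production code (pass/fail).
        ./run-tests.sh src/ || exit 1

        # Build an instrumented shadow copy and re-run the same tests on it;
        # this second run exists only to produce the coverage statistics.
        jscoverage src/ src-instrumented/
        ./run-tests.sh src-instrumented/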

    Read the article

  • TFS 2008 to TFS 2010 moves and some issues

    - by Enrique Lima
    There have been many things going on this year around TFS. Most of them had to do with migrations (I don't call them upgrades for the most part, since they involved new hardware and such). Many were implementations using the Conchango SfTS template (now EMC), but there were others that were CMMI or Agile 4.0. Everything would move just fine, no issues. That was until you attempted to run Test Case Management or run the last configuration steps for Lab Management. There is an error that states a project is not ready to run or integrate with Test or Lab Management. And while there was some documentation on how to adjust and update the Agile WITs to work with it, there was still some disconnect in making it work with CMMI. Now there is a great post on how to run the “fix” from end to end. Check the post here: TFS 2010: Enable Test Case Management for upgraded Team Projects

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers':

    - Application server: should be able to run CentOS (or another light Linux distro), Apache, PHP and GD (so it does rely on its CPU). Should be extremely reliable and fast.
    - Database server: should be able to run MySQL and... well, do nothing else :P. Should be extremely reliable and fast.
    - Storage server: should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.) and do nothing else. Should be extremely reliable and fast.

    So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions:

    - What services do you recommend?
    - Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
    - How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
    - What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles? Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network, or do you only have to pay "including CDN"?
    - Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description?

    I hope you can answer my questions.

    Answer so far: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Tip #15: How To Debug Unit Tests During Maven Builds

    - by ByronNevins
    It must be really, really hard to step through unit tests in a debugger during a Maven build. Right? Wrong! Here is how I do it:

    1) Set up these environment variables (each value on a single line, with no line breaks):

        MAVEN_OPTS=-Xmx1024m -Xms256m -XX:MaxPermSize=512m
        MAVEN_OPTS_DEBUG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9999
        MAVEN_OPTS_REG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m

    2) Create 2 scripts or aliases like so:

        maveny.bat:  set MAVEN_OPTS=%MAVEN_OPTS_DEBUG%
        mavenn.bat:  set MAVEN_OPTS=%MAVEN_OPTS_REG%

    To debug, do this:

        1. Run maveny.bat
        2. Run mvn install
        3. Attach your debugger to port 9999 (set breakpoints, of course)

    When Maven gets to the unit test phase it will hit your breakpoint and wait for you. When done debugging, simply run mavenn.bat.

    Notes: If it takes a while to do the build then you don't really need to set the suspend=y flag. If you set the suspend=n flag then you can just leave it on -- but only one Maven build can run at a time because of the debug port conflict.
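
    For what it's worth, a rough bash equivalent of the two .bat helpers above (the file names are my own; the JDWP options are copied verbatim from the tip, so adjust the port to taste). Source these rather than executing them, so the variable lands in your interactive shell:

        # maveny.sh - debugging on: Maven's JVM waits on port 9999 for a debugger
        # usage: . maveny.sh
        export MAVEN_OPTS="-Xmx1024m -Xms256m -XX:MaxPermSize=512m \
          -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9999"

        # mavenn.sh - debugging off again
        # usage: . mavenn.sh
        export MAVEN_OPTS="-Xmx1024m -Xms256m -XX:MaxPermSize=512m"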

    Read the article

  • What's the story with TRIM support?

    - by Kent Boogaart
    The situation with TRIM in Ubuntu has me utterly confused. I don't know whether I need to manually run it or not. Sources I've found are disparate and unclear. I have Ubuntu 10.04 with latest updates (including kernel 2.6.32-26). I have an Intel X-25M drive with latest firmware, and it is formatted as ext4. So, does TRIM support run automatically? If not, how do I check if I need to run it, and then how do I run it if necessary?
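
    For reference, a hedged sketch of checking and running TRIM by hand. This assumes a kernel new enough to expose the FITRIM ioctl and ext4 discard support, which the 2.6.32 kernel in 10.04 may predate, so treat it as a starting point rather than a confirmed answer for this exact setup:

        # One-off TRIM of a mounted ext4 filesystem (fstrim is part of util-linux)
        sudo fstrim -v /

        # Continuous TRIM instead: add the discard option to the filesystem's
        # /etc/fstab line, e.g.
        #   UUID=...  /  ext4  errors=remount-ro,discard  0  1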

    Read the article

  • TechEd 2010 Twitter Book Give Away

    - by Scott Dorman
    If you’re going to TechEd North America this year, Sams Publishing will be giving away 9 advance reader copies. In order to win a copy, be sure to follow the InformIT Twitter account and the #TechEd hashtag. Sporadically throughout the day a tweet will be sent out stating that the first person who comes to the booth and mentions my book will get a copy. The giveaway will probably occur over multiple days, so be sure to keep an eye on Twitter. Technorati Tags: Books,Sams Teach Yourself C# 2010

    Read the article

  • Weird execution of ruby/git executables in Windows

    - by Frexuz
    Something strange has happened. I can't run some command-line executables in Windows anymore. Steps:

        1. Open cmd
        2. Run an executable, such as ruby -v or git -h

    When I do that, a new command prompt opens, running that command (I think; it's too fast to see), and instantly closes again. I've managed to print-screen the new command prompt, and it shows that it's running inside this path:

        C:\Documents and Settings\Administrator\Local Settings\Temp\3582-490

    Inside this folder is the executable I'm trying to run. If I run ruby, then ruby.exe is in there. If I run git, then git.exe is in there. And it's always emptying the folder in between, so there is always just one .exe file.
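
    A few hedged diagnostics that can narrow this down from a cmd prompt. All three are stock Windows commands and only report how .exe files are resolved and launched; they change nothing:

        rem Does a Temp\3582-490 entry appear ahead of the Ruby/Git directories?
        echo %PATH%

        rem Is the .exe file type still wired to the normal launcher?
        rem Expected output: .exe=exefile
        assoc .exe

        rem Expected output: exefile="%1" %*
        ftype exefile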

    Read the article

  • Is it possible to keep nm-applet running between invocations of WM startup?

    - by serverninja
    I am using nm-applet to interface with NetworkManager, running xmonad as a window manager. My X sessions (including nm-applet) are set up with a /usr/local/bin/xmonad.start script. My question is: how can I keep nm-applet running in the background as long as X is running, but not necessarily xmonad? As mentioned above, it is being started with xmonad (and dying with it when xmonad is restarted, etc.). I am using gdm to manage my X sessions, and I'm running 10.10. Where's a good place to start nm-applet to suit my particular needs? I need to remove it from the control of xmonad, but don't know where to start it otherwise. Any help, tips, etc. appreciated.

    Edit: the problem seems to be with how I have integrated xmonad. I have the session script as a file in /usr/share/xsessions/xmonad.desktop with the following contents:

        [Desktop Entry]
        Encoding=UTF-8
        Name=XMonad
        Comment=Lightweight tiling window manager
        Exec=/usr/local/bin/xmonad.start
        Icon=xmonad.png
        Type=XSession

    /usr/local/bin/xmonad.start contains the following:

        #!/bin/bash
        xrdb -merge ~/.Xresources
        xcompmgr -c &
        trayer --edge top --align right --SetDockType true --SetPartialStrut true \
               --expand true --width 8 --heighttype pixel --height 18 \
               --transparent true --alpha 0 --tint 0x000000 &
        gnome-settings-daemon &
        gnome-screensaver &
        if [ -x /usr/bin/nm-applet ] ; then
            nm-applet --sm-disable &
        fi
        /usr/bin/urxvtd -q -o -f &
        eval `ssh-agent` &
        if [ -x /usr/bin/gnome-power-manager ] ; then
            sleep 1
            gnome-power-manager &
        fi
        /usr/bin/gnome-volume-control-applet &
        exec xmonad

    The question is: how do I integrate xmonad, gdm, X, etc. in such a manner as to replicate the behavior I currently have, except with nm-applet (and possibly other programs) running whether or not xmonad is?
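
    One hedged option: take nm-applet out of xmonad.start entirely and hand it to the session layer via an XDG autostart entry, so it lives and dies with the X session rather than with the window manager. A sketch of ~/.config/autostart/nm-applet.desktop follows, with the caveat that something in the session (e.g. gnome-session) must actually honor the autostart directories; a bare Exec=xmonad.start script like the one above does not:

        [Desktop Entry]
        Type=Application
        Name=Network Manager Applet
        Exec=nm-applet --sm-disable
        Terminal=false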

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report that a file it is supposed to run is "not found", when the file hasn't been touched, and in fact the entire system hasn't been touched, since it last ran successfully?

    I have a crontab schedule I define by doing:

        sudo crontab -e

    In it, I have dozens of cron jobs that run successfully. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab; all scripts identify the shell on their first line. Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In that, the output clearly shows the script successfully running time after time for weeks, and then suddenly the following appears:

        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found

    If I do the ls command, copying and pasting that exact path from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report the file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs at the next scheduled time, putting its output in the log and no longer reporting the script is "not found". It seems to me the contents of the script being run are irrelevant, since cron doesn't even process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know is continuing to run, because I have its output mailed to me. So I know cron is running and continues to run at least one other job, even after it suddenly reports this job's script is "not found". All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will occur again; it will likely be weeks from now. By the way, I am running a script to synchronize the clock, and I see the time is exactly what it should be.
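
    A "not found" from /bin/sh often refers to the interpreter named on the script's first line rather than to the script itself, so a couple of hedged checks worth capturing the next time it happens (standard tools only; the path is the one from the log):

        # Does the interpreter named in the shebang exist at that moment?
        head -1 /home/iupress/bin/sync-email_images

        # CRLF line endings make the shebang point at "/bin/sh\r" (reported as
        # "not found"); file(1) calls out CRLF terminators explicitly.
        file /home/iupress/bin/sync-email_images

        # Execute bit and underlying mount still in place?
        ls -l /home/iupress/bin/sync-email_images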

    Read the article

  • Program instantly closing [migrated]

    - by Ben Clayton
    I made this program, and when I compiled it there were no errors, but the program just instantly closed. Any answers would be appreciated.

        #include <iostream> // Main commands
        #include <string>   // String commands
        #include <windows.h> // Sleep

        using namespace std;

        int main ()
        {
            // Declaring variables
            float a;
            bool end;
            std::string input;
            end = false; // Making sure program doesn't end instantly
            cout << "Enter start then the number you want to count down from." << ".\n";
            while (end = false){
                cin >> input;
                cout << ".\n";
                if (input.find("end") != std::string::npos) // Ends the program if user types end
                    end = true;
                else if (input.find("start" || /* || is or operator*/ "restart") != std::string::npos) // Sets up the countdown timer if the user types start
                {
                    cin >> a;
                    cout << ".\n";
                    while (a>0){
                        Sleep(100);
                        a = a - 0.1;
                        cout << a << ".\n";
                    }
                    cout << "Finished! Enter restart and then another number, or enter end to close the program" << ".\n";
                }
                else // Tells user to start program
                    cout << "Enter start";
            }
            return 0; // Ends program when (end = true)
        }

    Read the article

  • Can i make a launcher shortcut perform different actions based on a condition

    - by Nirmik
    What I want to do is pretty simple to understand: I want the same launcher shortcut to act like an ON/OFF switch. For example, suppose I have made a launcher to start LAMPP with the command:

        gksudo /opt/lampp/lampp start

    This works properly to start lampp. Now, what I want is: if LAMPP is not on, the launcher should execute the above command and start lampp, while if lampp is on, I want it to execute the command:

        gksudo /opt/lampp/lampp stop

    and stop lampp. I want these 2 commands to be executed by the same launcher, where the command to be executed is decided on the basis of a condition (LAMPP being ON or OFF in the above example). Can I do this? And how? I guess writing a bash script and then making the launcher execute the bash script would do it (see the sketch below), but then how do I check whether lampp is on or off?
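
    A minimal sketch of such a toggle script. Assumptions: the launcher invokes it via gksudo, and "any process whose command line mentions /opt/lampp" is a good-enough on/off test (parsing the output of /opt/lampp/lampp status would be the more precise alternative):

        #!/bin/bash
        # lampp-toggle.sh - start LAMPP when it is off, stop it when it is on
        if pgrep -f /opt/lampp > /dev/null; then
            /opt/lampp/lampp stop      # something under /opt/lampp is running
        else
            /opt/lampp/lampp start     # nothing running yet
        fi

    The launcher command then becomes gksudo /path/to/lampp-toggle.sh (keep the script outside /opt/lampp so it doesn't match its own pgrep pattern).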

    Read the article

  • How do I X forward a Windows application to a Linux system using ssh?

    - by triunenature
    Ok, so if I have two Linux machines (A and B) and I have a program on one and want to run it on B, I do:

        user@LinuxA:~$ ssh -X LinuxB
        user@LinuxB:~$ programName
        (displays on the LinuxA machine)

    Ok, same thing for WindowsA -> LinuxB (program on Linux):

        Start the Xming X server on Windows.
        Run PuTTY, using X11 port forwarding with :0.0.
        After connecting to LinuxB, run the program; it loads in Windows!

    Now here is the question: WindowsA -> LinuxB, with the program on Windows. Run a Windows program on Linux, using Windows resources. How do I make it work? BTW, I know it can be done because years ago I read a white paper on it, but never actually tested it out.

    Read the article

  • Consecutive versus Parallel NUnit Testing

    - by Jacobm001
    My office has roughly ~300 webpages that should be tested on a fairly regular basis. I'm working with NUnit, Selenium, and C# in Visual Studio 2010. I used this framework as a basis, and I do have a few working tests. The problem I'm running into is that when I run the entire suite, a random test (or tests) will fail in each run. If they're run individually, they all pass. My guess is that NUnit is trying to run all 7 tests at the same time and the browser can't support this, for obvious reasons. Watching the browser visually, this does seem to be the case. Looking at the screenshot below, I need to figure out a way in which the tests under Index_Tests are run sequentially, not in parallel.

    Errors:

        Selenium2.OfficeClass.Tests.Index_Tests.index_4:
        OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}

        Selenium2.OfficeClass.Tests.Index_Tests.index_7:
        OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}

    Example with one test:

        using OpenQA.Selenium;
        using NUnit.Framework;

        namespace Selenium2.OfficeClass.Tests
        {
            [TestFixture]
            public class Index_Tests : TestBase
            {
                public IWebDriver driver;

                [TestFixtureSetUp]
                public void TestFixtureSetUp()
                {
                    driver = StartBrowser();
                }

                [TestFixtureTearDown]
                public void TestFixtureTearDown()
                {
                    driver.Quit();
                }

                [Test]
                public void index_1()
                {
                    OfficeClass index = new OfficeClass(driver);
                    index.Navigate("http://url_goeshere");
                    index.SendKeyID("txtFiscalYear", "input");
                    index.SendKeyID("txtIndex", "");
                    index.SendKeyID("txtActivity", "input");
                    index.ClickID("btnDisplay");
                }
            }
        }

    Read the article

  • Error running phusion passenger in standalone mode

    - by msidell
    I'm trying to run standalone phusion passenger so that I can run different ruby rvm configurations on the same host. I already have ruby and passenger running fine on this host. I am following the instructions here. When I run standalone passenger the first time, it appears to successfully install nginx. But then when it tries to run, I get this error:

        [root@clark directra]# passenger start -a 127.0.0.1 -p 3001 -d --user dweb
        *** ERROR ***
        Could not start Passenger Nginx core:
        nginx: [alert] could not open error log file:
          open() "/tmp/passenger-standalone.16757/logs/error.log" failed (2: No such file or directory)
        nginx: [alert] Unable to start the Phusion Passenger watchdog
          (/var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/PassengerWatchdog):
          Permission denied (13) (13: Permission denied)
        Stopping web server... done

    FWIW, /tmp is writeable. Any idea what's wrong?

    Read the article

  • starting up with VPS or cloud hosting? [closed]

    - by FlyOn
    Possible Duplicate: How to find web hosting that meets my requirements?

    Summary: I want to start hosting my product. I'd like to register domains (at some point). I'm a Linux beginner. Thinking about scalability and price, am I better off on a VPS to get started, or would some form of cloud hosting be better (not being familiar with either)?

    Full question: I'm creating a product where people can create their own 3D representations of whatever data/info they have, and (re)organise that data. The product is coming along beautifully in my local environment, but it's about time I start getting some form of hosting ready, and I could really use some advice on where and how to get started.

    I'd like people to be able to move/register their own domains on my server. I could start without this just to demo the product, but it would be the very first item on the todo list. I'd like to automatically copy some files / install databases etc. for each domain. I probably want to see if I can let users manage their own subdomains at some point, but for now I'd like to start as simple as possible.

    I've always been on a Windows machine, so my Linux experience is quite basic. I really don't mind getting into it, but I'm thinking it's better to get my product out first of all and see where to go from there. Although... I'd like things to be scalable. If I set up some reseller VPS now which only scales to 100 domains or so, that means I have to set up something else / move again when I pass that level, or that I'm in trouble if I suddenly get lots of new customers... hmm.

    Finally, I need to start cheap. I'm putting all I have into starting this company and live on very little. So before I have any customers, 50 dollars a month is a fair bit and 100 dollars a month may be too much. If anyone has some tips to help get me started I'd be really grateful.

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11.

    The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.

    In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have different processor types. You can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target system must run Solaris 11, and you need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.
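
    As an illustration of that last step, a hedged sketch of the cpu-arch change (ldg1 and target-host are placeholders; check the 2.2 admin guide for the exact constraints, e.g. the domain must be stopped and unbound before the property can be changed):

        # On the source machine, mark the guest as using the generic CPU class:
        ldm set-domain cpu-arch=generic ldg1

        # After rebinding and restarting the guest, migrate as usual:
        ldm migrate-domain ldg1 target-host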

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    Here is my problem: I am running Ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003f3b6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           63      481949      240943+  83  Linux
        /dev/sda2           481950  2910640634  1455079342+  fd  Linux raid autodetect
        /dev/sda3       2910640635  2930272064     9815715   82  Linux swap / Solaris

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00069785

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdb2       2910158685  2930272064    10056690   82  Linux swap / Solaris

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdc2       2910158685  2930272064    10056690   82  Linux swap / Solaris

        Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f14de

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdd2       2910158685  2930272064    10056690   82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well; I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyway to overcome the problem with /boot installed on that disk and grub2 installed on the MBR of /dev/sda.
    Anyway, I did what I always did:

        1. Start Knoppix.
        2. Fire up the raid: sudo mdadm --examine --scan
           which returns: ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
        3. Start it up: sudo mdadm --assemble /dev/md127
        4. Fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2
        5. Remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
        6. Stop the raid: sudo mdadm -S /dev/md127
        7. Take out the disk, replace it with a new one, and create the same partitions as on the failing one.
        8. Add it to the raid: sudo mdadm --assemble /dev/md127 followed by sudo mdadm /dev/md127 --add /dev/sda2
        9. Wait 4 hours.

    All looks fine: cat /proc/mdstat returns:

        Personalities : [raid10]
        md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
              2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

        unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

        /dev/md127:
                Version : 0.90
          Creation Time : Wed Jun 10 13:08:46 2009
             Raid Level : raid10
             Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
          Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 127
            Persistence : Superblock is persistent

            Update Time : Thu Mar 21 16:27:40 2013
                  State : clean
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2
             Chunk Size : 64K

                   UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
                 Events : 0.4824680

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, and sudo vgscan --mknodes all returned "No volume groups found." I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
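
    Not part of the original question, but the standard hedged recovery path when the md array is healthy and LVM sees nothing is to check whether the PV label survived and, if a metadata backup is reachable (LVM keeps one under /etc/lvm/backup on the original root filesystem), restore it. Roughly, with <vgname> and the PV UUID taken from that backup file, and /mnt/oldroot standing in for wherever the old root is mounted:

        # Is there still an LVM physical-volume label on the array?
        sudo pvscan
        sudo pvck /dev/md127

        # Recreate the PV label and metadata from the backup, then reactivate:
        sudo pvcreate --uuid "<PV-UUID-from-backup>" \
             --restorefile /mnt/oldroot/etc/lvm/backup/<vgname> /dev/md127
        sudo vgcfgrestore -f /mnt/oldroot/etc/lvm/backup/<vgname> <vgname>
        sudo vgchange -ay <vgname>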

    Read the article

  • Why doesn't 12.04 recognize my ATI Radeon X300SE graphics card?

    - by Nomad Zero
    So far everything, including the BCM4321 card, is working; however, the OEM ATI Radeon X300SE (RV370) is not recognized. When I go into System Details the card comes up as unknown. When I run lspci the card shows up as:

        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV370 5B60 [Radeon X300 (PCIE)]
        01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV370 [Radeon X300SE]

    I have checked Additional Drivers, but the card does not show up as needing a driver. So far this is causing the system to run warmer, as the processor is doing a bit of the work of driving Unity, and any full-screen videos tend to be rather choppy. Is there an available driver that will fix my issue, and what is the install method? I am not scared of running a terminal, so no worries there; as long as I have instructions, I can find my way. I am running 12.04 LTS, kernel 3.2.0-25-generic-pae.

    Read the article

  • Ubuntu 64bit Black Screen on Minecraft

    - by Signify
    I have tried posting on the forums, but I really need help (I'm a server admin and really don't want to have to switch to Windows just to run Minecraft). Anyhow, I was originally running openjdk6, as I was told that 7 was unstable, and was getting periodic lag spikes while walking (at least once every 3 seconds the screen would freeze for a tenth of a second). After that, I attempted to install Sun's Java JDK7 (I couldn't get ahold of 6 without signing up for Oracle's newsletters). Upon attempting to run Minecraft, I got a black screen after logging in, with this error message:

        27 achievements
        182 recipes
        Setting user: Thunder7102, -1618112820878091307
        Exception in thread "Minecraft main thread" java.lang.UnsatisfiedLinkError: /home/noiro/.minecraft/bin/natives/liblwjgl.so: /home/noiro/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1939)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1864)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1059)
            at org.lwjgl.Sys$1.run(Sys.java:69)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
            at org.lwjgl.Sys.loadLibrary(Sys.java:81)
            at org.lwjgl.Sys.<clinit>(Sys.java:98)
            at org.lwjgl.opengl.Display.<clinit>(Display.java:132)
            at net.minecraft.client.Minecraft.a(SourceFile:184)
            at net.minecraft.client.Minecraft.run(SourceFile:657)
            at java.lang.Thread.run(Thread.java:722)

    Now, this got me fed up, so I tried to install a Windows 7 virtual machine through VirtualBox. I gave it 256 MB of graphics memory with 2D and 3D acceleration and 3 GB of RAM. I installed Java JDK7 for Windows (which does work, from experience on my other Windows 7 partition). Once again, a black screen after login. What the heck is going on, guys?

    My system specs: Ubuntu 12.04 64-bit, fully updated, running GNOME 3; Nvidia GTS 450 1.3GB OC'd; AMD Athlon II X4 2.8GHz; 6GB of RAM.

    So, what do you think?
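
    The UnsatisfiedLinkError is informative by itself: a 64-bit JVM is being handed 32-bit (ELFCLASS32) LWJGL natives. Two hedged checks and one common remedy, using only standard tools and the paths from the trace:

        # Look for "64-Bit Server VM" in the version banner
        java -version

        # Confirm the native library really is 32-bit
        file ~/.minecraft/bin/natives/liblwjgl.so

        # Common remedy: move the cached natives aside so the launcher fetches
        # ones matching the JVM (kept as a backup rather than deleted)
        mv ~/.minecraft/bin/natives ~/.minecraft/bin/natives.bak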

    Read the article

  • How to upgrade a remote server from 8.10 to newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (since upgraded to 9.04; see the update below) that I can only access via SSH. If I run apt-get update I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:

        1. Run apt-get update, which returns errors like:

           Err http://gb.archive.ubuntu.com intrepid/main Packages
             404 Not Found
           [and the same for many other packages]

        2. Run do-release-upgrade, which returns:

           Checking for a new ubuntu release
           Failed Upgrade tool signature
           Failed Upgrade tool
           Done downloading
           extracting 'jaunty.tar.gz'
           Failed to extract
           Extracting the upgrade failed. There may be a problem with the network or with the server.

        3. Edited /etc/update-manager/release-upgrades and changed Prompt=normal to Prompt=lts (as suggested here). Running do-release-upgrade after this returns:

           Checking for a new ubuntu release
           current dist not found in meta-release file
           No new release found

        4. (Updated) Followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order...

    So basically, it seems like I cannot upgrade because my current distro is out of date and not supported. Is there a way to upgrade directly to 10.x or 11.x? Note: as this is a server, I only have command-line access.

    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources. I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server too. However, now I cannot upgrade from 9.04 to 9.10. Running do-release-upgrade produces the same error as #2 above, except it "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:

        A fatal error occurred

        Please report this as a bug and include the files /var/log/dist-upgrade/main.log
        and /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted.
        Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.

        Traceback (most recent call last):
          File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in <module>
            sys.exit(main())
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main
            if app.run():
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run
            return self.fullUpgrade()
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade
            if not self.doPostInitialUpdate():
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate
            self.quirks.run("PostInitialUpdate")
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run
            for plugin in self.plugin_manager.get_plugins(condition):
          File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins
            filenames = self.get_plugin_files()
          File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files
            basenames = [x for x in os.listdir(dirname)
        OSError: [Errno 2] No such file or directory: './plugins'

    It does say to report the bug, but since this is an old, unsupported release I don't know if it's worth doing. However, is there a way round this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?
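
    For anyone hitting the same wall, the old-releases trick mentioned in the update looks roughly like this (a sketch of /etc/apt/sources.list lines for an EOL release; mirror whatever components the file already lists):

        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse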

    Read the article

  • Cannot authenticate from system tools

    - by alfish
    Whenever I try to run a program from GNOME, I get messages like:

        Authentication is required to run the GParted Partition Editor

    The same goes for all programs that need root permission that I try to run from 'System Tools' in my gnome-fallback session. However, the same user can become root in gnome-terminal with no problem (I added the user to sudoers). I should mention that I changed the user's password after the OS install, so I think I need to update something, but I don't know what. I appreciate your hints.

    Read the article

  • Exporting from the GAC

    - by TATWORTH
    Recently I had need to export from the GAC - here are some useful resources:

        http://gacassemblyexporter.codeplex.com/SourceControl/list/changesets
        http://blogs.msdn.com/b/johnwpowell/archive/2009/01/14/how-to-copy-an-assembly-from-the-gac.aspx

    There is an alternative method at http://aspdotnetcodebook.blogspot.co.uk/2008/09/get-copy-of-dll-in-gac-or-add-reference.html that involves de-installing what is part of the operating system - I would recommend this as a method of last resort.
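
    As a rough illustration of the manual approach those posts describe (assembly name, version and public key token below are placeholders; on pre-.NET-4 systems the GAC lives under %windir%\assembly, which Explorer hides behind a shell extension but cmd reads directly):

        rem List the physical folders behind the GAC shell view
        dir "%windir%\assembly\GAC_MSIL\MyAssembly"

        rem Copy the DLL out of its versioned subfolder
        copy "%windir%\assembly\GAC_MSIL\MyAssembly\1.0.0.0__abc123def4567890\MyAssembly.dll" C:\temp\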

    Read the article

  • ATI Radeon XPress 1200 Graphics Card Driver Install Problems

    - by 16trohrt
    I've got an ATI Radeon XPress 1200 graphics card, and the default driver isn't cutting it. I downloaded the proprietary driver .run file ("ati-driver-installer-9-3-x86.x86_64.run") from AMD and tried to run it with sudo sh ati-driver-installer-9-3-x86.x86_64.run. Every time I try to do it I get this error:

        Error: ./default_policy.sh does not support version
        default:v2:i686:lib::none:3.8.0-25-generic; make sure that the version is
        being correctly set by --iscurrentdistro

    I don't know what's throwing it, and I would really appreciate some help. Thanks in advance! :)

    Read the article
