Search Results

Search found 42786 results on 1712 pages for 'install from source'.

Page 362 of 1712

  • compile software with older version of gcc and linux kernel

    - by ant2009
    Distributor ID: SUSE LINUX
    Description: openSUSE 11.4 (x86_64)
    Release: 11.4
    Codename: Celadon
    gcc (SUSE Linux) 4.5.1
    Linux linux-14ay 2.6.37.6-0.20-desktop #1 SMP PREEMPT 2011-12-19 23:39:38 +0100 x86_64 x86_64 x86_64 GNU/Linux

    Hello, I am trying to install software on the above system. However, the software that I need requires an earlier version of gcc (version 4.1); my currently installed version is 4.5.1. Is it possible to install gcc 4.1 on my current system? Where would I get that gcc version from? Also, I get this message about the Linux kernel: "The current kernel version (2.6.37.6-0.20-desktop) is later than the version currently supported by this software (2.6.5)". Is it possible to install this earlier kernel, and where would I get that from? Many thanks for any suggestions.
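
    A minimal sketch of one way forward, assuming an older gcc package (the name gcc41 and the gcc-4.1 binary it installs are assumptions; check what your repositories actually carry with zypper search gcc). Installing it alongside the default avoids touching the system compiler:

        sudo zypper install gcc41 gcc41-c++
        # point the build at the older compiler instead of the system default:
        CC=gcc-4.1 CXX=g++-4.1 ./configure
        make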


  • Installing mysql-devel to match MySQL version

    - by markxi
    I'm running MySQL 5.1.52 on CentOS 4.6 and I'm trying to install mysql-devel to match my MySQL version. If I do yum install mysql-devel it wants to upgrade MySQL to 5.1.58, yet if I do yum search mysql-devel, in addition to finding 5.1.58, I get a match for:

        5.1.52-jason.1 .. utterramblings ..
        Matched from: mysql-devel

    Why is yum trying to install an updated version, and is there any way to get it to install the correct version without the need to upgrade MySQL? I'd appreciate any help.
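
    A hedged sketch: yum accepts an explicit name-version, so pinning to the matched utterramblings build (version string taken from the search output above) should sidestep the upgrade:

        yum install mysql-devel-5.1.52-jason.1
        # or hide the newer build from this one transaction:
        yum --exclude=mysql*5.1.58* install mysql-devel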


  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with restoring sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails. In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points at for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups).

    We have custom event handlers and workflows built with Visual Studio, and deploy the dlls to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain two versions of the workflows: version 1 and version 2. During the deploy we use stsadm to install/activate the WFs. We also go to each library and add the new, version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 as active. Perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these version 2 WF dlls), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as it reads; no stsadm here. All seems to work after our restore: the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the dll (System.IO file not found error).

    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF dlls are still being referenced even though we restored all the folders and dbs of SharePoint? Thanks, Kevin
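
    A hedged sketch for narrowing this down: verify what is actually registered in the GAC after the rollback, and clear ASP.NET's shadow-copy cache, which can keep serving a removed assembly. The assembly name is a placeholder, and the temp path is the usual default for a 64-bit .NET 2.0/3.5 app pool:

        gacutil /l MyCompany.Workflows
        iisreset /stop
        rd /s /q "%SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files"
        iisreset /start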


  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts that I've made for common tasks, and another one that would depend on other packages (like w64codecs) that I always install but forget that I need to until I go to do something and it's not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy," I mean to say that I'm looking for the method with the fewest steps requiring the least technical knowledge and, most importantly, taking the least time.
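
    A minimal sketch of the by-hand route with dpkg-deb; all names and the dependency list below are placeholders:

        mkdir -p mysetup/DEBIAN mysetup/usr/local/bin
        cp ~/bin/*.sh mysetup/usr/local/bin/
        cat > mysetup/DEBIAN/control <<'EOF'
        Package: mysetup
        Version: 1.0
        Architecture: all
        Maintainer: You <you@example.com>
        Depends: w64codecs
        Description: Personal scripts and always-installed dependencies
        EOF
        dpkg-deb --build mysetup mysetup_1.0_all.deb
        sudo dpkg --install mysetup_1.0_all.deb

    The Depends: line is what pulls in the packages you always forget; dpkg itself won't fetch them, but apt-get install -f afterwards will.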


  • New installation - Win 7 or directly Win 8?

    - by Horst Walter
    My laptop's hard disk is gone, so I need to re-install my Windows. Would it be a good idea to directly install Win8 (so, actually, the preview)? Is there any known incompatibility with Win7 that would be a show stopper? I have a Dell XPS laptop; I do not expect any driver issues since the series is pretty much mainstream. Would the Win8 preview become Win8 RTM automatically via updates, or would I need to re-install Win8 RTM? The latter would be a no-go for me. The Win8 preview does not require a key, so I guess it becomes Win8 RTM and then needs to be activated (which is what I want). Basically the idea is: I have to install Windows anyway, so why not use the latest version?


  • Setting up Eclipse for C development using CDT plugin

    - by Homunculus Reticulli
    I am using Eclipse 3.5.2. I want to install the CDT plugin so that I can compile C/C++ programs. I attempted to install the CDT plugin and it failed with the following error message:

        Cannot complete the install because one or more required items could not be found.
        Software being installed: C/C++ GCC Cross Compiler Support 1.1.0.201206111645
          (org.eclipse.cdt.build.crossgcc.feature.group 1.1.0.201206111645)
        Missing requirement: C/C++ Managed Builder UI 8.1.0.201206111645
          (org.eclipse.cdt.managedbuilder.ui 8.1.0.201206111645) requires
          'bundle org.eclipse.ui.console [3.5.100,4.0.0)' but it could not be found
        Cannot satisfy dependency:
          From: CDT GCC Cross Compiler Support 1.1.0.201206111645 (org.eclipse.cdt.build.crossgcc 1.1.0.201206111645)
          To: bundle org.eclipse.cdt.managedbuilder.ui 8.1.0
        Cannot satisfy dependency:
          From: C/C++ GCC Cross Compiler Support 1.1.0.201206111645 (org.eclipse.cdt.build.crossgcc.feature.group 1.1.0.201206111645)
          To: org.eclipse.cdt.build.crossgcc [1.1.0.201206111645]

    Has anyone managed to install/use the CDT plugin with Eclipse 3.5.2?
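
    The requirement org.eclipse.ui.console [3.5.100,4.0.0) suggests that CDT 8.1 build targets a newer Eclipse than 3.5.2; a CDT stream that matches Galileo (CDT 6.x) should resolve. A hedged sketch using the p2 director from the command line; the repository URL and feature id are assumptions, so adjust them to whatever the CDT downloads page lists for Eclipse 3.5:

        eclipse -nosplash -application org.eclipse.equinox.p2.director \
          -repository http://download.eclipse.org/tools/cdt/releases/galileo \
          -installIU org.eclipse.cdt.feature.group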


  • Make exe or bat require admin privileges UAC

    - by petebob796
    I am trying to create an install CD to install multiple Windows updates and hotfixes in one go, with Autorun.inf launching a .bat (or .exe) that runs each update in turn. Currently, if I run this .bat, each update brings up a UAC prompt individually, which can be annoying. However, if I run the .bat as administrator, it can launch and install each update with just one prompt. Is there a way to force the .bat (or .exe) to require admin privileges no matter who runs it?
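
    A hedged sketch of a common self-elevation pattern for the .bat itself: net session fails without admin rights (which serves as the test), and PowerShell's -Verb RunAs re-launches the script elevated, so the whole run costs a single UAC prompt:

        @echo off
        net session >nul 2>&1
        if %errorlevel% neq 0 (
            powershell -Command "Start-Process -FilePath '%~f0' -Verb RunAs"
            exit /b
        )
        rem elevated from here on; restore the working directory first
        pushd "%~dp0"
        rem ...run each update in turn...

    For an .exe, embedding a manifest with requestedExecutionLevel set to requireAdministrator achieves the same effect.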


  • difference between compiled and installed via rpm (zypper)

    - by cherouvim
    On an openSUSE 11.1 box I download, compile and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling (make uninstall) and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG rw- Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source but was there when installed from rpm? Note that I don't necessarily care a lot about ImageMagick itself (since it now works), but generally about this kind of behaviour, because in one way or another I've seen it happen on other occasions as well.
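
    A hedged sketch of the usual cause: ./configure silently drops JPEG support when the libjpeg development headers are absent, so --with-jpeg=yes has nothing to build against, while the distribution rpm was built on a machine that had them. The package name below is an assumption for openSUSE; verify with zypper search jpeg:

        sudo zypper install libjpeg-devel
        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes
        grep -i jpeg config.log    # confirm the jpeg delegate was found before running make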


  • Ubuntu eats itself after I followed updater instruction

    - by Tony Martin
    Updater (I assume) put a no-entry style alert icon on the panel which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I (and this is from noob memory) sudo apt-get install -f. I did this, typed in the confirmation phrase, and watched apt-get systematically remove every component of Linux, both the stuff I installed and the core Ubuntu packages. I could only assume at this stage that this was for a fresh install, but of course I know better now. There's much complaint about Windows, but I've never met with advice from Microsoft tools to wipe out the operating system because of a couple of missing .dlls. So what gives?

    This was a 64-bit install of 12.04. All that is left is grub pointing to a couple of Windows recovery partitions on the hard drive. I'm tempted, but I have hopes of recovering the data that I had enough misguided faith to trust to the Linux ext4 partition. I've tried pen-driving back into it with a 32-bit iso, but I'm simply informed that Ubuntu is running in low graphics mode and get to watch the dots cycle indefinitely.

    EDIT: Thanks for the advice vis the positive request. I've got onto the machine with a 64-bit stick and can see the file structure left behind by the installation. My first instinct was to run install from the stick, but it did not seem to offer a recovery option. My question then: is there a way to recover the current installation so that if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution; the rest I could probably lash back together. I would also be interested how this disaster came about, as I see people in the know recommending this same procedure in similar circumstances. Thanks for your attention, Tony Martin
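
    A hedged sketch for at least rescuing the Evolution mail from the live-stick session before anything else is attempted. The device name is a placeholder (check with lsblk), and the Evolution paths are the usual 12.04 defaults:

        sudo mkdir -p /mnt/rescue
        sudo mount -t ext4 /dev/sda5 /mnt/rescue
        cp -a /mnt/rescue/home/tony/.local/share/evolution ~/evolution-data-backup
        cp -a /mnt/rescue/home/tony/.config/evolution ~/evolution-config-backup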


  • Netbook with Windows 7 Starter - Can I upgrade to Professional or Ultimate?

    - by Andrew J. Brehm
    There are finally Windows 7 netbooks available, but they come with Windows 7 "Starter". Can I upgrade "Starter" to Windows 7 Ultimate using a normal retail DVD (and its new licence key)? What about special netbook drivers; will they be replaced by an in-place upgrade? Does Windows 7 even have a way to upgrade to another edition, or will I have to delete and re-install? If I delete and re-install, how will I get the special netbook drivers installed again? In my case it will probably be an HP netbook (HP Mini 110-1115SA). Can I just install the drivers from the HP CD that comes with the computer? Any ideas?


  • Error building main Guest Additions Module while installing VirtualBox guest additions

    - by Praveen Sripati
    I have installed an Ubuntu 12.10 guest on an Ubuntu 12.04 host using VirtualBox. Everything is from the repositories, no direct installs. When I install the guest additions, the below error is shown in the console. Before running the command I attached the VBoxGuestAdditions.iso in the guest. The closest I could get is this article, which says to install the latest version of VirtualBox (not the one from the repository). Is there an alternate solution?

        sudo ./VBoxLinuxAdditions.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox 4.1.12 Guest Additions for Linux.........
        VirtualBox Guest Additions installer
        Removing installed version 4.1.12 of VirtualBox Guest Additions...
        Removing existing VirtualBox DKMS kernel modules ...done.
        Removing existing VirtualBox non-DKMS kernel modules ...done.
        Building the VirtualBox Guest Additions kernel modules
        The headers for the current running kernel were not found. If the following
        module compilation fails then this could be the reason.
        Building the main Guest Additions module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Doing non-kernel setup of the Guest Additions ...done.
        Installing the Window System drivers
        Warning: unknown version of the X Window System installed.
        Not installing X Window System drivers.
        Installing modules ...done.
        Installing graphics libraries and desktop services components ...done.
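
    The line about missing kernel headers is the usual culprit here; a hedged sketch for the 12.10 guest before re-running the installer:

        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        sudo ./VBoxLinuxAdditions.run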


  • BizTalk: mapping with Xslt

    - by Leonid Ganeline
    BizTalk Map Editor (Mapper) is a good editor, especially in the latest, 2010 version of BizTalk. But sometimes it still cannot do a task easily. That is when it is time for Xslt code, time to remember that the maps are executed by the Xslt engine. Right-click the Mapper grid (the field between the source and target schemas) and choose Properties / Custom XSLT Path. Enter there the name of the file with the Xslt code. Only this code will be executed; forget the picture in the Mapper, all those links and functoids.

    Let's see a real-life example. There are two source Addresses. One is on the top level and the second is inside the Member_Address record with MaxOccurs=*. The target address is placed inside the Locator record with MaxOccurs=*. The requirement is to map all source addresses to the one target address structure. [The source document, result document, and Xslt code appear as screenshots in the original post.] Try to do this mapping with the Mapper and you will spend a good amount of time, and the resulting map will be tricky. If we use the Xslt code, the mapping is simple and unambiguous. Simple, elegant.


  • How to tell if SPARC T4 crypto is being used?

    - by danx
    A question that often comes up when running applications on SPARC T4 systems is "How can I tell if hardware crypto acceleration is being used?" To review, the SPARC T4 processor includes a crypto unit that supports several crypto instructions. For hardware crypto these include 11 AES instructions, 4 xmul* instructions (for AES GCM carryless multiply), mont for Montgomery multiply (which optimizes RSA and DSA), and 5 des_* instructions (for DES3). For hardware hash algorithm optimization, the T4 has the md5, sha1, sha256, and sha512 instructions (the last two are also used for SHA-224 and SHA-384).

    First off, it's easy to tell if the processor has the T4 crypto instructions: use the isainfo -v command and look for "sparcv9" and "aes" (and other hash and crypto algorithms) in the output:

        $ isainfo -v
        64-bit sparcv9 applications
                crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
                kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc

    These instructions are non-privileged, so they are available for direct use in user-level applications and libraries (such as OpenSSL). Here is the "openssl speed -evp" command shown with the built-in t4 engine and with the pkcs11 engine. Both run the T4 AES instructions, but the t4 engine is faster than the pkcs11 engine because it has less overhead (especially for smaller packet sizes):

        t-4 $ /usr/bin/openssl version
        OpenSSL 1.0.0j 10 May 2012
        t-4 $ /usr/bin/openssl engine
        (t4) SPARC T4 engine support
        (dynamic) Dynamic engine loading support
        (pkcs11) PKCS #11 engine support
        t-4 $ /usr/bin/openssl speed -evp aes-128-cbc    # t4 engine used by default
        . . .
        The 'numbers' are in 1000s of bytes per second processed.
        type            16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc    487777.10k   816822.21k   986012.59k  1017029.97k  1053543.08k

        t-4 $ /usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc
        engine "pkcs11" set.
        . . .
        The 'numbers' are in 1000s of bytes per second processed.
        type            16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc     31703.58k   116636.39k   350672.81k   696170.50k   993599.49k

    Note: The "-evp" flag indicates use of the OpenSSL "EnVeloPe" API, which gives more accurate results. That's because it tells OpenSSL to use the same API that external programs use when calling OpenSSL libcrypto functions, evp(3openssl).

    DTrace Shows if T4 Crypto Functions Are Used

    OK, good enough: the isainfo(1) command shows the instructions are present, but how does one know if they are being used? Chi-Chang Lin, who works on Oracle Solaris performance, wrote a DTrace script to show if T4 instructions are being executed. Run the script below and look for functions named "t4" and "yf" in the output: the OpenSSL T4 engine uses functions named "t4" and the PKCS#11 engine uses functions named "yf". To demonstrate, I'll first run "openssl speed" with the built-in t4 engine, then with the pkcs11 engine. (The performance numbers are not valid due to the dtrace probes slowing things down.)

        t-4 # dtrace -Z -n '
            pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
            -c "/usr/bin/openssl speed -evp aes-128-cbc"
        dtrace: description 'pid$target::*yf*:entry' matched 101 probes
        . . .
        dtrace: pid 2029 has exited
          libcrypto.so.1.0.0  ENGINE_load_t4                        1
          libcrypto.so.1.0.0  t4_DH                                 1
          libcrypto.so.1.0.0  t4_DSA                                1
          libcrypto.so.1.0.0  t4_RSA                                1
          libcrypto.so.1.0.0  t4_destroy                            1
          libcrypto.so.1.0.0  t4_free_aes_ctr_NIDs                  1
          libcrypto.so.1.0.0  t4_init                               1
          libcrypto.so.1.0.0  t4_add_NID                            3
          libcrypto.so.1.0.0  t4_aes_expand128                      5
          libcrypto.so.1.0.0  t4_cipher_init_aes                    5
          libcrypto.so.1.0.0  t4_get_all_ciphers                    6
          libcrypto.so.1.0.0  t4_get_all_digests                   59
          libcrypto.so.1.0.0  t4_digest_final_sha1                 65
          libcrypto.so.1.0.0  t4_digest_init_sha1                  65
          libcrypto.so.1.0.0  t4_sha1_multiblock                  126
          libcrypto.so.1.0.0  t4_digest_update_sha1               261
          libcrypto.so.1.0.0  t4_aes128_cbc_encrypt           1432979
          libcrypto.so.1.0.0  t4_aes128_load_keys_for_encrypt 1432979
          libcrypto.so.1.0.0  t4_cipher_do_aes_128_cbc        1432979

        t-4 # dtrace -Z -n '
            pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
            -c "/usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc"
        dtrace: description 'pid$target::*yf*:entry' matched 101 probes
        engine "pkcs11" set.
        . . .
        dtrace: pid 2033 has exited
          libcrypto.so.1.0.0  ENGINE_load_t4                        1
          libcrypto.so.1.0.0  t4_DH                                 1
          libcrypto.so.1.0.0  t4_DSA                                1
          libcrypto.so.1.0.0  t4_RSA                                1
          libcrypto.so.1.0.0  t4_destroy                            1
          libcrypto.so.1.0.0  t4_free_aes_ctr_NIDs                  1
          libcrypto.so.1.0.0  t4_get_all_ciphers                    1
          libcrypto.so.1.0.0  t4_get_all_digests                    1
          libsoftcrypto.so.1  rijndael_key_setup_enc_yf             1
          libsoftcrypto.so.1  yf_aes_expand128                      1
          libcrypto.so.1.0.0  t4_add_NID                            3
          libsoftcrypto.so.1  yf_aes128_cbc_encrypt           1542330
          libsoftcrypto.so.1  yf_aes128_load_keys_for_encrypt 1542330

    So, as shown above, the OpenSSL built-in t4 engine executes t4_* functions (which are hand-coded assembly executing the T4 AES instructions) and the OpenSSL pkcs11 engine executes *yf* functions.

    Programmatic Use of the OpenSSL T4 engine

    The OpenSSL t4 engine is used automatically with the /usr/bin/openssl command line. Chi-Chang Lin also points out that if you're calling the OpenSSL API (libcrypto.so) from a program, you must call ENGINE_load_built_engines(), otherwise the built-in t4 engine will not be loaded. You do not call ENGINE_set_default(). That's because the "openssl speed -evp" test calls ENGINE_load_built_engines() even though the "-engine" option wasn't specified.

    OpenSSL T4 Engine Availability

    The OpenSSL t4 engine is available with Solaris 11 and 11.1. For Solaris 10 08/11 (U10), you need to use the OpenSSL pkcs11 engine. The OpenSSL t4 engine is distributed only with the version of OpenSSL distributed with Solaris (and not third-party or self-compiled versions of OpenSSL). The OpenSSL engine implements the AES cipher for Solaris 11, released 11/2011. For Solaris 11.1, released 11/2012, the OpenSSL engine adds optimization for the MD5, SHA-1, and SHA-2 hash algorithms, and DES-3. Although the T4 processor has Camellia and Kasumi block cipher instructions, these are not implemented in the OpenSSL T4 engine. The following charts may help view availability of optimizations. The first chart shows what's available with Solaris CLIs and APIs; the second chart shows what's available in Solaris OpenSSL.

    Native Solaris Optimization for SPARC T4

    This table shows Solaris native CLI and API support. As such, these are all available with the OpenSSL pkcs11 engine.
    CLIs: "openssl -engine pkcs11", encrypt(1), decrypt(1), mac(1), digest(1), md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), sha512sum(1)
    APIs: PKCS#11 library libpkcs11(3LIB) (includes the OpenSSL pkcs11 engine), libmd(3LIB), and Solaris kernel modules

        Algorithm                                  S10 08/11 (U10)  Solaris 11  Solaris 11.1
        AES-ECB, AES-CBC, AES-CTR, AES-CFB128             X              X            X
        DES3-ECB, DES3-CBC, DES2-ECB, DES2-CBC,
          DES-ECB, DES-CBC                                X              X            X
        bignum Montgomery multiply (RSA, DSA)             X              X            X
        MD5, SHA-1, SHA-256, SHA-384, SHA-512             X              X            X
        SHA-224                                                                       X
        ARCFOUR (RC4)                                                                 X

    Solaris OpenSSL T4 Engine Optimization

    This table is for the Solaris OpenSSL built-in t4 engine. Algorithms listed above are also available through the OpenSSL pkcs11 engine.

    CLI: openssl(1openssl)
    APIs: openssl(5), engine(3openssl), evp(3openssl), libcrypto crypto(3openssl)

        Algorithm                                  Solaris 11  Solaris 11 SRU2  Solaris 11.1
        AES-ECB, AES-CBC, AES-CTR, AES-CFB128           X             X               X
        DES3-ECB, DES3-CBC, DES-ECB, DES-CBC                                          X
        bignum Montgomery multiply (RSA, DSA)                                         X
        MD5, SHA-1, SHA-256, SHA-384, SHA-512                         X               X
        SHA-224                                                                       X

    Source Code Availability

    Solaris: Most of the T4 assembly code that calls the new T4 crypto instructions was written by Ferenc Rákóczi of the Solaris Security group, with assistance from others. You can download the Solaris source for this and other parts of Solaris as a few zip files at the Oracle Download website. The relevant source files are generally under the directories usr/src/common/crypto/{aes,arcfour,des,md5,modes,sha1,sha2}/sun4v/ and usr/src/common/bignum/sun4v/. The Solaris 11 binary is available from the Oracle Solaris 11 download website.

    OpenSSL t4 engine: The source for the OpenSSL t4 engine, which is based on the Solaris source above, is viewable through the OpenGrok source code browser in directory src/components/openssl/openssl-1.0.0/engines/t4. You can download the source from the same website or through Mercurial source code management, hg(1).

    Conclusion

    Oracle Solaris with SPARC T4 provides a rich set of accelerated cryptographic and hash algorithms. Using the latest update, Solaris 11.1, provides the best set of optimized algorithms, but alternatives are often available, sometimes slightly slower, for releases back to Solaris 10 08/11 (U10).

    Reference

    See also these earlier blogs:
    - SPARC T4 OpenSSL Engine by myself, Dan Anderson (2011), discusses the OpenSSL T4 engine and reviews the SPARC T4 processor for the Solaris 11 release.
    - Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 by Valerie Fenwick (2011) discusses crypto algorithms that were optimized for the T4 processor with the Solaris 11 FCS (11/11) and Solaris 10 08/11 (U10) releases.
    - T4 Crypto Cheat Sheet by Stefan Hinker (2012) discusses how to make T4 crypto optimization available to various consumers (such as SSH, Java, OpenSSL, Apache, etc.).
    - High Performance Security For Oracle Database and Fusion Middleware Applications using SPARC T4 (PDF, 2012) discusses SPARC T4 and its usage to optimize application security.
    - Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers by Meena Vyas (2012)


  • Resize videos with different widths to a fixed height preserving aspect ratio with ffmpeg

    - by Axarydax
    I'd like to convert a lot of video files to Flash video for our company's website. I have a requirement that all of the videos must be in 360p format, so their size would be Nx360. FFmpeg uses the -s argument to specify the target resolution as WxH, but I don't know the width, as it depends on the source file's aspect ratio. If the source is 640x480, the target will be 480x360. If the source is 848x480, the target will be 636x360. Is there a way to do it with some switch of ffmpeg, so that it preserves the aspect ratio and I only specify the height of the target video? I could easily solve it by making a program that launches ffprobe to get the source video size, calculates the aspect ratio and then calculates the new width.
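
    A hedged sketch using the scale video filter, which can derive one dimension from the other (file names are placeholders):

        # -1 computes the width from the aspect ratio:
        ffmpeg -i input.avi -vf scale=-1:360 output.flv
        # if the target codec insists on even dimensions, round the width instead:
        ffmpeg -i input.avi -vf "scale=trunc(oh*a/2)*2:360" output.flv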


  • no dual screens with 11.10 and Asus m4A89 GTD Pro

    - by Alex
    I'm having an issue getting dual monitors working in Kubuntu 11.10. I have an Asus M4A89GTD Pro/USB3 motherboard with the integrated ATI HD4290 graphics chip. When I try to enable multiple monitors through the system settings, it says: "This module is only for configuring systems with a single desktop spread across multiple monitors. You do not appear to have this configuration."

    I had previously attempted to fix this problem with another installation of Ubuntu 11.10, but ended up having to reinstall Ubuntu because I messed up the Software Center dependencies. After I installed Ubuntu the first time, a notification showed up asking me to install an ATI graphics driver. I installed this driver, then restarted, and dual monitors did not work. That was when I went to the ATI site and attempted to install the fglrx driver. When I tried to run the shell script for the fglrx driver, it said I had a previous version of an fglrx driver installed and needed to remove it in order to install the new one. So I looked up a tutorial on how to remove it and found some apt-get remove command, which I ran. Then I was able to install the new driver. Dual monitors still did not work, and I couldn't use the Software Center any more because it was corrupted and unable to repair itself. So I just reinstalled Ubuntu, and now I'm trying to go about this the correct way. Does anyone have this same configuration, and which driver works for you?
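
    A hedged sketch of the usual fglrx route once the driver installs cleanly. The aticonfig tool ships with fglrx, and the package names are worth verifying first with dpkg -l | grep fglrx:

        # purge any leftover fglrx bits before reinstalling:
        sudo apt-get remove --purge fglrx fglrx-amdcccle
        # after a clean fglrx install, generate a dual-head xorg.conf:
        sudo aticonfig --initial=dual-head --screen-layout=right
        sudo reboot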


  • How to upgrade XBMC Live via command line?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive; every time it fails at the Install System step. But I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what is the next set of commands to run?
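
    A hedged sketch, assuming XBMC Live 9.04.1 is Ubuntu 9.04 (jaunty) underneath and that the team-xbmc PPA carries the newer build; both assumptions are worth checking before running this:

        sudo sh -c 'echo "deb http://ppa.launchpad.net/team-xbmc/ppa/ubuntu jaunty main" >> /etc/apt/sources.list'
        sudo apt-get update
        sudo apt-get upgrade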


  • droid cam makefile understanding and error

    - by nerorevenge
    I tried installing DroidCam on my Fedora 19 (64-bit). Link to the DroidCam application is here, and whenever I try to install it, the following Makefile is invoked:

        obj-m := v4l2loopback-dc.o

        all:
                make -C /lib/modules/`uname -r`/build M=`pwd`

        test:
                gcc test.c -o test

        clean:
                make -C /lib/modules/`uname -r`/build M=`pwd` clean

        insmod:
                sudo insmod v4l2loopback-dc.ko width=320 height=240

        rmmod:
                sudo rmmod v4l2loopback-dc.ko

    and here is the error:

        -- INSTALL: Webcam parameters: '320' and '240'
        -- INSTALL: Building v4l2loopback-dc.ko
        make -C /lib/modules/`uname -r`/build M=`pwd`
        make: *** /lib/modules/3.9.5-301.fc19.x86_64/build: No such file or directory.  Stop.
        make: *** [all] Error 2
        -- INSTALL: v4l2loopback-dc.ko not built.. Failure

    build happens to be a symbolic link. I was wondering what exactly the Makefile is trying to do and why it is failing.
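
    The all target builds the kernel module against the headers of the running kernel, reached through the /lib/modules/<version>/build symlink; the error means the symlink's target (the kernel headers) isn't installed. A hedged sketch of the usual Fedora fix:

        sudo yum install kernel-devel-$(uname -r)
        ls -l /lib/modules/$(uname -r)/build    # should now resolve to a real directory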


  • Is using Windows Update to find Drivers for laptop good enough?

    - by user22105
    I'm trying to install drivers for my laptop after a fresh install of Windows 8. From the manufacturer's website, there are many drivers and utilities available. But after running Windows Update (which finds drivers for the hardware on my laptop), I only had to install a few (the ones showing as Unidentified Device). I'm wondering whether this is good enough, or whether I would be skipping some important drivers, like chipset or thermal management, etc., that could potentially cause harm to my laptop?


  • How to solve package issues/dependencies

    - by Wolfgang Kuehne
    Background info: I am trying to install the Veins simulation environment by following the tutorial provided by the author. In step 1 it is required to install some packages on Linux; the tutorial suggests these commands to be executed in a terminal:

        sudo apt-get install build-essential gcc g++ bison flex perl tcl-dev tk-dev blt \
          libxml2-dev zlib1g-dev default-jre doxygen graphviz libwebkitgtk-1.0-0 \
          openmpi-bin libopenmpi-dev libpcap-dev autoconf automake libtool \
          libxerces-c2-dev proj libgdal1-dev libfox-1.6-dev

    When I execute this command, I immediately get:

        E: Package 'proj' has no installation candidate

    Then I remove proj from the command and execute it again. Next I get:

        The following packages have unmet dependencies:
         libgdal1-dev : Depends: libgdal-dev but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    So I remove libgdal1-dev from the command as well, and it executes fine, downloading the remaining packages. To troubleshoot the problem with proj and libgdal1-dev I go to the Synaptic Package Manager.

    libgdal1-dev: I search for libgdal1-dev in Synaptic Package Manager and get an entry. I mark it for installation, and then Synaptic Package Manager suggests removing libxerces-c2-dev, which is actually added via the initial command. Should I trust Synaptic Package Manager with this suggestion and proceed further?

    proj: What should I do about proj? There are some packages in Synaptic Package Manager such as proj-bin or libproj-dev. Should I install them? I think proj has to do with this and this.

    What should I do to make sure that this simulation tool works fine?
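
    A hedged sketch, on the assumption that your Ubuntu release has split the old proj package into proj-bin/libproj-dev and renamed libgdal1-dev to libgdal-dev; substituting the newer names usually satisfies the tutorial's list:

        sudo apt-get install proj-bin libproj-dev libgdal-dev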


  • Windows 2012 Server: Enable .NET 3.5

    - by Meengla
    I have a Windows 2012 Server which needs .NET 3.5 installed. For background info/solutions, please see this: http://sqlblog.com/blogs/sergio_govoni/archive/2013/07/13/how-to-install-netfx3-on-windows-server-2012-required-by-sql-server-2012.aspx

    I have tried this but it doesn't work for me. Here is what I am trying: using a Windows 2012 ISO file, 'mount' it on the 2012 Server as the D: drive, and then try both the GUI and the command prompt. In the case of the GUI, I specified the 'alternate source' path to the D: drive's sources\sxs folder, but that failed without giving enough info. In the case of the command prompt, here is what's happening:

        dism /online /enable-feature /featurename:NetFx3 /source:d:\sources\sxs

    I get the error: Installed but Parent feature not enabled. So I tried another approach:

        dism /online /enable-feature /featurename:NetFx3 /all /source:d:\sources\sxs

    The above command, per http://blogs.technet.com/b/askcore/archive/2012/05/14/windows-8-and-net-framework-3-5.aspx, is supposed to enable the parent features, but running this I get an error like 'source not found'. Is there some error in my second command? What else could I do? This is Windows 2012 Server Standard edition. Thanks!
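
    A hedged sketch of the form that usually works, with /LimitAccess added so DISM does not fall back to Windows Update; double-check that d: really is the mounted ISO and that d:\sources\sxs exists before running it:

        dism /online /enable-feature /featurename:NetFx3 /all /limitaccess /source:d:\sources\sxs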


  • Macbook Pro Triple Boot OS X Lion, Windows 7 and Windows 8

    - by Lloyd Sparkes
    MacBook Pro (Summer 2010 Model, Basic Model) I currently have OS X Lion and Windows 7 running side by side on my MacBook Pro. However I have a need to get Windows 8 running as well in this mix (a Virtual Machine is not good enough, I need the performance). I have created a suitably sized parition (80GB) that is recognizable in Boot Camp. However every time I try to boot from the USB stick (that worked to install Windows 8 on my PC) using the latest version of rEFIt, it just boots Windows 7 and not the Windows 8 installer. I cannot start the installation within Windows 7 as it will just install over Windows 7. I'm guessing the Boot Camp emulation is doing something werid to stop the "Press any key to install Windows..." message from appearing (which should happen if the installer detects Windows is already installed (e.g. if you left your install disk in). Is there a way to get around this / force the installer to start? (Note I cannot start the Windows 7 installer either if I wanted to install a second copy of Windows 7 to upgrade to Windows 8)


  • Problems Dual Boot

    - by user104108
    A few months ago I decided to install Ubuntu 12.04 on my PC alongside my Windows 7 partition. In order to do that and avoid any mistakes, I followed the steps of this tutorial: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/2/

    Everything was going well until I decided to update to the 12.10 release. I don't know what happened, but after I updated my Ubuntu, it stopped working; it didn't even launch. When I turned on my PC and chose to run "Ubuntu 12.04" on the GRUB screen, a weird message appeared. Well, so I decided to install Ubuntu 12.10 and forget about the 12.04 partition, no problem. I erased the partitions used for Ubuntu 12.04 with EaseUS Partition Manager. However, when I start my PC, there is still the option of "Ubuntu 12.04" to choose; is that bad? And what about now: can I use the Windows installer for Ubuntu ( http://www.ubuntu.com/download/help/install-ubuntu-with-windows ) to install Ubuntu 12.10? What should I do to have Ubuntu 12.10 and Windows 7 in dual boot again? Thanks; Thales.
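
    A hedged sketch for the stale menu entry: once a new Ubuntu 12.10 install is in place (by whichever method), regenerating the GRUB configuration rescans the disks and drops entries that point at deleted partitions:

        sudo update-grub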


  • Error installing Arch Linux

    - by Garethj94
    So, I am trying to install Arch Linux on my Acer Aspire 4830TG, but I keep running into problems. Some background: I am trying to install Arch off a USB stick, and I got the iso image using BitTorrent. I am also trying to install it alongside Windows 8 (which is already installed). When I boot into Arch Linux I get this error:

        :: Mounting '/dev/disk/by-label/ARCH_201212' to '/run/archiso/bootmnt'
        Waiting 30 seconds for device /dev/disk/by-label/ARCH_201212 ...
        ERROR: '/dev/disk/by-label/ARCH_201212' device did not show up after 30 seconds...
           Falling back to interactive prompt
           You can try to fix the problem manually, log out when you are finished
        sh: can't access tty; job control turned off

    I know that it will work if I run it in a virtual machine, but whenever I try to install it on my laptop I keep getting this error. And since you can't register for the Arch forums without an Arch terminal to run their captcha command, I can't ask this on their forums.
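
    A hedged sketch of the usual cause and fix: the installer finds its media by the filesystem label ARCH_201212, which only exists if the iso was written to the stick as a raw image. Re-writing it with dd usually cures this; the iso name is an assumption matching the 2012.12 label, and /dev/sdX is a placeholder for the stick (this wipes its contents):

        sudo dd if=archlinux-2012.12.01-dual.iso of=/dev/sdX bs=4M
        sync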

