Search Results

Search found 42786 results on 1712 pages for 'install from source'.


  • How do I configure nVidia drivers on a Portable Ubuntu setup?

    - by Nicholas Flynt
    I've been pulling my hair out over this one for a couple of days now, and Google is no help. I've created a wonderful (until this issue) portable copy of Ubuntu Linux that will boot on almost anything by using a USB enclosure for my laptop's 80GB SATA drive. So far so good: it boots and runs on everything, and on non-nVidia setups it was even detecting the right drivers, or letting me install the required drivers for hardware acceleration and Compiz. Because, you know, the wobbly windows are the most awesome thing ever. Anyway, my desktop machine has an nVidia card, so I'm thinking, sure, I'll just install the nVidia drivers like before and everything will work happily. Not so: now the desktop and any other nVidia cards work great, but it seems to have completely disabled any other graphics cards. When the kernel module detects that an nVidia card isn't present, it shoots up a nasty little dialog box giving me the option to boot into "low graphics" mode, which doesn't even allow me to use the correct screen resolution, much less see the installed graphics card and try to configure a driver for it. Is there any way to configure Ubuntu (with the dreaded nVidia kernel module) so that it uses nVidia's driver when an nVidia card is present, and defaults to the normal (not low-graphics) setup otherwise, so that it has a fair chance of using what's actually present? I'm not afraid to muck with config files, I just don't know the underlying system well enough to feel comfortable diving in without a push in the right direction. Thanks, guys!
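    A minimal sketch of the kind of boot-time switch that could work here, assuming two prepared X configs (the file names xorg.conf.nvidia and xorg.conf.generic are hypothetical) and a small init script that runs before the display manager:

        #!/bin/sh
        # Hypothetical /etc/init.d/pick-xorg-conf: choose an X config based on
        # the GPU actually present. Both xorg.conf.* files must already exist.
        if lspci | grep -qi 'VGA.*nvidia'; then
            cp /etc/X11/xorg.conf.nvidia  /etc/X11/xorg.conf   # proprietary nvidia driver
        else
            cp /etc/X11/xorg.conf.generic /etc/X11/xorg.conf   # vesa/open driver, normal resolution
        fi

    Hooking that in before gdm starts (for example with update-rc.d) would let the same stick fall back to the free driver on non-nVidia machines instead of dropping into low-graphics mode.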

    Read the article

  • OpenSSL: how to setup an OCSP server for checking third-party certificates?

    - by StackedCrooked
    I am testing the Certificate Revocation functionality of a CMTS device. This requires me to set up an OCSP responder. Since it will only be used for testing, I assume that the minimal implementation provided by OpenSSL should suffice. I have extracted a certificate from a cable modem, copied it to my PC and converted it to the PEM format. Now I want to register it in the OpenSSL OCSP database and start a server. I have completed all these steps, but when I do a client request my server invariably responds with "unknown". It seems to be completely unaware of my certificate's existence. I would greatly appreciate it if anyone would be willing to have a look at my code. For your convenience, I have created a single script consisting of a sequential list of all used commands, from setting up the CA until starting the server: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/AllCommands.sh You can also find the custom config file and the certificate that I am testing with: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/ Any help would be greatly appreciated.
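    For reference, a minimal OpenSSL OCSP round-trip looks roughly like this (file names such as ca.crt, ca.key and modem.pem are placeholders; the responder only knows about certificates listed in the CA's index.txt):

        # start a test responder backed by the CA database (index.txt)
        openssl ocsp -index demoCA/index.txt -CA ca.crt \
            -rsigner ca.crt -rkey ca.key -port 8888 -text

        # query it for the modem certificate
        openssl ocsp -CAfile ca.crt -issuer ca.crt \
            -cert modem.pem -url http://127.0.0.1:8888 -resp_text

    If the modem certificate's serial number never appears in that index.txt (for example because it was issued by the vendor's CA rather than the test CA), "unknown" is the expected answer.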

    Read the article

  • Cannot perform a PECL installation

    - by Petrusa
    I have been trying to do a few PECL installations, but all of them return the same type of error. Something related to timezones? I'm running RedHat x86_64 ES5. Attempting to install geoip-1.0.7:

        root@server [~]# pecl install geoip-1.0.7
        downloading geoip-1.0.7.tgz ...
        Starting to download geoip-1.0.7.tgz (9,416 bytes)
        .....done: 9,416 bytes

        Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in PEAR/Validate.php on line 489

        Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in /usr/local/lib/php/PEAR/Validate.php on line 489

        3 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        building in /var/tmp/pear-build-root/geoip-1.0.7
        running: /root/tmp/pear/geoip/configure
        checking for egrep... grep -E
        checking for a sed that does not truncate output... /bin/sed
        checking for cc... cc
        checking for C compiler default output file name... a.out
        checking whether the C compiler works... configure: error: cannot run C compiled programs.
        If you meant to cross compile, use `--host'.
        See `config.log' for more details.
        ERROR: `/root/tmp/pear/geoip/configure' failed

    What is going on? Anyone that could assist, please...
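    The timezone lines are only warnings from PEAR; the build actually stops at "cannot run C compiled programs", which points at a broken toolchain rather than PHP. A quick sanity check, plus silencing the warning (the php.ini path is whatever `php --ini` reports on this box), might look like:

        # can the compiler build and run a trivial program?
        printf 'int main(void){return 0;}\n' > /tmp/t.c && cc /tmp/t.c -o /tmp/t && /tmp/t && echo "cc OK"

        # config.log in the build directory usually names the real failure
        less /var/tmp/pear-build-root/geoip-1.0.7/config.log

        # silence the PEAR timezone warning
        echo "date.timezone = America/Chicago" >> /usr/local/lib/php.ini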

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically have a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is, although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example). Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. When I say "USB key", I refer to the little sticks with flash memory, not an external USB hard drive. For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.
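    A back-of-the-envelope estimate is possible if you accept two big assumptions: the per-cell endurance of the flash (cheap sticks are often quoted at a few thousand program/erase cycles) and how many gigabytes the OS writes per day. With ideal wear levelling, lifetime is roughly capacity times cycles divided by daily writes:

        # rough lifetime estimate: 8 GB stick, 3000 P/E cycles, 5 GB written per day
        # (all three numbers are assumptions, and cheap sticks wear-level poorly)
        awk 'BEGIN {
            capacity_gb = 8; cycles = 3000; writes_per_day_gb = 5
            days = capacity_gb * cycles / writes_per_day_gb
            printf "~%.0f TB total endurance, ~%.1f years\n", capacity_gb*cycles/1024, days/365
        }'

    In practice the journal and swap concentrate writes on a few blocks, so mounting with noatime and keeping swap off the stick matters more than the raw arithmetic.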

    Read the article

  • EFI vs MBR - Installing Windows Server 2008 R2 or 2012 on 8TB

    - by Riaan de Lange
    I'm having some difficulty installing Windows Server 2008 R2 and Windows Server 2012 on an Intel server platform. The server specs are as follows:

        Intel Grizzly Pass Server System - R2308GZ4GC
        2x Intel Xeon 2620 - 2.0 GHZ - BX80621E52620
        132 GB of Memory REG-DIMM - TS1GKR72V6H
        4x Seagate Constellation ES 2TB 3.5" 7200rpm 6GB/S - ST32000645NS
        Intel Big Laurel 4CH 6G SAS RAID 512MB - RS2BL040

    On the Intel RAID controller setup, I have set the HDDs to RAID-0 for testing purposes (ultimately they will be configured in RAID-5), so the total HDD space I can use is 7.6 TB or so. When I install the server OSes, they don't seem to go beyond 2 TB (1.76 TB). I have read up on EFI and UEFI boot, and this seems to work in 2012, but I could not install any drivers for the motherboard... So I also tried EFI for 2008 R2; this worked while installing the OS, but it did not work with the Windows Boot Manager option in the BIOS: it kept freezing once it tried to load the partition. My idea was to allocate the complete 8 TB to the OS and load a few VMs on there. I have now started with a new approach where I'll have a 256 GB OS partition and a secondary 7.5 TB data partition. Oh, and I also did a diskpart "convert gpt" whilst installing 2008 R2; the whole disk, 7.6 TB, was accessible. Can anyone please clarify that EFI/UEFI is meant for larger boot volumes, bigger than 2 TB? What if I went with an ideal situation where the OS runs on a 256 GB SSD and I attach the 8 TB array as a normal data disk to the OS? Am I correct in saying that if I wanted to boot from an 8 TB partition, I would need to force the BIOS to boot via EFI? The limit for MBR is 2 TB as far as I know now. (FYI: the motherboard is EFI-ready.)
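    For reference, the data-disk case does not need UEFI at all; only the boot volume does. A hedged sketch of converting the array to GPT for use as a >2 TB data disk from the 2008 R2 install environment (Shift+F10 opens a command prompt; disk 1 is assumed to be the RAID virtual disk, and clean wipes it):

        diskpart
            list disk
            select disk 1
            clean
            convert gpt
            create partition primary
            format fs=ntfs quick
            assign

    Booting from the full 8 TB volume, by contrast, requires GPT plus firmware booting in UEFI mode; MBR tops out at 2 TB, which is why the installer stops at 1.76 TB in legacy mode.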

    Read the article

  • Hylafax: Encountering "No font metric information" when trying to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there are no rpm packages available for Fedora 13, I used the source tarball to install HylaFAX myself. Everything seemed fine during compile and install. When I try to send a fax with sendfax, I encounter this error:

        # sendfax -n -d <fax-number> /etc/passwd
        /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold".
        Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname]
            [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#]
            [-V #] files... >out.ps
        Default options: -f Courier -1 -p 11bp -o 0
        Error converting document; command was
            "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'"

    It seems like there is a font problem. I have ghostscript-fonts installed too. I can't find hyla.conf under /etc/hylafax; there is no /etc/hylafax path on my file system. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
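    textfmt finds fonts through AFM metric files, so a quick test is to point it at the Ghostscript font directory explicitly with the -F option shown in the usage text above (the path below is a guess for Fedora; `rpm -ql ghostscript-fonts` will show the real one):

        # locate the font files shipped by ghostscript-fonts
        rpm -ql ghostscript-fonts | head

        # retry the failing conversion with an explicit font directory
        /usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default \
            -F /usr/share/fonts/default/ghostscript </etc/passwd >/tmp/test.ps

    If that works, the source build is simply looking in the wrong font path; adding a FontMap line pointing at that directory in hyla.conf is the usual permanent fix (for a source build, hyla.conf typically lives under the client libdata directory, e.g. /usr/local/lib/fax, rather than /etc/hylafax).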

    Read the article

  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.

    Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30
    Client: Mac OS X 10.6, Firefox. 192.168.17.47
    Network: Only a switch between them - the subnet is 192.168.16/22

    (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also.)

    Questions: Does this look normal? Is packet #2 specifying a window size of 65535 and a scale of 512? Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K? Why is the window scale so high?

    Here are the first packets from Wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer.

    Code:
        No.   Time      Source         Destination    Protocol Length Info
        108   6.699922  192.168.17.47  192.168.18.30  TCP      78     49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
        115   6.781971  192.168.18.30  192.168.17.47  TCP      74     http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
        116   6.782218  192.168.17.47  192.168.18.30  TCP      66     49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
        117   6.782220  192.168.17.47  192.168.18.30  HTTP     490    GET /utils/speedtest/large.file.zip HTTP/1.1
        118   6.867070  192.168.18.30  192.168.17.47  TCP      375    [TCP segment of a reassembled PDU]

    Details:
        Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
            Source port: http (80)
            Destination port: 49190 (49190)
            [Stream index: 4]
            Sequence number: 1    (relative sequence number)
            [Next sequence number: 310    (relative sequence number)]
            Acknowledgement number: 425    (relative ack number)
            Header length: 32 bytes
            Flags: 0x018 (PSH, ACK)
            Window size value: 130
            [Calculated window size: 66560]
            [Window size scaling factor: 512]
            Checksum: 0xd182 [validation disabled]
            Options: (12 bytes)
                No-Operation (NOP)
                No-Operation (NOP)
                Timestamps: TSval 2617517423, TSecr 945617490
            [SEQ/ACK analysis]
            TCP segment data (309 bytes)

    Note: originally posted at http://forums.freebsd.org/showthread.php?t=32552
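    To the arithmetic in the questions above: WS=512 in the SYN/ACK means every window value the server later advertises is multiplied by 512 (a shift of 9), so shrinking the advertised value keeps the effective window near 64K. A quick check of the numbers in this trace:

        # packet 115: server offers window scale 512 (shift count 9)
        echo $((1 << 9))          # 512

        # detail block: advertised value 130 x factor 512 = calculated window
        echo $((130 * 512))       # 66560, roughly the 64K the stack is aiming for

        # packet 116: client advertises 524280 with its own WS=8
        echo $((65535 * 8))       # 524280

    Advertising a large scale factor up front lets the window grow far past 64K later if buffers allow; on its own that is normal and usually not the cause of slow transfers.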

    Read the article

  • Invalid Parameter on node puppet

    - by chandank
    I am getting an error:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER:
        Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet

    My puppet class (line 652 of node.pp):

        node 'test-puppet' {
          class { 'syslog_ng':
            host    => "newhost",
            ip      => "192.168.1.10",
            port    => "1999",
            logfile => "/var/log/test.log",
          }
        }

    On the module side:

        class syslog_ng::config ($host, $ip, $port, $logfile) {
          file { '/etc/syslog-ng/syslog-ng.conf':
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            content => template('syslog-ng/syslog-ng.conf.erb'),
            notify  => Service['syslog-ng'],
            require => Class['syslog_ng::install'],
          }
          file { "/etc/syslog-ng/conf/${host}.conf":
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            notify  => Service['syslog-ng'],
            content => template("syslog-ng/${host}.conf.erb"),
            require => Class['syslog_ng::install'],
          }
        }

    I think I am doing it as per the puppet documentation.
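    One thing worth noting: the node declares parameters on class syslog_ng, but the parameter list shown lives on syslog_ng::config. A hedged sketch of the pattern that would make the node declaration valid, assuming syslog_ng is meant to be a wrapper class in the same module:

        # modules/syslog_ng/manifests/init.pp (sketch, names assumed)
        class syslog_ng ($host, $ip, $port, $logfile) {
          class { 'syslog_ng::install': }
          class { 'syslog_ng::config':
            host    => $host,
            ip      => $ip,
            port    => $port,
            logfile => $logfile,
            require => Class['syslog_ng::install'],
          }
        }

    Without parameters declared on the top-level syslog_ng class itself, the master rejects port (and the others) exactly as in the 400 error above.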

    Read the article

  • Can one really fry a monitor by setting the wrong HorizSync and VertRefresh?

    - by rumtscho
    I've encountered this problem on several different systems with several different monitors: a monitor functions perfectly under Windows. I install a Linux and the max resolution is at some impossibly low value, mostly 640x480, changing it in Xorg.conf doesn't work. The X.org log file then shows that the driver cannot determine the correct refresh rate for the monitor, so it ignores everything in Xorg.conf and just loads in some default minimalistic mode. Googling the problem leads to an easy solution: set the HorizSync and VertRefresh in Xorg.conf, and everything works. The problem seems to be a common one, and I've seen dozens of results recommending the solution. Each of them contains the warning that you should use the value ranges provided with the monitor. Because if you don't, and your video card sends a signal with the wrong refresh rate, this can damage your monitor. Of course, you don't have a user manual for your monitor any more. If you are lucky to find one on the attic or on the net, it doesn't contain any information about the supported refresh rate. So you just type in the value suggested in the solution description, which varies wildly depending on your source, and cross your fingers. You restart, and... ... you've set the wrong values. So the monitor shows a short message like "input signal out of range", and you do a hard restart, repair your Xorg.conf in recovery mode, and everything is fine, including your monitor. So does this warning reflect a real possibility, or is it just a geeky urban legend? Or is it something which used to happen in the past, before manufacturers started protecting the monitors against it? Is it technically possible with every monitor technology, or is it maybe something which can only happen to a CRT? If you think that it's true, why? Have you ever witnessed a monitor die from the wrong refresh config, or have you read of it in a reputable source?

    Read the article

  • Why is a FLAC encoded from a decoded MP3 bigger than the MP3?

    - by Ryan Thompson
    To be more precise than in the title, suppose I have a MP3 file that is 320 kbps. If I decompress it, then logically, all the data except for roughly 320 kilobits out of each second of audio should be redundant data, able to be compressed away. So, when I encode the decompressed file to FLAC, or any other lossless codec, why is it so much larger? On a related note, is it theoretically possible to losslessly recover the source mp3 audio from a decompressed wav? (I know the mp3 itself is lossy. I'm asking if it's possible to re-encode without any further loss.) EDIT: Let me clarify the related question, and the rationale behind it. Suppose I have a wav that was decompressed from an MP3 file (and assume I don't have the mp3 itself for some reason). If I don't want to lose any more quality, I can re-encode it with FLAC or any other lossless encoder and get a larger file just to maintain the same quality. Or, I can re-encode it to mp3 again and get the same size as the original but lose more data. Obviously, neither of these cases is ideal. I can either have the original size or the original quality, but not both (I mean the quality of the original mp3, not the original lossless source). My question is: Can we get both? Is it theoretically possible to recover the lossy compressed data from the lossy decompressed data, without losing even more? If it is possible, I could imagine a lossless compression algorithm that compresses the audio with FLAC. Then it also scans the audio for any signs of previous lossy compression, and if detected, recompresses it losslessly to the original lossy file. Then it keeps whichever file is smaller.

    Read the article

  • How do I configure a new (non-OS) raid device under Windows 7?

    - by GregH
    I recently installed 3 new 1TB drives in my Windows 7 (64 bit) system. These are in addition to the 10k rpm disk that already runs the Windows 7 OS. My intent is to create a RAID 5 volume with the 3 disks. I don't seem to have a problem configuring the BIOS and creating the resulting 1.9 TB RAID volume. I run into the problem when I try booting into Windows: I get a quick flash of a blue screen and then am prompted by Windows to do a repair. It tries to repair and then reboots, and this sequence repeats indefinitely. If I re-configure the BIOS back to non-RAID (AHCI), then Windows boots fine. The strange thing is that the 1.9 TB volume I configured through the BIOS actually shows up in Windows, which is odd since the motherboard is no longer set up for RAID. I assume that I somehow have to install the RAID drivers from the motherboard manufacturer. How do I do this? Is the reason I'm getting the blue screen that the RAID drivers aren't installed? It's strange because I can find plenty of documentation on how to set up RAID and do a fresh install of Windows onto the RAID device, but nothing on how to set up a RAID device on an already running system. Advice is appreciated.
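    The blue screen on the first RAID-mode boot is the classic symptom of the RAID miniport driver being disabled because Windows was installed in AHCI mode. A commonly suggested fix, offered here as a sketch and assuming an Intel chipset whose in-box RAID driver is iaStorV (back up the registry first), is to enable the driver before flipping the BIOS:

        rem run from an elevated command prompt while still booted in AHCI mode
        reg add HKLM\System\CurrentControlSet\Services\iaStorV /v Start /t REG_DWORD /d 0 /f
        rem reboot, switch the SATA mode to RAID in the BIOS, then install the
        rem motherboard vendor's Intel Rapid Storage / Matrix Storage package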

    Read the article

  • CryptSvc not matched by Windows 7 Firewall rule

    - by theultramage
    I am using Windows Firewall in conjunction with a third-party tool to get notified about new outbound connection attempts (Windows Firewall Notifier or Windows Firewall Control). The way these tools do it is by setting the firewall to deny by default, and by adding an auditing policy to log blocked connections into the Security event log. Then they watch the log and display a notification about newly added entries.

        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
        auditpol /set /subcategory:{0CCE9226-69AE-11D9-BED3-505054503030} /failure:enable

    With this configuration in place, I now need to craft outbound allow rules for applications and system services. Here is the rule for CryptSvc, the service frequently used for certificate validation and revocation checking:

        netsh advfirewall firewall add rule name="Windows Cryptographic Services" action=allow enable=yes profile=any program="%SystemRoot%\system32\svchost.exe" service="CryptSvc" dir=out protocol=tcp remoteport=80,443

    The problem is, this rule does not work. Unless I change the scope to "all programs and services" (which is really unhealthy), connection denied events like the following will keep appearing in the security log:

        Event 5157, Microsoft Windows security auditing.
        The Windows Filtering Platform has blocked a connection.
        Application Information:
            Process ID: 1476 (<- svchost.exe with CryptSvc and nothing else)
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Direction: Outbound
            Source Address: 192.168.0.1
            Source Port: 49616
            Destination Address: 2.16.52.16
            Destination Port: 80
            Protocol: 6 (<- TCP)

    To make sure it's CryptSvc, I have let the connection through and reviewed its traffic; I also configured CryptSvc to run in its own svchost instance to make it more obvious:

        ;sc config CryptSvc type= share
        sc config CryptSvc type= own

    So... why is it not matching the firewall rule, and how do I fix that?
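    One avenue worth checking (an assumption, not something the log above confirms): service-scoped firewall rules are matched by the service's SID, and a service whose SID type is "none" can never match a service= rule, so its traffic falls through to the default block. Querying and changing that looks like:

        rem show the current service SID type for CryptSvc
        sc qsidtype CryptSvc

        rem let the service be identified by its SID (needed for service-scoped rules)
        sc sidtype CryptSvc unrestricted
        net stop CryptSvc && net start CryptSvc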

    Read the article

  • Using UDF on a USB flash drive

    - by CesarB
    After failing to copy a file bigger than 4G to my 8G USB flash drive, I formatted it as ext3. While this is working fine for me so far, it will cause problems if I want to use it to copy files to someone who does not use Linux. I am thinking of formatting it as UDF instead, which I hope would allow it to be read (and possibly even written) on the three most popular operating systems (Windows, MacOS, and Linux), without having to install any extra drivers. However, from what I have found on the web already, there seem to be several small gotchas related to which parameters are used to create the filesystem, which can reduce the compatibility (but most of the pages I found are about optical media, not USB flash drives). I would like to know: Which utility should I use to create the filesystem? (So far I have found mkudffs and genisoimage, and mkudffs seems the best option.) Which parameters should I use with the chosen utility for maximum compatibility? How compatible with the most common versions of these three operating systems is UDF, actually? Is using UDF actually the best idea? Is there another filesystem which would have better compatibility, with no problematic restrictions like the FAT32 4G file size limit, and without having to install special drivers in every single computer which touches it?
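    As a point of reference, a commonly suggested mkudffs invocation for flash drives (hedged: the exact flags depend on the udftools version, and /dev/sdX is a placeholder for the whole device, not a partition) is:

        # wipe any old partition table / filesystem signatures first
        sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1

        # create UDF across the whole device; a 512-byte block size matches
        # the drive's logical sector size, which Windows expects on hard-disk-class media
        sudo mkudffs --media-type=hd --blocksize=512 /dev/sdX

    Formatting the raw device rather than a partition, and matching the block size to the drive's logical sector size, are the two gotchas most often blamed when one OS or another refuses to mount the stick.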

    Read the article

  • How to set up a serial connection to a Windows 7 computer

    - by oli_arborum
    I need to set up a "dial in" connection to a Windows 7 (Ultimate) computer via a serial null-modem cable, to be able to connect from a Windows XP client to that computer and exchange data over IP.

    Question 1: How do I do that? I found the information neither via Google nor in the MSDN. Seems like no one has ever tried it before... ;-)

    I already managed to install a legacy modem device called "Communications cable between two computers" and found the menu entry "New Incoming Connection..." under Network and Internet > Network Connections. When I finish this wizard I get the message that the "Routing and Remote Access service" cannot be started. In the event viewer I see the following error messages:

        "The currently configured authentication provider failed to load and initialize successfully. The requested name is valid, but no data of the requested type was found." (Source: RemoteAccess, EventID: 20152)

        "The Routing and Remote Access service terminated with service-specific error The requested name is valid, but no data of the requested type was found." (Source: Service Control Manager, EventID: 7024)

    The Windows 7 installation is "naked", i.e. no additional software or services are installed.

    Question 2: Am I on the right path to set up the connection?
    Question 3: How can I get the Routing and Remote Access service running?

    Read the article

  • How to use the AWUS036H on MacBook Pro with Lion and Backtrack in VM?

    - by Swader
    I have the AWUS036H USB WiFi adapter and have recently upgraded OS X to Lion. The thing is, there are no Lion drivers for the AWUS036H, and I would have to boot into 32-bit mode every time I want to use the adapter, as per the instructions here: http://www.youtube.com/watch?v=n9_HAGi1ce0 I also want to install BackTrack, as I deal with networks a lot for my company. While this would be a simple matter on any other laptop, the company-issued MacBook does not allow booting into any OS other than Mac OS X, or Windows via Boot Camp. Since dual booting into BT is not an option, I would like BackTrack to run in a VM inside OS X Lion, and this it does: it works like a charm inside VirtualBox. But since there are no 64-bit drivers for the WiFi adapter, Lion doesn't recognize it and cannot install it. This, in turn, means that BackTrack cannot see it, even though the AWUS036H usually works flawlessly with BT. How can I make my VM-based BT see the WiFi adapter even if the parent OS doesn't see it, if at all? Is there a way, or am I better off buying a new WiFi adapter that supports OS X 10.7, such as the AWUS036NHR?
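    USB passthrough is the usual way out here: VirtualBox can hand the raw USB device to the guest, so BackTrack loads its own rtl8187 driver and the missing Lion driver on the host stops mattering. A hedged sketch with VirtualBox's CLI (the VM name "BackTrack" is a placeholder, and the Oracle extension pack is needed for USB 2.0 devices):

        # list USB devices the host can see and note the adapter's UUID
        VBoxManage list usbhost

        # attach it to the running VM (or add a permanent USB filter in the VM settings)
        VBoxManage controlvm "BackTrack" usbattach <uuid-from-the-list>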

    Read the article

  • Issues installing apache debian

    - by Belgin Fish
    I'm having issues installing apache2, and pretty much everything in general; I'm using Debian. I run sudo apt-get install apache2 and it returns:

        root@debian:~# apt-get install apache2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         apache2 : Depends: apache2-mpm-worker (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-prefork (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-event (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-itk (= 2.2.16-6+squeeze7) but it is not going to be installed
                   Depends: apache2.2-common (= 2.2.16-6+squeeze7) but it is not going to be installed
        E: Broken packages

    Not really sure what's up :S Seems like it can't find any of the required packages for anything. Anyone know what I'm doing wrong?
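    Since nothing installs cleanly, the next diagnostic step is usually to see why one of the named dependencies is being held back, e.g. (a generic troubleshooting sketch, not specific to this box):

        # refresh the package index and ask apt why the dependency is blocked
        sudo apt-get update
        sudo apt-get install apache2.2-common          # the error for this one names the real conflict
        apt-cache policy apache2 apache2.2-common      # check the candidate versions match squeeze
        sudo apt-get -f install                        # let apt try to repair a half-broken state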

    Read the article

  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as a drop-in replacement, I added the Provides: origpackage line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a package depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing?

    EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file:

        Source: tao-xen-utils-common
        Section: kernel
        Priority: optional
        Maintainer: Creshal <[email protected]>
        Build-Depends: debhelper
        Standards-Version: 3.8.0
        Homepage: http://tao.at

        Package: tao-xen-utils-common
        Architecture: all
        Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall
        Provides: xen-utils-common
        Conflicts: xen-utils-common
        Replaces: xen-utils-common
        Description: Xen administrative tools - common files (modified)
         The userspace tools to manage a system virtualized through the
         Xen virtual machine monitor. Modified for use with TAO Firewall.

    Installing xen-utils-4.0 fails, however:

        foo@bar# apt-cache showpkg tao-xen-utils-common
        Package: tao-xen-utils-common
        Versions:
        4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status)
         Description Language:
                         File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages
                          MD5: 7c2503f563fca13b33b4eb3cbcb3c129

        Reverse Depends:
          tao-firewall,tao-xen-utils-common
          tao-firewall,tao-xen-utils-common
        Dependencies:
        4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null))
        Provides:
        4.0.0-1tao1 - xen-utils-common
        Reverse Provides:

        foo@bar# apt-get install xen-utils-4.0
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xen-utils-common
        Suggested packages:
          xen-docs-4.0
        The following packages will be REMOVED:
          tao-xen-utils-common
        The following NEW packages will be installed:
          xen-utils-4.0 xen-utils-common

    Edit:

        foo@bar# apt-cache policy xen-utils-4.0
        xen-utils-4.0:
          Installed: (none)
          Candidate: 4.0.1-4
          Version table:
             4.0.1-4 0
                500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages
                500 http://security.debian.org/ stable/updates/main amd64 Packages
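    One detail that could explain this (an assumption to verify, not a confirmed diagnosis): squeeze-era dpkg and apt do not support versioned provides, so a virtual xen-utils-common can never satisfy a versioned dependency such as xen-utils-common (>= some-version). Checking whether xen-utils-4.0 declares a versioned dependency would settle it:

        # does xen-utils-4.0 require a specific version of xen-utils-common?
        apt-cache show xen-utils-4.0 | grep -i '^Depends'

        # a plain (unversioned) dependency can be satisfied by the Provides line;
        # a versioned one cannot on squeeze-era dpkg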

    Read the article

  • Which ports are needed for NTLM (Windows Authentication) to connect to SQL Server?

    - by Adam Bellaire
    I've got SQL Server running on a machine which is not in a domain, and which is not operating in mixed mode (it's running with "Windows Authentication"). I'm trying to connect to it from a Linux web server running FreeTDS via TCP/IP, using NTLM to authenticate. The firewall on the SQL Server machine is very restrictive. 1433 is open to my web server, but I'm getting conflicting information from the web on what additional ports (TCP/UDP) are needed for NTLM to succeed. It currently fails: I can talk on 1433 to request NTLM, but the actual authentication always fails. One source says 137, 138, 139, but those are just the NetBIOS ports. Do I really need those? Another source says 135. Still others seem to say 1434... I can't make heads or tails of it. Dammit Jim, I'm a programmer, not a network administrator!

    EDIT: The exact error message:

        Msg 18452, Level 14, State 1, Server , Line 0
        Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
        Msg 20002, Level 9, State -1, Server OpenClient, Line -1
        Adaptive Server connection failed

    I am attempting to connect with a remote machine username, i.e. 'servername\username'. Some sources recommend that I set up mirrored accounts on the local and remote machines, but the local machine is running Linux, not IIS under Windows.
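    To isolate whether this is a port problem at all, a direct FreeTDS test from the Linux box is useful; NTLM for SQL Server is carried inside the TDS session on 1433, so if the handshake reaches the "login failed" stage the firewall is probably not the missing piece. A troubleshooting sketch (the host, user and password are placeholders, and TDSDUMP just writes a protocol trace):

        # force a modern TDS version and capture a protocol trace
        TDSVER=7.1 TDSDUMP=/tmp/freetds.log \
            tsql -H sqlserver.example.com -p 1433 -U 'SERVERNAME\username' -P 'password'

    For what it's worth, error 18452 means the server is rejecting the login as untrusted, which is consistent with the attempt already getting through on 1433.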

    Read the article

  • domain2.com redirects to domain1.com in Apache

    - by Dmitry Mikhaylov
    I created a new virtual host, but when I try to request it, Apache redirects me to another virtual host. What could cause this problem?

        <VirtualHost XXX.XXX.XXX.XXX:80 >
            ServerName domain1.com
            AddDefaultCharset utf-8
            CustomLog /var/www/httpd-logs/domain1.com.access.log combined
            DocumentRoot /home/user/www/domain1.com
            ErrorLog /var/www/httpd-logs/domain1.com.error.log
            ServerAdmin [email protected]
            ServerAlias www.domain1.com
            SuexecUserGroup user user
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/home/user:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/home/user/mod-tmp"
            php_admin_value session.save_path "/home/user/mod-tmp"
            ScriptAlias /cgi-bin/ /home/user/www/domain1.com/cgi-bin/
        </VirtualHost>

        <VirtualHost XXX.XXX.XXX.XXX:80 >
            ServerName domain2.com
            CustomLog /dev/null combined
            DocumentRoot /home/user/www/domain2.com
            ErrorLog /dev/null
            ServerAdmin [email protected]
            ServerAlias www.domain2.com
            SuexecUserGroup user user
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/home/user:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/home/user/mod-tmp"
            php_admin_value session.save_path "/home/user/mod-tmp"
        </VirtualHost>

    "apache2ctl -S" output:

        VirtualHost configuration:
        XXX.XXX.XXX.XXX:80  is a NameVirtualHost
                default server domain1.com (/etc/apache2/apache2.conf:266)
                port 80 namevhost domain1.com (/etc/apache2/apache2.conf:266)
                port 80 namevhost domain2.com (/etc/apache2/apache2.conf:284)
        XXX.XXX.XXX.XXX:443 is a NameVirtualHost
                default server domain1.com (/etc/apache2/apache2.conf:246)
                port 443 namevhost domain1.com (/etc/apache2/apache2.conf:246)
        wildcard NameVirtualHosts and _default_ servers:
        *:443               is a NameVirtualHost
                default server www.example.com (/etc/apache2/apache2.conf:239)
                port 443 namevhost www.example.com (/etc/apache2/apache2.conf:239)
        *:80                is a NameVirtualHost
                default server domain1.com (/etc/apache2/sites-enabled/000-default:1)
                port 80 namevhost domain1.com (/etc/apache2/sites-enabled/000-default:1)
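    A quick way to confirm how requests are being matched (rather than actually redirected) is to send explicit Host headers straight at the IP; if the domain2.com request returns domain1.com's content with a 200 rather than a 301/302, the request is simply landing in the wrong vhost, often because of a cached redirect in the browser or DNS for domain2.com pointing elsewhere:

        # which vhost answers for each Host header?
        curl -sI -H "Host: domain1.com" http://XXX.XXX.XXX.XXX/ | head -n 5
        curl -sI -H "Host: domain2.com" http://XXX.XXX.XXX.XXX/ | head -n 5

        # does domain2.com actually resolve to this server?
        dig +short domain2.com www.domain2.com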

    Read the article

  • OpenWrt vs DDWrt

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares: OpenWRT or DD-WRT. I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWRT and DD-WRT which they would recommend and why. To give a few reference points, I'm interested in:

        reliability – network stability, both on cable and wireless and on the USB drive
        performance – network speed (very important), and also USB drive speed
        configurability – the possibility to add extensions such as a torrent client, FTP, SSH, WWW and SVN server directly
        ease of use – the ease of installation and configuration of the router
        support/docs – how much info there is if you stumble upon a problem and have to find documentation, or whether there's any free support (but that's a longshot)

    Of course I don't imagine that I will find the perfect firmware or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm also open to Tomato if it's the better one.
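    On the configurability point specifically, OpenWrt's package manager is what people usually mean by "custom packages"; a hedged taste of it (package names can differ between releases) looks like:

        # on the router, over SSH
        opkg update
        opkg list | grep -i transmission     # torrent client
        opkg install transmission-daemon vsftpd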

    Read the article

  • setting up git on cygwin - openssl

    - by user23020
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to the iconv.h and openssl headers which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
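    The compile errors all trace back to missing development headers (libiconv, OpenSSL, zlib); on Cygwin those come from -devel packages installed through setup.exe, or git can be built without the OpenSSL/iconv dependencies via its Makefile knobs. A hedged sketch of both routes:

        # route 1: install the headers via Cygwin's setup.exe command line
        setup.exe -q -P libiconv-devel,openssl-devel,zlib-devel

        # route 2: build git without those optional dependencies
        make NO_OPENSSL=1 NO_ICONV=1
        make NO_OPENSSL=1 NO_ICONV=1 install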

    Read the article

  • Nvidia Linux Driver Huge Resolution

    - by darxsys
    I'm trying to set up a working CUDA SDK on my Linux Mint. I'm new to Linux and everything connected with it, so I tried following some steps on how to install CUDA. First, I downloaded a Linux driver from here: http://developer.nvidia.com/cuda/cuda-downloads, version 295.41. After that, I barely found a way to run it. I did it like this:

        1. typed sudo init 1 in a terminal and switched to root
        2. typed service mdm stop
        3. ran the *.run file downloaded from the link above

    Then it started installing the driver. It gave some warning messages, but I ignored them. After installation, I typed init 5 and it came back to the GUI screen, BUT everything is huge. I restarted; still huge. My screen resolution is 640x480 on a 17 inch laptop monitor. I tried running Nvidia X Server Settings, but it says: "You do not appear to be using Nvidia X Driver. Please edit your X configuration file." I tried that. Nothing happened. I can't change the resolution because that Nvidia Settings tool gives no options. Then I googled some things and installed some packages, but nothing helped. The biggest problem is I don't understand what's really going on. My laptop is a Samsung with an i7 and an Nvidia GT 650M with Optimus. I can't even install Bumblebee, but that is something I will try if I manage to get my resolution back to default. Please, help!
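    With an Optimus laptop the internal display is driven by the Intel GPU, so the bare NVIDIA driver taking over X is exactly what produces the 640x480 fallback. A commonly suggested recovery path (hedged; exact file and service names vary by distribution) is to undo the .run installation, let X fall back to the Intel driver, and then add Bumblebee for CUDA/optirun:

        # remove the driver installed by the .run file and any xorg.conf it wrote
        sudo ./NVIDIA-Linux-x86_64-295.41.run --uninstall
        sudo rm -f /etc/X11/xorg.conf
        sudo service mdm restart        # back on the Intel driver, native resolution

        # then install bumblebee from its PPA (Ubuntu/Mint)
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update && sudo apt-get install bumblebee bumblebee-nvidia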

    Read the article

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 X64 Enterprise. SQL: SQL Server 2008 R2 Enterprise X64. I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct. The file system is NTFS. The error message I get is:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 5119, Level 16, State 1, Line 1
        Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file.
        Make sure the file system supports sparse files.

    The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable:

        1. Add the SQL Server service (domain) account to the local Administrators group and restart the SQL service.
        2. Grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\

    I have tried to change the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER to no avail. I have no issue creating a new database. Why can I not create a snapshot by giving permission only on the DATA folder?

    Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly against the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2 but cannot find anything that would suggest it shouldn't work in a 2000 domain. I disjoined the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to re-install the server without joining the domain and without the hotfix and see if the issue persists.
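    Two quick checks that narrow this down, offered as a hedged troubleshooting sketch rather than a diagnosis: whether the E: volume actually reports sparse-file support, and what the effective ACL on the DATA folder looks like for the SQL group:

        rem does the volume support sparse files? look for "Supports Sparse Files"
        fsutil fsinfo volumeinfo E:

        rem effective permissions on the snapshot target folder
        icacls "E:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA"

        rem if the group is missing inherited rights, re-grant on the folder only
        icacls "E:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA" /grant "SQLServerMSSQLUser$db$MSSQLSERVER:(OI)(CI)F"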

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc nor from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.
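    Since the overall SMART status can read "Verified" while individual attributes are already failing, pulling the full attribute table from the Ubuntu live CD is a cheap way to split drive-versus-controller blame (smartmontools may need installing in the live session first; /dev/sda is the usual name for the internal disk there):

        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sda | egrep -i 'Reallocated|Pending|Uncorrectable|CRC'

    Growing reallocated or pending sector counts point at the drive itself; a climbing UDMA CRC error count with clean sector counts points more toward the cable, enclosure bridge, or controller.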

    Read the article
