Search Results

Search found 53962 results on 2159 pages for 'error detection'.


  • WordPress: can't access WordPress.com and other external sites?

    - by Rax Olgud
    Hello, I recently started a WordPress blog using hosting at MyDomain (they offer the application "natively"). The blog works fine; however, there are two plugins I can't seem to install correctly. First, the WordPress.com Stats plugin requires an API key. When I input it, I get the following message: Error from last API Key attempt: Your blog was unable to connect to WordPress.com. Please ask your host for help. (transport error - could not open socket: 110 Connection timed out) Second, the Akismet plugin is not configured. When I go to the Akismet page to insert my API key, it shows the following message: There was a problem connecting to the Akismet server. Please check your server configuration. I assume the two issues are related... I approached my hosting provider about the subject and all they said is that they don't support WordPress, they only provide the means to install it. To clarify, up to this point I have only been able to install plugins that don't require an API key. What can I do to diagnose the problem and fix it? As a work-around, are there comparable stats and anti-spam plugins that don't require an API key? Many thanks.
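
    Both failures point to outbound connections from the web server being blocked. A quick way to confirm, assuming shell access (or the ability to run a small PHP test script) on the host - the hostnames below are only illustrative:

        # test outbound HTTP/HTTPS from the hosting account
        curl -v --connect-timeout 10 http://wordpress.com/ > /dev/null
        curl -v --connect-timeout 10 https://rest.akismet.com/ > /dev/null
        # or from PHP, which is what the plugins themselves use
        php -r 'var_dump(@fsockopen("wordpress.com", 80, $errno, $errstr, 10), $errstr);'

    If these time out the same way, the host's firewall is blocking outgoing sockets and only the provider can open them.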

    Read the article

  • Sending USR2 to mongrel_rails sometimes results in an “Address already in use” on the restart

    - by Ben
    We have a rolling-restart mode for our mongrel cluster that sends a USR2 signal to each running process. This works great, most of the time. But very occasionally the mongrel process will shut down, and then fail to restart, with the following error:

        /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize_without_backlog': Address already in use - bind(2) (Errno::EADDRINUSE)
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `listener'

    Looking through the mongrel source, the USR2 handler calls a synchronous stop on the running server, so it ought to block until the socket has been released. Has anyone seen this error? Does anyone have any ideas what might cause it? (I asked this question over on StackOverflow initially, but thought it might be more appropriate here.)
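
    One low-tech workaround, sketched below, is to have the rolling-restart script wait until nothing is bound to the mongrel port before moving on; the port number and PID file path here are assumptions:

        #!/bin/sh
        # send the restart signal, then poll until the port is actually free
        PORT=8000
        kill -USR2 "$(cat /var/run/mongrel.8000.pid)"
        for i in $(seq 1 30); do
            netstat -ltn | grep -q ":$PORT " || break
            sleep 1
        done

    That does not explain the race itself, but it keeps the new listener from colliding with a socket that is still being torn down.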

    Read the article

  • Compiling realtime kernel from RHEL 6 MRG sources on CentOS 6

    - by Sashka B
    I'm trying to compile kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm from the RHEL 6 MRG source RPMs on a CentOS 6 x86_64 system. It's the first time I'm doing this, so I did some research on how to do it properly. From what I found, I did:

        rpm -ihv kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm
        cd ~/rpmbuild/SPECS
        nano kernel-rt.spec
        rpmbuild -bb kernel-rt.spec 2> build-err.log | tee build-out.log

    In kernel-rt.spec I disabled compilation of the variants I don't need - i.e., compile only rt and firmware. I also defined not to build debuginfo. After the compilation finished, I had two files in ~/rpmbuild/RPMS/x86_64/:

        kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm
        kernel-rt-devel-2.6.33.9-rt31.75.el6rt.x86_64.rpm

    But when I tried to install the kernel, I got this error message:

        $ sudo rpm -ihv kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm
        error: Failed dependencies:
            kernel-rt-firmware = 2.6.33.9-rt31.75.el6rt is needed by kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64

    There was no folder ~/rpmbuild/RPMS/noarch, which is where I would expect it to show up. I've also tried rpmbuild --rebuild kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm, but got the same results... What am I doing wrong? I've seen this question, but it suggests what I tried already, and I want to build the kernel myself, not use the pre-built one from SLC.
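
    The failed dependency suggests the noarch kernel-rt-firmware subpackage was never produced, most likely because the spec edit that narrowed the build variants also dropped it. A rough way to check and work around this (the macro names in the spec may differ; --nodeps is only a last resort, and only if the firmware files are already on the system):

        # see how the spec gates the firmware subpackage
        grep -n -i firmware ~/rpmbuild/SPECS/kernel-rt.spec | head
        # after re-enabling that part and rebuilding, the noarch RPM should appear here
        ls ~/rpmbuild/RPMS/noarch/
        # last resort: install the kernel while ignoring the dependency
        sudo rpm -ihv --nodeps kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm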

    Read the article

  • Oracle® Database Express Edition problem running on Win Server 2003 with MS SQL Server 2008 [closed]

    - by totoz
    Hi, I have MS SQL Server 2008 on Win Server 2003, and IIS is also running. I'm trying to learn Oracle, so first I installed Oracle® Database Express Edition. I tried to connect via web browser to the Oracle server at the URL http://127.0.0.1:8080/apex and got this error page in the browser:

        The page cannot be found
        The page you are looking for might have been removed, had its name changed, or is temporarily unavailable.
        Please try the following:
        - Make sure that the Web site address displayed in the address bar of your browser is spelled and formatted correctly.
        - If you reached this page by clicking a link, contact the Web site administrator to alert them that the link is incorrectly formatted.
        - Click the Back button to try another link.
        HTTP Error 404 - File or directory not found.
        Internet Information Services (IIS)
        Technical Information (for support personnel)
        - Go to Microsoft Product Support Services and perform a title search for the words HTTP and 404.
        - Open IIS Help, which is accessible in IIS Manager (inetmgr), and search for topics titled Web Site Setup, Common Administrative Tasks, and About Custom Error Messages.

    Why can't I log on to the Oracle home page?
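
    The 404 page shown is an IIS error page, which suggests something other than Oracle's embedded listener is answering on port 8080 (or that the XE HTTP port never came up). A sketch of how to check and, if needed, move APEX to a free port - 8081 is just an example:

        rem see which process owns port 8080
        netstat -ano | findstr :8080

        rem from a SYSDBA sqlplus session, move the XE HTTP listener to another port
        sqlplus / as sysdba
        SQL> EXEC DBMS_XDB.SETHTTPPORT(8081);

    After that, APEX should be reachable at http://127.0.0.1:8081/apex.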

    Read the article

  • USB To Serial under OpenSuse 11.3

    - by Lars
    I have a LogiLink USB-to-Serial adapter with the PL2303 chip inside. When I insert the device:

        [26064.927083] usb 7-1: new full speed USB device using uhci_hcd and address 9
        [26065.076090] usb 7-1: New USB device found, idVendor=067b, idProduct=2303
        [26065.076099] usb 7-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [26065.076105] usb 7-1: Product: USB-Serial Controller
        [26065.076110] usb 7-1: Manufacturer: Prolific Technology Inc.
        [26065.079181] pl2303 7-1:1.0: pl2303 converter detected
        [26065.091296] usb 7-1: pl2303 converter now attached to ttyUSB0

    So the device is recognized and the converter is attached to ttyUSB0. When I do screen /dev/ttyUSB0 9600 I get the error:

        bash: /dev/ttyUSB0: Permission denied

    So I went looking at the file permissions. ls -l from the /dev folder reports:

        crw-rw---- 1 root dialout 188, 0 2011-04-26 15:47 ttyUSB0

    I added my user lars to the dialout group, and running the groups command as lars shows that I'm in the group. Yet I still receive the permission denied error, both as lars and as root. I'm trying to connect a console cable to configure some Cisco switches. My OS is OpenSuse 11.3 x86_64 with kernel version 2.6.34.7-0.7-desktop.
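
    Group changes only take effect in sessions started after the change, so it is worth confirming that the running shell actually has the group before digging deeper. A quick sketch:

        # does the *current* session have dialout? (id shows the live credentials)
        id
        # pick up the new group without logging out
        newgrp dialout
        # sanity-check the device node and try again
        ls -l /dev/ttyUSB0
        screen /dev/ttyUSB0 9600

    Note that the 'Permission denied' here is printed by bash, not screen, which hints the command may actually be redirecting to /dev/ttyUSB0 rather than screen opening it; the failure as root points in the same direction.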

    Read the article

  • PowerPoint 2010 crash on quickstyle menu

    - by Marcus Lindblom
    Windows 7 64-bit, recent install, with Office 2010 added. When I create some boxes and open the quick-style menu, it crashes (or stops responding, in windowese, and then sends an error report). I've run the "Repair" from the installer, but it didn't help. There's nothing in Windows Update, and I've Googled and searched on Microsoft's site with no luck. Any other ideas? (It's not related to this ppt crash question, as that concerns PPT 2007.) Error in the event log:

        Faulting application name: POWERPNT.EXE, version: 14.0.4754.1000, time stamp: 0x4b967cf2
        Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdfe0
        Exception code: 0xe0000003
        Fault offset: 0x000000000000aa7d
        Faulting process id: 0xee8
        Faulting application start time: 0x01cb9dc710fd76d8
        Faulting application path: C:\Program Files\Microsoft Office\Office14\POWERPNT.EXE
        Faulting module path: C:\Windows\system32\KERNELBASE.dll
        Report Id: 58766562-09ba-11e0-90d1-00215a139192

    And:

        Fault bucket , type 0
        Event Name: APPCRASH
        Response: Not available
        Cab Id: 0
        Problem signature:
        P1: POWERPNT.EXE
        P2: 14.0.4754.1000
        P3: 4b967cf2
        P4: KERNELBASE.dll
        P5: 6.1.7600.16385
        P6: 4a5bdfe0
        P7: e0000003
        P8: 000000000000aa7d
        P9:
        P10:
        Attached files:
        C:\Users\marcusl\AppData\Local\Temp\CVR86DD.tmp.cvr
        C:\Users\marcusl\AppData\Local\Temp\WERC765.tmp.WERInternalMetadata.xml
        These files may be available here:
        C:\Users\marcusl\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_POWERPNT.EXE_d45f313d77f7e52cc8682b2b64cc3898127c2c_1106e3ac
        Analysis symbol:
        Rechecking for solution: 0
        Report Id: 58766562-09ba-11e0-90d1-00215a139192
        Report Status: 1

    Read the article

  • rpm file conflict after alien conversion

    - by Zitrax
    I have a program for which I generate a .deb file. The .deb file works fine on the systems I have tried it on (also tested with lintian). It has previously worked to use alien to convert this to .rpm and install it on Suse. However, it is now about a year since I last tried it, and now I get an error when trying to install the alien-made rpm on Fedora 11:

        file /usr/share/icons/default.kde from install of testpkg-0.2-2.i386 conflicts with file from package kdelibs3-3.5.10-13.fc11.1.i586

    Listing the content of the rpm file:

        $ rpm -qlp testpkg-0.2-2.i386.rpm
        /
        /usr
        /usr/games
        /usr/games/testpkg
        /usr/lib
        /usr/lib/libfmod-3.75.so
        /usr/share
        /usr/share/app-install
        /usr/share/app-install/icons
        /usr/share/app-install/icons/testpkg.png
        /usr/share/applications
        /usr/share/applications/testpkg.desktop
        /usr/share/doc
        /usr/share/doc/testpkg
        /usr/share/doc/testpkg/changelog.gz
        /usr/share/doc/testpkg/copyright
        /usr/share/games
        /usr/share/games/testpkg
        /usr/share/games/testpkg/images
        /usr/share/games/testpkg/images/bb.dat
        /usr/share/games/testpkg/images/bb_bg.dat
        /usr/share/games/testpkg/images/bubblemad_8x8.png
        /usr/share/games/testpkg/images/goldfont.png
        /usr/share/games/testpkg/lvl
        /usr/share/games/testpkg/lvl/lvl001.txt
        /usr/share/games/testpkg/lvl/lvl002.txt
        /usr/share/games/testpkg/lvl/lvl003.txt
        /usr/share/games/testpkg/lvl/lvl004.txt
        /usr/share/games/testpkg/lvl/lvl005.txt
        /usr/share/games/testpkg/lvl/lvl006.txt
        /usr/share/games/testpkg/lvl/lvl007.txt
        /usr/share/games/testpkg/music
        /usr/share/games/testpkg/music/alfa.it
        /usr/share/games/testpkg/music/beta.it
        /usr/share/games/testpkg/sounds
        /usr/share/games/testpkg/sounds/bounce.wav
        /usr/share/games/testpkg/sounds/click.wav
        /usr/share/games/testpkg/sounds/warning.wav
        /usr/share/icons
        /usr/share/icons/default.kde
        /usr/share/icons/default.kde/16x16
        /usr/share/icons/default.kde/16x16/apps
        /usr/share/icons/default.kde/16x16/apps/testpkg.png
        /usr/share/man
        /usr/share/man/man6
        /usr/share/man/man6/testpkg.6.gz

    Am I wrong to put the KDE icons in /usr/share/icons/default.kde, which seems to be a symbolic link? It is a symbolic link on both Kubuntu 9.10 and Fedora 11, though. It sounds like a common situation that the same directory is needed by different packages, so why is it a conflict?
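
    The conflict arises because the converted package claims /usr/share/icons/default.kde as a regular directory, while on Fedora that path is a symlink owned by kdelibs3, and rpm refuses to replace one with the other. One way around it, sketched here, is to let alien generate the build tree, drop that path from the generated spec's %files list (or install the icon under the hicolor theme instead), and rebuild; the exact layout alien generates may differ:

        alien -r -g testpkg_0.2-2.deb      # -g generates the build tree instead of building
        cd testpkg-0.2
        # edit the generated .spec: remove the /usr/share/icons/default.kde entry
        rpmbuild -bb testpkg-0.2-2.spec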

    Read the article

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up CloneZilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot CloneZilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far.

        LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following append lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
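
    The live-boot fetch= parameter also accepts http:// and ftp:// URLs, and a large squashfs is usually far happier over HTTP than TFTP. A sketch of the NFS-free variant, assuming a web server (even a throwaway Python one) can run on 10.130.155.23 and that filesystem.squashfs sits in its document root:

        # on the PXE server, serve the squashfs over HTTP (quick and dirty)
        cd /path/to/clonezilla/live && python -m SimpleHTTPServer 80

        # pxelinux APPEND line
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=http://10.130.155.23/filesystem.squashfs

    If it still hangs, checking from another machine that the URL is reachable helps separate a fetch problem from an initrd that lacks the needed live-boot scripts.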

    Read the article

  • SuperMicro BMC on OpenSuSE Linux --cannot access from LAN

    - by Kendall
    Hi, I have an (old) SMC-001 IPMI device on an (old) X6DVL-EG2 motherboard. My problem is that I cannot access the BMC from the LAN. I'm also getting some interesting output from ipmitool. First, the setup: I enabled Console Redirection in the BIOS and turned BIOS Redirection after POST to "disabled". I then modprobed ipmi_msghandler, ipmi_devintf and ipmi_si, and found ipmi0 under /dev. So far so good. Since I want console redirection over serial, I modified /boot/grub/menu.lst: http://pastebin.com/YYJmhusQ and then modified /etc/inittab as follows:

        S1:12345:respawn:/sbin/agetty -L 19200 ttyS1 ansi

    Networking I set as follows, using ipmitool:

        ipaddr:  192.168.3.164
        netmask: 255.255.255.0
        defgw:   192.168.3.1

    The above are correct for my environment. To test it I do:

        ipmitool -I open chassis power off

    which responds by powering off the machine. When I try to access it from another computer on the network, however, I get an error message:

        host# ipmitool -I lanplus -H 192.168.10.164 -U Admin -a chassis power status
        Error: Unable to establish LAN session
        Unable to get Chassis Power Status

    "Admin" seems to be a valid user name:

        host# ipmitool -I open user list 1
        2  Admin  true  false  true  USER

    The interesting output from ipmitool I mentioned initially:

        host # ipmitool -I open lan set 1 access on
        Set Channel Access for channel 1 failed: Request data field length limit exceeded

    Also:

        newload4:/home/gjones # ipmitool channel info 1
        Channel 0x1 info:
          Channel Medium Type   : 802.3 LAN
          Channel Protocol Type : IPMB-1.0
          Session Support       : session-less
          Active Session Count  : 0
          Protocol Vendor ID    : 7154
        Get Channel Access (volatile) failed: Request data field length limit exceeded

    The output of "ipmitool -I open lan print 1" is here: http://pastebin.com/UZyL6yyE Any help or suggestions are greatly appreciated; I've been working with this thing for a few hours now with no success.
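
    Worth noting that the remote attempt goes to 192.168.10.164 while the BMC was configured as 192.168.3.164, so double-checking the target address is the first step. Beyond that, "Session Support : session-less" on channel 1 suggests the LAN channel may be a different channel number on this board. A sketch of things to try (channel and user IDs are guesses and may differ on the SMC-001):

        # find the channel that is actually the LAN channel
        for ch in 1 2 3 4 5 6 7 8; do echo "== channel $ch"; ipmitool -I open channel info $ch; done

        # enable LAN access and give the Admin user (ID 2) admin privilege on that channel
        ipmitool -I open channel setaccess <lan-channel> 2 callin=on ipmi=on link=on privilege=4
        ipmitool -I open user enable 2
        ipmitool -I open mc reset cold   # restart the BMC so the settings take effect

    Also, old SMC firmware often only speaks IPMI v1.5, so trying -I lan instead of -I lanplus from the remote host is a cheap test.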

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy-pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks:

        <VirtualHost 192.168.168.101:443>
          ProxyPreserveHost On
          SSLProxyEngine On
          ProxyPass / https://192.168.168.111/
          ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
          ProxyPreserveHost On
          ProxyPass / http://192.168.168.111/
          ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:443>
          ProxyPreserveHost On
          SSLProxyEngine On
          ProxyPass / https://192.168.168.111/
          ServerName dev2.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
          ProxyPreserveHost On
          ProxyPass / http://192.168.168.111/
          ServerName dev2.site.com
        </VirtualHost>

    I end up seeing the following error in the provisioner's error log:

        [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri /

    As well as the following entry in the destination QA machine's access log:

        192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
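
    The "\x16\x03\x01" in the back-end log is the start of a TLS handshake arriving on a plain-HTTP port, which is what happens when a :443 vhost has no SSLEngine of its own: the front end accepts raw TLS bytes and forwards them as if they were HTTP. A sketch of the missing pieces on the provisioning machine (the certificate paths are placeholders):

        <VirtualHost 192.168.168.101:443>
          ServerName dev1.site.com
          SSLEngine On
          SSLCertificateFile    /etc/apache2/ssl/dev1.site.com.crt
          SSLCertificateKeyFile /etc/apache2/ssl/dev1.site.com.key
          ProxyPreserveHost On
          SSLProxyEngine On
          ProxyPass / https://192.168.168.111/
        </VirtualHost>

    The back end's :443 vhost needs its own certificate as well; SSLProxyEngine only makes the proxy speak TLS toward it.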

    Read the article

  • Excel cannot access the file with IIS7 & Windows Server 2008 R2 (64-bit)

    - by user838204
    I have a web project (.NET 4) that needs to access an Excel file, but it ends up with the following error message:

        Error occurred during file generation. Microsoft Excel cannot access the file 'D:\xx\xx\abc.xls'. There are several possible reasons:
        • The file name or path does not exist. (Actually it's there)
        • The file is being used by another program. (It can't happen)
        • The workbook you are trying to save has the same name as a currently open workbook.

    In IIS7, I use DefaultAppPool with the identity "myservice", which is in the Administrators group. In the Authentication page of my website under IIS, Anonymous Authentication is enabled and set to "Application pool identity", and ASP.NET Impersonation is disabled. After searching for a solution for hours, I found the following, but NONE of them work:

        1. Create a folder in C:\Windows\SysWOW64\config\systemprofile\Desktop. (See this.)
        2. Grant rights to "myservice" in Component Services. (See this.)

    One strange thing: there is nothing in the IIS_IUSRS group. Is that normal? I remember there being at least two users (DefaultAppPool & Classic .NET AppPool). Please tell me how to fix the access problem. I assume it's an IIS permission problem, but I can't solve it. Thank you.
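
    Beyond the Desktop-folder and DCOM tweaks, the file and its folder also have to be writable by the account the application pool actually runs as. A sketch using icacls, with the path and account taken from the question (adjust as needed):

        icacls "D:\xx\xx" /grant myservice:(OI)(CI)M

    That said, automating desktop Excel from a service is explicitly unsupported by Microsoft, so if this keeps fighting back, generating the .xls with a server-side spreadsheet library instead of Excel COM automation is usually the more robust route.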

    Read the article

  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI

    - by Arda Xi
    I have a hosting account with DreamHost and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, with a .htaccess containing a handler like this:

        # Define the FastCGI Mono launcher as an Apache handler and let
        # it manage this web-application (its files and subdirectories)
        SetHandler monoWrapper
        Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

    My mono.fcgi is set up as such:

        #!/bin/sh
        #umask 0077
        exec >>/home/arienh4/tmp/mono-fcgi.log
        exec 2>>/home/arienh4/tmp/mono-fcgi.err

        echo $(date +"[%F %T]") Starting fastcgi-mono-server2
        cd /
        chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
        echo $$>/home/arienh4/tmp/mono-fcgi.pid
        # stdin is the socket handle
        export PATH="/home/arienh4/mono/bin:$PATH"
        export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
        export TMP="/home/arienh4/tmp"
        export MONO_SHARED_DIR="/home/arienh4/tmp"
        exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
            /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
            /applications=/:/home/arienh4/<domain>

    I took this from the Mono site for CGI. I'm not sure if I'm doing it correctly, though. This code results in this error:

        Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).
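
    Since no log files appear, a first sanity check is whether the launcher script can run at all outside Apache; if it can't, the Action handler just keeps redirecting to itself. A sketch, using the same paths as above:

        chmod +x /home/arienh4/<domain>/cgi-bin/mono.fcgi
        # run the launcher by hand and see whether the log/err files show up
        /home/arienh4/<domain>/cgi-bin/mono.fcgi &
        sleep 2
        cat /home/arienh4/tmp/mono-fcgi.err

    The internal-redirect loop is also what you get when the SetHandler block covers the cgi-bin script itself, so keeping mono.fcgi outside the directory that SetHandler applies to (or exempting it) is worth checking.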

    Read the article

  • SSL Certificate Request on Server 2003 DC CA: DNS Name not Available.

    - by Beuy
    I am trying to submit a request for an SSL certificate on a Domain Controller in order to enable LDAP over SSL, and am having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 & http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl Steps taken so far:

    1. Create Servername.inf with the following content:

        ;----------------- request.inf -----------------
        [Version]
        Signature="$Windows NT$"

        [NewRequest]
        Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC
        KeySpec = 1
        KeyLength = 1024
        ; Can be 1024, 2048, 4096, 8192, or 16384.
        ; Larger key sizes are more secure, but have
        ; a greater impact on performance.
        Exportable = TRUE
        MachineKeySet = TRUE
        SMIME = False
        PrivateKeyArchive = FALSE
        UserProtected = FALSE
        UseExistingKeySet = FALSE
        ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
        ProviderType = 12
        RequestType = PKCS10
        KeyUsage = 0xa0

        [EnhancedKeyUsageExtension]
        OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
        ;-----------------------------------------------

    2. Create the certificate request by running: certreq -new Servername.inf Servername.req

    3. Attempt to submit the certificate request to the CA by running: certreq -submit -attrib "CertificateTemplate: DomainController" request.req

    At which point I get the following error:

        The DNS name is unavailable and cannot be added to the Subject Alternate Name. 0x8009480f (-2146875377)

    Troubleshooting steps I have taken so far:

    1. Modified the Domain Controller template to supply the subject name in the request, restarted the Certificate Service, included the SAN in the request - same error.
    2. Re-installed Certificate Services / IIS / restarted the machine countless times.

    Any help resolving the issue would be greatly appreciated.
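
    One hedged avenue: error 0x8009480f typically comes back when the CA cannot build or accept the SAN for the request. Assuming an Enterprise CA, two things worth checking are whether the DC's FQDN resolves from the CA, and whether the CA accepts SAN attributes passed with -attrib (disabled by default):

        rem on the CA, allow SAN attributes in requests, then restart the service
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc && net start certsvc

        rem resubmit, passing the SAN explicitly (FQDN is the one from the .inf)
        certreq -submit -attrib "CertificateTemplate:DomainController\nSAN:dns=servername.domain.loc" Servername.req

    Enabling EDITF_ATTRIBUTESUBJECTALTNAME2 has security implications on a production CA, so it is best treated as a diagnostic step.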

    Read the article

  • .net Framework won't install on Server 2003 SP2 x64

    - by Yvan JANSSENS
    Hi, when I install the .NET Framework 3.5 SP1 on my rental VPS, I get the message that setup has failed. It's a Server 2003 VPS w/ SP2 installed (64-bit). The .NET Framework v2.0 installed correctly. How do I fix this? This is the installation log:

        [03/10/10,07:44:46] Microsoft .NET Framework 2.0a x64: [2] Failed to fetch setup file in CBaseComponent::PreInstall()
        [03/10/10,07:44:47] setup.exe: [2] ISetupComponent::Pre/Post/Install() failed in ISetupManager::InternalInstallManager() with HRESULT -2147467260.
        [03/10/10,07:44:48] setup.exe: [2] CSetupManager::RunInstallPhase() - Call to Pre/Install/Post for InstallComponents failed
        [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallPhaseAndCheckResults() - RunInstallPhase() returned a NULL piActionResults
        [03/10/10,07:44:49] setup.exe: [2] CSetupManager::RunInstallFromList() - RunInstallPhaseAndCheckResults failed [2]
        [03/10/10,07:44:51] setup.exe: [2] ISetupManager::RunInstallLists(IP_PREINSTALL failed in ISetupManager::RunInstallFromThread()
        [03/10/10,07:44:52] setup.exe: [2] ISetupManager::RunInstallFromThread() failed in ISetupManager::RunInstall()
        [03/10/10,07:44:53] setup.exe: [2] CSetupManager::Run() - Call to RunInstall() failed
        [03/10/10,07:44:59] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a x64 is not installed.
        [03/10/10,07:45:00] WapUI: [2] DepCheck indicates XPSEPSC x64 Installer was not attempted to be installed.
        [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 was not attempted to be installed.
        [03/10/10,07:45:02] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.5 (x64) 'package' was not attempted to be installed.
        [03/11/10,14:19:23] Microsoft .NET Framework 3.0 SP2 x64: [2] Error: Installation failed for component Microsoft .NET Framework 3.0 SP2 x64. MSI returned error code 1604
        [03/11/10,14:26:14] WapUI: [2] DepCheck indicates Microsoft .NET Framework 3.0 SP2 x64 is not installed.

    Thanks!! Yvan

    Read the article

  • Cygwin - Repo with Separate Git/Working Dir Doesn't Work

    - by Kyle Lacy
    Since I've switched to OS X and Vim, I've found it easiest to manage all of my 'dotfiles' (all of my configuration files and miscellaneous scripts) with Git. Having already set up my dotfiles in a repo following this tutorial, I figured it would also be easy enough to migrate all of my settings into my Cygwin setup on my Windows partition. With the repo already on GitHub, I simply cloned it and moved all of the files over to my home directory, making it a mirror of my OS X home directory. Unfortunately, I cannot seem to use the actual repo any further within Cygwin - the problem is that I cannot use my dotfiles repo with git there. The setup is unique from most normal git repos, in that the working directory and the git directory are in different locations. Specifically, the working directory is $HOME (/Users/kyle on OS X, /home/kyle in Cygwin), and the git repo is $HOME/.dotfiles.git. So, if I wanted to get the status of the repo, for example, I would type the following command (which I alias to reduce typing, of course):

        git --work-tree=$HOME --git-dir=$HOME/.dotfiles.git status -uno

    While this works fine on OS X, it refuses to work within Cygwin. Regardless of whether or not I use my alias, or whether or not I substitute $HOME by hand, I get the following git error:

        fatal: Not a git repository: /home/Kyle/dotfiles/.git/modules/.build/git

    I don't understand where this error comes from, but the path /home/Kyle/dotfiles was the original location of the git repo when I initially cloned it. Additionally, it's important to note that the repo relies heavily on submodules. If specifics are necessary, the repo in question can be found on GitHub. The commands I ran to set up the repo in Cygwin can also be found within the Readme file.
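
    The error path (.../.git/modules/.build/git) hints that submodule metadata still carries absolute paths from the original clone location, which no longer exist under Cygwin. A quick way to confirm, sketched here with paths taken from the question:

        # look for gitdir pointers and submodule config still referencing the old clone path
        grep -rn "/home/Kyle/dotfiles" "$HOME/.dotfiles.git" 2>/dev/null | head
        find "$HOME" -maxdepth 3 -name .git -type f -exec grep -Hn gitdir: {} \;

    If stale absolute paths show up, re-running the submodule setup from the new location (or re-cloning with the work tree and git dir already separated) is usually simpler than hand-editing each pointer.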

    Read the article

  • qemu-img: Could not open $FILE

    - by HTTP500
    I received a single-file VMDK from a vendor that has a virtual appliance for a particular product I'm interested in evaluating. We run a KVM solution (Proxmox), so I tried converting the file, but on that system qemu-img blows up. (I was able to convert (multipart) VMDK files from bitnami without error.) So I figured I'd just yum install qemu-img on a RHEL 6.3 VM and do it there. But despite the fact that the file command identifies it just fine, when I run qemu-img on it I get an error that it can't open the file:

        [root@host dir]# file 1.vmdk
        1.vmdk: VMware4 disk image
        [root@host dir]# qemu-img info 1.vmdk
        qemu-img: Could not open 'vmdk'

    I've seen some other people post on the interwebs that they've had this problem, but none of them seem to have a resolution. Does anyone have any ideas? I have checked the MD5SUM already.

    EDIT1:

        [root@host dir]# qemu-img info -f vmdk 1.vmdk
        qemu-img: Could not open '1.vmdk'

    EDIT2: Ran strace per suggestion. Not sure what to look for... Here is a possible:

        ioctl(3, CDROM_DRIVE_STATUS, 0x7fffffff) = -1 ENOTTY (Inappropriate ioctl for device)
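
    The ENOTTY ioctl line is a red herring (qemu probes whether the file is a CD-ROM device); the more likely culprit is that older qemu-img builds don't understand the VMDK subformat the vendor used, with streamOptimized images extracted from OVAs being a common case. A couple of cheap checks, as a sketch:

        qemu-img --version
        # the VMDK descriptor is embedded in the file; createType reveals the subformat
        strings 1.vmdk | grep -m1 createType

    If it reports something like streamOptimized, converting on a machine with a newer qemu-img (or first turning it into a plain monolithic VMDK with VMware's or VirtualBox's tools) usually gets around it.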

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP):

    .htaccess in the root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in the staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it to be a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards, e.g. ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify particular ranges of IPs in a subnet, or entire subnets, if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again.

    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs:

        internetofficer.com/seo-tool/regex-tester/
        fantomaster.com/faarticles/rewritingurls.txt
        addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
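
    For what it's worth, a merged version of the above is sketched below; it assumes all of the hostnames on domain1 are served from the same document root (so a single .htaccess in that root sees every request), which may not hold on every hosting setup:

        RewriteEngine On
        # anyone who is not coming from the home IP gets sent to the retail domain,
        # regardless of which domain1 host they asked for
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The trade-off is exactly the one described above: one ruleset is easier to maintain, but per-subdomain IP exceptions become harder to express.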

    Read the article

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

        Interface Type: TAP
        Protocol: UDP
        Port: 1195
        Firewall: Custom
        Authorization Mode: Static Key

    I then inserted the static key generated in OpenVPN, saved, and started the service. connect.ovpn:

        # Use the following to have your client computer send all traffic through your router
        # (remote gateway)
        remote (entered my DNS/DHCP server's external IP address here)
        port 1195
        dev tap
        secret static.key.txt
        proto udp
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float

    I've then placed my static key in a file in the same directory as connect.ovpn (static.key.txt). OpenVPN is installed on a laptop that I use at home. I have plugged the laptop into my home connection and started connect.ovpn. The local area connection is connected as 'Home Network 3', and when I start OpenVPN it connects as 'Local Area Connection 2', which shows as 'Unidentified Network' and appears to have no network access. TAP-Win32 Adapter V9 is the adapter's name, and its IP and DNS properties are set to automatic. If I open the OpenVPN GUI it shows an error message saying "Connecting to connect has failed". Looking at the error message behind this pop-up, one line says:

        TCP/UDP Socket bind failed on local address [undef]:1195 Address already in use [WSAEADDRINUSE]

    Could anyone possibly help me further with this please?
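
    The WSAEADDRINUSE bind error means the client is trying to bind its own local UDP port 1195 and something on the laptop already holds it (possibly a previous OpenVPN instance). A small, hedged tweak to connect.ovpn that usually clears this - nobind tells the client to use an ephemeral source port instead of binding 1195 locally:

        # client does not need to listen on a fixed local port
        nobind

    Alternatively, picking a different local port with the lport directive achieves the same effect; the remote/port lines that point at the router stay as they are.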

    Read the article

  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty standard in. Below are my four different attempts, which all fail.

    1.

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since training_command needs the environment for the training user to be set.

    2.

        su - training -c 'training_command'

    This looks like the right approach (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user) but gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences.

    3.

        runuser - training -c 'training_command'

    This one gives runuser: cannot set groups: Connection refused. I found no sense in, or resolution to, this error.

    4.

        ssh -p100 training@localhost 'source $HOME/.bashrc; training_command'

    This one is more of a joke to show desperation. Even this one fails with Host key verification failed (the host key IS in known_hosts, etc).

    Note: all of 2, 3, 4 work as they should if I run the wrapper script from a root shell. Problems only occur if the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (This has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)
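
    A sketch of one way that tends to work under daemontools: keep setuidgid (so no tty or password prompt is involved) and supply the environment explicitly rather than relying on a login shell to build it. The variable values below are assumptions; daemontools' envdir can also hold them in files:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        exec setuidgid training env - \
            HOME=/home/training USER=training LOGNAME=training \
            PATH=/usr/local/bin:/usr/bin:/bin \
            bash -c 'cd "$HOME" && [ -f .bashrc ] && source .bashrc; exec training_command'

    The reason attempts 2-4 behave differently under supervise is exactly the missing controlling tty and the stripped-down environment, so removing that dependency is usually simpler than fighting su/runuser.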

    Read the article

  • Last GUID used up - new ScottGuID unique ID to replace it

    - by Eilon
    You might have heard in recent news that the last ever GUID was used up. The GUID {FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF} was just consumed by a soon-to-be-released project at Microsoft. Immediately after the GUID's creation the word spread around the Microsoft campuses around the globe. Microsoft's approximately 100,000 worldwide employees then started blogging, tweeting, and facebooking about the dubious "achievement." The following screenshot shows GUIDGEN (the Windows tool for creating GUIDs) with the last ever GUID. All GUIDs created by projects at Microsoft must be registered in a central repository for record keeping. This allows quick-fix engineers, security engineers, anti-malware developers, and testers to do a quick look-up of an unknown GUID and find out if it belongs to Microsoft. The following screenshot shows the Microsoft GUID Tracker internal application and the last few GUIDs being used up by various Microsoft projects. What is perhaps more interesting than the news about the GUID is the project that used that last GUID. The recent announcements regarding the development experience for the Windows Phone 7 Series (WP7S) all involve free editions of Visual Studio 2010. One of the lesser-known developer tools is based on a resurrected project that many of you are probably familiar with, but have never used. The tool is in fact Microsoft Bob 7 Series (MB7S). MB7S is an agent-based approach to mobile phone app development. The UI incorporates both natural language interfaces and motion gesture behaviors, similar to the Windows Phone 7 Series "Metro" interface. If it works, it will help to expand the breadth of mobile app developers.

    After the GUID: The ScottGuID

    It came as no big surprise that eventually the last GUID would be used up. Knowing this, a group of engineers at Microsoft has designed, implemented, and tested a replacement to the GUID: the ScottGuID. There are several core principles of the ScottGuID:

    1. The concepts used in ScottGuIDs must be easily understood by a developer who is already familiar with GUIDs
    2. There must exist a compatibility layer between ScottGuIDs and GUIDs
    3. A ScottGuID must be usable in a practical manner in non-computing environments
    4. There must exist ScottGuID APIs for all common platforms: Win32/Win64/WinCE, .NET (incl. Silverlight), Linux, FreeBSD, MacOS (incl. iPhone OS), Symbian, RIM BlackBerry, Google Android, etc.
    5. ScottGuIDs must never run out

    ScottGuID use cases

    One of the more subtle principles of the ScottGuID is principle #3. While technically a GUID could be used in any environment, it was not practical to do so in terms of data entry and error detection. In order to have the ScottGuID be a true universal ID it must be usable in non-computing environments. Prior to the announcement of the ScottGuID there have been a number of until-now confidential projects. One of the tools that will soon become public is ScottGuIDGen, which is in essence an updated version of GUIDGEN that can create ScottGuIDs. The following screenshot shows a sample ScottGuID. To demonstrate the various applications of the ScottGuID there were test deployments around the globe. The following examples are a small showcase of the applications that have already been prototyped:

    - Log in to Hotmail
    - Pay for gas
    - Sign in to Twitter
    - Dispense cat food

    Conclusion

    I hope that this brief introduction to the ScottGuID shows how technology can continue to move forward, even when it appears there is a point that cannot be passed.
With a small number of principles, a team of smart engineers, and a passion for "getting it right" the ScottGuID should last well past our lifetimes. In the coming months expect further announcements regarding additional developer tools, samples, whitepapers, podcasts, and videos. Please leave a comment on this post if you have any questions about the ScottGuID or what you would like to see us do with it. With ScottGuID, the possibilities are nearly endless and we want to stretch their reach as far as possible.

    Read the article

  • Windows 2003 R2 x86 Mini dump fails to write to disk

    - by Randy K
    I have 3 blade servers that are blue-screening with a 0xC2 error, as far as we can tell randomly. When it started happening I found that the servers weren't set to provide a dump, because they each have 16GB RAM and a 16GB swap file divided over 4 partitions in 4GB files. I set them to provide a small dump file (64K mini dump), but the dump files aren't being written. On start-up the server event log reports both Event ID 45, "The system could not successfully load the crash dump driver.", and Event ID 49, "Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that it is large enough to contain all physical memory." My understanding is that the small dump shouldn't need a swap file large enough for all physical memory, but the error seems to point to this not being the case. The issue, of course, is that the max swap file size is 4GB, so this seems to be impossible. Can anyone point me where to go from here?
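
    A hedged sketch of what to check: the crash dump path only uses the paging file on the boot volume (C:), so if the 4GB pagefile pieces live on other partitions, the dump driver can fail to configure even for a 64K minidump. The relevant settings can be inspected like this (the registry values are the standard CrashControl ones; adjust if the layout differs):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl"
        rem CrashDumpEnabled: 3 = small (64 KB) dump; DumpFile / MinidumpDir show the target paths
        wmic pagefileset get Name,InitialSize,MaximumSize

    Making sure at least one pagefile (even a modest one, e.g. 2048 MB) sits on C: and then rebooting is usually enough for the minidump configuration to succeed.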

    Read the article

  • iTunes' clandestine proxy settings

    - by pilcrow
    Problem: One user's iTunes consults a defunct HTTP proxy, but only for iTunes Store HTTP requests - other iTunes web requests are unproxied. How do I dismiss this spurious proxy setting? Background: It's not as easy as Internet Options. Years ago my network had a mandatory HTTP proxy at 172.31.1.1:8080. When we switched to the 192.168.1/24 space and eliminated the proxy, this user's iTunes - the only iTunes user at the time - could no longer contact the iTunes Store, an operation which fails with "unknown error -9808". This has been the case through several iTunes.exe upgrades over the years and prevents, among other things, activation of a new or newly upgraded iPhone. wireshark and TCPView confirm that this user's iTunes.exe is attempting to contact the long-defunct HTTP proxy when attempting to reach the iTunes Store, but is otherwise unproxied. Curious details:

    - No other iTunes.exe HTTP traffic for this user is affected - iTunes can successfully make HTTP chatter at Apple's servers.
    - No other web traffic at all is proxied, whether for this user or others, iTunes or browser, etc.
    - I cannot find the spurious proxy setting anywhere in the registry nor on disk, though perhaps I haven't thought of every place to look and every format to look for.
    - Other users who have experienced the same error code all seem to have unrelated web configuration problems (certificate validation, for example).

    UPDATE: in response to Phoshi's excellent suggestion, reinstallation hasn't done the trick.
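
    One place a per-application proxy can hide on Windows is the WinHTTP layer, which parts of iTunes' store networking can use independently of the WinINET (Internet Options) settings. Checking and clearing it is cheap, so it is worth a try; the commands below are the standard ones on Vista and later (on XP the equivalent tool is proxycfg.exe):

        netsh winhttp show proxy
        netsh winhttp reset proxy

    If that comes back "Direct access (no proxy server)", the next suspects are an old .pac/auto-config URL still configured for that user and per-user http_proxy/https_proxy environment variables, both of which survive iTunes reinstalls.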

    Read the article

  • Apache2 Virtual Host with ScriptAlias returning 403

    - by sissonb
    I am trying to reference my libs directory, which is a sibling of my DocumentRoot, using the following ScriptAlias:

        ScriptAlias /libs/ "../libs"

    But when I go to example.com/libs/ I get the following error:

        Forbidden
        You don't have permission to access /libs/ on this server

    I am able to view the libs directory using the following configuration, so I don't think it's a file permission error:

        <VirtualHost *>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot C:/www/libs
        </VirtualHost>

    More relevant httpd.conf settings below:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "C:/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order Deny,Allow
            Deny from none
            Allow from all
        </Directory>

        NameVirtualHost *

        <VirtualHost *>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot C:/www/example
            ScriptAlias /libs/ "../libs"
            <Directory "C:/www/libs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Options +ExecCGI
                Order Deny,Allow
                Deny from none
                Allow from all
            </Directory>
        </VirtualHost>
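
    The 403 most likely comes from the relative path: the target of Alias/ScriptAlias is a filesystem path, and "../libs" is not resolved relative to the DocumentRoot the way one might hope, so Apache ends up pointing at a directory that no <Directory> block grants access to. A hedged fix is to use the absolute path, and plain Alias unless the files really are CGI scripts meant to be executed:

        Alias /libs/ "C:/www/libs/"
        <Directory "C:/www/libs">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order Deny,Allow
            Allow from all
        </Directory>

    ScriptAlias additionally marks everything under the target as CGI to be executed rather than served, which is usually not what a shared libs directory wants.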

    Read the article

  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
          Security ID: NULL SID
          Account Name: -
          Account Domain: -
          Logon ID: 0x0
        Logon Type: 3
        Account For Which Logon Failed:
          Security ID: NULL SID
          Account Name: username
          Account Domain: hostname
        Failure Information:
          Failure Reason: Unknown user name or bad password.
          Status: 0xc000006d
          Sub Status: 0xc0000064
        Process Information:
          Caller Process ID: 0x0
          Caller Process Name: -
        Network Information:
          Workstation Name: JESSE-PC
          Source Network Address: -
          Source Port: -
        Detailed Authentication Information:
          Logon Process: NtLmSsp
          Authentication Package: NTLM
          Transited Services: -
          Package Name (NTLM only): -
          Key Length: 0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.
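
    The sub-status 0xc0000064 means "user name does not exist", which, together with NTLM and the fact that XP Mode works, suggests the Windows 7 work PC is presenting credentials in a form the server does not expect (for example a saved credential for a different account). A couple of hedged checks on the work PC; the hostname below is a placeholder:

        rem any saved RDP/TERMSRV credentials that might override what is typed?
        cmdkey /list
        cmdkey /delete:TERMSRV/your-vps-hostname

        rem NTLM level the client is allowed to use (0-5); unusual values can break workgroup NTLM logons
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

    Temporarily turning off "Allow connections only from computers running Remote Desktop with Network Level Authentication" on the server is another quick differential test, since XP Mode's older client negotiates authentication differently.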

    Read the article

  • Hyper V Server 2012 Remote Management Using Workgroup

    - by Chris Kolenko
    I'm trying to remotely manage Hyper-V Server 2012 from a Windows 8 PC; both client and server are in a workgroup. I've spent about 3-4 hours trying to get this working, with no luck so far, trying the following:

    - Creating a new administrator on the server with the same details as the client, i.e. username / password.
    - Adding an entry to my hosts file to point to the remote IP by server name.
    - Trying HVRemote.
    - Disabling both firewalls.

    The error that I'm getting is RPC Service Unavailable. How can I accomplish what I'm trying to do?

    Update: Some of the operations in Hyper-V Manager work. The Virtual Switch part works, and I can open the New VM wizard. I run into an error when creating a new virtual hard disk, though. I've tried creating a VM without a hard disk, which works; using the new hard disk wizard does not work either. I still cannot see any virtual machines: RPC server unavailable. Unable to establish communication between 'ServerName' and 'ClientName'.
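
    For workgroup Hyper-V management the client usually needs three things besides the matching account: the server in the client's WSMan TrustedHosts list, a stored credential for the server, and anonymous DCOM access enabled (the part the HVRemote script normally handles). A hedged sketch of the client-side steps; the server name is a placeholder:

        # elevated PowerShell on the Windows 8 client
        Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HyperVServerName" -Force
        cmdkey /add:HyperVServerName /user:HyperVServerName\Administrator /pass

    The remaining "RPC server unavailable" symptoms for the VM list and the new-VHD wizard are typically the DCOM piece: in dcomcnfg on the client, grant ANONYMOUS LOGON "Remote Access" under COM Security Access Permissions, then restart Hyper-V Manager.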

    Read the article
