Search Results

Search found 47712 results on 1909 pages for 'looking for a script'.

Page 492/1909 | < Previous Page | 488 489 490 491 492 493 494 495 496 497 498 499  | Next Page >

  • Chat client/server that offers certain options.

    - by MrStatic
    For the past few years, a few friends and I have hosted an IRC server on one of our dedicated servers, purely for personal use. We like that it supports private messages and chat rooms, plus the ability to send/receive files, and that it is cross-platform (Linux/Mac/Windows/Mobile). Our domain is about to expire and we are looking to move on to something new, so we are looking for a chat client/server that offers the following:
    - Chat rooms (multiple people chatting together in the same window)
    - File transfers
    - Private messages
    - Cross-platform clients (especially mobile, i.e. BlackBerry and Android)
    - Hosting it ourselves is not required, but that isn't out of the question
    - Client-side logging (we say a lot of crap and like to go back and quote said crap)
    - SSL/TLS or some other encryption, since some of what is said/sent is of a sensitive (i.e. business) nature

    Read the article

  • How many virtual processors or cores should I assign to my Guest OS?

    - by reidLinden
    I've just received an upgraded host machine and am looking to pass some of those gains on to my workstation's guest OS(es). In particular, I used to have a single processor with 2 cores, so my guest OS got 1 processor/1 core. Now I've got a single processor with 8 cores, so I'm curious what would be recommended for my guest OS. 1 processor/4 cores? 2 processors/2 cores? 4 processors/1 core? My instinct says to stick with the number of physical processors (or fewer), but is that based on reality? I spent a good while looking for an answer to this, but perhaps my google-karma isn't in my favor today.

    Read the article

  • What does Embedded SATA Controller : ATA mean?

    - by paulH
    I have a PowerEdge R510 server with a PERC H700 Integrated RAID controller that is exhibiting slower than expected disk speeds (RAID 1 and RAID 10 arrays), and I'm looking at the configuration of the server. Running the command omreport chassis biossetup on the server shows me the following configuration setting: Embedded SATA Controller : ATA. I can also see that the possible options for this setting are: off | ata | qdma | raid. I've been looking online to find out what this setting means and what the various options refer to, but I've been unable to find anything particularly helpful, so I was hoping that somebody here could help to enlighten me. Thanks, Paul.

    Read the article

  • Problems using ssh from cron

    - by Travis
    I am attempting to automate a script that executes commands on remote machines via ssh. I have public key authentication set up between the machines using ssh-agent. The script runs fine when executed from the command prompt. I suspect my problem is that cron isn't starting the ssh-agent due to its minimalist environment. Here is the output when I add the -v flag to ssh:
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Next authentication method: gssapi-with-mic
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Next authentication method: publickey
      debug1: Offering public key: /home/<user>/.ssh/id_rsa
      debug1: Server accepts key: pkalg ssh-rsa blen 149
      debug1: PEM_read_PrivateKey failed
      debug1: read PEM private key done: type <unknown>
      debug1: Trying private key: /home/<user>/.ssh/id_dsa
      debug1: Next authentication method: password
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      Permission denied, please try again.
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      Permission denied, please try again.
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: No more authentication methods to try.
      Permission denied (publickey,gssapi-with-mic,password).
    How can I make this work? Thanks!
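    The "PEM_read_PrivateKey failed" line above suggests the key is passphrase-protected, so without the agent ssh cannot decrypt it. A minimal sketch of one common workaround - saving the agent environment at login and sourcing it from the cron-invoked script - where the file name and remote host are illustrative, not taken from the question:
      # At login (e.g. from ~/.bash_profile), keep the agent's environment around for cron:
      #   ssh-agent -s > "$HOME/.ssh/agent.env" && . "$HOME/.ssh/agent.env" && ssh-add
      # In the cron-invoked script, pick that environment back up before calling ssh:
      AGENT_ENV="$HOME/.ssh/agent.env"      # illustrative path
      if [ -f "$AGENT_ENV" ]; then
          . "$AGENT_ENV" > /dev/null        # exports SSH_AUTH_SOCK and SSH_AGENT_PID
      fi
      ssh remotehost 'uptime'               # remotehost is a placeholder
    The other common route is a dedicated passphrase-less key for the cron job, locked down on the remote side with authorized_keys options, which avoids the agent entirely.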

    Read the article

  • Backup software for Windows Server 2008 R2 Enterprise with 4 virtual machines (Exchange, SQL, AD, SharePoint)

    - by MadBoy
    What are the options for backup software for:
    - HOST: Windows Server 2008 R2 Enterprise with Hyper-V
    - VIRTUAL: Windows Server 2008 R2 Enterprise with Exchange 2010
    - VIRTUAL: Windows Server 2008 R2 Enterprise with SQL Express / SharePoint
    - VIRTUAL: Windows Server 2008 R2 Enterprise with Terminal Services (10 users working on it)
    - VIRTUAL: Windows Server 2008 R2 Enterprise with AD/DNS
    What I'm looking at is the possibility of an offsite backup over FTP, plus maybe a copy to USB/eSATA/LAN drives so backup data can easily be taken outside the company. What I've been looking at so far:
    - Symantec Backup Exec 2010 System Recovery has offsite backup, but I would need 5 licenses and it doesn't have granular recovery.
    - Symantec Backup Exec 2010 seems OK, but a bit expensive.
    - Microsoft DPM 2010 requires full SQL Standard, and for each machine I would need 4 Enterprise licenses. But does it allow offsite backup without an additional license and a server outside the company (for doing a DPM backup of DPM)?
    What other options are there? This is a 10-person company, so cost matters, but so do convenience and security. Offsite backup is a requirement.

    Read the article

  • Bandwidth preserving browsing mode

    - by Elazar Leibovich
    I'm looking at methods for browsing the web in situations where bandwidth is scarce (such as a flaky wifi connection, or a mobile phone provider that overcharges for bandwidth). One thing that would save a lot of bandwidth is not downloading images while browsing. This approach has two main drawbacks:
    - Sometimes a site's layout depends on images.
    - There are some images you wish to see (so simply disabling image downloads in Firefox's settings is not very convenient).
    I'm therefore looking for a method that would allow me to:
    - Use some heuristic to figure out which images are part of the site's layout and allow those to be downloaded.
    - Select a particular image on a page, then download and display it.
    Maybe there's a Firefox extension for that?

    Read the article

  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr was immediately recognizable as such, distinguishable from the data going to stdout, and interleaved with it so it is obvious what the sequence of events was. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.] Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:
      #!/usr/bin/env python
      import sys
      print "this is stdout"
      print >> sys.stderr, "this is stderr"
      print "this is stdout again"
    In my testing (and probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdin and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
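    A minimal Bash sketch of the kind of wrapper being asked for - offered as an assumption rather than a known tool - which paints stderr red via process substitution and prints the exit code at the end (ordering is still at the mercy of buffering, the same caveat noted in the update above):
      #!/bin/bash
      # Usage: ./errwrap.sh somecommand args...   (the wrapper's name is illustrative)
      "$@" 2> >(while IFS= read -r line; do
                    printf '\033[31m%s\033[0m\n' "$line" >&2   # stderr lines in red
                done)
      rc=$?                    # status of the wrapped command, not of the substitution
      echo "exit code: $rc"
      exit "$rc"
    Because the colouring runs in a separate process, a red line can occasionally appear after the exit-code line; a FIFO-based version avoids that at the cost of a little more plumbing.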

    Read the article

  • PowerChute for VMware ESX4

    - by ITGuy24
    Hi, I am looking for a free way of installing PowerChute for VMware. The 2.2.3 and 2.2.4 Linux versions do not support VMware ESX, even though I think prior versions did. APC is now charging $100 for the install CD, which I think is a joke considering the price of our Symmetra UPS; VMware support should be free. Edit: I see someone voted for this question to be closed, so to be clear, I am looking for a free and legal way of getting PowerChute support for VMware, either by using an older version or a custom script. Also, for future reference, if you are voting to close a question please leave a comment explaining why.

    Read the article

  • Does anyone use the L-Track trackball?

    - by thethinman
    I've been using the Logitech Trackman Marble Mouse for years. Now I'm looking for a trackball with a scroll wheel, larger and heavier ball, and preferably rollers instead of pins. It must be finger (not thumb) operated. The Kensington Expert Mouse is close, but from what I've read the scroll wheel is poorly implemented. They also switched from rollers to pins. I bought a Kensington Orbit Trackball and it's not bad but the scroll wheel is rough and the ball is the same as the marble mouse. I'm still looking for something better. I found the L-Trac and it looks good but there's little info on the web. Has anyone used it and can provide their impressions? Or can you point out another option?

    Read the article

  • Postfix unable to find local server

    - by Andrew
    I'm working with Postfix on Fedora 9 and I'm attempting to make some changes to a system set up by my predecessor. Currently the Postfix server on [mail.ourdomain.com] is set up to forward mail sent to two addresses to another server for processing. The other server [www01.ourdomain.com] receives the email and hands it to a PHP script to be processed; that PHP script then generates and sends a response to the user who sent the original email. We're adding more web servers to the system, and as a result we've decided to move these processing scripts to our admin server [admin.ourdomain.com] to make them easier to keep track of. I've already set up and tested the processing scripts on [admin.ourdomain.com], and on the mail server doing the forwarding [mail.ourdomain.com] I added [admin.ourdomain.com] to /etc/hosts and also added a second entry (alongside the one for [www01.ourdomain.com]) to /etc/postfix/transport for [admin.ourdomain.com]. I restarted Postfix as well. I've tested the communication from [mail.ourdomain.com] to [admin.ourdomain.com] using telnet and the [admin.ourdomain.com] domain, and everything runs correctly. But as soon as I change the forward address and attempt to send an email to the mail server, I get a bounce message stating "Host or domain name not found. Name service error for name=admin.ourdomain.com type=A: Host not found". If I change the forward settings back to [www01.ourdomain.com] then everything works fine. Is there some setting I'm missing in Postfix? The server itself and telnet work fine; it just seems to be Postfix that's not able to discover the location of [admin.ourdomain.com].
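    One thing worth checking, offered as an assumption rather than a diagnosis: by default the Postfix SMTP client resolves next-hop names through DNS only, so a name that exists only in /etc/hosts produces exactly this "type=A: Host not found" bounce unless native lookups are enabled. A minimal sketch:
      # Let the Postfix SMTP client fall back to the system resolver (which honours /etc/hosts)
      postconf -e 'smtp_host_lookup = dns, native'
      # Rebuild the transport map after editing /etc/postfix/transport, then reload
      postmap /etc/postfix/transport
      postfix reload
    Alternatively, publishing an A record for admin.ourdomain.com in the DNS the mail server actually queries removes the need for the /etc/hosts entry altogether.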

    Read the article

  • nginx: SSI working on Apache backend, but not on gunicorn backend

    - by j0nes
    I have nginx in front of an Apache server and a gunicorn server for different parts of my website. I am using the SSI module in nginx to display a snippet on every page; each page embeds it with an SSI include directive. For static pages served by nginx everything is working fine, and the same goes for the Apache-generated pages - the SSI include is evaluated and the snippet is filled in. However, for requests to my gunicorn backend running a Python app in Django, the SSI include does not get evaluated. Here is the relevant part of the nginx config:
      location /cgi-bin/script.pl {
          ssi on;
          proxy_pass http://default_backend/cgi-bin/script.pl;
          include sites-available/aspects/proxy-default.conf;
      }
      location /directory/ {
          ssi on;
          limit_req zone=directory nodelay burst=3;
          proxy_pass http://django_backend/directory/;
          include sites-available/aspects/proxy-default.conf;
      }
    Backends:
      upstream django_backend {
          server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
      }
      upstream default_backend {
          server dynamic.mydomain.com:80;
          server dynamic2.mydomain.com:80;
      }
    proxy_default.conf:
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    What is the cause of this behaviour? How can I get SSI includes working for pages generated by gunicorn? How can I debug this further?
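    One common culprit worth ruling out - an assumption here, not something established in the post - is that the Django/gunicorn side compresses its responses (for example via GZipMiddleware): the nginx SSI filter does not parse a gzip-encoded upstream body, while the Apache pages may be going out uncompressed. A quick header comparison against both backends (hostnames and ports taken from the upstream blocks above):
      curl -sI -H 'Accept-Encoding: gzip' http://dynamic.mydomain.com:8000/directory/ \
          | grep -iE 'content-(encoding|type)'
      curl -sI -H 'Accept-Encoding: gzip' http://dynamic.mydomain.com:80/cgi-bin/script.pl \
          | grep -iE 'content-(encoding|type)'
    If the gunicorn responses come back with Content-Encoding: gzip, disabling compression at the backend (and letting nginx compress after SSI processing) is the usual fix; it is also worth confirming the Django responses use a Content-Type covered by ssi_types (text/html by default).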

    Read the article

  • Linux Development System Layout/Configuration

    - by tom smith
    Hi. I'm looking to create a Linux-based development/test system. I'm the only one using it. It will run a variant of RHEL/CentOS/Fedora, with a 640 GB drive and an external 250 GB drive as a kind of backup. I'm looking for thoughts/comments on the layout/configuration of the drive for the install/creation process. My primary goal is to be able to back up/restore the work product, so I'd like the OS to be separate from everything else. Thoughts/comments/pointers appreciated. Thanks

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since no hot-plugging functionality is needed and boot time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl/sysfs), that does not work for me in this case, so I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible, and how is it best done? My approach would be:
    - delete /dev from the RFS
    - create a tar containing the basic devices
    - mount a tmpfs on /dev
    - untar the tar file into /dev
    - change parameters
    Could this work? Do you see any problems? I found out that you can mount on top of an already-mounted mount point; is it somehow possible to take the existing data along while mounting the new file system? If so, that would be very convenient! Thanks
    Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed that into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after init:
      #mount /dev on tmpfs
      echo -n "Mounting /dev on tmpfs..."
      mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
      mknod -m 600 /dev/console c 5 1
      mknod -m 600 /dev/null c 1 3
      echo "done."
      echo -n "Populating /dev..."
      tar -xf /usr/devices.tar -C /dev
      echo "done."
    This works fine on the version booted over NFS; if I place printf's in the code, I can see it executing, and if I comment out the extracting part, it complains about missing devices.
    Booting OK:
      mmc0: new high speed SDHC card at address 0007
      mmcblk0: mmc0:0007 SD04G 3.67 GiB
      mmcblk0: p1
      IP-Config: Unable to set interface netmask (-22).
      Looking up port of RPC 100003/2 on 192.168.1.234
      Looking up port of RPC 100005/1 on 192.168.1.234
      VFS: Mounted root (nfs filesystem) on device 0:14.
      Freeing init memory: 136K
      INIT: version 2.86 booting
      Mounting /dev on tmpfs...done.
      Populating /dev...done.
      Initializing /var...done.
      Setting the system clock.
      System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
      INIT: Entering runlevel: 2
      UBI: attaching mtd8 to ubi0
    Commenting out the extraction of the tar:
      mmc0: new high speed SDHC card at address 0007
      mmcblk0: mmc0:0007 SD04G 3.67 GiB
      mmcblk0: p1
      IP-Config: Unable to set interface netmask (-22).
      Looking up port of RPC 100003/2 on 192.168.1.234
      Looking up port of RPC 100005/1 on 192.168.1.234
      VFS: Mounted root (nfs filesystem) on device 0:14.
      Freeing init memory: 136K
      INIT: version 2.86 booting
      Mounting /dev on tmpfs...done.
      Populating /dev...done.
      Initializing /var...done.
      Setting the system clock.
      Cannot access the Hardware Clock via any known method.
      Use the --debug option to see the details of our search for an access method.
      Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
      INIT: Entering runlevel: 2
      libubi: error!: cannot open "/dev/ubi_ctrl"
    So far so good. But if I pack the whole story into a squashfs and boot from there, it acts strangely. It tells me while booting that it is unable to open an initial console, and it throws errors on mounting the UBIFS devices, but it finally provides a login anyway. On top of that, my echos are not executed. If I then log in, /dev is mounted as tmpfs as desired and all the devices reside inside it. When I redo the "mount" command to mount the UBIFS partitions, it executes without problem and is usable.
    From squashfs:
      VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
      Freeing init memory: 136K
      Warning: unable to open an initial console.
      mmc0: new high speed SDHC card at address 0007
      mmcblk0: mmc0:0007 SD04G 3.67 GiB
      mmcblk0: p1
      UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19
    Additionally, part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? Another question: is 5 MB enough/too much for /dev?

    Read the article

  • Microsoft Word 2010 Header and Footer assistance

    - by CBP
    I am using Microsoft Word 2010 to format a book. I need different headers for different parts of the book, so I have put them in different sections, but my problem is that when I click on the header to change or delete certain headers, it shifts the pages around in my document. For instance, when I am looking at the page normally, the text and section for page 20 are different from the text and section I see when I am looking at it with the header and footer open. This is very frustrating because I cannot properly change the headers. I hope this makes sense and somebody can explain what is going on and how to resolve it!

    Read the article

  • apt-get : Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network comes up, my script tries to do an apt-get update, but sometimes (quite often, in fact) I get this:
      Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb Size mismatch
    The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:
      apt-get update
      apt-get dist-upgrade --force-yes --yes
    Here is the complete output of apt-get:
      Ign https://myserver maverick Release.gpg
      Ign https://myserver/ubuntu/ maverick/main Translation-en
      Ign https://myserver maverick Release
      Ign https://myserver maverick/main i386 Packages/DiffIndex
      Ign https://myserver maverick/main i386 Packages
      Ign https://myserver maverick/main i386 Packages
      Hit https://myserver maverick/main i386 Packages
      Reading package lists...
      Reading package lists...
      Building dependency tree...
      Reading state information...
      The following packages will be upgraded:
        majdb utilitaires voosicomat
      3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 6207kB/6273kB of archives.
      After this operation, 0B of additional disk space will be used.
      WARNING: The following packages cannot be authenticated!
        utilitaires voosicomat majdb
      Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
      Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
      Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb Size mismatch
      Fetched 7091kB in 21s (324kB/s)
      E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
    Regards, Cédric
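    A minimal sketch of a more defensive variant of that script, on the assumption that a stale or partially downloaded archive in the local APT cache is behind the intermittent size mismatch (the paths are the standard APT locations, not something from the post):
      #!/bin/sh
      # Drop any partially fetched archives before retrying
      rm -f /var/cache/apt/archives/partial/*
      apt-get clean                                   # clear previously downloaded .debs
      apt-get update
      apt-get dist-upgrade --force-yes --yes --fix-missing
    If the mismatches persist, comparing the size of the .deb on the server against the Size field in the repository's Packages file would show whether the repository metadata itself is stale.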

    Read the article

  • Why won't my service start, and why doesn't upstart output any errors?

    - by Alex Waters
    I am trying to 'start gunicorn' as a service via upstart, as user ale. I'm using gunicorn/flask on Ubuntu 12.04 with init (upstart 1.5). Here is my /etc/init/gunicorn.conf:
      setuid btw
      setgid flask
      script
          export HOME=/home/btw
          export WORKON_HOME=$HOME/.virtualenvs
          . $HOME/.virtualenvs/default/bin/activate
          cd $HOME/flask
          workon default
          gunicorn -c gunicorn.py bw:app
      end script
    It doesn't output anything other than "gunicorn start/running, process 12992". If I then do 'status gunicorn' I get stop/waiting. Any ideas on how to debug this? I tried following http://upstart.ubuntu.com/wiki/Debugging but it didn't help. If I do the following as user ale in the app's directory:
      1. workon default
      2. gunicorn -c gunicorn.py bw:app
    then Gunicorn runs fine. Here is ~/flask/gunicorn.py:
      bind = "0.0.0.0:8080"
      workers = 3
      backlog = 2048
      worker_class = "gevent"
      debug = True
      daemon = False
      pidfile = "/tmp/gunicorn.pid"
      log_level = "debug"
      accesslog = "/var/log/gunicorn/access.log"
      errorlog = "/var/log/gunicorn/error.log"
      user = "btw"
      group = "flask"
    Also, /var/log/error.log doesn't show anything new when I try to start the Gunicorn service. If I start it manually, it shows that the workers have been loaded, etc. Thanks for any help / suggestions!
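    A small sketch of one way to surface the error upstart is swallowing: replay the job's script block by hand under the same uid/gid the job uses, and look at upstart's per-job log (on Ubuntu 12.04 job output normally lands in /var/log/upstart/<job>.log). The commands below mirror the stanza above (minus the redundant workon call, since sourcing bin/activate already selects the virtualenv) and are not from the original post:
      # Replay the job body as the job's user/group to see the real error message
      sudo -u btw -g flask /bin/bash -c '
          export HOME=/home/btw
          export WORKON_HOME=$HOME/.virtualenvs
          . $HOME/.virtualenvs/default/bin/activate
          cd $HOME/flask
          exec gunicorn -c gunicorn.py bw:app
      '
      # Then check what upstart captured from the failed start
      sudo cat /var/log/upstart/gunicorn.log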

    Read the article

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:
      library(mgcv)
      library(ROCR)
      library(AUC)
      data1=read.table("d:\\2005.txt", header=T)
      GAM<-gam(tuna ~ s(chla)+s(sst)+s(ssha),family=binomial, data=data1)
      gampred<- predict(GAM, type="response")
      rp <- prediction(gampred, data1$tuna)
      auc <- performance( rp, "auc")@y.values[[1]]
      auc
      roc <- performance( rp, "tpr", "fpr")
      plot( roc )
    But when I was running the script, the result is:
      > rp <- prediction(gampred, data1$tuna)
      Error in prediction(gampred, data1$tuna) : Format of predictions is invalid.
      > auc <- performance( rp, "auc")@y.values[[1]]
      Error in performance(rp, "auc") : object 'rp' not found
      > auc
      function (x, min = 0, max = 1)
      {
          if (any(class(x) == "roc")) {
              if (min != 0 || max != 1) {
                  x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                  x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
              }
              ans <- 0
              for (i in 2:length(x$fpr)) {
                  ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
              }
          }
          else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
              if (min != 0 || max != 1) {
                  x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                  x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
              }
              ans <- 0
              for (i in 2:(length(x$cutoffs))) {
                  ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
              }
          }
          return(as.numeric(ans))
      }
      <bytecode: 0x03012f10>
      <environment: namespace:AUC>
      > roc <- performance( rp, "tpr", "fpr")
      Error in performance(rp, "tpr", "fpr") : object 'rp' not found
      > plot( roc )
      Error in levels(labels) : argument "labels" is missing, with no default
    Can anybody help me to solve this problem? Thank you in advance.

    Read the article

  • Synchronizing multiple Exchange 2007 servers

    - by Mustafa Ismail Mustafa
    We're introducing a new Exchange server for several reasons. After introducing the new server and synchronizing it with the old one (mailboxes, contacts, rules, the whole shebang), we're going to format the old machine, install XenServer 5.5 on it, and create slices, one of which will run Exchange, which again will need to be synchronized. Then we'll have two different routes to the mail servers (mx1, mx2), so that if there is an outage on one, the other should be available. So now I'm wondering how to sync. I can move a mailbox from one server to the other, and I'm sure that can be done in bulk, but that's not what I'm looking for. I'm looking to make both servers equal - the first time so I can make a backup of the original, and the second time so that they can be made into peers. This is with Exchange 2007 on Windows 2008 R2 (x64). Suggestions? TIA

    Read the article

  • How can I override mod-php5's .php mapping to php4-cgi per VirtualHost or Directory?

    - by geocoo
    I am running Debian Linux with apache2 and libapache2-mod-php5 5.3.3-7. I have one VirtualHost which requires PHP 4, so I researched and compiled php4-cgi. However, I cannot seem to:
    - Override mod-php5's mapping of .php in that vhost (or even globally, without disabling PHP completely).
    - Even find where that mapping is made, in the hope of disabling it and enabling mod-php5 or php4-cgi per vhost.
    This is my php4-cgi mapping (inside the one PHP 4 vhost):
      ScriptAlias /php4 /usr/local/php4/bin
      <Directory /usr/local/php4/bin>
          Options +ExecCGI +FollowSymLinks
      </Directory>
      <Directory /www/test>
          AddHandler php4-cgi-script .php
          Action php4-cgi-script /php4/php
          Options +ExecCGI
      </Directory>
    This does not work; mod-php5 still runs all .php files in that vhost/directory. If I change the file extension in the AddHandler above from .php to .php4, then .php4 files do run php4-cgi as expected, but I can't change all the files in the app to .php4. I thought maybe I could disable mod-php5's mapping in my vhost or directory and then do my CGI config (as above), but many combinations of these in different contexts did not work:
      RemoveHandler .php
      RemoveType .php
      php_flag engine off   (this seems to disable my php4-cgi as well, so that won't work)
    The only other place I can find any mapping is in /etc/mime.types, but commenting out the relevant lines and restarting apache2 does not affect mod-php5's .php mapping. I have searched as much as I can; it is now a mystery to me. Any help or direction would be greatly appreciated.

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an encrypted external drive. In Finder, I can use the full-disk encryption option; however, for multiple users this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk; the encryption keys would probably be stored in root's keychain. So here's my list of concerns:
    - If I store the encryption keys in the system keychain, then the file /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    - If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    - If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    - If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?
    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
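    For the unlock step itself, a minimal sketch of what a boot-time launchd script could run, assuming the passphrase is kept as a generic password item in the System keychain (the item name and the logical volume UUID below are placeholders, and the SystemKey physical-access caveat from the list above applies unchanged):
      #!/bin/bash
      # Placeholders: "backup-disk" keychain item name and the CoreStorage logical volume UUID
      UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
      PASS="$(security find-generic-password -s backup-disk -w /Library/Keychains/System.keychain)"
      diskutil coreStorage unlockVolume "$UUID" -passphrase "$PASS"
    Pairing that with a launchd job that has RunAtLoad set and runs as root covers the "unlock once at boot for all users" part; the key-storage trade-offs in the list above remain the real decision.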

    Read the article

  • How can Django/WSGI and PHP share / on Apache?

    - by Mark Snidovich
    I have a server running an established PHP site, as well as some Django apps. Currently, a VirtualHost set up for PHP listens on port 80, and requests to certain directories are proxied to a VirtualHost set up for Django with WSGI. I'd like to change it so Django handles anything that doesn't exist as a PHP script or static file. For example:
    - / - parsed by PHP as index.php
    - /page.php - parsed as PHP normally
    - /images/border.jpg - served as a static file
    - /johnfreep - handled by Django (interpreted by urls.py)
    - /pages/john - handled by Django
    - /(anything else) - handled by Django
    I have a few ideas. It seems the options are 'PHP first' or 'WSGI first':
    - Set up Django on port 80, and set Apache to skip all the known PHP, CSS or image files (maybe using SetHandler?). Anything else goes to Django to be parsed by urls.py.
    - Set up a script referring everything to Django as a 404 handler on the PHP side. So, if a file is not found for a name, it sends the request path to the VirtualHost running Django to be parsed.

    Read the article

  • How to set up the jdbc driver to connect to hsqldb from libreoffice?

    - by rumtscho
    I am trying to "split" a LibreOffice .odb file into an HSQL database and an OpenOffice document containing the forms and macros. I am trying to follow the instructions from this thread:
    "Within a few minutes you can convert your embedded HSQLDB to a stand-alone HSQLDB which is just a very fine database engine.
    1) Download and extract the current version from http://hsqldb.org/ and point the Java class path in Tools > Options > Java to the new hsqldb.jar
    2) Extract the database folder from your embedded database and rename the files data, properties, script to name.data, name.properties, name.script where "name." is an arbitrary name prefix.
    3) Connect a Base document to an existing JDBC database such as jdbc:hsqldb:file:/home/chenier/hsqldb/name;default_schema=true;shutdown=true;hsqldb.default_table_type=cached;get_column_name=false (again, "name" refers to your own file name prefix). This local single-user connection gives you much more than the embedded HSQLDB.
    4) Copy queries, forms and reports from the old database over to the new one."
    The wizard presents me with a window expecting two inputs: a "Datasource URL" and a "JDBC driver class". As far as I can tell, the tutorial above only tells me what to put into the Datasource URL. As for the JDBC driver class, I have no idea what to write into this field. I tried the fully-qualified name of the Java class, org.hsqldb.jdbc.JDBCDriver, as given in the HSQLDB documentation. When that failed, I tried the physical path /var/lib/hsqldb/lib/hsqldb.jar (although that should have been unnecessary, because first I pointed to this path as described under 1 and then restarted LibreOffice). In both cases, "Test class" failed with the message "The JDBC driver could not be loaded". OpenOffice's documentation doesn't say anything sensible about the field; it was something like "enter the JDBC driver in this box". Any ideas what I should enter there to get the connection working?
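    One possible gotcha, mentioned here as an assumption: the driver class name changed between HSQLDB versions - 1.8.x ships org.hsqldb.jdbcDriver, while 2.x adds org.hsqldb.jdbc.JDBCDriver - so it is worth confirming which class is actually inside the jar the class path points at (the path is the one quoted above):
      # List the driver classes contained in the jar to see which class name to enter
      unzip -l /var/lib/hsqldb/lib/hsqldb.jar | grep -iE 'jdbc.*Driver\.class'
    Whichever class the listing shows is the string that belongs in the "JDBC driver class" field; LibreOffice also has to be restarted after the class path change for the jar to be picked up.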

    Read the article

  • bond0 and xen = crash

    - by Rajat
    Bonding with Xen:
    1 - Stop all guests. Reboot dom0 after running "chkconfig xend off" and "chkconfig xendomains off".
    2 - Configure bond0 by enslaving eth0 and eth1 to it. I added the two entries below to /etc/modprobe.conf:
      alias bond0 bonding
      options bond0 mode=6,miimon=100
    Content of /etc/sysconfig/network-scripts/ifcfg-eth0:
      DEVICE=eth0
      USERCTL=no
      ONBOOT=yes
      MASTER=bond0
      SLAVE=yes
      BOOTPROTO=none
    Content of /etc/sysconfig/network-scripts/ifcfg-eth1:
      DEVICE=eth1
      USERCTL=no
      ONBOOT=yes
      MASTER=bond0
      SLAVE=yes
      BOOTPROTO=none
    Content of /etc/sysconfig/network-scripts/ifcfg-bond0:
      DEVICE=bond0
      IPADDR=
      NETMASK=
      ONBOOT=yes
      BOOTPROTO=static
      USERCTL=no
    Did "modprobe bond0" and "service network restart" after that.
    3 - Edit /etc/xen/xend-config.sxp and change (network-script network-bridge) to (network-script 'network-bridge netdev=bond0').
    4 - Start xend: "service xend start".
    5 - chkconfig xend on.
    6 - modprobe bond0
    7 - more /proc/net/bonding/bond0
    8 - Create guest images as usual and bridge them to xenbr0.
    That is the configuration I did for my Xen kernel on RHEL 5.3. After I reboot the host server, instead of bond0 I get pbond0, and the host gets disconnected from the network; I can only ping my VMs on the host server. Does anyone have any idea why Xen's bond0 is acting like this, or what the solution is to get from pbond0 back to bond0?

    Read the article

  • systemd: enabling cherokee service as a `unit file`

    - by Calvin Cheng
    So I am learning how to use systemd to initialize my services automatically on server reboot. Of course, I first make sure I have systemd and some optional systemd-related packages installed:
      pacman -S systemd initscripts-systemd
    Installation seems to go well, and checking, I can see that systemd and its dependency libsystemd are installed, and that the optional package initscripts-systemd is also installed:
      [root@li280-195 ~]# pacman -Ss systemd
      extra/libsystemd 44-5 [installed]
          systemd client libraries
      extra/systemd 44-5 [installed]
          system and service manager
      extra/systemd-sysvcompat 2-2
          sysvinit compat symlinks for systemd
      community/initscripts-systemd 20120412-1 [installed]
          Arch specific systemd initialization/bootup scripts for systemd
      community/systemd-arch-units 20120412-2
          Arch specific Systemd unit files
    Next, I ensure that systemd is loaded when my server reboots, via GRUB's /boot/grub/menu.lst file, like this:
      kernel /boot/vmlinuz root=/dev/xvda ro init=/bin/systemd
    Rebooting my server to check, everything loads up well and I can verify that systemd is operational via:
      systemctl list-unit-files
    However, I don't see my Cherokee initialization script (which was simply created at /etc/rc.d/cherokee when I installed Cherokee earlier via pacman -S cherokee) listed as one of my unit files. So the question is: how do I do that? How do I put my Cherokee initialization script under systemd's control?
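    A minimal sketch of one way to do it: instead of reusing the rc.d script, write a small native unit file and enable it. The binary path /usr/sbin/cherokee and the unit contents below are assumptions for illustration, not taken from the Arch package:
      # Write a simple service unit for Cherokee (contents are illustrative)
      printf '%s\n' \
          '[Unit]' \
          'Description=Cherokee web server' \
          'After=network.target' \
          '' \
          '[Service]' \
          'ExecStart=/usr/sbin/cherokee' \
          'ExecReload=/bin/kill -HUP $MAINPID' \
          '' \
          '[Install]' \
          'WantedBy=multi-user.target' \
          > /etc/systemd/system/cherokee.service
      systemctl daemon-reload
      systemctl enable cherokee.service
      systemctl start cherokee.service
    The systemd-arch-units package listed in the pacman output above may already ship a ready-made cherokee.service, which would make the hand-written unit unnecessary.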

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, php-fpm) on CentOS 5.8 x64. The problem I have is that PHP crashes the moment I run any social OAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs:
      in the php-fpm log: WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start
      in the nginx log: ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream
    From what I can see, it goes wrong when PHP tries to make a request to any of the OAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source, for example, is one of the scripts that works perfectly on my other machines but causes PHP to crash. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.
    +++ UPDATE +++
    I have now been doing some debugging in one of the scripts that is playing up. If you go to line 808 of http://pastebin.com/gSnzRtXb, it runs the curl_exec() command. When that is run, it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. Which means it's line 808 that causes the crash. So I made a very simple script to do some testing, http://pastebin.com/Rshnyhcm, which also uses curl_exec, but that runs just fine. So I started to dig deeper into that request from the Facebook script to see what values the $opts array from line 806 contains. The output of that array is: http://pastebin.com/Cq9ffd3R. What the problem is, I still have no clue :(
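    A small debugging sketch for narrowing a curl_exec() segfault of this kind; the working assumption (not established in the post) is a conflict inside the cURL/SSL stack, a classic cause of SIGSEGV on HTTPS requests when PHP is built by hand on an older distro. Paths below are illustrative:
      # See which SSL library PHP's curl extension was built against
      php -i | grep -iE 'curl|ssl'
      # In the php-fpm pool config set rlimit_core = unlimited, restart php-fpm,
      # reproduce the crash, then pull a backtrace from the resulting core file:
      gdb /usr/local/sbin/php-fpm /tmp/core.12345 -ex bt -ex quit
    A backtrace showing two SSL libraries (e.g. OpenSSL and NSS) in the same stack would point at rebuilding curl or PHP against a single one.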

    Read the article
