Search Results

Search found 24814 results on 993 pages for 'linux distro'.

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it.

    Basically, last Thursday everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing or not working, and Apache modules were gone. For example, oci_* was gone completely, odbc_* was still there but not working, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE. I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else.

    Now I'm wondering if somehow those extensions, which we installed months ago, were only saved in some kind of local memory, and the restart wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little or no sense to me.

    To expand on an example of something that's gone very wrong, the PHP odbc_* extension: it's still on the server, and it doesn't return an undefined-function error or anything. But it just cannot connect to the data source any more. I've tested it through the command line and it works perfectly fine with that data source and those login details, but as soon as the same details go into PHP's odbc_connect() function, it just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just suddenly stopped working through the PHP function. Anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.
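
    One hedged diagnostic worth adding here (not from the original post): a reboot can flip SELinux from permissive back to enforcing, which blocks network connections from httpd but not from a shell session, and that would match the "works on the CLI, fails in PHP" symptom exactly. These commands only inspect state; the boolean name is the stock CentOS one:

        # What do the CLI and Apache actually have loaded after the reboot?
        php -m | grep -i odbc
        httpd -M 2>/dev/null | sort

        # Did SELinux come back up enforcing?
        getenforce
        getsebool httpd_can_network_connect

        # If that boolean is off, this would let httpd open outbound
        # database connections again (-P makes it persistent):
        # setsebool -P httpd_can_network_connect on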

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area: one line of PHP and no database needed.

    But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that, as I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with:

        sudo pear install Net_Finger
        Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
        ....done: 1,618 bytes
        could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
        Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
        Error: cannot download "pear/Net_Finger"

    At which point I thought I'd stop and take a ServerFault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
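
    For what it's worth, the passwd GECOS field that finger reads is available locally, with no finger query and no pear package. A minimal hedged sketch, assuming the extra details live in GECOS (PHP's posix_getpwnam() exposes the same "gecos" member):

        # The GECOS field is the 5th colon-separated field; the loose
        # convention is: Full Name,Office,Work Phone,Home Phone
        getent passwd digitaltoast | cut -d: -f5

        # Populating it (flag names vary between chfn implementations,
        # and -o usually needs root):
        # sudo chfn -f "Full Name" -o "user@example.com" digitaltoast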

  • How to configure OpenVPN server to use custom default gateway?

    - by Arenim
    I have a VPN server at address 10.1.0.2, and the server has another IP in its subnet, 10.0.0.2 (it's a tun2socks router). But the server's default gateway is NOT 10.0.0.2 (and that's OK); it's another, external IP. I want all the clients' traffic to be forwarded through 10.0.0.2. Here is part of my server's config:

        dev tap0
        server-bridge 10.1.0.1 255.255.255.0 10.1.0.50 10.1.0.100
        push "route 10.0.0.0 255.255.255.0"
        ; now clients can ping 10.0.0.2
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 10.1.0.1"
        push "dhcp-option WINS 10.1.0.1"

    In fact I want something like:

        push "redirect-gateway 10.0.0.2"

    How can I achieve this?
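
    redirect-gateway does not take a gateway argument like that. One hedged way to get the same effect is to keep the redirect push as it is and use policy routing on the server, so traffic arriving from the tunnel leaves via 10.0.0.2 while the server's own default route stays untouched. The table name and the outbound interface below are illustrative:

        # Route table for VPN-originated traffic (runs on the server)
        echo "100 vpnout" >> /etc/iproute2/rt_tables
        ip route add default via 10.0.0.2 table vpnout
        ip rule add from 10.1.0.0/24 table vpnout

        # NAT the tunnel subnet out of the interface that faces 10.0.0.2
        # (replace eth1 with the real one)
        iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -o eth1 -j MASQUERADE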

  • Creating an IP alias on a bonded interface, i.e. bond0:1

    - by bobothechimp
    System: HP ProLiant DL360 G5 running CentOS 5.4.

    The bonded interface has been working fine for a long time. I just went to add an alias the way I always have on a regular interface, and at first check it works (pinging on the local box), but it is not accessible from outside (iptables is turned off). In addition, with this setup the normal network response started to decline, hanging for around a minute before I could get a prompt on login. Here are my config files:

        [root network-scripts]# cat ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-eth1
        DEVICE=eth1
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=1 miimon=100"
        BOOTPROTO=none
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.11
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0:1
        DEVICE=bond0:1
        BOOTPROTO=static
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.12
        USERCTL=no

    Any thoughts?
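
    A hedged way to separate the alias-file mechanics from the bonding driver: add the second address at runtime with iproute2 and see whether the symptom follows. If this works but the ifcfg file does not, the problem is in the network-script handling rather than in ARP or bonding:

        # Add the secondary address directly to bond0 (no ifcfg file)
        ip addr add 10.2.1.12/24 dev bond0 label bond0:1
        ip addr show bond0

        # Remove it again after testing
        ip addr del 10.2.1.12/24 dev bond0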

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easy allowing or disallowing of dynamic IP addresses, you can add them as a hostname in a .htaccess file. As I have read in ".htaccess allow from hostname?", Apache does a reverse lookup on the connecting IP address, seeing if the response matches the allowed name. (Well, actually Apache does a double lookup: first a reverse lookup, and then a forward lookup on the result of the reverse.)

    This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy, two extra lookups for every request. Is this indeed quite heavy, and would a reasonably busy server that is looking for less rather than more load get away with it? (E.g., how does this load compare to the rest? If a request is 1000 times more expensive than the lookups, it might be negligible. OTOH, it could be the final straw.)

    Are there other solutions? I can write a script that does a lookup of the hostname and puts the result in .htaccess files, of course (sketched below), but this feels a bit like a hack.
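
    A hedged sketch of that cron-script approach: resolve the dynamic hostname periodically and regenerate the .htaccess with a plain IP, so Apache never touches DNS per request. The hostname and path are hypothetical, and the directives are Apache 2.2 style:

        #!/bin/sh
        HOST="home.example.org"
        IP=$(dig +short "$HOST" | tail -n1)
        # Only rewrite the file if the lookup actually returned something
        [ -n "$IP" ] && printf 'Order Deny,Allow\nDeny from all\nAllow from %s\n' "$IP" \
            > /var/www/protected/.htaccess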

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL and Solr. Solr runs in a servlet container, in my case Jetty, which is a Java application.

    I just noticed that something was terribly wrong on my website. I opened the terminal and entered the "top" command and noticed that Java was EATING all the CPU and memory. Now I thought, "OK, maybe I need more memory and CPU", so I increased them. But along with the increase, the Java app started eating more. This has never happened before, and it is either a bug or a hack of some kind.

    Anyway, I need to troubleshoot this now, so I wonder: how do I do this? Can I somehow pinpoint exactly when the memory usage started to go up from some error log? How does one troubleshoot this? How do I prevent it? Is it possible to prevent too many requests somehow, if they are within a timeline? Thanks
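
    A hedged first-pass sketch: ask the JVM itself what it is doing, and put a ceiling on its heap so it cannot take the whole box. jstack and jmap ship with the JDK (not the bare JRE), and the process match below is an assumption:

        PID=$(pgrep -f jetty | head -n1)
        jstack "$PID" > /tmp/jetty-threads.txt   # thread dump: which threads are busy?
        jmap -heap "$PID"                        # heap limits vs. actual usage

        # Cap the heap in Jetty's start script (values illustrative); a
        # runaway then throws OutOfMemoryError instead of eating the host:
        # java -Xms256m -Xmx512m -jar start.jar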

  • Setting up a Samba PDC - error with testparm

    - by Rungano
    Hi guys, I have installed a Samba PDC, but when I test the Samba configuration file I get errors like these:

        Invalid combination of parameters for service homes. Map system can
        only work if create mask includes octal 010 (S_IXGRP).

    My configuration file is as follows:

        [homes]
           comment = Home Directories
           path = /home_srv1/%u
           valid users = %S
           read only = No
           create mask = 0660
           directory mask = 0770
           browseable = No

    I tried to Google but with no luck; Server Fault is always my best hope. Thanks for helping out.
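
    The message is literal: with "map system = yes" in effect (possibly inherited from [global]), Samba stores the DOS "system" attribute in the group-execute bit, so the create mask must allow that bit. A hedged fix is either of the two lines below in the share, whichever matches the intent:

        [homes]
           ; allow the group-execute bit (0660 + 010):
           create mask = 0670
           ; ...or stop mapping the DOS system attribute instead:
           ; map system = No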

  • BIND having trouble resolving service.graphicly.com

    - by Keith Burgoyne
    Since about two weeks ago, we haven't been able to resolve service.graphicly.com:

        dig @192.168.0.12 service.graphicly.com

        ; <<>> DiG 9.3.4-P1 <<>> @192.168.0.12 service.graphicly.com
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Digging on the name servers listed for graphicly.com shows that service.graphicly.com is a CNAME to takecomicsadmin.cloudapp.net. Digging on cloudapp.net's name servers seems to fail:

        dig @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net

        ; <<>> DiG 9.3.4-P1 <<>> @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Somehow, my home ISP's name servers can resolve service.graphicly.com without issue. Has anyone else noticed this problem? Does anyone know what the cause of this problem could be? Thanks!
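
    Two hedged checks that often narrow this down: walk the delegation from the roots (bypassing the local BIND cache entirely), and retry over TCP, since a firewall that drops large UDP DNS replies produces exactly this "no servers could be reached" symptom:

        dig +trace service.graphicly.com
        dig +tcp @192.168.0.12 service.graphicly.com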

  • create symlink to another machine

    - by microchasm
    Hi, I have two machines, both running CentOS. Box1 is the web server with Apache and PHP. Box2 is MySQL and file storage. The files will only be accessible from Box1 within the web app. I'd like to somehow create a symlink or some such on Box1 to a folder on Box2 where uploaded files can be stored and retrieved. With security in mind, what would be the best way to go about linking these two boxes up in a way that is transparent to Apache? NB: the boxes are connected directly to each other via a crossover cable; no LAN access to Box2. Much thanks!
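
    A symlink cannot cross machines, but an NFS mount over the crossover link behaves like one from Apache's point of view. A minimal hedged sketch; the addresses and paths are placeholders for the real ones:

        # On box2: export the storage directory to box1 only
        echo "/srv/uploads 10.0.0.1(rw,sync,root_squash)" >> /etc/exports
        exportfs -ra

        # On box1: mount it where the web app expects the files
        mount -t nfs 10.0.0.2:/srv/uploads /var/www/uploads
        # persist the mount in /etc/fstab once it works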

  • Server market shares

    - by Bill Gray
    Where can I find somewhat reliable indications of server market shares, without having to fork out $$$$$ for IDC or Gartner reports? I have considered the W3 statistics, Net Applications etc., and these are not what I would consider reliable. Is there anything more that is free?

  • What is the `shadow` group used for?

    - by Shtééf
    On my Ubuntu 9.10 system, there's a shadow system group. There does not appear to be any user assigned to this group at all. The only files that I can find belonging to this group are /etc/shadow and /etc/gshadow. I'm aware that the purpose of these files is to store the passwords separately, out of reach from regular users who still might want to access passwd for other reasons. But what is the purpose of the shadow group? The reason I'm curious about this, is because I'm thinking about configuring nsswitch.conf to store it elsewhere, and would like to know if anything is actually trying to access the shadow database using shadow group credentials.
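
    For what it's worth, a hedged way to see the group in action: nothing normally runs with gid shadow; the group exists so the password files can be group-readable rather than root-only, and a setgid-shadow helper (on Debian/Ubuntu, typically unix_chkpwd) can then verify passwords without full root:

        ls -l /etc/shadow /etc/gshadow
        # -rw-r----- 1 root shadow ... /etc/shadow

        # Anything on disk that is setgid to the shadow group:
        find / -group shadow -perm -g+s -type f 2>/dev/null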

  • Setting up a transparent proxy with only one box.

    - by Scott Chamberlain
    I am playing around with transparent proxies; unfortunately, I do not have two machines to test with. The current way I am doing things is: a program makes a request to a computer on port 80, and I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect to the proxy that I am playing with. The proxy then sends its outbound request to port 81 (as all outbound port-80 traffic would be fed back into the proxy), so I want to do something like:

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated. The machine I am using is pretty low end, so I would like to not have to create a VM with a second box unless absolutely necessary.
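
    On the xxxx part itself: iptables accepts a bare ":80" in --to-destination to rewrite only the port (hedged; worth testing on your iptables version). The more common single-box pattern, though, skips the port-81 hop entirely and excludes the proxy's own traffic from the redirect with the owner match. A hedged sketch, assuming the proxy runs as a dedicated user "proxyuser":

        # Let the proxy's own outbound port-80 connections through untouched
        iptables -t nat -A OUTPUT -p tcp --dport 80 \
            -m owner --uid-owner proxyuser -j RETURN

        # Everything else on port 80 goes to the local proxy
        iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 1234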

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation:

    Problem info: I have four shared libraries (say libA.so, libB.so, libC.so, libD.so) and two executable binaries, each also using one more shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts, or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using the singleton. Compressing only happens in the two cases above. The user/loader executable can specify the starting name of the file only; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behaviour: each declares an object of the handler, which gets the instance of the singleton in libE.so and uses it for further purposes. All these libraries are linked into the two executables. If only one of the two executables runs, then it's fine. But if both executables run one after the other, the file of the first started executable gets compressed.

    Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The singleton object is a static object and is never deleted. The executable is not able to exit/return and gives:

        *** glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***

    Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object. Thanks
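
    A hedged diagnostic: a doubled constructor/destructor for a static singleton usually means two copies of its code ended up in the same process, e.g. libE's objects linked statically into one of the other libraries as well as loaded as a .so. These read-only checks (names taken from the question; the grep pattern is a guess at the symbol name) would show that:

        # Is libE a shared dependency of everything that needs it?
        ldd exe1 | grep libE

        # Does the singleton's symbol appear in more than one object?
        for l in libA.so libB.so libC.so libD.so libE.so exe1; do
            echo "== $l"; nm -C -D "$l" 2>/dev/null | grep -i singleton
        done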

  • Find command: exclude files whose path matches a certain pattern

    - by user40570
    I have a find command that looks for files that were modified recently and outputs the date:

        find /path/on/server -mtime -1 -name '*.js' -exec ls -l {} \;

    I would like it to exclude any deeply nested folder that matches a certain pattern. E.g., there are a number of folders that have a "statistics" directory and ".svn" directories. So I'd like to be able to say: if the file that was modified yesterday is in a folder named statistics, ignore it. Or perhaps not search for files in those folders at all.
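
    A hedged sketch of the "don't descend at all" variant: -prune stops find from entering the matched directories before the -mtime/-name tests ever run, which is also faster on big trees:

        find /path/on/server \
            \( -name .svn -o -name statistics \) -prune -o \
            -mtime -1 -name '*.js' -exec ls -l {} \;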

  • OpenSSH (Windows) does not forward X11

    - by Shulhi Sapli
    I'm running Ubuntu 13.04 in a VM and I wanted to do X11 forwarding to my host (Win 8). So far it works fine using PuTTY and the Xming server for Windows. But I am curious why it doesn't work if I use the OpenSSH binaries (they come together with Git for Windows). This is what I've done so far:

    I ran ssh -X [email protected] (also tried with -Y), then gedit, but received the error "Cannot open display". echo $DISPLAY came out empty. So I tried export DISPLAY=localhost:0.0, but it still won't work. The DISPLAY environment variable that I set is exactly what it is when it runs under PuTTY. I also tried changing DISPLAY to 192.168.2.3:0.0 and other display numbers as well, but still it won't work. Of course I could just use PuTTY to make it work, but I was wondering why the OpenSSH binaries do not work. I have enabled all the settings required in both /etc/ssh/ssh_config and /etc/ssh/sshd_config. If I run with the -v option, this is what I get:

        F:\SkyDrive\Projects> ssh -X -v [email protected]
        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to 192.168.2.3 [192.168.2.3] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/Shulhi/.ssh/identity type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_rsa type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4
        debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '192.168.2.3' is known and matches the RSA host key.
        debug1: Found key in /c/Users/Shulhi/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/Shulhi/.ssh/identity
        debug1: Trying private key: /c/Users/Shulhi/.ssh/id_rsa
        debug1: Next authentication method: password
        [email protected]'s password:

    It seems that there is no request for X11 (I'm not sure if there should be one here too). Any pointers as to why it doesn't work?
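
    One hedged explanation that fits the log: the OpenSSH client only sends an x11-req channel request if it can find a local xauth binary, and the Git-for-Windows build ships without one, so the request is silently skipped and $DISPLAY is never set on the remote side (exporting it by hand cannot help, because no forwarding channel exists). Pointing the client at Xming's xauth might work; the path below is an assumption:

        # Does the session actually request X11? (grep the verbose output)
        ssh -v -X [email protected] 2>&1 | grep -i x11

        # Tell the client where an xauth.exe lives (path is illustrative)
        ssh -X -o XAuthLocation="/c/Program Files (x86)/Xming/xauth.exe" [email protected]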

  • Hardware recommendations for building an Ubuntu encrypted file server

    - by Robert Mashlan
    I would like to build a file server for my home network using Ubuntu. It will serve files from RAID1-configured disks (mirrored either in the OS or in hardware) and will be connected to a Gigabit Ethernet LAN. The disks will use an encrypted file system, and it will serve Samba shares.

    I would like a recommendation on what kind of processing power and memory I would need so that the box can sustain the full capacity of the Gigabit connection on a single file transfer, including the overhead of serving from an encrypted disk. I'm not looking to build a dream server; I just want enough processing capacity for high-performance (and reliable) file sharing while spending as little as possible.

    This may be tangential, but what kind of hardware would I need for the server to reliably go into a low-power mode when no requests are being made of it?
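
    A hedged rule of thumb: a full gigabit link is roughly 110-120 MB/s, so the CPU only has to encrypt and decrypt at that rate with some headroom for Samba. Two quick ways to measure a candidate machine (the first needs a newer cryptsetup build):

        cryptsetup benchmark          # dm-crypt cipher throughput per core
        openssl speed aes-128-cbc     # ballpark raw AES throughput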

  • Listing packages in a repository?

    - by noloader
    I'm working on Ubuntu 12.04 Server. I want to install OpenStack, so I enabled the Cloud Archive repo:

        sudo add-apt-repository cloud-archive:havana

    After the subsequent update and upgrade, I noticed python-crypto changed. python-crypto recently fixed a CVE, so I would like to ensure I'm using the patched version of python-crypto. I'd also like to compare the python-crypto packages in both Ubuntu and the Cloud Archive. How does one list the package information for both Ubuntu's python-crypto and the Cloud Archive's python-crypto? (And sorry I could not tag this with apt-cache; it's not available in the list of tags.) Thanks in advance
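
    apt's cache tools can answer both questions without installing anything; a short hedged sketch:

        # Every candidate version, the repo each one comes from,
        # and which one is currently installed:
        apt-cache policy python-crypto

        # A compact version/repo table:
        apt-cache madison python-crypto

        # Full metadata (description, dependencies) per version:
        apt-cache show python-crypto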

  • grep + sed for find & replace fun!

    - by Jim Greenleaf
    I have a dev copy of a website set up that has quite a few hardcoded references to its live counterpart. I would like to replace all occurrences of "www." with "dev." in all files. I think I can use a combination of grep + sed, but I'm not sure how.
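
    A hedged one-liner for this (GNU sed edits files in place, so work on a copy or keep a backup); anchoring on the full hostname avoids clobbering unrelated "www." strings, and example.com stands in for the real domain:

        grep -rl 'www\.example\.com' /path/to/devsite \
            | xargs sed -i 's/www\.example\.com/dev.example.com/g'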

  • running a web server with encrypted file system (all or part of it)

    - by Carlos
    Hi, I need a web server (LAMP) running inside a virtual machine (#1), running as a service (#2), in headless mode (#3), with part or the whole filesystem encrypted (#4). The virtual machine will be started with no user intervention and provide access to a web application for users on the host machine. Points #1, #2 and #3 are checked and proved to be working fine with Sun VirtualBox, so my question is about #4:

    Can I encrypt the whole filesystem and still access the web server (using a browser), or will GRUB ask me for a password? If encrypting the whole filesystem is not an option, can I encrypt only /home and /var/www? Will Apache/PHP be able to use files in /home or /var/www without asking for a password or mounting these partitions manually? Thanks
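
    A hedged sketch of the partial option: full-disk encryption will stop at a boot-time passphrase prompt, but a separate LUKS volume for /var/www can be unlocked unattended from a key file. Note this weakens the threat model to "protects the disk once separated from the VM", since the key sits on the same guest; device and path names are illustrative:

        # One-time setup (destroys existing data on the volume!)
        cryptsetup luksFormat /dev/sdb1
        cryptsetup luksAddKey /dev/sdb1 /root/www.key

        # At boot, before Apache starts:
        cryptsetup luksOpen --key-file /root/www.key /dev/sdb1 www
        mount /dev/mapper/www /var/www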

  • Will deleting partitions affect my hard drive in any way?

    - by Portali5t
    I installed a SUSE partition of around 200 gigabytes on my hard drive, which primarily runs Windows 7. I am sick of SUSE's crap and just want to get rid of the OS and get that partition back for Windows' use. Is it as simple as deleting the partition and choosing which partition the space goes to, or is the freed space communal, accessible to all partitions? I know next to nothing about partitions, so any help would be great. Also, if someone knows HOW to delete partitions, that would be a great help too. Thanks!

  • Two distinct mount points with one device

    - by user1761555
    After being disappointed with Ubuntu's release-update feature, I finally decided to have separate mount points for / and /home. Towards this, I reformatted my HDD, giving most of the drive to sda1 (meant to be /home) and allocating about 40 GB to the root filesystem (/). Unfortunately, I would also like to have a /projects, which is to be located on sda1. Currently, sda1 is being mounted as:

        /dev/sda1 on /home type ext4 (rw)

    I've tried looking online for a solution to this problem, however I'm not sure what to look for! Is it possible to mount the 'home' directory of sda1 as /home and the 'projects' directory of sda1 as /projects?
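
    A bind mount does exactly this: it exposes a subdirectory of an already-mounted filesystem at a second path. A minimal sketch, assuming the directory layout from the question:

        mkdir -p /projects
        mount --bind /home/projects /projects

        # persist it in /etc/fstab:
        # /home/projects   /projects   none   bind   0 0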

  • ssh can't connect after server ip changed

    - by Kery
    I have a server with Ubuntu installed. After I changed the network configuration and restarted the server, the ssh client can't connect to the server any more. But on the server itself I can use an ssh client to connect to it, and the netstat command shows that sshd is listening on port 22. And from my computer (Win7), the ping command reaches the server's new IP fine. The configuration in /etc/network/interfaces is:

        auto eth0
        iface eth0 inet static
        address 10.80.x.x
        netmask 255.255.255.0
        gateway 10.80.x.1

    I'm very confused about this. Hope somebody can give me some idea. Thank you in advance!!!
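
    Some hedged checks for the "ping works, ssh doesn't" combination (the x.x placeholders below just mirror the question):

        # From a client: does the TCP handshake to port 22 complete at all?
        nc -vz 10.80.x.x 22

        # On the server: is anything filtering the new address?
        iptables -L -n | grep -w 22

        # Also worth ruling out a hosts.deny / tcp-wrappers rule, since
        # sshd honours it and ping would still succeed:
        cat /etc/hosts.deny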

  • Cannot Login as root

    - by Josh Moore
    At my work we ship our product on pre-installed servers as a software/hardware package. We are using openSUSE 10.3 for the OS, and we always log in with the root user to do maintenance on the box. Recently we had a box returned to us because the customer said they could no longer connect to it through the network interface. When I started to work on the box, I ran into this problem: at the login prompt I type the user name "root" and hit Enter. Then, even before it asks me for a password, I get "Login incorrect". I have never seen this behaviour before and could not find any information about it online. Does anybody know what is going on? Thanks.
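
    A hedged line of attack from a rescue disk or single-user mode: "Login incorrect" before any password prompt usually means the account record itself is unreadable or invalid rather than a wrong password, so the password files are the first thing to inspect:

        grep '^root:' /etc/passwd    # exactly one line, 7 fields, valid shell?
        grep '^root:' /etc/shadow    # entry present and readable?
        passwd -S root               # "L"/"LK" would mean the account is locked
        pwck                         # consistency check of passwd/shadow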

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until four weeks later, so essentially I have four revolving backups: week1, week2, week3 and week4. Now, I need the week1 backup script to run every four weeks, but I also want week2's script to run every four weeks, and so on. I know that I can tell the crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all four scripts running on the same night and then dutifully waiting for weeks, only to all run again together? Thanks.
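
    Cron itself has no "every fourth week" field, but one hedged workaround is a single Friday-night entry that derives the slot from the ISO week number (script names are hypothetical):

        # crontab: 23:30 every Friday
        # 30 23 * * 5 /usr/local/bin/rotate-backup.sh

        #!/bin/bash
        # 10# forces base 10 so week numbers "08"/"09" don't parse as octal
        SLOT=$(( (10#$(date +%V) % 4) + 1 ))
        exec /usr/local/bin/backup-week${SLOT}.sh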

  • Free Hosting control panel

    - by John Maxim
    Hello all, I'm in the middle of researching some of the best hosting control panels. The server I run is Ubuntu, and I have some experience with ISPConfig 2 & 3. Since I haven't explored any others available, what are the recommended ones for an Ubuntu server? I ask because ISPConfig seems to require some disabling and modification of an Ubuntu server, which changes the way the server actually runs. It's quite good, though. Are there any more recommended ones? Something more organic, which doesn't require much breaking and changing? I'm not asking for the simplest one; I don't mind going the extra mile to install a powerful one, but something that sticks to most of Ubuntu's conventions would be ideal for me. And of course, if there happens to be something that meets the "Ubuntu conventions" requirement and is also simple to install at the same time, that'd be a bonus. Thanks in advance.
