Search Results

Search found 49554 results on 1983 pages for 'database users'.


  • Creepy MySQL error during hard work

    - by Kiewic
    Hi, I have a MySQL database installed on an OpenSUSE 11.1 server (it is a Bitnami image). The database works fine and can run for days without any error, but when MySQL receives a huge volume of transactions, it dies immediately. The next screen shows the error. Moreover, I don't know how to restart MySQL. I have tried this:

        /opt/bitnami/mysql/bin/mysqld start

    But it doesn't work; it gives me the following output:

        110209 17:09:01 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
        110209 17:09:01 [ERROR] Aborting
        110209 17:09:01 [Note] /opt/bitnami/mysql/bin/mysqld.bin: Shutdown complete

    It doesn't matter what kind of statements are executing; under a heavy enough load, MySQL dies. The MySQL server version is 5.1.30. What can be causing these sudden failures?
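    The restart failure, at least, is self-explanatory: mysqld refuses to run as root. A minimal sketch of starting it under the mysql account instead (the ctlscript path is a Bitnami convention and may not exist on every image):

        # Start mysqld under the mysql user rather than root
        sudo /opt/bitnami/mysql/bin/mysqld_safe --user=mysql &
        # Or, if the image ships Bitnami's control script:
        sudo /opt/bitnami/ctlscript.sh start mysql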


  • Strange mysql problem moving website from Ubuntu server to Mac server

    - by evan
    I'm moving a website (php/mysql) from an Ubuntu server to an OS X 10.6 server. I've set up Apache to run PHP scripts and installed the newest version of MySQL on the Mac. I copied all of the PHP files and dumped/imported all of the MySQL databases (including the mysql users database). When I visit the page served from the Mac, the page is able to connect to the database, but not query it. Specifically, mysql_error() is returning this error message:

        NO SUCH FILE OR DIRECTORY

    The reason it's strange is that I'm able to change the PHP connection strings for MySQL on the Ubuntu server so that they point to the Mac server, and the page works correctly (so MySQL seems to be correctly set up on the Mac and definitely contains all of the users and tables it should). Thinking it was something to do with file permissions on the Mac, I've changed all of the files to 755, but it hasn't helped. Any ideas? Thanks!! UPDATE: I've found this error, which I'm relatively certain is related, in /var/log/apache2/error_log:

        PHP Warning: mysql_query(): A link to the server could not be established
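    One frequent cause of "No such file or directory" from the mysql extension on OS X is a socket-path mismatch: MySQL creates its socket at /tmp/mysql.sock while PHP's default mysql.default_socket points at /var/mysql/mysql.sock. A hedged sketch of the usual workaround, assuming those default paths:

        # Check where mysqld actually put its socket
        ls -l /tmp/mysql.sock
        # Bridge PHP's expected location to the real one
        sudo mkdir -p /var/mysql
        sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock

    Setting mysql.default_socket (and pdo_mysql.default_socket) in php.ini and restarting Apache achieves the same thing without the symlink.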


  • How do I permanently delete /var/log/lastlog?

    - by GregB
    My /var/log/lastlog file is huge. I know it really only occupies a few kilobytes, but tar isn't smart enough to know that, so when I image a virtual machine, my restore fails because it thinks I'm trying to load more data than I have capacity on my disk. I want to delete /var/log/lastlog and stop any and all logging to the file. I'm aware of the security implications; this logging needs to stop to preserve my backup strategy. I've made a change to /etc/pam.d/login which I was told would disable logging to /var/log/lastlog, but it does not appear to work, as /var/log/lastlog keeps growing:

        # Prints the last login info upon succesful login
        # (Replaces the `LASTLOG_ENAB' option from login.defs)
        #session optional pam_lastlog.so

    Any ideas? EDIT: For anyone interested, I use Centrify Express to authenticate my users via LDAP. Centrify Express is "free", but one of the drawbacks is that I can't manage user UIDs via LDAP, so they are given a dynamic UID when they log in to a server. Centrify picks some crazy high UID values (so they don't conflict with local users on the server, presumably). /var/log/lastlog is indexed by UID and grows to accommodate the largest UID on the system. This means that when a Centrify user logs in, they get a UID at the upper end of the UID range, which causes lastlog to allocate an obscene amount of space, according to the file system:

        ~$ ll /var/log/lastlog
        -rw-rw-r-- 1 root root 291487675780 Apr 10 16:37 /var/log/lastlog
        ~$ du -h /var/log/lastlog
        20K /var/log/lastlog

    More info: sparse files.
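    Since du reports 20K against an apparent size of ~271GB, the file is sparse, and the backup problem can also be attacked from the tar side rather than by disabling the logging. A sketch, assuming GNU tar:

        # --sparse makes tar record holes instead of reading them as data
        tar --sparse -czf /backup/image.tar.gz /var/log/lastlog
        # Compare apparent size vs. blocks actually allocated
        ls -l /var/log/lastlog
        du -h /var/log/lastlog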


  • Clone MySQL DB - errors with CREATE VIEW/SHOW VIEW privileges

    - by user43537
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried a dump to an .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with PHPMyAdmin doesn't work either. I also tried doing this with the MySQL root user (not named "root" though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). Thanks!
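    A minimal sketch of the clone plus the grants the import needs; database and user names here are placeholders, and the GRANT must be issued by an account that itself holds these privileges WITH GRANT OPTION (GRANT ALL is the blunt alternative):

        # Give the importing user the view-related privileges the dump requires
        mysql -u root -p -e "GRANT SELECT, INSERT, CREATE, DROP, LOCK TABLES, SHOW VIEW, CREATE VIEW ON newdb.* TO 'wpuser'@'localhost'; FLUSH PRIVILEGES;"
        # Dump structure and data, then load into the empty database
        mysqldump -u root -p olddb > olddb.sql
        mysql -u root -p newdb < olddb.sql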


  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged-in credentials of the Windows domain user. Meaning the 3rd party app doesn't pass the app's credentials but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for Document Control (think svn/git I guess, similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine... but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

      - Make the share a hidden share ($) - this definitely helps. Most users would have zero clue on how to get to such a share. Solves probably 95% of my issue.
      - Go one step further and set the "Hidden" attribute on the folders in the hidden share - this would go a little further in that even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see folders in that share without changing their Explorer options to "show hidden files/folders".

    Any other ideas on how I can lock this down? The users still need modify rights to this share/folders, since the 3rd party app relies on their Windows permissions to that location when copying the files into it. I can't really use 3rd party tools to password-protect the folder/share without causing the 3rd party app functions to fail.


  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf Apache config below, sanitized of course.)

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"
        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.
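    One asymmetry stands out in the config as posted (worth checking before anything else, though offered here only as a likely fix): the cgi-bin <Directory> block has no AuthBasicProvider line, while the share block does, and "verification of user id ... not configured" is the error Apache gives when a location has no usable authentication provider:

        # Add inside <Directory "/usr/local/nagios/sbin">, alongside AuthType Basic
        AuthBasicProvider ldap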


  • FTP Server with advanced features

    - by Nikolas Sakic
    Hi, we supply zone files to our customers. Some zone files are big, about 300MB, and some are quite small, maybe 1MB. We had an issue where someone set up a script to continually download a file - imagine downloading a 300MB file a few hundred times a day. Since we don't have a packet shaper to throttle the traffic, we need to upgrade the FTP server and use add-on modules to limit downloads somehow. We currently use the proftpd server. Also note that there are different users for different domains - say, if you want to download the zone file for the .INFO domain, you use a particular user. That user can't download any other zone's file. This is what we are looking for:

      - A maximum of 400MB downloaded per user per day - or even different download limits for different users.
      - One connection per user at any time.
      - A maximum of 5 (non-simultaneous) connections per user per day; anyone trying to exceed that gets banned for 24 hours.

    Has anyone used an FTP server with restrictions like these? Does anyone have any ideas where I can start? Any help would be appreciated. Thanks. -N
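    For the connection-count side of this, stock ProFTPD directives go a long way; per-day byte caps can be approximated with mod_quotatab, though the daily reset itself needs an external job. A sketch for proftpd.conf, assuming mod_quotatab is compiled in (table paths are illustrative):

        # One live session per user
        MaxClientsPerUser 1
        # Per-user byte accounting via mod_quotatab
        QuotaEngine on
        QuotaLimitTable file:/etc/proftpd/quota.limittab
        QuotaTallyTable file:/etc/proftpd/quota.tallytab
        QuotaDisplayUnits Mb

    Resetting the tally table once a day from cron would approximate the 400MB/day cap; the five-connections-then-ban rule is more naturally a job for mod_ban or fail2ban watching the transfer log.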


  • Apache: Isn't chmod 755 enough to set up symlink or alias on Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents, like ~/Documents/foo/html, to be http://localhost/foo. The first thing I thought of doing was using Alias as follows:

        Alias /foo /Users/someone/Documents/foo/html
        <Directory "/Users/someone/Documents/foo/html">
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

    This got me 403 Forbidden. In the error_log I got:

        [error] [client ::1] (13)Permission denied: access to /foo denied

    The subfolder in question has chmod 755 access. I've tried specifying files directly, like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route. I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has:

        Options Indexes FollowSymLinks MultiViews

    This still got me 403 Forbidden:

        Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo

    What else do I need to set this up? Solution:

        $ chmod 755 ~/Documents

    In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.
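    A quick sketch of how one might verify the whole chain rather than just the target folder; every ancestor directory needs the execute (search) bit for the Apache user:

        # Each directory on the path needs at least o+x for the web user to descend
        chmod 755 /Users/someone /Users/someone/Documents /Users/someone/Documents/foo
        # On systems that ship namei (Linux; not stock OS X), the chain can be
        # inspected in one shot:
        namei -l /Users/someone/Documents/foo/html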


  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't - especially on machines not under my continuous administration. In those cases, I am always challenged with the task of traversing directories with my administrative user via Windows Explorer where regular users do not have "read" permissions. The two possible approaches to this problem so far:

      - Change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs but just look into the folder's contents.
      - Use an elevated cmd.exe prompt along with a bunch of command-line utilities - this usually takes a lot of time when browsing through large and/or complex directory structures.

    What I would love to see would be a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way without changing the entire system's configuration (and preferably without the need for downloading/installing anything) are very welcome, too. I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders - unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token, and a re-authentication (and thus the creation of a new token) does not happen when accessing localhost.


  • How to view / enumerate / obtain a list of all effective rights / permissions on an Active Directory object?

    - by Laura
    I am new to Server Fault and was hoping to find an answer to a question I have been struggling with for the past week or so. I have recently been asked by my management to furnish a list of all the effective rights/permissions delegated on the Active Directory object for our Domain Admins group. I initially figured I'd use the Effective Permissions tab in Active Directory Users and Computers, but had two problems with it. The first was that it doesn't seem very accurate, and the second was that it requires me to enter the name of a specific user, and it only shows me what it figures are the effective permissions for that user. Now, we have more than 1,000 users in our environment, so there's no way I can possibly enter 1,000 user names one by one. Plus, there is no way to export that information either. I also looked at dsacls from MS, but it doesn't do effective permissions. Someone pointed me to a tool called ADUCAdmin, but that seems to falsely claim to do effective permissions. Could someone kindly help me find a way to obtain this listing? Basically, I need to generate a list of all the modify effective permissions granted on the Domain Admins group object, along with the list of all the admins to which these permissions are granted. In case it helps, I don't need a fancy listing - simple text/CSV output would be enough. I would be grateful for any assistance, since this is time- and security-sensitive for us.


  • SSH does not allow the use of a key with group readable permissions

    - by scjr
    I have a development git server that deploys to a live server when the live branch is pushed to. Every user has their own login, and therefore the post-receive hook which does the live deployment runs under their own user. Because I don't want to have to maintain the users' public keys as authorized keys on the remote live server, I have made up a set of keys that "belongs" to the git system to add to remote live servers (in the post-receive hook I am using $GIT_SSH to set the private key with the -i option). My problem is that because all of the users might want to deploy to live, the git system's private key has to be at least group-readable, and SSH really doesn't like this. Here's a sample of the error:

        XXXX@XXXX /srv/git/identity % ssh -i id_rsa XXXXX@XXXXX
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0640 for 'id_rsa' are too open.
        It is required that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: id_rsa

    I've looked around expecting to find something in the way of forcing ssh to just go through with the connection, but I've found nothing but people blindly saying that you just shouldn't allow access to anything but a single user.
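    The client-side permission check can't be disabled, but since $GIT_SSH already points at a wrapper, the wrapper can stage a private 0600 copy of the shared key per invocation. A sketch, with the key path taken from the question and everything else illustrative:

        #!/bin/sh
        # GIT_SSH wrapper: ssh refuses keys readable by the group, so copy
        # the shared key into a mode-600 temp file owned by the calling user
        tmpkey=$(mktemp) || exit 1        # mktemp creates the file with mode 0600
        cat /srv/git/identity/id_rsa > "$tmpkey"
        ssh -i "$tmpkey" "$@"
        status=$?
        rm -f "$tmpkey"
        exit $status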


  • Best way to run site through https on server which can't add additional certs

    - by penguin
    So I'm in a curious situation: I am using a particular server to host things which I can't host anywhere else (it has access to user databases etc. which can't otherwise be accessed). I've been in quite a bit of discussion with the sysadmin, and it looks like the only way to run our site www.foo.com over https may be through some sort of proxy. Currently, users go to www.foo.com and are redirected to https://host-server.com/foo, as there is an SSL cert installed on that. I want users to be on https://www.foo.com. I'm told that for various reasons it's going to be very difficult to add an additional SSL cert to the host server. So I was wondering if it is possible to have the DNS records point to a new server, which then creates the HTTPS connection with the browser, forwards requests to https://host-server.com/foo, and feeds the replies back to the original requester. Does this make sense? And would it be at all feasible? My experience with SSL is limited at best, so thanks in advance for your help :)
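    What's being described is a plain TLS-terminating reverse proxy, and it is feasible. A hedged sketch using Apache's mod_proxy and mod_ssl on the new front-end box (certificate paths are placeholders; a cert for www.foo.com would still need to be obtained for this server):

        <VirtualHost *:443>
            ServerName www.foo.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.foo.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.foo.com.key
            # Re-encrypt traffic to the existing host, which already has its own cert
            SSLProxyEngine on
            ProxyPass        / https://host-server.com/foo/
            ProxyPassReverse / https://host-server.com/foo/
        </VirtualHost>

    DNS for www.foo.com would then point at the proxy box; the main caveats are added latency and making sure the backend app generates links relative to www.foo.com.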


  • writing data onto a linux live-dvd

    - by stanleyxu2005
    I have a server machine with a DVD writer. I want to burn a Linux live DVD (openSUSE preferred) with a pre-configured web server, so that after booting, the web server is ready to serve. The web server has a SQLite database (with very little data), but after rebooting the system, all data in the database is lost. Is it possible to store all the necessary data onto this live DVD as well? If it were a USB drive, I would create two partitions and mount the second partition with read-write permission, but I have no idea how to create two partitions on a DVD. Any hint is appreciated.
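    A DVD can't be mounted read-write like a USB partition, but growisofs can append sessions to a multi-session disc, so one hedged idea is to burn the SQLite file as a new session at shutdown and have a boot script restore the newest copy (paths here are illustrative):

        # Append the current database as a new session to the same disc
        growisofs -M /dev/dvd -R -J /var/www/app/data.sqlite
        # A boot script would then copy the most recent session's file back
        # into place before the web server starts

    In practice most live systems sidestep this with a persistence overlay on separate writable media, which may be the more robust route if a small USB stick is an option.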


  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

      - 3 hard drives using RAID 5; stripe element size: 1M; read policy: Adaptive Read Ahead; write policy: Write Through
      - /boot 200 MB ext2
      - / 15 GB ext3
      - SWAP 10GB
      - LVM rest (~500GB)

    I installed postgresql and created a big database (over 1GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then, I installed Xen, created a domU with another Debian distro, and on that OS also installed postgresql with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU; uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems; I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck: I don't know what is different, and what could be the cause of such a big difference. Additional info:

      - Debian runs on a Dell PowerEdge 2950 server
      - postgresql: 9.1.9 (both dom0 and domU)
      - xen-linux-system: 3.2.0
      - xen-hypervisor: 4.1

    Thanks. EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU:

        root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
        root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

        root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
        root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behavior? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article. I did:

        dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M

    Then in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy:. This should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
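    Before concluding that domU is genuinely faster, it may be worth rerunning the dd tests with caching taken out of the picture; the 3.4 GB/s read on dom0 is a giveaway that the file was served from RAM rather than disk. A sketch:

        # Write test that bypasses the page cache entirely
        dd if=/dev/zero of=/root/dd bs=1M count=2000 oflag=direct
        # ...or at least forces data to disk before timing stops
        dd if=/dev/zero of=/root/dd bs=1M count=2000 conv=fdatasync
        # Drop the cache before the read test so it actually touches the disk
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/root/dd of=/dev/null bs=1M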


  • Redeploy using Active Directory

    - by Noam Gal
    I am trying to use Group Policy to deploy our MSI through AD. For some strange reason, when I overwrite the MSI with a newer version and then go to the policy and click on "Redeploy Application", the application gets uninstalled on the users' machines, and all registry keys, binaries and shortcuts are gone. The "Add/Remove Programs" entry for the application remains, however. I have managed to create a minimal vdproj that does nothing but write its current ProductVersion to a registry key, and created two versions of it (1.0.0 and 1.1.0). I still face the same problems when using this MSI in my AD environment. I did check that the Package Codes and Product Codes differ between the two versions, that the Upgrade Codes are identical, and I set RemovePreviousVersions to true. Checking with another MSI (Firefox 3.0.0 and 3.6.3) downloaded from a site that packages builds specifically for AD deployment, it worked just as expected: I first installed 3.0.0, then overwrote the MSI, clicked on "Redeploy", and the users got 3.6.3 after the next log-off/log-on. What am I missing here?


  • System Center 2012 Service Manager change request status stuck at new

    - by Chuck Herrington
    The guy who built and set up this system left rather abruptly and I've taken over. My current issues are: I have several change requests that are stuck at New - they do not move to Pending or In Progress - and the system is not sending emails when incidents get assigned to people (this used to work). I have done a lot of searching, and the usual solution of stopping and restarting the System Center services does not help. Can anyone give me any ideas of where else to look?

    Update: From all the searching I have done, it seemed like I was at the point of re-installing. My initial installation of SCSM 2012 was on a machine that was upgraded from SCSM 2010 and also hosted SCCM 2007 and WSUS. We decided to give it a fresh start by installing a second instance of the SCSM server on a brand new 2008 R2 server, then promoting the new server to the workflow master using the procedures outlined in the article "Dealing with Multiple Management Servers". I've gotten to the point where we have both the old and the new server up and the new server has been promoted. I had hoped to get spammed by emails all of a sudden due to the workflow taking off, but no such luck. Once all the clients are reconfigured to point to the new server we still plan to decommission the old server, but at this point the problem seems to be in the database. Short of any other input from the community, my next plan is to install a 180-day trial on a test server, complete with a separate database, so that I can do a side-by-side comparison between a completely fresh install and what I have now, and see if I can find any differences. While that install is running I also plan on investigating the event logs to see if there is anything in there that can shed some light on what is happening on the new server.

    Update 2: So I've now got a test SCSM server up with a completely fresh install, including the database, and it seems to be able to transition Change Requests from New to In Progress. I'm attempting to find differences between the two. Stay tuned!

    Update 3: In looking through the event log on the new SCSM machine I discovered:

        Log Name: Operations Manager
        Source: OpsMgr Root Connector
        Date: 10/9/2013 3:48:18 PM
        Event ID: 28000
        Task Category: None
        Level: Warning
        Keywords: Classic
        User: N/A
        Computer: scsm02
        Description: The Root connector received an exception from the SDK Service while submitting task status: Cannot set availability on a health service that doesn't exist.

    This led me to "Event ID 28000 logged after installing secondary server for System Center 2012 Service Manager SP1". I contacted MS to obtain the hotfix. BIG warning here: the hotfix turns out to be not so "hot". In order to apply it, you have to uninstall and then reinstall using the files they supply. :( This is where I am at now...

    Update 4: Not much luck after the re-install. The errors in the event log have gone away on the new server, but the workflows still aren't running, and neither the event log nor the workflow status screen seems to indicate why. I've done a comparison of the Activity and the Change Request Event Workflows, and I've removed everything from the production system that is not in my fresh test system (which is everything), shut down the services, cleared out the cache folders and restarted the services - and still no joy.
    At the moment the only thing I can think to do is either (a) nuke the entire system, including the database, and start over, losing all of our data in the process, or (b) contact MS (which is probably going to cost us a boatload of money and time, only to end with the same advice). Maybe more ideas will come after coffee... No answers came after coffee. Attempting to contact MS: I managed to get through their first line of defense, gave them our SA number, and someone is supposed to call me back. I am trying to log into my incident on their site to update my ticket with the link to this thread, but when I click on the link in the email they sent me it goes to a "Sorry, the page you requested is not available" page... Linux is looking better and better all the time.


  • MSDTC Port 135 open bi-directionally

    - by Stephen Lacy
    I have two servers: a web application server, and an SQL Server database running on its own server, with a firewall between them. Do I have to open port 135 on both the SQL Server and the web application server? Does the SQL Server open its own connection back to the web application server on port 135 or any other port? Do I have to point the web application server's MSDTC at the SQL database server in Component Services? If the firewall is completely open and the Component Services settings allow remote connections, remote administration, etc., are there any other settings that need to be changed in order to allow remote connections to the SQL Server's MSDTC?


  • Openfire Installation Issue - Can't Login to admin panel

    - by Lobe
    I am trying to get Openfire installed on an Ubuntu virtual machine; however, upon completing the web-based installer, I am unable to log in to the admin panel. So far I have:

      - Downloaded the Debian installer
      - Installed using stock options
      - Added a database and built the structure using the supplied SQL file
      - Completed the web-based installer

    I am now trying to log in with username "admin" and my password; however, I constantly get a wrong username/password error. There is a record in the MySQL database showing the admin user with an encrypted password, and changing it to an unencoded password doesn't work. What is the problem here?
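    One commonly cited workaround, offered here as a hedged sketch: reset the admin row directly in the database (table and column names per Openfire's default schema) and restart Openfire, which re-encrypts the plaintext password on first use:

        # Set the admin password back to a known plaintext value
        mysql -u root -p openfire -e \
          "UPDATE ofUser SET plainPassword='admin', encryptedPassword=NULL WHERE username='admin';"

    It may also be worth confirming that the installer actually finished, i.e. that <setup>true</setup> appears in conf/openfire.xml before the restart.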


  • Symantec Backup Exec 12 Tape Alert.

    - by Adam
    Every day, I run 5 backups using 6 tapes, and each day, when I run the inventory, I get a Tape Alert error. This occurs every day, on the same job. The error is:

        Job 'Inventory Daily ********' has reported Multiple Tape Alerts on server '******'. Please refer to job log *****.xml for more information.

    When I look at the job log, the Utility Job Information says the device has reported the following TapeAlert diagnostic information:

        Information - The library has been manually turned offline and is unavailable for use. Robotic library for device: PV132T 500
        Warning - Library security has been compromised. Robotic library for device: PV132T 500.
        Critical - The library has detected an inconsistency in its inventory. 1. Redo the library inventory to correct the inconsistency. 2. Restart the operation. Check the application's or hardware's user manual for specific instructions on redoing the library inventory. Robotic library for device: PV132T 500.

    When I run the same inventory a second time, the job completes successfully. I am using Symantec Backup Exec 12 running on Windows Server 2008, with a Dell PowerVault 132T 500 tape drive. If anyone can help me resolve this problem, it would be very much appreciated.


  • Equivalent of phpMyAdmin for MSSQL?

    - by Tedd Hansen
    Is there any web interface for administering MSSQL similar to phpMyAdmin (for MySQL)? I want a self-service setup where developers can create a database through a web interface and upload/download backups of the database without local access. I've considered phpMSAdmin, but it hasn't had a release since 2006, so I'm not sure it's worth the effort of setting it up. If there is something else (free or not-so-free), that would be great. My question is similar to this one posted 2 years ago, but no good web interface was found back then. SQL Web Data Administrator seems interesting, but it lacks a few features - most notably creating new databases (also, it hasn't been updated since 2007).


  • mySQL sphinx Index location on Ubuntu

    - by Marcus Morris
    I am trying to test out a Sphinx cookbook, but I need a database to do so. I have created the database locally, but I need to know the default path for the index of the table I created. This is the error I currently get when trying to run Sphinx, because the path to the index is wrong:

        WARNING: index 'phoneindex': preload: failed to open /var/lib/mysql/mysql.sph: No such file or directory; NOT SERVING
        FATAL: no valid indexes to serve

    Where can I find mysql.sph? Or how/when is that file created? Thanks!
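    The path searchd is complaining about comes from the per-index path setting in sphinx.conf, not from MySQL itself; the .sph file is only created when the indexer runs. A sketch, with the source and index names illustrative:

        # sphinx.conf: each index declares its own on-disk location
        index phoneindex
        {
            source = phonesource
            path   = /var/lib/sphinx/data/phoneindex
        }

        # Build the index files (as a user who can write to that path), then start searchd
        indexer phoneindex
        searchd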


  • IIS Hangs on SQL Connections when running ASP.net applications

    - by PaulWaldman
    We have a database server running SQL 2000 and two web servers hosting ASP.NET applications; all three servers run Windows Server 2003 SP2. Our issue recurs about every two weeks: IIS on one web server is no longer able to establish SQL connections. Static content loads fine, other non-IIS applications are still able to contact the SQL database server, and ODBC functionality also still works. While running SQL Profiler, a connection is never established from IIS when it is in this state. The only way to fix the situation is to restart the web server. There are no firewalls installed on any of the machines.


  • IIS7.5 - about app pool ID's and folder read/write access

    - by merk
    I did some searching, and it looks like for each app pool there should be an account called IIS APPPOOL\AppPoolName - however, I can see no such account when I try to modify the permissions on a folder to give that app write access. The closest I have found is the IIS_IUSRS group. Now, if I go into that group and look at the members, I see several IIS APPPOOL\PoolName members. But where are these members coming from? Why don't they show up under the users? And why can't I add a specific one to a folder? It doesn't make sense to me to add the IIS_IUSRS group to a folder, since that gives every site access to the folder. To be more specific, I'm setting up WordPress, and it unfortunately wants write access to its root folder, so I want to restrict that as much as possible. I was trying to figure out how to set things up so that the WP root folder has write access only for the identity the blog's app pool runs under. When I drill down into the IIS_IUSRS group, I do not see the app pool for the blog listed there. The settings for the blog's app pool are: No Managed Code, Classic, ApplicationPoolIdentity, and it's named 'blog'. So, any explanations regarding these users that are created for the app pools, and why the blog doesn't seem to belong to the IUSRS group? Thanks.


  • Cannot send email to info@ or support@

    - by user3022598
    I am trying to send email from my Gmail account to a couple of user accounts on my new CentOS server. Email is set up correctly, and I can send and receive for all accounts except info and support. I set up two users, "info" and "support", and I have a PHP form that sends out email; it works fine for all users except those two. To test this, and to make sure nothing had changed since yesterday, I just created a new user, "frank", and tried the submit form - it worked fine. From my Gmail account I can email "frank", but I cannot email "info" or "support". The logs I pulled are as follows, and I think I see the issue, but I have no idea how to fix it:

        Aug 15 12:20:55 mail postfix/qmgr[1568]: 1815C20A83: from=, size=1815, nrcpt=1 (queue active)
        Aug 15 12:20:55 mail postfix/local[2270]: 1815C20A83: to=, relay=local, delay=0.28, delays=0.26/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
        Aug 15 12:17:13 mail postfix/qmgr[1568]: 3C18520A7F: from=, size=1818, nrcpt=1 (queue active)
        Aug 15 12:17:13 mail postfix/local[2201]: 3C18520A7F: to=, orig_to=, relay=local, delay=0.28, delays=0.25/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
        Aug 15 12:15:24 mail postfix/qmgr[1568]: 2F79420A79: from=, size=1813, nrcpt=1 (queue active)
        Aug 15 12:15:24 mail postfix/local[2155]: 2F79420A79: to=, orig_to=, relay=local, delay=0.29, delays=0.27/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)

    For some reason mail to frank is delivered fine, but support and info go to root. Why?
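    The orig_to= fields in those log lines show the recipient being rewritten before local delivery, which is exactly what the system alias table does; on many distributions /etc/aliases maps common role names like info and support somewhere else (often root). A hedged check, assuming stock Postfix alias handling:

        # See whether the alias table claims these names
        grep -E '^(info|support):' /etc/aliases
        # After editing or deleting those lines, rebuild the alias database
        sudo newaliases
        sudo postfix reload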


  • MySQL/Apache: Replace spaces with underscores only in certain URLs

    - by javipas
    I'm having a problem with some images on my WordPress blog. After a migration I renamed every image, replacing spaces with underscores, so "HIDDEN_264_4062_FOTO_IDF los MID.jpg" was renamed to "HIDDEN_264_4062_FOTO_IDF_los_MID.jpg". Although the trick was necessary and worked for most of the posts, some of them still try to find the old image name, with spaces. This is not found:

        http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF%20los%20MID.jpg

    and this should be the right URL:

        http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    Careful, though, because the "%20" is only shown in the browser: the text in the database shows spaces, not "%20". I'd like to know whether I could run a SQL query on my WordPress MySQL database that replaces spaces with underscores in .jpg filenames. The path of the images is always the same, so the rule should transform this:

        /files/HIDDEN_264_4062_FOTO_IDF los MID.jpg

    into this:

        /files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    The "/files/HIDDEN_264_" part is always the same, but the rest varies. Is there some way to perform this? Maybe a rewrite rule on Apache (our current web server)?
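    A blanket SQL REPLACE on post_content would also convert legitimate spaces in prose, so the Apache side is probably safer here. A hedged mod_rewrite sketch that converts one space per pass and loops with the [N] flag until none remain, restricted to .jpg requests under /files/:

        RewriteEngine On
        # Turn the first space into an underscore, then restart rewriting ([N]);
        # the loop ends once the URL contains no more spaces
        RewriteRule "^/?(files/[^ ]*) (.*\.jpg)$" "/$1_$2" [N]

    If the database route is still preferred, the REPLACE would have to target only the known filename substrings rather than every space in post_content.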

