Search Results

Search found 19087 results on 764 pages for 'zend log'.


  • Excessive PHP errors in Joomla

    - by Rodnower
    I have Joomla 2.5 installed on Windows 7 with Apache 2 and PHP 5. I have countless PHP errors in the log like the following:

        [01-Sep-2012 19:33:55 UTC] PHP Strict standards: Only variables should be assigned by reference in C:\ammon_dev\ammon\plugins\system\jquery\jquery.php on line 24
        [01-Sep-2012 19:33:55 UTC] PHP Stack trace:
        [01-Sep-2012 19:33:55 UTC] PHP 1. {main}() C:\ammon_dev\ammon\administrator\index.php:0
        [01-Sep-2012 19:33:55 UTC] PHP 2. JAdministrator->route() C:\ammon_dev\ammon\administrator\index.php:40
        [01-Sep-2012 19:33:55 UTC] PHP 3. JApplication->triggerEvent() C:\ammon_dev\ammon\administrator\includes\application.php:106
        [01-Sep-2012 19:33:55 UTC] PHP 4. JDispatcher->trigger() C:\ammon_dev\ammon\libraries\joomla\application\application.php:670
        [01-Sep-2012 19:33:55 UTC] PHP 5. JEvent->update() C:\ammon_dev\ammon\libraries\joomla\event\dispatcher.php:161
        [01-Sep-2012 19:33:55 UTC] PHP 6. call_user_func_array() C:\ammon_dev\ammon\libraries\joomla\event\event.php:71
        [01-Sep-2012 19:33:55 UTC] PHP 7. plgSystemJquery->onAfterRoute() C:\ammon_dev\ammon\libraries\joomla\event\event.php:71

    I tried disabling these errors in php.ini:

        error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT

    Unfortunately that does not make a difference. Joomla isn't in debug mode, and I am sure that I'm editing the correct copy of php.ini because other changes I make to it take effect. Any ideas why I am getting so many errors, or how to stop them from flooding the log?
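
    One thing worth checking, as a hedged aside: Joomla has its own Error Reporting setting (stored as $error_reporting in configuration.php, if I remember the 2.5 layout correctly) that calls error_reporting() at runtime and can override whatever php.ini says, which would explain why the ini change has no visible effect. A minimal sketch of a command-line probe, assuming the CLI and the Apache SAPI load the same configuration:

        :: Show which php.ini (and extra .ini files) the PHP binary actually loads.
        php --ini
        :: Print the effective error_reporting bitmask after php.ini is applied.
        php -r "echo error_reporting(), PHP_EOL;"

    If the CLI shows the expected value but the log still fills up, the override is almost certainly happening inside Joomla's configuration rather than in php.ini.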

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed-up a particular file, including the current updates to that file. I.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time to the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how: Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way. I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups to my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) a group of WALs are backed up, and remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed-up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
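
    For the WAL use case specifically, one hedged option is to make archive_command itself wait for confirmation: copy the segment into the CrashPlan-watched directory and return non-zero until the file name shows up in backup_files.log.0, so PostgreSQL keeps the segment and retries later. A minimal sketch; the wrapper name and archive directory are assumptions, and a file-name match only confirms that a backup event was logged, not that the exact current contents were captured:

        #!/bin/sh
        # Hypothetical archive_command helper, invoked as: archive_wal.sh %p %f
        WAL_PATH="$1"    # path of the WAL segment relative to the data directory (%p)
        WAL_NAME="$2"    # file name only (%f)
        ARCHIVE_DIR=/var/backups/pg_wal_archive   # assumed CrashPlan-watched directory

        cp "$WAL_PATH" "$ARCHIVE_DIR/$WAL_NAME" || exit 1

        # Succeed only once CrashPlan has logged a backup of this segment;
        # a non-zero exit makes PostgreSQL retry the archive on its next cycle.
        grep -q "$WAL_NAME" /usr/local/crashplan/log/backup_files.log.0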

    Read the article

  • Can't build Node.js with mingw32: pkg-config can't find gnutls

    - by valya
    I'm trying to compile Node.js using MSYS/mingw32 on Windows 7 64-bit:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ ./configure
        Checking for program CL              : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL              : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\CL.exe
        Checking for program CL              : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\CL.exe
        Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program CL              : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program LINK            : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LINK.exe
        Checking for program LIB             : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LIB.exe
        Checking for program MT              : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\MT.exe
        Checking for program RC              : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\RC.exe
        Checking for msvc                    : ok
        Checking for msvc                    : ok
        Checking for library dl              : not found
        Checking for library execinfo        : not found
        Checking for gnutls >= 2.5.0         : fail
        --- libeio ---
        Checking for library pthread         : not found
        Checking for function pthread_create : not found
        error: the configuration failed (see 'C:\\msys\\1.0\\home\\Valentin_Golev\\nodejs\\build\\config.log')

    I have gnutls built and installed! I've checked config.log, and there was a command:

        pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls

    I typed it in the console:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls
        Package gnutls was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gnutls.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gnutls' found

    But:

        Valentin Golev@VALYASNOTEBOOK ~
        $ $PKG_CONFIG_PATH
        sh: c:/msys/1.0/local/lib/pkgconfig: is a directory

        Valentin Golev@VALYASNOTEBOOK ~
        $ cd $PKG_CONFIG_PATH

        Valentin Golev@VALYASNOTEBOOK /local/lib/pkgconfig
        $ ls
        gnutls-extra.pc  gnutls.pc

    What am I doing wrong?
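
    One thing worth ruling out, as an assumption (the transcript above only shows the variable expanding in an interactive shell): PKG_CONFIG_PATH may be set as a plain shell variable but never exported, so the pkg-config process spawned by ./configure does not see it. A minimal check-and-fix sketch:

        # Make the variable visible to child processes, then re-test from the same shell.
        export PKG_CONFIG_PATH=/local/lib/pkgconfig     # the MSYS path shown above
        pkg-config --modversion gnutls                  # should print the installed version
        pkg-config --atleast-version=2.5.0 gnutls && echo "gnutls >= 2.5.0 found"
        ./configure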

    Read the article

  • How do I make Nginx redirect all requests for files which do not exist to a single php file?

    - by Richard
    I have the following nginx vhost config:

        server {
            listen 80 default_server;
            access_log /path/to/site/dir/logs/access.log;
            error_log /path/to/site/dir/logs/error.log;
            root /path/to/site/dir/webroot;
            index index.php index.html;
            try_files $uri /index.php;

            location ~ \.php$ {
                if (!-f $request_filename) {
                    return 404;
                }
                fastcgi_pass localhost:9000;
                fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
                include /path/to/nginx/conf/fastcgi_params;
            }
        }

    I want to redirect all requests that don't match existing files to index.php. This works fine for most URIs at the moment, for example:

        example.com/asd
        example.com/asd/123/1.txt

    Neither asd nor asd/123/1.txt exists, so both get redirected to index.php, and that works fine. However, if I request example.com/asd.php, nginx looks for asd.php and, when it can't find it, returns 404 instead of sending the request to index.php. Is there a way to get asd.php also sent to index.php if asd.php doesn't exist?

    Read the article

  • Why is SSH finding remote keys for other accounts?

    - by Brian Pontarelli
    This is a strange issue I'm having with SSH from my MacBook Pro to a Linux (Ubuntu 11.10) server. I have a DSA key set up on the remote Linux server under my home directory like this:

        /home/me/.ssh/authorized_keys

    I also have the same DSA key set up for a few other accounts on the machine, named "foo" and "bar". I can log into all of the accounts fine without any password, so the DSA keys are all set up correctly. The strange behavior shows up when debugging the SSH connection. During the connection, the SSH debug output contains this:

        debug2: key: /Users/me/.ssh/id_dsa (0x7f91a1424220)
        debug2: key: /home/foo/.ssh/id_dsa (0x7f91a1425620)
        debug2: key: /home/bar/.ssh/id_rsa (0x7f91a1425c60)
        debug2: key: /Users/me/.ssh/id_rsa (0x0)

    This is strange for so many reasons, but essentially: why is SSH listing keys that look like they live on the server (/home/foo/.ssh/id_dsa and /home/bar/.ssh/id_rsa)? These files don't even exist on the server, so why are they listed? I'm not logging into the "foo" or "bar" accounts, so why is SSH listing those at all? On my MacBook Pro I only have a DSA key, yet SSH is listing an RSA key; what's that all about? Another user on the server doesn't get any of these messages when they log in, and they have the exact same setup for their DSA key and the exact same MacBook Pro setup as mine. Does anyone know what these messages are and why SSH is outputting them?
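
    One hedged reading of that output: the "debug2: key:" lines are the identities the local client intends to offer, not files on the server, and paths like /home/foo/.ssh/id_dsa can appear if an ssh-agent on the Mac is holding keys whose comments record where they were loaded from (for example, if ssh-add was ever run on the server while agent forwarding was enabled). A quick way to check, as a sketch:

        # List every identity the local agent will offer to servers.
        ssh-add -l
        # Or with the full public keys and comments:
        ssh-add -L

        # Offer only the identity named on the command line or in ~/.ssh/config,
        # ignoring whatever extras the agent holds.
        ssh -o IdentitiesOnly=yes -i ~/.ssh/id_dsa me@server -v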

    Read the article

  • Team Foundation Server 2008 - TF220056 Error during installation

    - by David
    I'm attempting to install Team Foundation Server 2008 on a Windows Server 2003 instance that runs under Hyper-V. The SQL Server database itself is held on the root partition of the Hyper-V server and has Reporting Services installed (so I've solved the TF220059 error already). After typing the name of the SQL Server and hitting "Next", I get this error:

        ---------------------------
        Microsoft Visual Studio 2008 Team Foundation Server Setup
        ---------------------------
        TF220056: An unrecoverable error occurred while trying to check the status of the Team Foundation database.
        Installation cannot continue. Check the install log for more details.
        ---------------------------
        OK
        ---------------------------

    The error log's stack trace makes it look like a bug in the TFS installer itself:

        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: System.IO.IOException: The directory name is invalid.
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:   at System.IO.__Error.WinIOError()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:   at System.IO.Path.GetTempFileName()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:   at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.get_Log()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:   at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.Run()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:   at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.CommandLine.RunCommand(String[] args)
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: The directory name is invalid.
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe check failed with error code: 100

    I'm running the installer as the domain Administrator. The server is a Terminal Server in Application Mode; might that be the cause of the problem?
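
    A hedged observation: the failing frame is Path.GetTempFileName(), which points at the installer's temp directory rather than at TFS or SQL Server. On a Terminal Server in Application Mode each session gets its own temp folder, and if %TEMP% points at a path that no longer exists, GetTempFileName throws exactly this "directory name is invalid" error. A quick sketch of checks from the session running setup (paths are examples):

        :: Show where the session's temp variables point and whether the folder exists.
        echo %TEMP%
        echo %TMP%
        dir "%TEMP%"

        :: If the folder is missing, point the session at one that exists and re-run setup.
        mkdir C:\Temp
        set TEMP=C:\Temp
        set TMP=C:\Temp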

    Read the article

  • Black screen on logon (Windows 7 Home Premium)

    - by Blacknight334
    I have been having some trouble with a Dell XPS 15 laptop that I purchased recently; it is under a month old. Upon logging on just after start-up, the computer will sit at a black logon screen (with the mouse still visible and active) for a few minutes. It is extremely annoying, especially when I'm in a rush. So far I have tried updating the drivers and installing all Windows updates, and still nothing. It also doesn't seem to do this when I log into Safe Mode; or if it does, it lasts less than 10 seconds before the desktop loads (in a normal boot it usually takes a few minutes). I have also run a number of the built-in diagnostics but found no errors. I want to avoid doing a system restore for as long as I can. Does anyone know anything that can help? (The laptop has a 500 GB SSD, a 2 GB Nvidia 640M, 8 GB RAM, and a 3rd-gen quad-core i7 with 8 threads.) Thanks.
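
    A hedged suggestion for narrowing this down without a restore: Windows can be told to display verbose status messages during logon, which shows which step (profile load, Group Policy, a particular service) the black screen is actually stuck on. A sketch, assuming an elevated command prompt; VerboseStatus is the standard policy value:

        :: Show detailed "Applying settings..." style messages during logon and logoff.
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v VerboseStatus /t REG_DWORD /d 1 /f

        :: Then check the System and Application event logs for errors around the logon time.
        eventvwr.msc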

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases that all point to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly and log into my mail account on iMail and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying setup via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK that it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?) An easy answer would be to delete my user account entirely, replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or setup an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct work around to copying a user's incoming mail to an outside server and then deleting the local copy w/o removing their account entirely? Or is this just wishful thinking? Thanks in advance. Scott

    Read the article

  • OS X: Storing MySQL data securely, on an encrypted FileVault image using a soft link

    - by GJ
    I am trying to get a MacPorts-installed MySQL to use a data directory stored inside my FileVault-protected home directory. I used

        sudo cp -a /opt/local/var/db/mysql5 ~/db/

    (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link:

        sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5

    However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:

        110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
        110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
        110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
        110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
        110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
        110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
        /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
        110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
        110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
        110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended

    I can't see why MySQL can't create the pid file, since manually creating it as the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
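
    A hedged guess at the cause: Errcode 13 is a plain permissions failure, and with the data directory now under the home folder, the _mysql user has to own the tree and be able to traverse every parent directory (a FileVault-protected home is typically mode 700). The manual touch works from an interactive shell, but if mysqld is started at boot or via launchd, the FileVault home may not even be mounted yet, which produces the same symptom. A quick sketch of checks, assuming the MacPorts _mysql user:

        # Can _mysql actually reach and write to the relocated datadir right now?
        sudo -u _mysql ls ~/db/mysql5
        sudo -u _mysql touch ~/db/mysql5/write_test && rm ~/db/mysql5/write_test

        # Ownership of the copied tree and traverse permission on the parents.
        ls -ld ~ ~/db ~/db/mysql5
        sudo chown -R _mysql:_mysql ~/db/mysql5
        chmod +x ~ ~/db      # grants traversal; note this loosens a 700 FileVault home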

    Read the article

  • pdo_mysql installation

    - by Arsenal
    Hello. On my server I'm trying to install PHP Projekt (6), which requires pdo_mysql. I thought installing pdo_mysql would be fairly straightforward. I tried pecl (pecl install pdo_mysql) after installing the devel packages, but it failed with a "permission denied" error; I worked around that by using directories that were accessible. It then failed with "cannot run C compiled programs". It also says to check config.log for more details, but ironically config.log is automatically removed when the installation fails. When I compile and run a "hello world" .c file myself, it works perfectly. I then downloaded the pdo_mysql source and installed it myself (using configure and make install). This seemed to do the job, but after restarting Apache there is no sign of pdo_mysql anywhere, even though I adjusted my php.ini file. I have read somewhere that you need to recompile PHP with the pdo_mysql option enabled. How does one do that (I'm using CentOS 4)? And isn't there any other way? Thanks!
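
    On CentOS the packaged route is usually simpler than pecl or a recompile: the PDO core and the MySQL driver ship as PHP packages, and a module listing confirms whether the running PHP actually loaded them. A hedged sketch (package names are assumptions and may differ between the stock CentOS 4 PHP and a third-party PHP 5 build, which Projekt 6 generally needs):

        # Install the PDO MySQL driver from packages (names assumed: php-pdo / php-mysql).
        yum install php-pdo php-mysql

        # Verify the module is loaded and see which ini file the CLI reads.
        php -m | grep -i pdo
        php -i | grep -i 'configuration file'

        # Restart Apache so mod_php picks the module up, then re-check phpinfo().
        service httpd restart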

    Read the article

  • CentOS iscsi initiator has session but there is no block device

    - by jcalfee314
    I have installed the scsi-target-utils package on CentOS and used it to perform a discovery. The discovery gave me an active session, and I restarted the iscsi service, but I do not see any new devices (fdisk -l). I can see in /var/log/messages that my connection is operational now, but I'm not sure how to debug this further. Can someone point me toward fixing this?

    Discovery:

        iscsiadm -m discovery -t sendtargets -p 192.168.0.155

    returns:

        192.168.0.155:3260,-1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Just to verify it actually worked, iscsiadm -m session returns:

        tcp: [1] 192.168.0.155:3260,1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Restarting, as the directions say to do (service iscsi restart), writes this to /var/log/messages:

        Stopping iscsi: Sep 20 12:14:22 localhost kernel: connection1:0: detected conn error (1020)  [ OK ]
        Starting iscsi: Sep 20 12:14:22 localhost kernel: scsi1 : iSCSI Initiator over TCP/IP
        Sep 20 12:14:22 localhost iscsid: Connection1:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is shutdown.
        Sep 20 12:14:22 localhost iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.  [ OK ]
        [root@db iscsi]# Sep 20 12:14:23 localhost iscsid: Connection2:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is operational now

    I also ran a login command, which produced no errors and logged nothing:

        iscsiadm -m node -T iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 -p 192.168.0.155 -l

    Next I compared the output of "fdisk -l | egrep dev" both with the iscsi session and without. There is no difference. I suppose I could just look in /etc/mtab. Any ideas on how I can get an iscsi device?
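
    One hedged avenue: a logged-in session does not guarantee a LUN; the target has to export a volume to this initiator's IQN, and the SCSI bus sometimes needs a rescan before the block device shows up. If the session detail shows no attached disks at all, the access-control or volume mapping on the CloudArray side is the likely gap. A sketch of the checks I would run with open-iscsi:

        # Show session details; look for an "Attached scsi disk sdX" line at the end.
        iscsiadm -m session -P 3

        # Ask the initiator to rescan its sessions for new LUNs.
        iscsiadm -m session --rescan

        # Watch for new sd* devices appearing.
        dmesg | tail -20
        cat /proc/partitions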

    Read the article

  • How can I recover my system after running 'mkfs' on the system partition?

    - by Filip Podgórny
    I am not a Linux user. I was doing some homework and blindly typed

        sudo mkfs ext3 dev/sda2

    (I had Ubuntu installed from inside my Windows installation). I did a few more things, then shut Ubuntu down to switch back to Windows. "No operating system installed" is the message I'm now getting. I plugged my HDD into another computer and all my files are still there. What should I do to get my Windows installation back?

    df -l (before mkfs):

        /dev/loop0    29G  2,0G   27G   8% /
        udev         3,0G  4,0K  3,0G   1% /dev
        tmpfs        1,2G  900K  1,2G   1% /run
        none         5,0M     0  5,0M   0% /run/lock
        none         3,0G  1,3M  3,0G   1% /run/shm
        /dev/sda3    455G  123G  333G  27% /host
        /dev/sdb1    1,9G  820M  1,1G  43% /media/PHONE CARD

    mkfs output (translated from Polish):

        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        25688 inodes, 102400 blocks
        5120 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67371008
        13 block groups
        8192 blocks per group, 8192 fragments per group
        1976 inodes per group
        Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729
        Writing inode tables: done
        Creating journal (4096 blocks): done
        Writing superblocks and filesystem accounting information: done
        This filesystem will be automatically checked every 30 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
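
    A hedged reading of that output: the new filesystem is only 102400 blocks of 1 KiB (about 100 MB), and with Ubuntu running from /dev/loop0 inside /host on /dev/sda3 (a Wubi-style install), /dev/sda2 was very likely Windows 7's small "System Reserved" boot partition rather than the data partition. That would explain both "No operating system installed" and the data still being intact. If so, the usual route is to boot the Windows 7 install/repair DVD and rebuild the boot files from its recovery command prompt; a sketch:

        :: From the Windows 7 DVD: Repair your computer -> Command Prompt
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

        :: If bootrec cannot find the installation, Startup Repair (run two or three times)
        :: or bcdboot can recreate the boot files from the Windows partition:
        bcdboot C:\Windows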

    Read the article

  • Export-Mailbox Error

    - by tuck918
    All, I am using export-mailbox to move some data, and it works fine until I get this error:

        StatusMessage : Error occurred in the step: Moving messages. Failed to copy messages to the
        destination mailbox store with error: MAPI or an unspecified service provider.
        ID no: 00000000-0000-00000000

    This is the command I am using:

        export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA -allowmerge

    We are on SP2, and I am running this under an account that is not a domain or enterprise admin. The account has the Exchange Server Administrator permission on both the source and target Exchange mailbox servers, is a member of the local Administrators group on both servers, and has Full Access permission on both the target and source servers. The issue happens at any time, and I am only running this against one mailbox, the only mailbox I need to run it on. The event log shows "Error Exchange Migration Export Mailbox Event 1008". The migration log just shows that it was running okay and then gives the same error as above:

        Error was found for mailboxA ([email protected]) because: Error occurred in the step: Moving messages.
        Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider.
        ID no: 00000000-0000-00000000, error code: -1056749164

    Any ideas on what to do or try?

    Read the article

  • Munin graphing by CGI

    - by Vaughn Hawk
    I have Munin working just fine, but any time I try to do CGI graphing it just stops graphing: no errors in the log, nothing. I've followed the instructions here: http://munin-monitoring.org/wiki/CgiHowto - and it should be working. Here's my munin.conf setup, at least the parts that matter:

        dbdir /var/lib/munin
        htmldir /var/www/munin
        logdir /var/log/munin
        rundir /var/run/munin
        tmpldir /etc/munin/templates
        graph_strategy cgi
        cgiurl /usr/lib/cgi-bin
        cgiurl_graph /cgi-bin/munin-cgi-graph

    And then the host info, yada yada. graph_strategy cgi and cgiurl are commented out in munin.conf above because if I uncomment them, graphing stops working. Again, I get no errors in the logs, just blank images where the graphs used to be. Comment out cgi, and as soon as munin html runs again everything is back to normal. I'm running the latest version of munin and munin-node, I've tried FastCGI and regular CGI, and permissions for all of the directories involved are munin:www-data. My httpd.conf looks like this:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory /usr/lib/cgi-bin/>
            AllowOverride None
            SetHandler fastcgi-script
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        <Location /cgi-bin/munin-cgi-graph>
            SetHandler fastcgi-script
        </Location>

    Does anyone have any ideas? Without this working, at least from what I understand, Munin just graphs everything even if no one is looking at the graphs; you add 100 servers to graph, and this starts to become a problem. I hope someone has run into this and can help me out. Thanks!
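
    When CGI graphing yields blank images with nothing in the Munin log, the failure is often on the web-server side: the graph script runs as the Apache user and cannot write its own log or image cache. A hedged sketch of checks (the paths are the usual Debian/Ubuntu locations and may differ here):

        # Apache's error log usually has the real message when the graph CGI dies.
        tail -50 /var/log/apache2/error.log

        # The CGI runs as the web server user; it needs to write its log and cgi-tmp cache.
        ls -ld /var/log/munin /var/lib/munin/cgi-tmp
        sudo -u www-data touch /var/log/munin/munin-cgi-graph.log
        sudo chown -R munin:www-data /var/lib/munin/cgi-tmp
        sudo chmod -R g+w /var/lib/munin/cgi-tmp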

    Read the article

  • Cannot access SQL Server database remotely

    - by btrey
    I'm trying to convert an Access database to use a SQLServer backend. I've upsized the database and everything works on the server, but I'm unable to access it remotely. I'm running SQLServer Express 2005 on Windows Server 2003. The server is not configured as a domain controller, nor connected to a domain. The computers I'm trying to access the server from are part of a domain, but there are no local domain controllers. I'm at a remote location and the computers are configured and connected to the domain at the home office, then shipped to us. We normally log in with cached credentials and VPN into the home office when we need to access the domain. I can use Remote Desktop Connection to access the 2k3 server which is running SQLServer. If I log into the server with my username, I can bring up the database, access it via the Trusted Connection, and the database works. If I try to run the database locally, however, I get the Server Login dialog box. I can not use a Trusted Connection because my local login is to the home office domain and is not recognized by the SQLServer machine. If I try to use the username/password that is local to the SQLServer, I get a login failed error. I've tried entering the username as "username", "workgroup/username" (where "workgroup" is the name of the workgroup on the SQLServer), "sqlservername/username" and "[email protected]" where "1.2.3.4" is the IP of the SQLServer. In all cases, I get a login failed error. As I said, I can login to the server via Remote Desktop Connection with the same username and password and use the database, so permissions for the username appear to be correct for both a remote connection and for database access. Not sure where to go from here and any assistance would be appreciated.
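
    A hedged checklist for this symptom (RDP works, but a remote SQL login fails): SQL Server Express 2005 ships with TCP/IP disabled and Windows authentication only, and a SQL login is specified on its own rather than prefixed with a workgroup or machine name, so "Mixed Mode" authentication, the TCP/IP protocol, and (for a named instance) the SQL Browser service all need to be enabled on the server. A sketch of what I would verify from one of the client machines, assuming the default "SQLEXPRESS" instance name and a placeholder login:

        :: Named instances resolve through the SQL Browser service; test with the instance name...
        sqlcmd -S 1.2.3.4\SQLEXPRESS -U sqlLoginName -P password -Q "SELECT @@VERSION"

        :: ...or directly against a fixed TCP port if the Browser/UDP 1434 is blocked.
        sqlcmd -S tcp:1.2.3.4,1433 -U sqlLoginName -P password -Q "SELECT @@VERSION"

        :: On the server: confirm SQL Server is listening and the port passes the firewall.
        netstat -ano | findstr 1433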

    Read the article

  • Apache taking up a lot of CPU while running request-tracker4

    - by bhowmik
    I am trying out a request-tracker installation on an EC2 micro instance. The specs are as follows:

        1) Ubuntu 12.04 64-bit, 613 MB RAM, 8 GB hard drive
        2) request-tracker 4.0.4 from the repository, Perl 5.14.2, Apache2, MySQL 5
        3) request-tracker 4.0.4 running with mod_perl2 and the Worker MPM
        4) Apache configured with the Worker MPM; config snippet below

        Timeout 150
        KeepAlive On
        MaxKeepAliveRequests 60
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

    When I start Apache2 it works fine for some time, then after a while the CPU load shoots up to 99% or more; usually it is one or more Apache processes doing this. I've tried modifying the worker module configuration without any luck. The log files for both Apache2 and request-tracker4 are set to debug and don't show anything to indicate what could be causing this. The system gets a maximum of 5 users at any given time, and usually (90% of the time) it is just 2. I've only just installed it and we have just 20 tickets in the database. I don't think it's memory that's causing the issue, since the server isn't swapping or even close to it, and I hardly see the memory usage go up. I would appreciate any pointers on how to go about troubleshooting this. In case it helps, I've also tried a similar installation on a small instance (identical settings except RAM bumped up to 1.7 GB) and I still see the issue.

    Read the article

  • Ubuntu Server attack? how to solve?

    - by saky
    Hello. Something (or someone) is sending out UDP packets from our whole IP range. This seems to be multicast DNS. Our server host provided this (our IP address is masked with XX):

        Jun  3 11:02:13 webserver kernel: Firewall: *UDP_IN Blocked* IN=eth0 OUT= MAC=01:00:5e:00:00:fb:00:30:48:94:46:c4:08:00 SRC=193.23X.21X.XX DST=224.0.0.251 LEN=73 TOS=0x00 PREC=0x00 TTL=255 ID=0 DF PROTO=UDP SPT=5353 DPT=5353 LEN=53
        Jun  3 11:02:23 webserver kernel: Firewall: *UDP_IN Blocked* IN=eth0 OUT= MAC=01:00:5e:00:00:fb:00:30:48:94:46:c4:08:00 SRC=193.23X.21X.XX DST=224.0.0.251 LEN=73 TOS=0x00 PREC=0x00 TTL=255 ID=0 DF PROTO=UDP SPT=5353 DPT=5353 LEN=53
        Jun  3 11:02:32 webserver kernel: Firewall: *UDP_IN Blocked* IN=eth0 OUT= MAC=01:00:5e:00:00:fb:00:30:48:94:46:c4:08:00 SRC=193.23X.21X.XX DST=224.0.0.251 LEN=73 TOS=0x00 PREC=0x00 TTL=255 ID=0 DF PROTO=UDP SPT=5353 DPT=5353 LEN=53
        Jun  3 11:02:35 webserver kernel: Firewall: *UDP_IN Blocked* IN=eth0 OUT= MAC=01:00:5e:00:00:fb:00:30:48:94:46:c4:08:00 SRC=193.23X.21X.XX DST=224.0.0.251 LEN=73 TOS=0x00 PREC=0x00 TTL=255 ID=0 DF PROTO=UDP SPT=5353 DPT=5353 LEN=53

    I also checked my /var/log/auth.log file and found that someone from China (using an IP locator) was trying to get into the server over SSH:

        ...
        Jun  3 11:32:00 server2 sshd[28511]: Failed password for root from 202.100.108.25 port 39047 ssh2
        Jun  3 11:32:08 server2 sshd[28514]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.100.108.25 user=root
        Jun  3 11:32:09 server2 sshd[28514]: Failed password for root from 202.100.108.25 port 39756 ssh2
        Jun  3 11:32:16 server2 sshd[28516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.100.108.25 user=root
        ...

    I have blocked that IP address using this command:

        sudo iptables -A INPUT -s 202.100.108.25 -j DROP

    However, I have no clue about the UDP multicasting. What is doing this? Who is doing it? And how can I stop it? Anyone know?
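
    Two hedged observations on the logs above: traffic to 224.0.0.251 on UDP port 5353 is ordinary mDNS/Bonjour (Avahi) announcement traffic from other machines in the same address range rather than an attack, and the auth.log entries are routine SSH brute-force attempts against root. A sketch of the usual mitigations, assuming an Ubuntu server:

        # Ban repeated SSH failures automatically instead of adding one-off iptables rules.
        sudo apt-get install fail2ban

        # Harden sshd (set "PermitRootLogin no" in /etc/ssh/sshd_config, ideally keys only), then reload it.
        sudo service ssh reload

        # If this host itself is multicasting, avahi-daemon is the usual source; otherwise the
        # packets come from neighbours and the firewall is already dropping them, which is fine.
        sudo service avahi-daemon stop

        # Note: rules added with "iptables -A" are not persistent across reboots; save them.
        sudo sh -c 'iptables-save > /etc/iptables.rules'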

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have been having some users reporting slow response times, while others (including our resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out) that solved the bulk of our issues, but we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, css, javascript and other static content. We've checked all Deny / Allow statements for reoccurrence of "none", as well as all other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, using %a instead of %h in our log format configuration) as well as verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for folks having this problem do not time out - so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing that would cause request times for static content to take so long only for certain users? Thank you in advance.
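
    One hedged note: response times clustering around 300 seconds suggest the server is sitting in a wait until Apache's default Timeout (300s) fires, rather than slowly transferring data. It can help to split the request into phases from an affected client and, if mod_status is available, watch what the matching worker is doing while the request hangs. A sketch using standard curl timing variables (the URL is a placeholder):

        # Break the response time into DNS, connect, first-byte and total, from an affected client.
        curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
             http://www.example.com/static/logo.png

        # On the server, if mod_status is enabled, inspect the busy workers during a slow request.
        apachectl fullstatus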

    Read the article

  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain I figured out how to make this work with Apache 2.4 on Windows. Here is my configuration, which works successfully for git clone, push and everything else.

    The problem: my configuration works, and there is a "Require all denied" on the / directory because I want only git functionality and nothing else. Example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    Error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error at the git client; everything works fine, but I get this in the error log. Is there any solution to stop this error from appearing? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"

            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>

            <Directory />
                Options All
                Require all denied
            </Directory>

            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    Am using Nginx as a reverse proxy to an Apache server that uses HTTP Auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache, it seems to get filtered out by Nginx. Hence, no requests can authenticate. Note that the Basic auth is dynamic so I don't want to hard-code it in my nginx config. My nginx config is: server { listen 80; server_name example.co.uk ; access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log; gzip on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 120; location / { proxy_pass http://localhost:81/; } location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ { if (!-f $request_filename) { break; proxy_pass http://localhost:81; } root /var/www/example; } } Anyone know why this is happening? Update - turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question here is a Django site, and it turns out that Apache does get the auth variables passed through, however mod_wsgi filters them out. The resolution is to use: WSGIPassAuthorization On See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details

    Read the article

  • Using gitlab behind Apache proxy all generated urls are wrong

    - by Hippyjim
    I've set up GitLab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/

    {edit to clarify} I've set up Apache to proxy to the nginx server the package installed, which runs on port 8888 (or so I thought). As I already had Apache installed, I have to run nginx on localhost:8888. The problem is that all images (such as avatars) are now served from http://localhost:8888, and all the checkout URLs GitLab gives are also localhost instead of using my domain name. If I change /etc/gitlab/gitlab.rb to use that URL, then GitLab stops working and gives a 503.

    Any ideas how I can tell GitLab what URL to present to the world, even though it's really running on localhost?

    /etc/gitlab/gitlab.rb looks like:

        # Change the external_url to the address your users will type in their browser
        external_url 'http://my.local.domain'
        redis['port'] = 6379
        postgresql['port'] = 2345
        unicorn['port'] = 3456

    and /opt/gitlab/embedded/conf/nginx.conf looks like:

        server {
            listen localhost:8888;
            server_name my.local.domain;

    [Update] It looks like nginx is still listening on the wrong port if I don't specify localhost:8888 as the external_url. I found this in /var/log/gitlab/nginx/error.log:

        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: still could not bind()

    The Apache setup looks like:

        <VirtualHost *:80>
            ServerName my.local.domain
            ServerSignature Off

            ProxyPreserveHost On
            AllowEncodedSlashes NoDecode

            <Location />
                ProxyPass http://localhost:8888/
                ProxyPassReverse http://127.0.0.1:8888
                ProxyPassReverse http://my.local.domain
            </Location>
        </VirtualHost>

    This seems to proxy everything back OK if GitLab listens on localhost:8888; I just need GitLab to start presenting the right URL instead of localhost:8888.
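
    A hedged sketch of the usual approach with the Omnibus package: keep external_url set to the public name (that is what GitLab uses to build avatar and clone URLs), move the bundled nginx off port 80 so it stops colliding with Apache, and let gitlab-ctl regenerate the configs rather than editing /opt/gitlab/embedded/conf/nginx.conf by hand (reconfigure overwrites it). The nginx['listen_port'] key is an assumption for this package version; if it is not supported, disabling the bundled nginx and proxying Apache straight to the Unicorn port is the fallback.

        # /etc/gitlab/gitlab.rb (sketch)
        #   external_url 'http://my.local.domain'
        #   nginx['listen_port'] = 8888     # assumed option: serve on 8888 behind Apache

        # Apply the change and restart the bundled services.
        sudo gitlab-ctl reconfigure
        sudo gitlab-ctl restart

        # Confirm what is actually listening where.
        sudo netstat -tlnp | grep -E ':80 |:8888'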

    Read the article

  • dovecot/postfix: can send & receive via Webmin, however SquirrelMail and Outlook fail to connect

    - by Jonathan
    I just finished setting up Dovecot and Postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through Webmin (I can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors:

        Connected to xxx.xxx.xx.xxx.
        Escape character is '^]'.
        +OK Dovecot ready.
        USER mailtest
        +OK
        PASS *********
        +OK Logged in.
        -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
        Connection closed by foreign host.

    which in turn logs the following errors:

        dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
        dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird, Live Mail, etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource and tried every suggestion for my dovecot.conf file, but so far nothing seems to work. I feel like it may be a permissions/ownership issue, but I'm lost as to the specifics.
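
    The "stat(/home/mailtest/MailDir/cur) failed: Permission denied" line does point at permissions: the Dovecot process serving the mailbox has to own the Maildir tree and be able to traverse the home directory above it. A minimal sketch of checks and the common fix, assuming a per-user Maildir at ~/MailDir; on CentOS it is also worth ruling out SELinux denials, which produce the same symptom:

        # Who owns the mailbox tree, and can the user actually reach it?
        ls -ld /home/mailtest /home/mailtest/MailDir /home/mailtest/MailDir/cur
        su - mailtest -s /bin/bash -c 'ls /home/mailtest/MailDir/cur'

        # Common fix: make the user own the Maildir and keep the home directory traversable.
        chown -R mailtest:mailtest /home/mailtest/MailDir
        chmod -R u+rwX /home/mailtest/MailDir

        # SELinux check: enforcing mode plus dovecot denials in the audit log would explain this too.
        getenforce
        grep dovecot /var/log/audit/audit.log | tail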

    Read the article

  • Tracking down the cause of a web service fault running in IIS

    - by BG100
    I have built a web service in Visual Studio 2008 and deployed it on IIS 7 running on Windows Server 2008 R2. It has been extensively tested, handles all errors gracefully, and logs any uncaught errors to a file using log4net. The system normally runs perfectly, but occasionally (2 or 3 times a day) a fault occurs and screws up the application, which then needs an iisreset to get it working again. When the fault occurs I get some random errors that are caught by my catch-all error log, such as:

        Value cannot be null.
        Could not convert from type 'System.Boolean' to type 'System.DateTime'.
        Sequence contains no elements

    These errors are raised on every request until I manually do an iisreset, but there is no sign of them when the system is running normally. The web service is a stateless request-response application; nothing is stored in the session. It could be load related, but I doubt it, as there are only around 30 or 40 requests per minute. This is proving very tricky to track down. I can't work out what is putting IIS into this bad state and why it needs an iisreset to get it working again. Nothing is reported in the event log. Can anyone suggest how to enable more extensive logging that might catch this fault? Also, does anyone know what might be causing the problem?

    Read the article
