Search Results

Search found 17537 results on 702 pages for 'doctrine query'.

Page 607/702 | < Previous Page | 603 604 605 606 607 608 609 610 611 612 613 614  | Next Page >

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a linux prod machine and a Win7 64bit dev machine. My workflow includes dumping the production MySQL database on the linux machine and restoring it in my local MySQL database on the windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error: Query: CREATE TABLE `auth_group` ( `id` int(11) NOT NULL auto_increment, `name` varchar(80) collate utf8_unicode_ci NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci Error occured at:2010-06-26 17:16:14 Line no.:30 Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121) Notice that this is the first create table statement in the sql dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48 and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
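    A possible first diagnostic step (not a confirmed fix): errno 121 on CREATE TABLE usually points to a name collision inside InnoDB's internal data dictionary, most often a foreign-key constraint name that already exists in another schema, left over from an earlier import. The sketch below, assuming mysql-connector-python and placeholder credentials, simply lists every foreign-key constraint name on the server so collisions with the names in the dump can be spotted.

        # Sketch: list existing FOREIGN KEY constraint names across all schemas,
        # since a duplicate constraint name is a common cause of errno 121.
        # Assumes mysql-connector-python; host/user/password are placeholders.
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="root", password="secret")
        cur = conn.cursor()
        cur.execute(
            "SELECT constraint_schema, table_name, constraint_name "
            "FROM information_schema.table_constraints "
            "WHERE constraint_type = 'FOREIGN KEY' "
            "ORDER BY constraint_name"
        )
        for schema, table, name in cur.fetchall():
            print(name, "->", schema + "." + table)
        cur.close()
        conn.close()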

    Read the article

  • nginx error page and internal directives not working as expected

    - by Romain
    I'd like to setup my nginx server to return a specific error page on HTTP 50x status codes, and I'd like this page to be unavailable by a direct request from users (e.g., http//mysite/internalerror). For that, I'm using nginx's internal directive, but I must be missing something, as when I put that directive on my /internalerror location, nginx returns a custom 404 error (which isn't even my own 404 error page) when a page crashes. So, to summarize, here's what seems to happen: GET /Home nginx passes the query to Python I'm simulating an application bug to get the 502 error code nginx tries to return /InternalError from its error_page rule because of the internal rule, it finally fails back to a custom 404 error code <-- why? the documentation says error_page directives are not concerned by internal: http://wiki.nginx.org/HttpCoreModule#internal Here's an extract from nginx.conf with a few comments to point things out: error_page 404 /NotFound; error_page 500 502 503 504 =500 /InternalError; # HTTP 500 Error page declaration location / { try_files /Maintenance.html $uri @pythonbackend; } location @pythonbackend { include uwsgi_params; uwsgi_pass unix:///tmp/uwsgi.sock; } location ~* \.(py|pyc)$ { # This internal location works OK and returns my own 404 error page internal; } location /__Maintenance.html { # This one also works fine internal; } location ~* /internalerror { # This one doesn't work and returns nginx's 404 error page when I trigger an error somewhere on my site internal; } Thanks very much for your help!!

    Read the article

  • How to debug slow queries in Django+Postgres

    - by lacker
    My database queries from Django are starting to take 1-2 seconds and I'm having trouble figuring out why. Not too big a site, about 1-2 requests per second (that hit Django; static files are just served from nginx.) The thing that confuses me is, I can replicate the slowness in the Django shell using debug mode. But when I issue the exact same queries at an sql prompt they are fast. It takes about a second for a query to return, but when I check connection.queries it reports the time as under 10 ms. Here's an example (from the Django shell): >>> p = PlayerData.objects.get(uid="100000521952372") >>> a = time.time(); p.save(); print time.time() - a 1.96812295914 >>> for d in connection.queries: print d["time"] ... 0.002 0.000 0.000 How can I figure out where this extra time is being spent? I'm using Apache+mod_wsgi in daemon mode, but this happens with just the django shell as well, so I figure it is not apache-related.
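    One way to narrow this down (a sketch, not a verified diagnosis): since connection.queries only records SQL execution time, profiling a single save() in the same Django shell shows whether the missing ~2 seconds is spent in the database driver, in signal handlers, or somewhere else entirely. The import path for PlayerData below is hypothetical.

        # Sketch: profile one save() call in the Django shell; connection.queries
        # already shows the SQL itself is fast, so look for where the rest of the
        # wall-clock time goes (driver, signals, connection setup, etc.).
        import cProfile
        import io
        import pstats

        from myapp.models import PlayerData  # hypothetical app/module name

        p = PlayerData.objects.get(uid="100000521952372")

        profiler = cProfile.Profile()
        profiler.enable()
        p.save()
        profiler.disable()

        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(20)
        print(out.getvalue())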

    Read the article

  • Change Windows Authentication user for Sql Server Management Studio

    - by Asmor
    We're using Sql Server 2005 with Windows Authentication setup. So normally, when you log in using e.g. Sql Server Management Studio, it forces you to log in at MACHINE_NAME\Username. Anyways, on this one particular computer, the person said they had to make a new account called User01 to do something and showed me where she'd created it under security in the "master" system database. And so now when she logs in, it's listed as MACHINE_NAME\User01 (not the actual Windows user name). It's still set to Windows Authentication, though, and I'm unable to change the login name. Now here's where the real problem comes in... I didn't realize that she was being logged in under this user name at the time, and I disabled it to see what would happen. Now I can't log into the server under her account. I created a new account in Windows called test, and as expected SSMS had the username as MACHINE_NAME\test, and I was able to log in fine. However, the area where the User01 account was listed is not visible to me as far as I can tell and so I can't reenable it. I also tried running the following query: alter login User01 ENABLE And got this error: Msg 15151, Level 16, State 1, Line 1 Cannot alter the login 'User01', because it does not exist or you do not have permission. So in a nutshell, ideally I'd like to reenable User01 somehow, just to get things back to where they used to be. Failing that, how can I force SSMS to log in using the Windows account name as it should be, rather than trying to use User01?
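    For reference, a hedged sketch of re-enabling the login from a connection that does have ALTER ANY LOGIN rights (for example sa, or another sysadmin account); the connection string and password are placeholders, and this only helps if such an account is still reachable.

        # Sketch: check whether the login exists and re-enable it, using an
        # account with ALTER ANY LOGIN (e.g. sa). Connection details are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=MACHINE_NAME;DATABASE=master;UID=sa;PWD=secret"
        )
        conn.autocommit = True
        cur = conn.cursor()

        cur.execute(
            "SELECT name, is_disabled FROM sys.server_principals WHERE name = ?",
            "User01",
        )
        row = cur.fetchone()
        print(row)  # None means the login really does not exist on this instance

        if row and row.is_disabled:
            cur.execute("ALTER LOGIN [User01] ENABLE")
        conn.close()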

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn) (php fpm). CentOs 5.8 64 The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx. And I find these errors in the log: in php-fpm log: WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start in nginx log: ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source for example is one of the scripts that works perfectly on my other machines, but causes PHP to crash. I found: http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it. +++ UPDATE +++ Now I have been doing some debugging in 1 of the scripts that is playing up. If you go to line 808 http://pastebin.com/gSnzRtXb it runs the curl_exec() command. When that is ran, it crashes. If i echo'test';exit; just above that line, it echo's correctly, if i do it below that line, php crashes. Which means it's that line 808 which causes the crash. So I made a very simple script to do some testing: http://pastebin.com/Rshnyhcm which also uses curl_exec, but that runs just fine. So I started to dig deeper into that query from the facebook script to see what values the $opts array contains from line 806. Output of that array is: http://pastebin.com/Cq9ffd3R What the problem is, I still have no clue :(

    Read the article

  • How would it be possible to discover a cable modem's MAC remotely?

    - by amateurenthusiast
    i was reading the back archives of a canadian privacy law blog, and he linked to a judicial decision. apparently as part of an investigation in which were used yahoo chat and google's old 'hello' image trading program the officer was able to determine a suspect's modem's MAC address: In order to determine who STEPHTOSH was, the officer did a trace on a programme called WHO IS in an effort to learn from where STEPHTOSH was coming. WHO IS is a command program available to the public. The officer was able to ascertain that the person using the name STEPHTOSH was a Rogers Internet customer. The officer was able to obtain the Internet Protocol address, also known as the I.P. There is only one location for an I.P., which is unique to that subscriber. By use of the website known as DNS STUFF.com, one is able to find with which company this I.P. is registered. It was ascertained that the I.P. address used by STEPHTOSH was registered to Rogers Cable, from the Toronto area. The officer also learned the Cable Modem MAC address used by STEPHTOSH. This was all the information the officer was able to amass. now it was my understanding that the MAC address of any given device can only be accessed if you're only one 'hop' away on the Internet. the suspect in question was in Markham and the officer part of the Toronto Police, so it's conceivable that they both might have used Rogers internet. but would that still put them only one 'hop' away from each other? i thought the first hop after the modem was usually the ISP? and if he'd used a netBIOS query against this guy's machine it would return the ethernet card's MAC, not the modem's. so is this guy on the same rogers subnet as the suspect's cable modem, is that functionality part of google's Hello (i could only think that it would be possible if Hello operated as a virtual LAN or something), does the officer have remote access to the arp caches of the routers at Rogers or is he just full of crap and lying to make his case stronger?

    Read the article

  • Batch file to create many files with special characters

    - by MollyO
    Essential info: I have a file "DB_OUTPUT.TXT" with 304 lines that I need to turn into 304 files (one per line). Each line contains many special characters and may be up to tens of thousands of characters long. For these reasons, I'm having difficulty using a cmd.exe batch file (which limits the amount of input) and the echo command (which would try to execute each special character, short of me having to escape them all). I also have a file "DB_OUTPUT_FILENAMES.TXT" containing a distinct filename for each line-soon-to-be-file from "db_output.txt". So line 1 of DB_OUTPUT.TXT needs to be the body of a new file with a name equal to line 1 of DB_OUTPUT_FILENAMES.TXT. Extra info: As you may have guessed, DB_OUTPUT.TXT is output from a database; it contains 304 records with 6 or 7 columns at a fixed width with the last column being a SQL query. Each of these lines (db records) will be used as a script to create new database objects, which is why the special characters need to be preserved. Question: Is there a way to do this in a batch-like fashion? I'd be happy with either a Windows solution or a Linux one.
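    A minimal cross-platform sketch of the split (assuming DB_OUTPUT_FILENAMES.TXT holds one plain filename per line, in the same order as DB_OUTPUT.TXT): reading the record file in binary and writing each line out verbatim sidesteps the escaping problems that cmd.exe and echo would cause.

        # Sketch: write line N of DB_OUTPUT.TXT to the file named on line N of
        # DB_OUTPUT_FILENAMES.TXT. Binary mode keeps every special character intact.
        with open("DB_OUTPUT.TXT", "rb") as bodies, \
             open("DB_OUTPUT_FILENAMES.TXT", "r") as names:
            for body, name in zip(bodies, names):
                name = name.strip()
                if not name:
                    continue
                with open(name, "wb") as out:
                    out.write(body.rstrip(b"\r\n"))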

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    We installed a clustered SQL 2005 installation on windows 2008 and reattached our san drives from another machine and restored to do a migration to new hardware. There have been a few minor issues, but this one has me stuck. Trying to populate Full-Text indexes is not working. I create a basic table with some simple text in a new database and get the same results as old indexes. 2010-09-27 10:30:46.85 spid19s Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1. 2010-09-27 10:31:15.36 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it. 2010-09-27 10:31:15.37 spid19s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'. 2010-09-27 10:31:15.37 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it. The rebuild/repopulate procedure finishes, but I get zero rows in the index. The .dll in the message is present and the service accounts have access to this. My FTData also has data in it, so it seems there wouldn't be permission issue on this folder. Application throws this error: “PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog 'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php on line 154” A microsoft discussion is the only post I found which had claimed to fix this - said it was registry related, but then didn't post the fix.
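    Since the application-side error explicitly suggests dropping and re-creating the catalog, one hedged recovery sketch is below (via pyodbc; the catalog name, column name and key index name are placeholders, not taken from the real schema, and this may not address the underlying MSFTE path problem).

        # Sketch: drop and re-create the full-text catalog and index that the
        # error reports as unusable. Object names other than the table are
        # placeholders; run from a sysadmin connection on the new cluster.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=NEWCLUSTER;DATABASE=SQL_DBA;Trusted_Connection=yes"
        )
        conn.autocommit = True
        cur = conn.cursor()

        cur.execute("""
            IF EXISTS (SELECT 1 FROM sys.fulltext_indexes
                       WHERE object_id = OBJECT_ID('dbo.CIS_Report_Executions'))
                DROP FULLTEXT INDEX ON dbo.CIS_Report_Executions
        """)
        cur.execute(
            "IF EXISTS (SELECT 1 FROM sys.fulltext_catalogs WHERE name = 'CIS_Catalog') "
            "DROP FULLTEXT CATALOG CIS_Catalog"
        )
        cur.execute("CREATE FULLTEXT CATALOG CIS_Catalog")
        cur.execute("""
            CREATE FULLTEXT INDEX ON dbo.CIS_Report_Executions (ReportText)
                KEY INDEX PK_CIS_Report_Executions ON CIS_Catalog
        """)
        conn.close()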

    Read the article

  • PHPMyAdmin: "General relation features: Disabled"

    - by Simón
    I've been looking around for something like this for a while, and I've found some tips on similar issues, but not exactly the same. I really don't know what to do. I downloaded and installed WAMP, and I have a MySQL and PHPMyAdmin setup according to common indications that can be found everywhere (securing MySQL root account, etc.). When I log into PHPMyAdmin (either as root or as pma), I see the following message at the bottom of the page: The additional features for working with linked tables have been deactivated. To find out why click here. And when following the link, got a page with the following: Server: localhost $cfg['Servers'][$i]['pmadb'] ... OK $cfg['Servers'][$i]['relation'] ... OK General relation features: Disabled $cfg['Servers'][$i]['table_info'] ... OK Display Features: Disabled $cfg['Servers'][$i]['table_coords'] ... OK $cfg['Servers'][$i]['pdf_pages'] ... OK Creation of PDFs: Disabled $cfg['Servers'][$i]['column_info'] ... OK Displaying Column Comments: Disabled Bookmarked SQL query: Disabled Browser transformation: Disabled $cfg['Servers'][$i]['history'] ... OK SQL history: Disabled $cfg['Servers'][$i]['designer_coords'] ... OK Designer: Disabled Somebody please explain to me, why the heck if all settings are "OK" the features remain "Disabled"? Note: at first all the settings were "not OK" and I managed to add the settings to config.inc.php, and then created the tables using scripts/create_tables.php. Of course I have already tried restarting the server or clearing the browser cache (several times, so I am sure the problem comes elsewhere).

    Read the article

  • Bad font anti-aliasing in Ubuntu

    - by Juliano
    I'm switching from Fedora 8 to Ubuntu 9.04, and I can't seem to get good font anti-aliasing to work. It seems that Ubuntu's fontconfig tries to keep characters in integral pixel widths. This makes text more difficult to read, when 1 pixel is too thin and 2 pixels is too thick. Check the image below. In Fedora, when fontconfig anti-aliasing is enabled, fonts have their thickness proportional to the font size. Below, the thickness is different for 8, 9 and 10pt sizes. In Ubuntu, on the other hand, even when anti-aliasing is enabled, all 8, 9 and 10pt sizes have 1 pixel thickness. This makes reading large amounts of text difficult. I'm using the very same home directory, and I already checked that X resources are the same in both systems: ~% xrdb -query | grep Xft Xft.antialias: 1 Xft.dpi: 96 Xft.hinting: 1 Xft.hintstyle: hintfull Xft.rgba: none GNOME settings: ~% gconftool-2 -a /desktop/gnome/font_rendering antialiasing = grayscale hinting = full dpi = 96 rgba_order = rgb So, the question is: What should I change in the new box (Ubuntu) in order to get anti-aliasing like in the old box (Fedora)?

    Read the article

  • MySQL Server hitting 100% unexpectedly (Amazon AWS RDS)

    - by Luc
    Please help! We've been struggling with this one for months. This week we upped our RDS instance to the highest performing instance and although the occurrences have reduced, we're still having our DB all of a sudden hit 100%. It comes out of nowhere. Sometimes 2am, sometimes midday. I've ruled out a DoS - our page access logs show normal traffic. I've ruled out memcached suddenly dying (hits and misses continue as normal). The SHOW PROCESSLIST while we have issues reports about 500 queries in queue. If I kill them off or restart the server, they just keep coming back, and then eventually, out of nowhere, the server returns to normal. Sometimes it takes up to 3 hours. Our badly performing queries take .02 seconds to execute when the server eventually returns to normal, but while we're in this 100% CPU psycho phase, those queries never finish executing. Please help!!!!! Anybody know anything about MySQL query optimization? Could it be the server deciding to use different indexes all of a sudden, which puts it into a spiral?
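    One thing that helps when the episodes start at unpredictable hours is to leave a small watcher logging SHOW FULL PROCESSLIST, so the queries that pile up during the 100% CPU phase can be examined (and fed to EXPLAIN) once the server recovers. A sketch, with a placeholder RDS endpoint and credentials:

        # Sketch: poll SHOW FULL PROCESSLIST every few seconds and log a snapshot
        # whenever the thread count spikes, so the stuck queries can be reviewed later.
        # Endpoint, credentials and the threshold are placeholders.
        import time
        import mysql.connector

        conn = mysql.connector.connect(
            host="mydb.xxxxxx.us-east-1.rds.amazonaws.com", user="admin", password="secret"
        )
        cur = conn.cursor()
        with open("processlist.log", "a") as log:
            while True:
                cur.execute("SHOW FULL PROCESSLIST")
                rows = cur.fetchall()
                if len(rows) > 100:  # arbitrary "queue is building up" threshold
                    log.write("--- %s (%d threads) ---\n" % (time.ctime(), len(rows)))
                    for row in rows:
                        log.write(repr(row) + "\n")
                    log.flush()
                time.sleep(5)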

    Read the article

  • How do I delete hardlinks, symbolic links, junction points, etc please?

    - by jonny
    I could be wrong, but I'm yet to hear a valid argument for the exploitability that these things deliver...outweighing their very dubious / debatable functionality. They seem to me to be marginally handy, but I don't think I have any need for them. I do have a need for security, however. How can I delete their entire functionality permanently from my hard drive, please? Microsoft only has pages on how to create them; which seems almost peculiar to the point of being dubious (at least, to me...) And just a dumb command line question, am I correct in assuming fsutil hardlink list c: will enumerate every single hardlink on that drive? C:\Windows\system32>fsutil hardlink list c: \Windows\System32 Also, how do I delete symbolic links please ;) But I'd just rather have all symbolic linking and recursion-creating stuff removed, if that's possible? C:\Windows\system32>fsutil behavior query symlinkevaluation Local to local symbolic links are enabled. Local to remote symbolic links are enabled. Remote to local symbolic links are disabled. Remote to remote symbolic links are disabled.

    Read the article

  • What are the possible disadvantages of enabling the "data access" server option in sys.servers for the local server?

    - by Corp. Hicks
    We plan to change the default server options of an SQL2k5 server instance by enabling data access. The reason is that we want to run "SELECT * FROM OPENQUERY(LOCALSERVER, '...')" -like statements on the server. What are the possible disadvantages of enabling server option "data access" (alias sys.servers.is_data_access_enabled) for the local server (sys.servers.server_id = 0)? (There must be a reason for MS setting this option to disabled by default...) EDIT: it turns out that I'm not the first person to ask this question: http://sqlblogcasts.com/blogs/piotr_rodak/archive/2009/11/22/data-access-setting-on-local-server.aspx "The DATA ACCESS server option is not very well documented in my opinion - the Books On Line say it is a property of linked servers. It doesn't mention at all that you actually can have it enabled on your local server to enable OPENQUERY calls. I noticed that when you disable DATA ACCESS on a linked server, you can't query any table located on it (I tested it on my loopback server) neither using OPENQUERY nor four-part naming convention. You can still call procedures (with four-part naming) that return rowsets. Well, the interesting question is why it is disabled by default on local server - I suppose to discourage users from using OPENQUERY against it." It also seems that the author of the post (Pjotr Rodak) is a Stack Overflow user :-)
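    For reference, a sketch of what enabling the option and issuing a loopback OPENQUERY looks like when driven from pyodbc; the server and table names below are placeholders.

        # Sketch: enable "data access" on the local server entry, then run the
        # kind of loopback OPENQUERY discussed above. Names are placeholders.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=LOCALSERVER;Trusted_Connection=yes")
        conn.autocommit = True
        cur = conn.cursor()

        cur.execute("EXEC sp_serveroption 'LOCALSERVER', 'data access', 'true'")
        cur.execute("SELECT * FROM OPENQUERY(LOCALSERVER, 'SELECT TOP 10 * FROM dbo.SomeTable')")
        for row in cur.fetchall():
            print(row)
        conn.close()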

    Read the article

  • Using OSX home directories from linux

    - by Steffen
    I'm running an OSX (Snow Leopard) Server with OpenDirectory, which is nothing else than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OSXs LDAP Server, this works fine already. What I struggle with is the way the home folders are specified in OSX. If I query the passwd config on one of my linux machines, the OSX imported entries are looking like this myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash While those network home folders might be fine for OSX-Clients, I don't want those server based paths on my linux machines. I saw that there is an NFSHomeDirectory Attribute in the OSX User inspector, but if I change this the whole user home path gets changed. Since my users should be able to login on both systems, OSX and Linux, this is not what I want. Does anyone have an idea how I must configure OSX to make my linux machines use home folders like /net/myaccount and leave the configuration for OSX clients untouched?

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting I have an Amazon Cloudfront distribution that was originally set as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through the cloudfront). What happened At some point, the URL signing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of the cloudfront distribution and of the S3 bucket it is pointing to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors; however, the invalidation did not seem to succeed. Requests to the same cloudfront URL (with or without the query string) still return 403. The response header looks like: HTTP/1.1 403 Forbidden Server: CloudFront Date: Mon, 18 Aug 2014 15:16:08 GMT Content-Type: text/xml Content-Length: 110 Connection: keep-alive X-Cache: Error from cloudfront Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront) X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B== Things I tried I tried to set up another cloudfront distribution that points to the same S3 as origin server. Requests to the same object in the new distribution were successful. The question Did anyone encounter the same situation where a cloudfront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
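    For what it's worth, a boto3 sketch of issuing a distribution-wide invalidation and waiting for it to finish is below (the distribution ID is a placeholder). Note that invalidation only clears CloudFront's cache; if the S3 bucket policy or an Origin Access Identity still denies anonymous reads, the origin will keep returning 403s and they will simply be cached again.

        # Sketch: create a wildcard invalidation with boto3 and poll until it
        # completes. The distribution ID is a placeholder.
        import time
        import boto3

        cf = boto3.client("cloudfront")
        dist_id = "E1STSPPUECOABC"

        resp = cf.create_invalidation(
            DistributionId=dist_id,
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/*"]},
                "CallerReference": str(time.time()),
            },
        )
        inv_id = resp["Invalidation"]["Id"]

        while True:
            inv = cf.get_invalidation(DistributionId=dist_id, InvalidationId=inv_id)
            if inv["Invalidation"]["Status"] == "Completed":
                break
            time.sleep(15)
        print("invalidation", inv_id, "completed")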

    Read the article

  • AWS VPC public web application connecting to database via VPN

    - by Chris
    What I am trying to do is set up a web application that is public facing but makes calls to a database that is on an internal network. I have been trying to set up an AWS VPC with a public subnet, a private subnet, and hardware VPN access, but I can't seem to get it to work. Can someone help me understand what the process flow here should be? My understanding is that I need a public subnet to handle the website requests and then a private subnet to connect to the VPN, but what I do not understand is how to send requests down the chain and get the response. Basically, what I am asking is: how can I query the database via VPN from that public website? I've tried setting up route forwarding, but I can't successfully complete the process. Does anyone have any advice on something I can read on this subject, or an FAQ on setting something like this up? Is it even possible? I'm out of my league here; this is not my area of expertise, but I'm being asked to solve this problem. Any help would be appreciated. Thanks

    Read the article

  • Courier IMAP always disconnects since update

    - by Raffael Luthiger
    Since one of our customers updated their server courier does not handle IMAP connections properly any more. POP3 works without any problems. When I try to test IMAP with telnet then it is always like this: $ telnet domain.com 143 Trying 188.40.46.214... Connected to domain.com. Escape character is '^]'. * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information. 01 LOGIN [email protected] test Connection closed by foreign host. I enabled debugging in the authdaemond but the output does not really help much: Apr 12 23:10:04 servername authdaemond: received auth request, service=imap, authtype=login Apr 12 23:10:04 servername authdaemond: authmysql: trying this module Apr 12 23:10:04 servername authdaemond: SQL query: SELECT login, password, "", uid, gid, homedir, maildir, quota, "", concat('disableimap=',disableimap,',disablepop3=',disablepop3) FROM mail_user WHERE login = '[email protected]' Apr 12 23:10:04 servername authdaemond: password matches successfully Apr 12 23:10:04 servername authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n Apr 12 23:10:04 servername authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n Right after the "Authenticated" line the output stops. There is no other message. And in no other log file I've checked I could find any other related message. The system was updated from Ubuntu 10.10 to 12.04. How could I get more information? Or does anybody have an idea what could go wrong here?
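    A small imaplib script reproduces the telnet session and makes the failing stage explicit (greeting received, connection dropped during LOGIN), which at least separates an authdaemond problem from a crash in the imapd child after authentication succeeds. The host and test credentials are the ones used in the question.

        # Sketch: repeat the telnet test with imaplib so each protocol stage is
        # logged; an abort during login matches the behaviour described above.
        import imaplib

        imaplib.Debug = 4  # dump the client/server exchange to stderr

        try:
            conn = imaplib.IMAP4("domain.com", 143)
            print("greeting:", conn.welcome)
            typ, data = conn.login("[email protected]", "test")
            print("login:", typ, data)
            print("list:", conn.list())
            conn.logout()
        except imaplib.IMAP4.abort as exc:
            print("server closed the connection:", exc)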

    Read the article

  • Is there a way to run CUDA applications with the CUDA device being a secondary adapter?

    - by Slartibartfast
    I've been trying to run a CUDA program on a remote computer which has Windows 7 installed. The GPU is GeForce GTX 480. One of the problems I've been facing is that, the computer has two adapters, 1) Standard VGA Adapter 2) NVIDIA GeForce GTX 480 Even though this shows in the device manager. The desktop uses the standard VGA Adapter. I'm assuming this is because the Standard VGA is the primary adapter. Also the device manager shows that the monitor is connected to the standard VGA Adapter. In this scenario if i try to run any CUDA application it fails to recognise a CUDA capable device. Is it necessary for the NVIDIA adapter to be the primary one? Or is there any way to use CUDA when the graphics card is a secondary adapter. I've seen a few posts in the NVIDIA forums on this before, one suggests using another low cost NVIDIA card as the primary adapter, but that is currently not an option. I couldn't find any other solutions. Thanks I tried running the deviceQuery test from the NVIDIA GPU Computing Samples. This was the result i obtained CUDA Device Query (Runtime API) version (CUDART static linking) cudaGetDeviceCount FAILED CUDA Driver and Runtime version may be mismatched FAILED The driver version I'm using is 263.06. The CUDA version is 3.2 I ran the same test on my desktop which also has windows 7 and a GeForce GTX 465. The CUDA toolkit version is 3.2. The driver version was the same and the test passed, although it failed with an older driver.

    Read the article

  • postfix smtp_fallback_relay for deferred messages to a single domain

    - by EdwardTeach
    I use Postfix to send messages to a mail server outside my organization which frequently rejects/defers my mail. My Postfix server sees that these messages are deferred and tries again, eventually getting through. Final delivery can take up to an hour, which makes my users unhappy. In comparison, mail from my Postfix server to other hosts works normally. I have now found out about a second, unofficial MX for this domain that does not reject/defer mail. This second MX does not appear when doing a DNS MX query for the domain. Therefore, for the problem domain I would like to use this second MX as a fallback. That is: whenever mail is deferred by the primary MX, try again on the unofficial second MX. I see that there is already a postfix configuration "smtp_fallback_relay". However the documentation seems to indicate that I can not restrict usage of the fallback to a single domain. The documentation also doesn't mention deferred message handling. So is there a way to configure a single-domain, deferred-retry fallback host in Postfix? For reference, I am including my postconf output (the host names and ip addresses are fake): alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/etc/postfix/legacy_mailman, ldap:/etc/postfix/ldap-aliases.cf append_dot_mydomain = no biff = no config_directory = /etc/postfix default_destination_concurrency_limit = 2 inet_interfaces = all inet_protocols = all local_destination_concurrency_limit = 2 local_recipient_maps = $alias_maps mailbox_size_limit = 0 mydestination = myhost.my.network, localhost.my.network, localhost, my.network myhostname = myhost.my.network mynetworks = 127.0.0.0/8, [::ffff:127.0.0.0]/104, [::1]/128, 10.10.10.0/24 myorigin = my.network readme_directory = no recipient_delimiter = + relay_domains = $mydestination relayhost = smtp_fallback_relay = the.problem.host smtp_header_checks = smtpd_banner = $myhostname ESMTP $mail_name virtual_alias_maps = hash:/etc/postfix/virtual

    Read the article

  • MySQL Master - Master Broken

    - by Recc
    I've inherited a MySQL master-master system, and I've noticed the second master (let's call it slave from now on, as it's running on a 'slave' machine) stopped getting its DBs updated. I saw that Master: Slave_IO_Running: Yes Slave_SQL_Running: Yes Slave: (with an error I truncated) Slave_IO_Running: Yes Slave_SQL_Running: No Last_Errno: 1062 Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...] I don't know what caused it, considering we can't get a duplicate there. What's important is to resume normal operations; right now I've run stop slave; on the Master and stop slave; on the Slave, because I saw that if I change records on the Slave the changes Do Get Propagated to Master, which is in active use. How do I: Force sync EVERYTHING from master to slave without affecting data on master? Then hopefully have slave pick up replication as usual? UPDATE OK, I tried deleting all tables on slave; then it complained in that error section that the 'table' doesn't exist. So I made a no-data dump of Master, and made sure I have only empty tables in Secondary (slave). I ran start slave; on slave, BUT now it's complaining about bloody alter table statements, for instance: Last_Errno: 1060 Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...] How do I skip the fracking alter statements? I just want to replicate the bloody data and be done with it; my tables have the latest changes already, FFS, and now it's complaining about changes made after replication seized up weeks ago. How do I reset the log or something? OUTSTANDING Why would this start happening? The "Secondary" is propagating to "Primary". "Primary" is not propagating to "Secondary". But any fixes I tried left it in the same state (Yes-Yes / Yes-No) with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
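    If the slave really does hold the latest data, one blunt (and risky) way to get the SQL thread past the backlog of stale statements is to skip errors one at a time until replication stays running; a sketch with placeholder credentials is below. The cleaner route is usually to re-seed the slave from a dump taken with --master-data, so CHANGE MASTER TO starts from fresh binlog coordinates instead of replaying weeks of old events.

        # Sketch: keep skipping the failing statement until the slave SQL thread
        # stays up. Only sensible if the slave's data is already known-good,
        # since skipped statements are simply lost. Credentials are placeholders.
        import time
        import mysql.connector

        conn = mysql.connector.connect(host="slave-host", user="root", password="secret")
        cur = conn.cursor(dictionary=True, buffered=True)

        while True:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
            if status is None or status["Slave_SQL_Running"] == "Yes":
                break
            print("skipping errno", status["Last_Errno"], status["Last_Error"][:80])
            cur.execute("SET GLOBAL sql_slave_skip_counter = 1")
            cur.execute("START SLAVE SQL_THREAD")
            time.sleep(1)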

    Read the article

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night, from Debian etch to lenny. So far I've encountered a problem with my postfix installation, mainly that I managed to break the IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is: Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65] Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login Feb 12 11:57:16 mail authdaemond: authmysql: trying this module Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1) Feb 12 11:57:16 mail authdaemond: password matches successfully Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null> Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null> ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've been mainly looking for a way to make the courier-imap logging more verbose, as can be seen, for example, here. Edit: Sorry about the mess, I've found that I've been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging configuration entry is found in /etc/courier/imapd under DEBUG_LOGIN=1 (set to 1 to enable verbose logging, set to 2 to enable dumping plaintext passwords to logfile. Careful.)

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three Authoritative DNS servers and three recursive/caching DNS servers on my campus. Authoritative servers DNS1- Windows 2003 DNS2- Old Red Hat ----- Replacing w/ newer version DNS3- Windows 2008 (I installed) Caching and Recursive resolvers servers Server1- Windows 2003 Server2- CentOS 5.2 (I installed) Server3- CentOS 5.3 (I installed) I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have setup caching and windows authoritative servers, but not a linux secondary authoritative server. I have a perl script from the original server that pulls data from our DNS1 server. We use DJBDNS and TinyDNS on our linux servers. Our Network Engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I see is for an Authoritative server that does caching as well. Can someone point me in the right directions. I thought I was on the right track with using these instructions but when I query my new dns server I get "No response from server", I have temporarily disabled iptables to eliminate it from being an issue. ps -aux | grep dns avahi 3493 0.0 0.2 2600 1272 ? Ss Apr24 0:05 avahi-daemon: running [newdns2.local] root 5254 0.0 0.1 3920 680 pts/0 R+ 09:56 0:00 grep dns root 6451 0.0 0.0 1528 308 ? S Apr29 0:00 supervise tinydns dnslog 6454 0.0 0.0 1540 308 ? S Apr29 0:00 multilog t ./main tinydns 9269 0.0 0.0 1652 308 ? S Apr29 0:00 /usr/local/bin/tinydns
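    A quick way to see whether the new box is answering at all (as opposed to answering but not authoritatively) is to fire the same query directly at the old and new servers; a dnspython sketch, with placeholder zone and addresses, is below. An empty answer from the new server would point at the zone data / transfer step, while a timeout points at tinydns not listening on, or not bound to, that address.

        # Sketch: query the old and new authoritative servers directly and
        # compare the answers. Zone name and IP addresses are placeholders.
        import dns.exception
        import dns.message
        import dns.query

        ZONE = "example.edu"
        SERVERS = {"dns1 (old)": "10.0.0.1", "newdns2 (new)": "10.0.0.2"}

        query = dns.message.make_query(ZONE, "SOA")
        for label, addr in SERVERS.items():
            try:
                reply = dns.query.udp(query, addr, timeout=3)
                print(label, "->", reply.answer or "empty answer")
            except dns.exception.Timeout:
                print(label, "-> no response")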

    Read the article

  • Squid3 not caching simple request and response

    - by Nick Spacek
    Hi folks, I've pared down my squid.conf to try to figure this out: http_port 80 accel defaultsite=host.to.cache cache_peer ip.to.cache parent 80 0 no-query originserver acl our_sites dstdomain host.to.cache http_access allow our_sites refresh_pattern . 1 20% 4320 Requests are being proxied correctly, so that's a start. Here's a request: GET http://host.to.cache/path?some_param=true Accept: */* Accept-Charset: ISO-8859-1,utf-8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en Connection: keep-alive Host: host.to.cache User-Agent: myuseragent And the response: Connection: keep-alive Content-Length: 585 Content-Type: application/xml Date: Thu, 06 Jan 2011 18:33:11 GMT Via: 1.0 localhost (squid/3.0.STABLE19) X-Cache: MISS from localhost X-Cache-Lookup: MISS from localhost:80 The response has no caching-related headers, but I thought that refresh_pattern would set a default behavior for responses without caching-related headers. For my test, I wanted to cache everything for one minute at minimum. Am I missing something obvious? I did take a peek at this question: Squid isn't caching ...and ran through the page here: http://www.mnot.net/cache_docs/ briefly, but didn't see anything relevant (not to say that there isn't, I could have missed something). Thanks for any help.
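    A quick check of whether the refresh_pattern is being applied at all is to request the same URL twice and compare the X-Cache headers; a requests-based sketch using the URL from the question is below. A MISS followed by a HIT means the object was stored; two MISSes suggest Squid is treating the response as uncacheable before refresh_pattern is ever consulted.

        # Sketch: hit the accelerator twice and compare the X-Cache headers to
        # see whether the second request is served from cache.
        import requests

        url = "http://host.to.cache/path?some_param=true"
        for attempt in (1, 2):
            resp = requests.get(url)
            print(attempt, resp.status_code, resp.headers.get("X-Cache"))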

    Read the article

  • Show full URI/URL in Chrome's developer tools Network tab

    - by Lev
    When using Chrome to debug, I find it incredibly difficult to be efficient due to the fact that I don't see how I can force the "Network" tab of the developer tools to show the full request URI. It will show the full URI if you hover the link and wait a second, but this is incredibly counterproductive. All of my AJAX requests are sent to ajax.php, and handled by using query string arguments, like: ajax.php?do=profile-set ajax.php?do=game-save ... etc. Since I use AJAX extensively, my network tab is filled with "ajax.php", but I have to manually hover each and every entry to find the request I am looking for. Surely there has got to be another way!? I am constantly fed up by something new in Firefox and immediately force myself back into Chrome, but it is always the developer tools in Chrome that keep me from using it for an extended period of time. Hopefully I can find out how to do this so I can continue using Chrome as my numero uno. I've provided a screen shot to show you where I mean:

    Read the article

  • Data from a table in 1 DB needed for filter in different DB...

    - by Refracted Paladin
    I have a Win Form, Data Entry, application that uses 4 seperate Data Bases. This is an occasionally connected app that uses Merge Replication (SQL 2005) to stay in Sync. This is working just fine. The next hurdle I am trying to tackle is adding Filters to my Publications. Right now we are replicating 70mbs, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using Filters I am able to accomplish this(see code below) but I had to make a mapping table in order to do so. This mapping table consists of 3 columns. A PrimaryID(Guid), WorkerName(varchar), and ClientID(int). The problem is I need this table present in all FOUR Databases in order to use it for the filter since, to my knowledge, views or cross-db query's are not allowed in a Filter Statement. What are my options? Seems like I would set it up to be maintained in 1 Database and then use Triggers to keep it updated in the other 3 Databases. In order to be a part of the Filter I have to include that table in the Replication Set so how do I flag it appropriately. Is there a better way, altogether? SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME()) Which allows you to chain together Filters, this next one is below the first one so it only pulls from the first's Filtered Set. SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    Read the article
