Daily Archives

Articles indexed Friday November 30 2012

Page 4/15 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Return if remote stored procedure fails

    - by njk
    I am in the process of creating a stored procedure. This stored procedure runs local as well as remote stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE].

        USE [LOCAL]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[monthlyRollUp]
        AS
        SET NOCOUNT, XACT_ABORT ON
        BEGIN TRY
            EXEC [REMOTE].[DB].[table].[sp]
            -- This transaction should only begin if the remote procedure does not fail
            BEGIN TRAN
            EXEC [LOCAL].[DB].[table].[sp1]
            COMMIT
            BEGIN TRAN
            EXEC [LOCAL].[DB].[table].[sp2]
            COMMIT
            BEGIN TRAN
            EXEC [LOCAL].[DB].[table].[sp3]
            COMMIT
            BEGIN TRAN
            EXEC [LOCAL].[DB].[table].[sp4]
            COMMIT
        END TRY
        BEGIN CATCH
            -- Insert error into log table
            INSERT INTO [dbo].[log_table] (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
            SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE()
        END CATCH
        GO

    When using a transaction on the remote procedure, it throws this error: OLE DB provider ... returned message "The partner transaction manager has disabled its support for remote/network transactions.". I get that I'm unable to run a transaction locally for a remote procedure. How can I ensure that this procedure will exit and roll back if any part of the procedure fails?
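
    One way to get that behavior without a distributed transaction is to keep the remote call outside any local transaction, capture its return code, and bail out before the local work starts. A minimal sketch, assuming the remote procedure returns a non-zero code on failure and keeping the question's placeholder names:

        ALTER PROCEDURE [dbo].[monthlyRollUp]
        AS
        SET NOCOUNT, XACT_ABORT ON;
        DECLARE @rc int;
        BEGIN TRY
            -- Remote call runs outside any local transaction, so no MSDTC promotion is attempted
            EXEC @rc = [REMOTE].[DB].[table].[sp];
            IF @rc <> 0
                RETURN @rc;                        -- remote step failed: stop before touching local data

            BEGIN TRAN;                            -- one transaction for all local work
                EXEC [LOCAL].[DB].[table].[sp1];
                EXEC [LOCAL].[DB].[table].[sp2];
                EXEC [LOCAL].[DB].[table].[sp3];
                EXEC [LOCAL].[DB].[table].[sp4];
            COMMIT;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRAN;      -- undo the local work only
            INSERT INTO [dbo].[log_table] (stamp, errorNumber, errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
            SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE();
            RETURN 1;
        END CATCH

    With XACT_ABORT ON, any error in the local procedures transfers control to the CATCH block, so either all four local calls commit or none do.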

    Read the article

  • ibatis throwing NullPointerException

    - by Prashant P
    I am trying to test iBATIS against a DB and I get a NullPointerException. Below are the iBATIS mapping config and the classes.

    SQL map:

        <select id="getByWorkplaceId" parameterClass="java.lang.Integer" resultMap="result">
            select * from WorkDetails where workplaceCode=#workplaceCode#
        </select>
        <select id="getWorkplace" resultClass="com.ibatis.text.WorkDetails">
            select * from WorkDetails
        </select>

    POJO:

        public class WorkplaceDetail implements Serializable {
            private static final long serialVersionUID = -6760386803958725272L;
            private int code;
            private String plant;
            private String compRegNum;
            private String numOfEmps;
            private String typeIndst;
            private String typeProd;
            private String note1;
            private String note2;
            private String note3;
            private String note4;
            private String note5;
        }

    DAO implementation:

        public class WorkplaceDetailImpl implements WorkplaceDetailsDAO {
            private SqlMapClient sqlMapClient;

            public void setSqlMapClient(SqlMapClient sqlMapClient) {
                this.sqlMapClient = sqlMapClient;
            }

            @Override
            public WorkplaceDetail getWorkplaceDetail(int code) {
                WorkplaceDetail workplaceDetail = new WorkplaceDetail();
                try {
                    **workplaceDetail = (WorkplaceDetail) this.sqlMapClient.queryForObject("workplaceDetail.getByWorkplaceId", code);**
                } catch (SQLException sqlex) {
                    sqlex.printStackTrace();
                }
                return workplaceDetail;
            }

    Test code:

        public class TestDAO {
            public static void main(String args[]) throws Exception {
                WorkplaceDetail wd = new WorkplaceDetail(126, "Hoonkee", "1234", "22", "Service", "Tele", "hsgd", "hsgd", "hsgd", "hsgd", "hsgd");
                WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
                **impl.getWorkplaceDetail(wd.getCode());**
                impl.saveOrUpdateWorkplaceDetails(wd);
                System.out.println("dhsd" + impl);
            }
        }

    I want to select and insert. I have marked the point of the exception with ** ** in the code above.

        Exception in thread "main" java.lang.NullPointerException
            at com.ibatis.text.WorkplaceDetailImpl.getWorkplaceDetail(WorkplaceDetailImpl.java:19)
            at com.ibatis.text.TestDAO.main(TestDAO.java:11)
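
    For what it's worth, the NPE at line 19 means sqlMapClient is still null: the test builds WorkplaceDetailImpl with new, so setSqlMapClient() is never called. A minimal sketch of wiring it up by hand, assuming an iBATIS 2.x sqlmap config file named SqlMapConfig.xml on the classpath (the file name is an assumption):

        import java.io.Reader;
        import com.ibatis.common.resources.Resources;
        import com.ibatis.sqlmap.client.SqlMapClient;
        import com.ibatis.sqlmap.client.SqlMapClientBuilder;

        public class TestDAO {
            public static void main(String[] args) throws Exception {
                // Build the SqlMapClient from the sqlmap config and inject it before using the DAO
                Reader reader = Resources.getResourceAsReader("SqlMapConfig.xml");
                SqlMapClient sqlMapClient = SqlMapClientBuilder.buildSqlMapClient(reader);

                WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
                impl.setSqlMapClient(sqlMapClient);   // without this the field stays null -> NPE
                System.out.println(impl.getWorkplaceDetail(126));
            }
        }

    If the bean is meant to come from Spring instead, fetching WorkplaceDetailImpl from the application context rather than constructing it with new has the same effect.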

    Read the article

  • JQuery Remove() doesn't work

    - by Xander Guerin
    I have a DIV element inside another as follows:

        <div id="filters">
            <div class="filterData">hello</div>
        </div>

    and I'm trying to remove the inner element:

        $("#filters").remove('.filterData');

    Problem is, it doesn't work. I've tested the same call on other elements on my page and it works. The thing is, I cannot append to this element, show or hide it, or use .empty() on it. I've also changed it to be a DIV with 'filterData' as the ID and told jQuery to remove it, but it still refuses... Has anyone had a stubborn element like this before? EDIT: The removal already runs inside a $(document).ready function, so I have no idea what's going on.
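
    A side note on the selector, as a sketch of what may be going on: .remove('.filterData') filters the elements already in the jQuery set (here, only #filters itself) rather than searching its descendants, so nothing matches. Selecting the child directly would look like:

        // Assumes the markup from the question; either form targets the descendant explicitly
        $(document).ready(function () {
            $("#filters .filterData").remove();          // descendant selector
            // or: $("#filters").find(".filterData").remove();
        });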

    Read the article

  • Plotting 500 US cities to a map

    - by sqlman
    I have 500 US cities in a MySQL table. I have the city name, state, longitude and latitude. I want to visually see these cities plotted on a map of the US. How can I do this? Are there any free tools available? Google Maps or Google Earth maybe? Obviously, it would take forever to plot each city individually. So I need a quick way of doing it, either through a program or by exporting the table as a spreadsheet and uploading it into some kind of generator that will do the plotting for me. Please let me know your ideas. Thanks.
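
    One low-effort route is to have MySQL emit KML placemarks that Google Earth (or a Google Maps KML layer) can display directly. A sketch, with table and column names (us_cities, city, state, longitude, latitude) assumed from the description in the question:

        -- Each row becomes one <Placemark>; wrap the output in a standard KML header/footer and save it as cities.kml
        SELECT CONCAT('<Placemark><name>', city, ', ', state, '</name>',
                      '<Point><coordinates>', longitude, ',', latitude, ',0</coordinates></Point>',
                      '</Placemark>')
        FROM us_cities;

    Note that KML expects coordinates in longitude,latitude order.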

    Read the article

  • Spring MVC, REST, and HATEOAS

    - by SingleShot
    I'm struggling with the correct way to implement Spring MVC 3.x RESTful services with HATEOAS. Consider the following constraints:

    - I don't want my domain entities polluted with web/rest constructs.
    - I don't want my controllers polluted with view constructs.
    - I want to support multiple views.

    Currently I have a nicely put together MVC app without HATEOAS. Domain entities are pure POJOs without any view or web/rest concepts embedded. For example:

        class User {
            public String getName() {...}
            public String setName(String name) {...}
            ...
        }

    My controllers are also simple. They provide routing and status, and delegate to Spring's view resolution framework. Note my application supports JSON, XML, and HTML, yet no domain entities or controllers have embedded view information:

        @Controller
        @RequestMapping("/users")
        class UserController {
            @RequestMapping
            public ModelAndView getAllUsers() {
                List<User> users = userRepository.findAll();
                return new ModelAndView("users/index", "users", users);
            }

            @RequestMapping("/{id}")
            public ModelAndView getUser(@PathVariable Long id) {
                User user = userRepository.findById(id);
                return new ModelAndView("users/show", "user", user);
            }
        }

    So, now my issue - I'm not sure of a clean way to support HATEOAS. Here's an example. Let's say when the client asks for a User in JSON format, it comes out like this:

        { firstName: "John", lastName: "Smith" }

    Let's also say that when I support HATEOAS, I want the JSON to contain a simple "self" link that the client can then use to refresh the object, delete it, or something else. It might also have a "friends" link indicating how to get the user's list of friends:

        {
            firstName: "John",
            lastName: "Smith",
            links: [
                { rel: "self", ref: "http://myserver/users/1" },
                { rel: "friends", ref: "http://myserver/users/1/friends" }
            ]
        }

    Somehow I want to attach links to my object. I feel the right place to do this is in the controller layer, as the controllers all know the correct URLs. Additionally, since I support multiple views, I feel like the right thing to do is somehow decorate my domain entities in the controller before they are converted to JSON/XML/whatever in Spring's view resolution framework. One way to do this might be to wrap the POJO in question with a generic Resource class that contains a list of links (a rough sketch follows below). Some view tweaking would be required to crunch it into the format I want, but it's doable. Unfortunately nested resources could not be wrapped in this way. Other things that come to mind include adding links to the ModelAndView, and then customizing each of Spring's out-of-the-box view resolvers to stuff links into the generated JSON/XML/etc. What I don't want is to be constantly hand-crafting JSON/XML/etc. to accommodate various links as they come and go during the course of development. Thoughts?
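
    A minimal sketch of the generic wrapper idea mentioned in the question, kept free of Spring types so the domain POJOs stay untouched (class and method names here are illustrative, not from the article):

        import java.util.ArrayList;
        import java.util.List;

        // Wraps any domain object together with its hypermedia links; built in the controller layer.
        public class Resource<T> {
            private final T content;
            private final List<Link> links = new ArrayList<Link>();

            public Resource(T content) { this.content = content; }

            public Resource<T> addLink(String rel, String ref) {
                links.add(new Link(rel, ref));
                return this;   // allows chaining in the controller
            }

            public T getContent() { return content; }
            public List<Link> getLinks() { return links; }

            public static class Link {
                private final String rel;
                private final String ref;
                public Link(String rel, String ref) { this.rel = rel; this.ref = ref; }
                public String getRel() { return rel; }
                public String getRef() { return ref; }
            }
        }

    A controller method could then return new ModelAndView("users/show", "user", new Resource<User>(user).addLink("self", "http://myserver/users/" + id)), leaving the JSON/XML views to render content plus links. If pulling in a library is acceptable, the Spring HATEOAS project provides ready-made Resource/Link types that play this role.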

    Read the article

  • apache, shibboleth, load balancing alias, ssl

    - by Nikolaidis Fotis
    Good morning folks. Could you give me a bit of help with the following problem? I have a DNS load balancing mechanism and an alias (hostAlias) which may point to host01 or host02, and I want to configure Apache and Shibboleth to work with that alias. What happens is:

        User types: https://hostAlias (it points to host01)
        apache host01: redirect to shibboleth
        shibboleth host01: redirect to **https://hostAlias.cern.ch/Shibboleth.sso/ADFS**

    Now, there are two cases: either hostAlias will point again to host01 this time, or it will point to host02. If it points to host02, host01 will not get the answer and the authentication fails. Also, about SSL certificates, I guess that each host will need its own certificate, right? Or do I need a certificate with the DNS alias in it? Thanks in advance!

    Read the article

  • Installing MySQL on Ubuntu 12 fails on a clean installation

    - by Keenora Fluffball
    I do have the problem, that even if I uninstall mysql completely and do a restart, it still doesn't install mysql. This is the error I get: Paketlisten werden gelesen... Fertig Abhängigkeitsbaum wird aufgebaut Statusinformationen werden eingelesen... Fertig Die folgenden zusätzlichen Pakete werden installiert: libdbd-mysql-perl libmysqlclient18 mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5 Vorgeschlagene Pakete: tinyca mailx Die folgenden NEUEN Pakete werden installiert: libdbd-mysql-perl libmysqlclient18 mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5 0 aktualisiert, 8 neu installiert, 0 zu entfernen und 0 nicht aktualisiert. Es müssen 26,2 MB an Archiven heruntergeladen werden. Nach dieser Operation werden 94,2 MB Plattenplatz zusätzlich benutzt. Möchten Sie fortfahren [J/n]? J Hole:1 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-common all 5.5.28-0ubuntu0.12.10.1 [13,4 kB] Hole:2 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main libmysqlclient18 amd64 5.5.28-0ubuntu0.12.10.1 [949 kB] Hole:3 http://de.archive.ubuntu.com/ubuntu/ quantal/main libdbd-mysql-perl amd64 4.021-1 [97,7 kB] Hole:4 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-client-core-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [1.941 kB] Hole:5 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-client-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [8.332 kB] Hole:6 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-server-core-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [5.983 kB] Hole:7 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-server-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [8.842 kB] Hole:8 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-server all 5.5.28-0ubuntu0.12.10.1 [11,6 kB] Es wurden 26,2 MB in 1 min 5 s geholt (399 kB/s) Vorkonfiguration der Pakete ... Vormals nicht ausgewähltes Paket mysql-common wird gewählt. (Lese Datenbank ... 68073 Dateien und Verzeichnisse sind derzeit installiert.) Entpacken von mysql-common (aus .../mysql-common_5.5.28-0ubuntu0.12.10.1_all.deb) ... Vormals nicht ausgewähltes Paket libmysqlclient18:amd64 wird gewählt. Entpacken von libmysqlclient18:amd64 (aus .../libmysqlclient18_5.5.28-0ubuntu0.12.10.1_amd64.deb) ... Vormals nicht ausgewähltes Paket libdbd-mysql-perl wird gewählt. Entpacken von libdbd-mysql-perl (aus .../libdbd-mysql-perl_4.021-1_amd64.deb) ... Vormals nicht ausgewähltes Paket mysql-client-core-5.5 wird gewählt. Entpacken von mysql-client-core-5.5 (aus .../mysql-client-core-5.5_5.5.28-0ubuntu0.12.10.1_amd64.deb) ... Vormals nicht ausgewähltes Paket mysql-client-5.5 wird gewählt. Entpacken von mysql-client-5.5 (aus .../mysql-client-5.5_5.5.28-0ubuntu0.12.10.1_amd64.deb) ... Vormals nicht ausgewähltes Paket mysql-server-core-5.5 wird gewählt. Entpacken von mysql-server-core-5.5 (aus .../mysql-server-core-5.5_5.5.28-0ubuntu0.12.10.1_amd64.deb) ... Trigger für man-db werden verarbeitet ... mysql-common (5.5.28-0ubuntu0.12.10.1) wird eingerichtet ... Vormals nicht ausgewähltes Paket mysql-server-5.5 wird gewählt. (Lese Datenbank ... 68251 Dateien und Verzeichnisse sind derzeit installiert.) Entpacken von mysql-server-5.5 (aus .../mysql-server-5.5_5.5.28-0ubuntu0.12.10.1_amd64.deb) ... Vormals nicht ausgewähltes Paket mysql-server wird gewählt. Entpacken von mysql-server (aus .../mysql-server_5.5.28-0ubuntu0.12.10.1_all.deb) ... Trigger für man-db werden verarbeitet ... 
        Trigger für ureadahead werden verarbeitet ...
        libmysqlclient18:amd64 (5.5.28-0ubuntu0.12.10.1) wird eingerichtet ...
        libdbd-mysql-perl (4.021-1) wird eingerichtet ...
        mysql-client-core-5.5 (5.5.28-0ubuntu0.12.10.1) wird eingerichtet ...
        mysql-client-5.5 (5.5.28-0ubuntu0.12.10.1) wird eingerichtet ...
        mysql-server-core-5.5 (5.5.28-0ubuntu0.12.10.1) wird eingerichtet ...
        mysql-server-5.5 (5.5.28-0ubuntu0.12.10.1) wird eingerichtet ...
        AppArmor parser error for /etc/apparmor.d/usr.sbin.mysqld in /etc/apparmor.d/usr.sbin.mysqld at line 9: >>abstractions/mysql<< konnte nicht geöffnet werden
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: Fehler beim Bearbeiten von mysql-server-5.5 (--configure):
         Unterprozess installiertes post-installation-Skript gab den Fehlerwert 1 zurück
        dpkg: Abhängigkeitsprobleme verhindern Konfiguration von mysql-server:
         mysql-server hängt ab von mysql-server-5.5; aber:
          Paket mysql-server-5.5 ist noch nicht konfiguriert.
        dpkg: Fehler beim Bearbeiten von mysql-server (--configure):
         Abhängigkeitsprobleme - verbleibt unkonfiguriert
        Trigger für libc-bin werden verarbeitet ...
        ldconfig deferred processing now taking place
        Es wurde kein Apport-Bericht verfasst, da die Fehlermeldung darauf hindeutet, dass dies lediglich ein Folgefehler eines vorherigen Problems ist.
        Fehler traten auf beim Bearbeiten von:
         mysql-server-5.5
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Do you have any clue what's going on here?
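
    The parser error points at /etc/apparmor.d/abstractions/mysql being missing or unreadable, which then keeps mysqld from starting and the package from configuring. A rough sequence to check and recover (a sketch, not a guaranteed fix; the package names are the standard Ubuntu ones):

        ls -l /etc/apparmor.d/abstractions/mysql        # the file the AppArmor parser could not open
        sudo apt-get install --reinstall apparmor       # restores the stock abstractions if they were removed
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
        sudo apt-get -f install                         # lets dpkg finish configuring mysql-server-5.5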

    Read the article

  • Postfix block senders outside from local domains

    - by Tibor Peter Toth
    I would like to block every mail coming in from outside that claims to be from a domain hosted on my server. Example: I host domain1.com on my mail server and I get a mail from outside with a sender address of [email protected]. Then I know it's spam, because domain1.com is on my server, so the sender cannot come from outside. I want Postfix to check for this and simply block these kinds of emails. I know Postfix has a feature for this, I just don't know which one. Thanks.
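
    For reference, the usual way to express this in Postfix is a sender access map consulted in smtpd_sender_restrictions after your own clients have been permitted. A sketch (the file name /etc/postfix/sender_access is an assumption):

        # /etc/postfix/main.cf
        smtpd_sender_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            check_sender_access hash:/etc/postfix/sender_access

        # /etc/postfix/sender_access  (run "postmap /etc/postfix/sender_access" after editing)
        domain1.com    REJECT forged sender from locally hosted domain

    Mail submitted by your own users still passes via permit_mynetworks/permit_sasl_authenticated, while outside connections using a locally hosted sender domain are rejected.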

    Read the article

  • Can I use MX records to deliver some addresses to Google Apps and some to my server?

    - by Josh
    I have WHM installed on my VPS, which my domain's MX records point to (0: mail.mydomain.com), and WHM/cPanel has mail forwarding rules which pipe certain @mydomain email addresses into my CRM software. But certain other email addresses I want delivered to Google Apps. For example, [email protected] and [email protected] pipe into cPanel -- CRM (mail.mydomain.com), but [email protected] should go to the Google MX records. Is that possible? The reason is that I want to register for Google services such as Analytics under [email protected]. My initial thought was to add a subdomain such as [email protected] and point that subdomain's MX records to Google... but I want to avoid this if possible.

    Read the article

  • How to share an internet connection in a network with openSUSE 12.2?

    - by eneepo
    I have two computers, with openSUSE 12.2 on both of them:

        Internet (192.168.2.1)
            |
        eth0 (192.168.2.2)   First Computer    eth1 (192.168.2.10)
                                                    |
                              eth0 (192.168.2.11)  Second Computer

    I am following the instructions at http://www.swerdna.net.au/suseics.html, but as soon as I enable eth1 on the first computer I lose the connection to the internet. I followed all the instructions; I don't know which part I'm misconfiguring.
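
    For comparison with the guide, the core of plain connection sharing on the first computer is just IP forwarding plus NAT out of the internet-facing interface. A sketch using the interface layout above (one-off commands run as root, not persisted; a YaST/SuSEfirewall2 setup would achieve the same thing):

        # on the First Computer
        echo 1 > /proc/sys/net/ipv4/ip_forward                       # enable routing
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE         # NAT traffic leaving toward the internet
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    Note that in the diagram both eth0 and eth1 sit in 192.168.2.0/24; giving eth1 and the second computer their own subnet (e.g. 192.168.3.x) avoids the routing confusion that can drop the internet link when eth1 comes up.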

    Read the article

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to try to optimise it to handle 1Gbps of inbound traffic. Currently I've got it to peak at 85Mbit/s by running nginx on an m1.large with 4 other machines hitting it by using ab with -i (for head requests) -k (keepalives) -r (ignore failed requests) -n 500000 -c 20000. I'm struggling to generate more than 85 Mbit/s traffic from 4 machines, yet when I do scp a large file I get nearly 0.25Gbit/s of traffic going over the network. Are there any tools or approaches that I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it chucks away responses? I'm hitting a very small (40 byte) static asset, and have peaked at handling 50K concurrent connections and getting 25k reqs/s when just using a single load generator machine.

    Read the article

  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDAV file system access, e.g. adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended. Following this German tutorial, which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root:

        suffix "ou=webtool,o=myOrg,c=de"
        rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de"

    The server starts and I can connect to it with LDAP Admin, which reports “LDAP error: Object lacks”. Well, there aren't any objects yet. I now want to create the root and admin elements from the shell. I created an init.ldif file:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: dcObject
        objectclass: organization
        dc: webtool
        o: webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Trying to load the file runs into an error telling me that ou is not allowed:

        server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
        Enter LDAP Password:
        adding new entry "ou=webtool,o=myOrg,c=de"
        ldap_add: Object class violation (65)
                additional info: attribute 'ou' not allowed

    I am not using ou anywhere except in the suffix, so the question: isn't it allowed here? What is allowed here?

    Here is my answer. I am not allowed to post it as an answer for 8 hours, so don't mind that it is part of the question for now. I will move it out some day, if I don't forget to do so.

    There are numerous dependencies for the creation of elements, and the error messages are rather confusing if you don't know the concept. The objectclass isn't necessarily dcObject for the database's root node, as you might guess when reading several tutorials. Instead, it must correspond to the object's type: here, for a name starting with ou=, it must be organizationalUnit. I found this piece of information in these tables (link at the end of this post, since as a spam-prevention mechanism new users may only post two hyperlinks).

    Further on, the object class dictates which properties must and can be added in the record. Here, organizationalUnit must have an ou: entry and must have neither a dc: nor an o: entry. The healthy init.ldif file looks like this:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: organizationalUnit
        ou: LDAP server for my webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Note: The page also states: “While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy […] to determine if this is the really case.” I thought that would mean my root record would have to provide the MUST fields for c= and o= (c: and o:, respectively), but this isn't the case.

    Link in answer is (1): http://www.zytrax.com/books/ldap/ape/ "Appendix E: LDAP - Object Classes and Attributes"

    Read the article

  • nginx conditional Accept header

    - by manu_v
    Some mobile devices send the following incorrect requests to our servers:

        GET / HTTP/1.0
        Accept:
        User-Agent: xxx

    The empty Accept header causes our Ruby on Rails server to throw back a 500 error. In Apache, the following directive allows us to rewrite the header before sending it to the RoR application server, in order to cope with the broken devices:

        RequestHeader edit Accept ^$ "*/*" early

    We're currently setting up nginx, but achieving the same work-around is proving difficult. We are able to set:

        proxy_set_header Accept */*;

    However, this seems to have to be done unconditionally. Whenever we try:

        if ($http_accept !~ ".") {
            proxy_set_header Accept */*;
        }

    it complains with the message:

        "proxy_set_header" directive is not allowed here

    So, using nginx, how can we set the HTTP Accept header to */* when it is empty, before sending the request to the application server?
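
    A sketch of the usual nginx way around this: compute the value with a map in the http block (where conditionals on variables are allowed) and hand that variable to proxy_set_header (the variable name $proxy_accept and the upstream name are arbitrary):

        # http {} context
        map $http_accept $proxy_accept {
            default $http_accept;   # pass through whatever the client sent
            ""      "*/*";          # substitute */* when the Accept header is empty or absent
        }

        server {
            location / {
                proxy_set_header Accept $proxy_accept;
                proxy_pass http://rails_backend;
            }
        }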

    Read the article

  • emails not sending from CentOS 5.6 VM on Win7 via PHP code

    - by crmpicco
    I am experiencing an issue where my CentOS 5.6 (Final) VM running on Windows 7 has stopped sending emails from my PHP code. I'm confident this isn't a coding issue, as I have the exact same code running in my office and emails send correctly from there, which is why I believe this to be a networking/configuration issue. In my /etc/hosts file on my VM I have the following:

        127.0.0.1    localhost.localdomain localhost
        192.168.0.9  crmpicco.co.uk m.crmpicco.co.uk dev53.localdomain

    When I run setup on my VM, the DNS configuration is set to dev53.localdomain and my primary DNS is 192.168.0.1. In my /var/log/maillog files I see a lot of this sort of thing:

        Nov 19 14:36:58 dev53 sendmail[21696]: qAJEawI7021696: from=<[email protected]>, size=12858, class=0, nrcpts=1, msgid=<1353335817.9103820024efb30b451d006dc4ab3370@PHPMAILSERVER>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
        Nov 19 14:36:58 dev53 sendmail[21693]: qAJEawvd021693: [email protected], [email protected] (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=42681, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (qAJEawI7021696 Message accepted for delivery)
        Nov 19 14:36:59 dev53 sendmail[21698]: qAJEawI7021696: to=<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=132858, relay=mailserver.fletcher.co.uk. [213.171.216.114], dsn=5.0.0, stat=Service unavailable

    Is this likely to be a configuration issue?

    Read the article

  • Granting rights on postgresql database to another user

    - by Austin
    I'm trying to set up a system with a PostgreSQL database per user, with a PHP-FPM resource pool for an associated account. I need to grant all privileges on the database to the other user, but it seems that it's only possible to do this for tables. I've tried

        grant all privileges on database username to username_shadow

    but this gives only limited privileges. I've upgraded to PGSQL 9.2, which has the ability to grant privileges on schema, but I can't get a useful result. How do I simply make another user have all the privileges of the first on the same database?
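
    For context, GRANT ... ON DATABASE only covers CONNECT/CREATE/TEMP, which is why it looks so limited; object-level grants are separate. A sketch of the usual statements on 9.2 (role names follow the question; the public schema is an assumption, adjust if the objects live elsewhere):

        GRANT CONNECT ON DATABASE username TO username_shadow;
        GRANT USAGE ON SCHEMA public TO username_shadow;
        GRANT ALL PRIVILEGES ON ALL TABLES    IN SCHEMA public TO username_shadow;
        GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO username_shadow;
        GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public TO username_shadow;
        -- make future objects created by "username" visible too
        ALTER DEFAULT PRIVILEGES FOR ROLE username IN SCHEMA public
            GRANT ALL PRIVILEGES ON TABLES TO username_shadow;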

    Read the article

  • puppet agent doesn't retrieve files from master

    - by nicmon
    I have a very basic question regarding Puppet 3.0.1 configuration. I set up a Puppet master server (CentOS) with 2 agents (CentOS and Windows 7); all 3 can ping and access each other, and there is no error at all. I have copied a file to /etc/puppet/files/test2.txt, and my site.pp (/etc/puppet/manifests) contains these lines:

        node default {
            include test
            file { "/tmp/testmaster.txt":
                owner  => root,
                group  => root,
                mode   => 644,
                source => "puppet:///files/test2.txt"
            }
        }

    but no file is created on the agent servers under /tmp/ once I run "puppet agent --test". Here is the output:

        [root@agent1 ~]# puppet agent --test
        Info: Retrieving plugin
        Info: Caching catalog for agent1.mydomain.com
        Info: Applying configuration version '1354267916'
        Finished catalog run in 0.02 seconds

    "puppet apply /etc/puppet/manifests/site.pp" creates testmaster.txt under /tmp/ on the master.
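
    One thing worth checking, sketched below: a puppet:///files/... URI is served from a custom "files" mount, which has to be declared in the master's fileserver.conf (the wide-open allow is only for testing; the path and ACL here are assumptions):

        # /etc/puppet/fileserver.conf on the master, then restart the master
        [files]
            path /etc/puppet/files
            allow *

    If the catalog still applies nothing, it is also worth confirming that the master is actually loading this site.pp (the manifest setting in the [master] section of puppet.conf), since the agent run shown compiles a catalog that contains no file resource at all.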

    Read the article

  • PostgreSQL RAID configuration

    - by Yoldar-Zi
    I'm stuck on how best to configure a disk array. We have an HP P2000 G3 disk array with 24 SAS physical disks of 300 GB each. We need to configure this array to host 2 copies of PostgreSQL 9.2, for two different systems. As we know, it's recommended to store the database and the transaction log (pg_xlog) files on separate disks. So we would have to set up 4 logical disks:

        2 for transaction logs, on RAID 1
        2 for the databases, on RAID 10

    Is this the right distribution scheme? Or maybe it is better to just make one big RAID 10 with 4 logical disks?

    Read the article

  • append $myorigin to localpart of 'from', append different domain to localpart of incomplete recipient address

    - by PJ P
    We have been having some trouble getting Postfix to behave in a very specific fashion in which sender and recipient addresses with only a localpart (i.e. no @domain) are handled differently. We have a number of applications that use mailx to send messages. We would like to know the username and hostname of the sending party. For example, if root sends an email from db001.company.local, we would like the email to be addressed from [email protected]. This is accomplished by ensuring $myorigin is set to $myhostname. We also want unqualified recipients to have a different domain appended. For example, if a message is sent to 'dbadmin' it should qualify to '[email protected]'. However, by the nature of Postfix and $myorigin, an unqualified recipient would instead qualify to [email protected]. We do not want to adjust the aliases on all servers to forward appropriately. (in fact, every possible recipient doesn't have an entry in /etc/passwd) All company employees have mailboxes on Exchange, which Postfix eventually routes to, and no local Linux/Unix mailboxes are used or access. We would love to tell our application owners to ensure they use a fully qualified email address for all recipients, but the powers that be dictate that any negligence must be accommodated. If we were to keep $myorigin equal to $myhostname, we could resolve this issue by having an entry such as the following in 'recipient_canonical_maps': @$myorigin @company.com However, unfortunately, we cannot use variables in these map files. We also want to avoid having to manually enter and maintain the actual hostname in 'recipient_canonical_maps' for each server. Perhaps once our servers are 'puppetized' we can dynamically adjust this file, but we're not there yet. After an afternoon of fiddling I've decided to reach out. Any thoughts? Thanks in advance.
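
    One way to get the "@$myorigin @company.com" effect without hard-coding each hostname is a pcre canonical map keyed on the shared internal domain suffix, applied only to envelope recipients. A sketch, assuming every host's $myhostname ends in .company.local (that suffix, and the map file name, are assumptions about the environment):

        # /etc/postfix/main.cf
        myorigin = $myhostname                      # unqualified senders become user@host.company.local
        recipient_canonical_classes = envelope_recipient
        recipient_canonical_maps = pcre:/etc/postfix/recipient_canonical.pcre

        # /etc/postfix/recipient_canonical.pcre
        # rewrite recipients that were qualified with the local hostname back to the corporate domain
        /^([^@]+)@[^.]+\.company\.local$/    ${1}@company.com

    Sender addresses keep their host qualification because only the recipient canonical mapping is touched.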

    Read the article

  • load-causing processes disappearing from "top" ps -o pcpu shows bogus numbers

    - by Alec Matusis
    I administer a large number of servers, and I have this problem only with Ubuntu 10.04 LTS: I run a server under normal load (say load average 3.0 on an 8-core server). The "top" command shows processes taking a certain % of CPU that cause this load average, say:

        PID   USER  PR NI VIRT  RES SHR  S %CPU %MEM TIME+     COMMAND
        11008 mysql 20 0  25.9g 22g 5496 S 67   76.0 643539:38 mysqld

        ps -o pcpu,pid -p11008
        %CPU   PID
        53.1 11008

    and everything is consistent. Then all of a sudden, the process causing the load average disappears from "top", but the process continues to run normally (albeit with a slight performance decrease), and the system load average becomes somewhat higher. The output of ps -o pcpu becomes bogus:

        # ps -o pcpu,pid -p11008
        %CPU   PID
        317910278 1587

    This happened to at least 5 different servers (different brand-new IBM System X hardware), each running different software: one httpd 2.2, one mysqld 5.1, and one Twisted Python TCP server. Each time the kernel was between 2.6.32-32-server and 2.6.32-40-server. I updated some machines to 2.6.32-41-server, and it has not happened on those yet, but the bug is rare (once every 60 days or so). This is from an affected machine:

        top - 10:39:06 up 73 days, 17:57, 3 users, load average: 6.62, 5.60, 5.34
        Tasks: 207 total, 2 running, 205 sleeping, 0 stopped, 0 zombie
        Cpu(s): 11.4%us, 18.0%sy, 0.0%ni, 66.3%id, 4.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 74341464k total, 71985004k used, 2356460k free, 236456k buffers
        Swap: 3906552k total, 328k used, 3906224k free, 24838212k cached

        PID  USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
        805  root 20 0  0     0    0    S 3    0.0  1493:09   fct0-worker
        982  root 20 0  0     0    0    S 1    0.0  111:35.05 fioa-data-groom
        914  root 20 0  0     0    0    S 0    0.0  884:42.71 fct1-worker
        1068 root 20 0  19364 1496 1060 R 0    0.0  0:00.02   top

    Nothing causing high load is showing in top, but I have two highly loaded mysqld instances on it that suddenly show crazy %CPU:

        # ps -o pcpu,pid,cmd -p1587
        %CPU  PID CMD
        317713124 1587 /nail/encap/mysql-5.1.60/libexec/mysqld

    and

        # ps -o pcpu,pid,cmd -p1624
        %CPU  PID CMD
        2802 1624 /nail/encap/mysql-5.1.60/libexec/mysqld

    Here are the numbers from

        # cat /proc/1587/stat
        1587 (mysqld) S 1212 1088 1088 0 -1 4202752 14307313 0 162 0 85773299069 4611685932654088833 0 0 20 0 52 0 3549 27255418880 5483524 18446744073709551615 4194304 11111617 140733749236976 140733749235984 8858659 0 552967 4102 26345 18446744073709551615 0 0 17 5 0 0 0 0 0

    The 14th and 15th numbers, according to man proc, are supposed to be:

        utime %lu  Amount of time that this process has been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK). This includes guest time, guest_time (time spent running a virtual CPU, see below), so that applications that are not aware of the guest time field do not lose that time from their calculations.
        stime %lu  Amount of time that this process has been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK).

    On a normal server, these numbers advance every time I check /proc/PID/stat. On a buggy server, these numbers are stuck at a ridiculously high value like 4611685932654088833, and it's not changing. Has anyone encountered this bug?

    Read the article

  • ISC Bind support for GSS-TSIG DDNS Updates?

    - by netlinxman
    First, has anyone EVER configured ISC bind 9.5.0 OR greater with support for GSS-TSIG Dynamic DNS Updates AND gotten it to work? If so, what is the configuration that was used to make that happen? I feel close to having this working. I see that GSS cred passes w/o apparent error during the TKEY negotiation with an Active Directory DC and the BIND DNS server: client 192.168.0.30#52314: query gss cred: "DNS/[email protected]", GSS_C_ACCEPT, 4294967256 gss-api source name (accept) is [email protected] process_gsstkey(): dns_tsigerror_noerror client 192.168.0.30#52314: send But, when the Update is sent, it is refused: client 192.168.0.30#58330: update client 192.168.0.30#58330: updating zone 'example.com/IN': update failed: rejected by secure update (REFUSED) client 192.168.0.30#58330: send Does anyone have this working in the real world?
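
    For comparison, here is roughly what the GSS-TSIG-relevant pieces of named.conf look like on the 9.5-9.7 branch; the principal, realm, zone file path and the deliberately permissive update-policy are placeholders for testing, not a recommendation (a REFUSED answer to the update usually means the authenticated principal doesn't match any grant rule):

        options {
            // credential and realm used to accept the TKEY/GSS negotiation
            tkey-gssapi-credential "DNS/[email protected]";
            tkey-domain "EXAMPLE.COM";
        };

        zone "example.com" {
            type master;
            file "dynamic/example.com.db";
            update-policy {
                // wide-open rule for testing: any principal that authenticates may update the zone
                grant * subdomain example.com. ANY;
            };
        };

    Once updates succeed with the permissive rule, it can be narrowed to the specific principals or ms-self/krb5-self rule types described in the BIND ARM.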

    Read the article

  • /etc/crontab or any user crontab is not being executed

    - by ian
    My server is CentOS 5. When I edit /etc/crontab, or edit any user's crontab (including root's) via the "crontab -e" command, it just adds "(system) RELOAD (/etc/crontab)" or "(admin) RELOAD (cron/admin)" to the log. There is no CMD in /var/log/cron. Sample entries in /var/log/cron:

        Aug 10 10:21:33 localhost crontab[31688]: (root) BEGIN EDIT (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) REPLACE (root)
        Aug 10 10:21:42 localhost crontab[31688]: (root) END EDIT (root)
        Aug 10 10:22:01 localhost crond[2688]: (root) RELOAD (cron/root)

    Result of "service crond status": crond (pid 1345) is running... The command "cat /var/log/messages | grep cron" does not give anything.

    Contents of /etc/cron.allow:

        admin
        root

    Contents of /etc/crontab:

        SHELL=/bin/bash
        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        MAILTO=root
        HOME=/
        # run-parts
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly
        * * * * * root run-parts /bin/date >> /data/date.txt

    Result of ps aux | grep cron:

        root 1345 0.0 0.1 5268 1204 ? Ss 11:43 0:00 crond

    Contents of admin's crontab:

        * * * * * /bin/date >> /data/date.txt

    Note that it's not only admin's crontab that's not running; all cron jobs are not running. Any ideas why they aren't running?
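
    Two sketches that are often useful in this "tables reload but never run" situation. As a side note, the last line of /etc/crontab passes a file to run-parts, which expects a directory, so that particular job could never work; a direct command entry and some generic checks look like this (paths are the standard CentOS 5 ones):

        # a direct command entry in /etc/crontab, no run-parts, for a single script or binary
        * * * * * root /bin/date >> /data/date.txt

        # things worth checking when crond reloads tables but logs no CMD lines
        service crond restart                      # force a clean reload of all tables
        cat /var/spool/cron/root                   # confirm the table crond actually reads
        grep -i cron /var/log/audit/audit.log      # SELinux denials can silently block job execution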

    Read the article

  • Microsoft Arc Mouse OS X

    - by meepz
    I recently bought a new MacBook Pro with Mountain Lion 10.8 on it. The only portable mouse I have is my Microsoft Arc Mouse. I wanted to use it with the laptop, so I installed IntelliPoint 8.2 for Mac from Microsoft's website. According to their website, this driver is for OS X 10.4-10.7. I thought that wouldn't be too much of an issue, but unfortunately for me, the driver installs fine and the mouse is detected, yet I get no movement, and when I click the buttons nothing happens. I took the mouse with me on a business trip to the EU, and before I left I checked that the mouse worked with my desktop, which is running Windows 7; it worked without any hiccups. I'm not too sure where the OS differs from 10.7 to 10.8. I found an article online but it doesn't pertain to my mouse, although it could be of assistance. I have tried my version of their adjustment, but I am not too knowledgeable about low-level hardware/software modifications, so I may have done it wrong. Here's the link: http://refluxions.wordpress.com/2008/08/18/mac-os-x-mouse-madnessfixed/ I get the following details when I check mouse info in the IntelliPoint preferences pane:

        The following Microsoft mouse devices are currently connected to your Macintosh driven by the Intellipoint software.
        Arc Mouse
        Vendor name: Microsoft
        Product name: Microsoft AE 2.4GHz Transceiver 5.0
        Vendor ID: 045E
        Product ID: 074F
        Device version: 0140

    If anyone has any suggestions on how to fix this, it would be greatly appreciated! I love the mouse and I'm here in the EU for another two weeks. Thanks

    Read the article

  • Ubuntu 12.10 loaded; after that the boot sector changed from Win to GRUB

    - by Rupam Roy
    After installing Ubuntu 12.10 on my PC and giving it a path on the external HD, only its root dir went into that, and all the files are on the HD of my PC. Now I need the external HD every time to boot into either Windows or Linux. I deleted the partition made by Linux from Windows Disk Management, and now I want to change the boot sector of my PC's HD back to Windows. The PC is not starting up and shows a GRUB failure. I have the original Windows 7 OS. I tried going to the command line from it, but what is the command that takes me to the DVD? I've tried 'cd dvd' and 'cd/ dvd'. Please help.
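
    For reference, once booted from the Windows 7 install DVD the usual path is Repair your computer -> Command Prompt, and then the boot sector tools rather than cd. A sketch of the commands (the DVD's drive letter doesn't matter for this):

        REM rewrite the MBR (removes GRUB), then the partition boot sector, then rebuild the BCD
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd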

    Read the article

  • Windows 7 re-installation

    - by GTX OC
    I need to reinstall Windows 7, but the problem is that all my partitions are set to the dynamic disk type. Windows cannot install on dynamic disk partitions. During the installation process there is an option to format the drive, but I cannot change the disk type. Is there any way to convert the partitions back into primary/basic ones so that I can re-install Windows? I am a complete fool; I don't even know why I converted the partition in which Windows was installed to the dynamic type. I have a 1 TB HDD with 4 partitions, and all of them are dynamic. Thanks in advance.

    Read the article

  • Running phpMyAdmin with XAMPP on Ubuntu 12.10

    - by Luigi Tiburzi
    I know it is a common problem and there are many solutions on the web, but I'm trying everything and nothing is working: I can't get phpMyAdmin running on my machine. I installed XAMPP through:

        sudo tar xvfz ./Downloads/xampp-linux-1.8.1.tar.gz -C /opt

    then I did the chmod trick that is supposed to put an end to access issues, and I changed the default location of my PHP projects from /var/www to Dropbox/php. Then I started XAMPP in the usual way:

        sudo /opt/lampp/lampp start

    When I try to run one of my PHP projects, the output on the web is fine, but if, for example, I type localhost in my browser, I get "It works" and not the usual XAMPP interface. Most of all, when I try to access localhost/phpmyadmin I get the login page, enter the username (root) and password, and I get:

        You don't have permission to access /phpmyadmin/index.php on this server.
        Apache/2.2.22 (Ubuntu) Server at localhost Port 80

    I tried the "Require all granted" trick and some others, but nothing is working. I even tried to uninstall phpMyAdmin and reinstall it, but that is not working either. I don't know how to proceed. Thanks for your help.
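
    Both symptoms (the "It works" page and the "Apache/2.2.22 (Ubuntu)" footer) come from Ubuntu's own Apache answering on port 80, not from XAMPP, so the Ubuntu phpMyAdmin package's access rules are what rejects the login. A sketch of checking and switching over to XAMPP (stopping the distro Apache is an assumption about what's wanted):

        sudo netstat -tlnp | grep ':80 '      # see which apache owns port 80
        sudo service apache2 stop             # stop Ubuntu's Apache
        sudo update-rc.d apache2 disable      # keep it from starting again at boot
        sudo /opt/lampp/lampp restart         # XAMPP's Apache (and its bundled phpMyAdmin) now serve localhost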

    Read the article
