Search Results

Search found 22893 results on 916 pages for 'message queue'.

  • Extended JMS Support

    - by ACShorten
    In a previous post I discussed the real-time JMS integration we added in FW4.1, which is also available as patches for FW2.2. There are some additional aspects of this integration I did not mention which may be of interest:

    - JMS Topic Support - In that post I concentrated on JMS Queue support but failed to mention that the MDB and the outgoing real-time JMS also support JMS Topics. JMS Queues are typically used for point-to-point decoupled integration, while JMS Topics are used for hub integration based on Publish and Subscribe.
    - JMS Selector Support - By default the MDB will process every message from a JMS resource (Queue or Topic). If you want to alter this behaviour and selectively filter JMS messages, you can use JMS Selectors to specify the conditions under which the MDB processes a message. JMS Selectors allow filters to be specified on elements in the JMS Header and JMS Message Properties using an SQL-like syntax. Note: JMS Selectors do not support filters on body elements.
    - JMS Header Support - It is possible to place custom information in the JMS Header and JMS Message Properties for outgoing messages (so that other applications can use JMS Selectors as well, if necessary). This is only available when installing Patches 11888040 (FW4.1) and 11850795 (FW2.2).

    These facilities, coupled with the JMS facilities described in the previous posts, give the product JMS integration capabilities that can be used with configuration rather than coding. Of course, the JMS facility I have described can also be used in conjunction with SOA Suite to provide greater levels of traceability and management.
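
    To make the selector and custom-property ideas concrete, here is a minimal standalone JMS sketch. It is not taken from the product; the JNDI names (jms/ConnectionFactory, jms/OrderQueue), the region property, and the selector expression are illustrative assumptions.

      import javax.jms.*;
      import javax.naming.InitialContext;

      public class SelectiveConsumer {
          public static void main(String[] args) throws Exception {
              InitialContext ctx = new InitialContext();
              // Hypothetical JNDI names; substitute the resources configured in your server.
              ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
              Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

              Connection conn = cf.createConnection();
              try {
                  Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

                  // Producer side: custom information goes into message properties,
                  // which is what the JMS Header Support described above exposes for outgoing messages.
                  MessageProducer producer = session.createProducer(queue);
                  TextMessage out = session.createTextMessage("<order/>");
                  out.setStringProperty("region", "EMEA");
                  producer.send(out);

                  // Consumer side: the selector filters on header fields and properties only,
                  // never on the message body.
                  String selector = "region = 'EMEA' AND JMSPriority > 2";
                  MessageConsumer consumer = session.createConsumer(queue, selector);
                  conn.start();
                  Message in = consumer.receive(5000); // wait up to 5 seconds
                  if (in instanceof TextMessage) {
                      System.out.println(((TextMessage) in).getText());
                  }
              } finally {
                  conn.close();
              }
          }
      }

    A message-driven bean would express the same filter declaratively through its messageSelector activation configuration property rather than in code.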

    Read the article

  • 2 way SSL between SOA and OSB

    - by Johnny Shum
    If you need to use 2-way SSL between a SOA composite and external partner links, you can follow these steps:

    1. Create the identity keystores, trust keystores, and server certificates.
    2. Set up keystores and SSL on WebLogic.
    3. Set up the server to use 2-way SSL.
    4. Configure your SOA composite's partner link to use 2-way SSL.
    5. Configure the SOA engine for 2-way SSL.

    In this case, I use SOA and OSB for the test. I started with separate OSB and SOA domains. I deployed two SOAP-based proxies on OSB and two composites on SOA. In SOA, one composite invokes an OSB proxy service and the other is invoked by OSB. Similarly, in OSB, one proxy invokes a SOA composite and the other is invoked by SOA.

    1. Create the identity keystores, trust keystores and the server certificates

    Since this is a development environment, I use the JDK's keytool to create the stores and use a self-signed certificate. For a production environment, you should use certificates from a trusted certificate authority like Verisign. I created the script below to show what is needed in this step. The only requirement is that when creating the SOA identity certificate, you MUST use the alias mykey.

      STOREPASS=welcome1
      KEYPASS=welcome1

      # generate identity keystores for soa and osb. Note: For SOA, you MUST use alias mykey
      echo "creating stores"
      keytool -genkey -alias mykey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=soa, C=US" -keystore soa-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS
      keytool -genkey -alias osbkey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=osb, C=US" -keystore osb-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS

      # listing keystore contents
      echo "listing stores contents"
      keytool -list -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS
      keytool -list -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS

      # exporting certs from stores
      echo "export certs from stores"
      keytool -exportcert -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS -file soacert.der
      keytool -exportcert -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS -file osbcert.der

      # import certs to trust stores
      echo "import certs"
      keytool -importcert -alias osbkey -keystore soa-trust-keystore.jks -storepass $STOREPASS -file osbcert.der -keypass $KEYPASS
      keytool -importcert -alias mykey -keystore osb-trust-keystore.jks -storepass $STOREPASS -file soacert.der -keypass $KEYPASS

    SOA Suite uses the JDK's SSL implementation for outbound traffic instead of WebLogic's implementation, so you will need to import the partner's public cert into the trusted keystore used by SOA. The default trusted keystore for SOA is DemoTrust.jks, located in $MW_HOME/wlserver_10.3/server/lib (this is set in the startup script with -Djavax.net.ssl.trustStore). If you use your own trusted keystore, then you will need to import the partner's cert into that keystore instead.

      keytool -importcert -alias osbkey -keystore $MW_HOME/wlserver_10.3/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase -file osbcert.der -keypass $KEYPASS

    If you do not perform this step, you will encounter this exception at runtime when SOA invokes the OSB service using 2-way SSL:

      Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    2. Set up keystores and SSL on WebLogic

    First, log in to the WebLogic console and navigate to the server's Configuration->Keystores tab. Change the keystores type to Custom Identity and Custom Trust and enter the rest of the fields. Then navigate to the SSL tab, enter the fields in the Identity section and expand the Advanced section. Since I am using a self-signed cert in my VM environment, I disabled Hostname Verification; in a real production system this should not be the case. I also enabled the option "Use Server Certs", so that the application uses the server cert to initiate HTTPS traffic (it is important to enable this in OSB). Last, enable the SSL listen port in the server's Configuration->General tab.

    3. Set up the server to use 2-way SSL

    In the Server->Configuration->SSL->Advanced section there is an option Two Way Client Cert Behavior; set this to Client Certs Requested and Enforced. Repeat steps 2 and 3 on OSB. After all these configurations, you have to restart all the servers.

    4. Configure your SOA composite's partner link to use 2-way SSL

    You do this by modifying the composite.xml in your project: locate the partner link reference and add the property oracle.soa.two.way.ssl.enabled.

      <reference name="callosb" ui:wsdlLocation="helloword.wsdl">
        <interface.wsdl interface="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.interface(Hello_PortType)"/>
        <binding.ws port="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.endpoint(Hello_Service/Hello_Port)"
                    location="helloword.wsdl" soapVersion="1.1">
          <property name="weblogic.wsee.wsat.transaction.flowOption"
                    type="xs:string" many="false">WSDLDriven</property>
          <property name="oracle.soa.two.way.ssl.enabled">true</property>
        </binding.ws>
      </reference>

    In OSB, you should have checked the HTTPS required flag in the proxy's transport configuration. After this, rebuild the composite jar file so it is ready to deploy in the EM console later.

    5. Configure the SOA engine for 2-way SSL

    Oracle SOA Suite uses both the Oracle WebLogic Server and Sun Secure Socket Layer (SSL) stacks for two-way SSL configurations. For inbound web service bindings, Oracle SOA Suite uses the Oracle WebLogic Server infrastructure and, therefore, the Oracle WebLogic Server libraries for SSL; this is already covered by steps 2 and 3 above. For outbound web service bindings, Oracle SOA Suite uses JRF HttpClient and, therefore, the Sun JDK libraries for SSL. You configure this for the SOA engine in the Enterprise Manager console: select soa-infra->SOA Administration->Common Properties, then click the link at the bottom of the page, "More SOA Infra Advanced Infrastructure Configuration Properties", and enter the full path of the SOA identity keystore in the value field of the KeyStoreLocation attribute. Click Apply and Return, then navigate to the domain->Security->Credentials. Here you provide the password to the keystore. Note: the alias of the certificate must be mykey as described in step 1, so you only need to provide the password to the identity keystore. You accomplish this by:

    - Click Create Map.
    - In the Map Name field, enter SOA, and click OK.
    - Click Create Key.
    - Enter the details, where the password is the password for the SOA identity keystore.

    6. Test and troubleshooting

    Once the setup is complete and the servers are restarted, you can deploy the composite in the EM console and test it. In case of error, you can read the server log file to determine the cause. For example, if you have not completed step 5 and test 2-way SSL, you will see this in the log when invoking OSB from BPEL:

      java.lang.Exception: oracle.sysman.emSDK.webservices.wsdlapi.SoapTestException: oracle.fabric.common.FabricInvocationException: Unable to access the following endpoint(s): https://localhost.localdomain:7002/default/helloword

      ####<Sep 22, 2012 2:07:37 PM CDT> <Error> <oracle.soa.bpel.engine.ws> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0AFDAEF20610F8FD89C5> ............ <11d1def534ea1be0:-4034173:139ef56d9f0:-8000-00000000000002ec> <1348340857956> <BEA-000000> <got FabricInvocationException sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    If you have not enabled WebLogic SSL to use the server certificate in the console and you invoke the SOA composite from OSB using 2-way SSL, you will see this error:

      ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e2> <1348340857776> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.>
      ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e4> <1348340857786> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.>
      ####<Sep 22, 2012 2:27:21 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-0000000000000124> <1348342041926> <BEA-090497> <HANDSHAKE_FAILURE alert received from localhost - 127.0.0.1. Check both sides of the SSL configuration for mismatches in supported ciphers, supported protocol versions, trusted CAs, and hostname verification settings.>

    References:
    http://docs.oracle.com/cd/E23943_01/admin.1111/e10226/soacompapp_secure.htm#CHDCFABB (Section 5.6.4)
    http://docs.oracle.com/cd/E23943_01/web.1111/e13707/ssl.htm#i1200848
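
    As a closing aside, the split between the identity and trust keystores above maps directly onto the two halves of a two-way handshake. Here is a small, generic Java sketch of building a client SSLContext from them; it is not part of the SOA/OSB configuration, and the file names and password are just the ones from the sample script:

      import javax.net.ssl.*;
      import java.io.FileInputStream;
      import java.security.KeyStore;

      public class TwoWaySslClientContext {
          public static SSLContext build() throws Exception {
              char[] storePass = "welcome1".toCharArray(); // from the sample script; never hard-code in production

              // Identity keystore: holds *our* private key and certificate,
              // presented to the server when it requests a client certificate.
              KeyStore identity = KeyStore.getInstance("JKS");
              identity.load(new FileInputStream("soa-default-keystore.jks"), storePass);
              KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
              kmf.init(identity, storePass);

              // Trust keystore: holds the *partner's* public certificate,
              // used to validate the certificate the server presents to us.
              KeyStore trust = KeyStore.getInstance("JKS");
              trust.load(new FileInputStream("soa-trust-keystore.jks"), storePass);
              TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
              tmf.init(trust);

              SSLContext ctx = SSLContext.getInstance("TLS");
              ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
              return ctx;
          }
      }

    The "unable to find valid certification path" error shown earlier is what you see when the trust side of this pair does not contain the partner's certificate.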

    Read the article

  • Ubuntu One file sync error: SSL Handshake

    - by Jay Ó Broin
    Ubuntu One repeatedly tries to sync my files but keeps disconnecting before anything is uploaded. Here are some of the messages from syncdaemon.log:

      2012-01-08 12:12:34,068 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-2.ubuntuone.com, port 443.
      2012-01-08 12:12:34,256 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made.
      2012-01-08 12:12:34,257 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made.
      2012-01-08 12:13:08,832 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection lost, reason: [Failure instance: Traceback (failure with no frames): <class 'OpenSSL.SSL.Error'>: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]].
      2012-01-08 12:13:08,833 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' failed with the error: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
      2012-01-08 12:13:08,844 - ubuntuone.SyncDaemon.ActionQueue - WARNING - Connection lost: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
      2012-01-08 12:13:38,550 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'WAITING' (queues WORKING connection 'With User With Network')>; queue: 1378; hash: 0) ----
      2012-01-08 12:15:08,870 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-2.ubuntuone.com, port 443.
      2012-01-08 12:15:09,033 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made.
      2012-01-08 12:15:09,034 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made.
      2012-01-08 12:15:33,676 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection lost, reason: [Failure instance: Traceback (failure with no frames): <class 'OpenSSL.SSL.Error'>: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]].
      2012-01-08 12:15:33,677 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' failed with the error: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
      2012-01-08 12:15:33,692 - ubuntuone.SyncDaemon.ActionQueue - WARNING - Connection lost: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
      2012-01-08 12:15:38,551 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'WAITING' (queues WORKING connection 'With User With Network')>; queue: 1378; hash: 0) ----

    I'm using Ubuntu 11.10.

    Read the article

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
    A true asynchronous service operation is not the one that returns void, but the one that is marked as IsOneWay=true using BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.

    Read the article

  • WebLogic Weekly for June 20th, 2011

    - by james.bayer
    Welcome to the first edition of the WebLogic Weekly. The WebLogic Server team has been trying to extend our community outreach to new mediums like an Oracle WebLogic YouTube Channel (how-to videos and feature showcases), Twitter (sharing WebLogic links, typically blogs), and a Facebook page, to do a better job of sharing information, providing learning alternatives to product documentation and, perhaps most importantly, collecting feedback from all of our users using the tools they prefer. This is our attempt to provide a round-up of what has been going on in WebLogic over the past week. If you would like to have something shared here, use the #weblogic tag on tweets, post on the Oracle WebLogic Facebook page, or comment on these blog entries.

    Blogs
    - WebLogic Server: Listing Groups of an Authenticated User by Steve Button
    - Weblogic, QBrowser And Topics by Eric Elzinga
    - Weblogic, Topics And (Non)-Durable Subscribers by Eric Elzinga
    - Database Web Service using Toplink DB Provider by Vishal Jain
    - WebLogic Server – Use the Execution Context ID in Applications – Lessons From Hansel and Gretel by James Bayer
    - Getting All Server’s Lifecycle State in a Domain by Jay SenSharma
    - Steps to Move Messages From One Queue To Another Queue Using WLST (Updated Version) by Ravish Mody

    Events
    If you want to share a story of something innovative you or your organization has done with WebLogic Server or other Fusion Middleware, you could win a pass to Oracle Open World 2011 and share the story there. See Ruma Sanyal's posting on the Application Grid blog for details. The deadline for submissions is July 22nd, 2011.

    Read the article

  • Autoscaling in a modern world… Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog. I need to make sure I take the time; it helps me to focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in.

    I wanted to explore autoscaling. Autoscaling is not a native part of Azure. The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts to needed scaling changes. But that means you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way towards being able to autoscale without having to start from scratch.

    The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect.

    I started with a set of HOLs that gave me a good basis to understand the mechanics. I worked through labs 1 and 2 just to get the feel, but let's start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premise. The web app has a few buttons on it: one set adds messages to a queue, another removes them; a second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage. We will continue our saga in the next post…

    Read the article

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud" ... specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a GitHub git repository. I'll describe the current architecture.

    Current system (all hosted on a single server):
    - GitHub - configured with a webhook pointing at ...
    - ASP.NET MVC application - to accept the webhook from git. It pushes a message onto ...
    - Azure service bus Queue - which is drained by ...
    - Windows Service - pulls the message from the queue and ...
    - Fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk and finally ...
    - Operates on the data in git to produce a static HTML website hosted/served by IIS.

    The system works really well, actually ... but I would like to get out of the business of managing the VM, and move to using some combination of Azure web and worker roles. But because the system relies so heavily on the git repository on the local filesystem, I'm finding it difficult to figure out how to architect it in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk ... but the performance/responsiveness of the system sort of depends on the repository being available and only having to fetch diffs, which is relatively quick, as opposed to periodically having to fetch the entire (somewhat large) git repository if the web or worker role was recycled, or something. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective) ... please feel free to ask any clarifying questions if there's something I omitted. Thanks!

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

      static void Main()
      {
          // Load parameters from app.config.
          // Get documents from queue.
          var files = someInterface.GetFiles(someFilterOrRegexPattern);
          foreach (var file in files)
          {
              // Extract metadata from the file.
              // Validate some attributes of the file; add any validation errors to an in-memory
              // structure (e.g. List<ValidationErrors>).
              if (isValid)
              {
                  var fileData = File.ReadAllBytes(file);
                  // Upload using some wrapper for an ORM or DAL
                  someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
              }
              else
              {
                  // Bounce the file
              }
          }
          // Report any validation errors (via message bus or SMTP or some such).
      }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# Console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • Distribute Sort Sample Service

    - by kaleidoscope
    How it works? Using the front-end of the service, a user can specify a size in MB for the input data set to sort.

    Algorithm:
    - CreateAndSplit: The CreateAndSplit task generates the input data and stores it as 10 blobs in the utility storage. The URLs to these blobs are packaged as Separate work items and written to the queue.
    - Separate: The Separate task reads the blobs with the random numbers created in the CreateAndSplit task and places the random numbers into buckets. The interval of the numbers that go into one bucket is chosen so that the expected amount of numbers (assuming a uniform distribution of the numbers in the original data set) is around 100 kB. Each bucket is represented as a blob container in utility storage. Whenever there are 10 blobs in one bucket (i.e., the placement in this bucket is complete because we had 10 original splits), the Separate task will generate a new Sort task and write the task into the queue.
    - Sort: The Sort task merges all blobs in a single bucket and sorts them using a standard sort algorithm. The result is stored as a blob in utility storage.
    - Concat: The Concat task merges the results of all Sort tasks into a single blob. This blob can be downloaded as a text file using this Web page. As the resulting file is presented in text format, the size of the file is likely to be larger than the specified input file.

    Anish
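
    If it helps to see the bucket idea in code, here is a minimal, in-memory Java sketch of the Separate -> Sort -> Concat steps. It is not the sample's actual implementation (which stores buckets as blobs and coordinates the tasks through a queue); the input size and bucket count below are illustrative.

      import java.util.*;

      // In-memory sketch of Separate -> Sort -> Concat.
      public class BucketSortSketch {
          public static void main(String[] args) {
              Random rnd = new Random(42);
              int[] input = rnd.ints(1_000, 0, 1_000_000).toArray();

              // "Separate": place each number into a bucket chosen by its value range.
              int bucketCount = 10;
              int interval = 1_000_000 / bucketCount;
              List<List<Integer>> buckets = new ArrayList<>();
              for (int i = 0; i < bucketCount; i++) buckets.add(new ArrayList<>());
              for (int n : input) buckets.get(Math.min(n / interval, bucketCount - 1)).add(n);

              // "Sort": sort each bucket independently (each could be a separate worker task).
              for (List<Integer> bucket : buckets) Collections.sort(bucket);

              // "Concat": appending the sorted buckets in order yields a fully sorted result.
              List<Integer> result = new ArrayList<>();
              for (List<Integer> bucket : buckets) result.addAll(bucket);

              System.out.println("same size as input: " + (result.size() == input.length));
          }
      }

    The key invariant is that every value in bucket i is smaller than every value in bucket i+1, which is why the buckets can be sorted independently and then simply concatenated.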

    Read the article

  • OpenGL ES Loading

    - by kuroutadori
    I want to know what the norm is for loading rendering code. Take a button: when the application is loaded, a texture is loaded which has the image of the button on it. When the button is tapped, it adds a loader into a queue, which is loaded on the render thread. It then loads up an array buffer with vertexes and tex coords when render is called, then adds to a render tree, then renders. The render function looks like this:

      void render() {
          update();
          mBaseRenderer->render();
      }

    update() is when the queue is checked to see if anything needs loading. mBaseRenderer->render() is the render tree. What I am asking then is, should I even have the update() there at all and instead have everything preloaded before it renders? If I can have it loaded when needed, for instance when there is a tap, then how can it be done? (My current code causes a dequeueing buffer error (Unknown error: -75) which I assume is to do with OpenGL ES and the context.)

    Read the article

  • How should I host our scalable worker processes?

    - by Pieter Breed
    We are designing a new architecture for an enterprise business. The principle we've followed so far is not to develop what you can (possibly buy and) deploy, i.e., don't reinvent any wheels. In this way we've decided on CQRS, RabbitMQ, Riak and a bunch of other things. We still need to write /some/ business code though, and this will be in the form of worker processes, which will consume commands from a message queue and, after any side-effects, produce events onto another message queue. The idea behind this is that via the competing-consumers design we will have a scalable design right out of the box.

    One option is writing a management infrastructure that will know how to:
    - deploy code
    - instantiate processes
    - kill processes
    - update configuration
    - etc.

    i.e., provide fault tolerance and scalability. Also, this is exactly what something like GAE and Heroku does for you, but in a public setting, and in our organization public is bad. My question is, is there an out-of-the-box solution that we can use to host our consumers in? Like a private cloud or private platform-as-a-service. Private Heroku or GAE. Is there some kind of software or software product with which we can do all of these things and thereby get scalability and fault tolerance over our consumers?
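
    For what it's worth, here is a minimal sketch of such a worker using the RabbitMQ Java client, since RabbitMQ is already on the list. The queue names, host, and the handleCommand() helper are illustrative assumptions; the point is only that running more copies of this process gives you competing consumers on the same queue.

      import com.rabbitmq.client.*;
      import java.nio.charset.StandardCharsets;

      // A competing-consumers worker: consumes commands, produces events.
      public class Worker {
          public static void main(String[] args) throws Exception {
              ConnectionFactory factory = new ConnectionFactory();
              factory.setHost("localhost"); // assumption: broker runs locally
              Connection connection = factory.newConnection();
              Channel channel = connection.createChannel();

              channel.queueDeclare("commands", true, false, false, null);
              channel.queueDeclare("events", true, false, false, null);
              channel.basicQos(1); // hand each worker one unacked message at a time

              Consumer consumer = new DefaultConsumer(channel) {
                  @Override
                  public void handleDelivery(String tag, Envelope env,
                                             AMQP.BasicProperties props, byte[] body) throws java.io.IOException {
                      String command = new String(body, StandardCharsets.UTF_8);
                      String event = handleCommand(command);          // business side-effects happen here
                      channel.basicPublish("", "events", null,        // publish the resulting event
                              event.getBytes(StandardCharsets.UTF_8));
                      channel.basicAck(env.getDeliveryTag(), false);  // ack only after the event is out
                  }
              };
              channel.basicConsume("commands", false, consumer);       // manual acks => safe redelivery

              System.out.println("worker started; run more instances to scale out");
          }

          private static String handleCommand(String command) {
              return "handled:" + command; // placeholder for real business logic
          }
      }

    Because the message is acknowledged only after the event has been published, a worker that dies mid-command leaves the message unacknowledged and the broker redelivers it to another consumer; the deploy/instantiate/kill/configure part of the list above is what a PaaS-style host would still need to provide.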

    Read the article

  • print jobs are held until the VirtualBox guest OS is reboot

    - by broiyan
    Here is the setup:
    - VirtualBox 4.1.20 (which the Help window describes as 4.1.12_Ubuntu)
    - Extension Pack 4.1.20 (for USB support)
    - Windows 7 Home Premium as a guest operating system on VirtualBox
    - Ubuntu 12.04 with dist-upgrades to September 2012 as the host operating system
    - Fuji Xerox DocuPrint P205b, which I believe is a GDI printer, connected via USB

    The problem is that often print jobs will sit in the print queue and nothing comes out of the printer. The printer status for the first item in the queue will be Printing even though nothing happens. Then, upon rebooting Windows, the print jobs get printed, seemingly simultaneously with the rebooting process, that is, as Windows reloads. One way to avoid this problem is to reboot Windows with the printer cable attached and then submit the print jobs; the print jobs then get printed in a timely manner. Perhaps VirtualBox has a problem with USB being plug-and-play and hot-pluggable. It's not convenient to have the printer plugged in when Windows boots because: one, this is a laptop, and two, I may boot Windows for a purpose other than printing and not anticipate needing to print. Are there any recommendable fixes for this problem?

    Read the article

  • #altnetseattle - Kanban

    - by GeekAgilistMercenary
    The two main concepts of Kanban are to keep the queues minimal and to maintain visibility. Management/leadership needs to make sure the Kanban queue doesn't get starved. This is key and also very challenging, since the queue needs to be minimal but also can't get too small during the course of work; this is to maintain maximum velocity. The phases of the Kanban need to be kept flowing too: bottlenecks need to be removed ASAP when they are brought up.

    Victory Wall – I dig that idea. Somewhere to look to see the success of the team.

    The POs work in Rally or other tools for some client management, but that causes issues with the lack of "visibility" – a key fundamental ideal and part of Kanban. One of the big issues is fitting things into a sprint when Kanban is used with Scrum, but longer sprints are wasteful. Kanban work sizes are of a set size.

    At this point I got a bit sidetracked by the actual conversation and missed out on note taking. Overall, I would say people doing Kanban and Lean-style software development are some of the happiest coders around. The clean focus, good velocity, sizing, and other approaches that come with Kanban help developers be the rock stars and succeed. This is definitely a topic I will be commenting on a lot more in the near future.

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based Javascript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that Javascript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual javascript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of Javascript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my Javascript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, require them to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

    Read the article

  • unable to send mail from postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send an email through Google from my localhost (via PHP 5.3), but Google keeps blocking my requests. I tried to follow the solutions given to a few similar questions, but for some reason they do not work. I followed these instructions to configure it - http://www.dnsexit.com/support/mailrelay/postfix.html

    Now for the config data. My main.cf file looks like this:

      relayhost = [smtp.gmail.com]:587
      smtp_fallback_relay = [relay.google.com]
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options =

    My sasl_passwd looks like this:

      [smtp.gmail.com]:587 [email protected]:password

    And this is how the mail.log rows look:

      Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from=
      Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected]
      Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active)
      Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command))
      Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected]
      Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE
      Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active)
      Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed

    Read the article

  • Which Ubuntu linux kernel tree matches my installed kernel?

    - by Rmano
    Answering a recent question, and before that, trying to see if a patch which is fundamental for my machine had been included in a kernel release, I have found the following problem: how can I match the kernel version I have, which is

      [:~] % uname -a
      Linux samsung-romano 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    with the exact kernel source, which I suppose should be stored in http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=summary? That page lists quite a lot of tags, but none of them corresponds to 3.13.0-29, which is my running kernel right now. The mapping should be in https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable, where it is said that the 3.13 Ubuntu kernel is based on 3.13.11 --- I think. But from there to finding the tree I have installed is not straightforward.

    Notice: I know I can install the kernel sources corresponding to my installed kernel. But I do not want to install them; I would like to have a pointer to the git tree to be able to browse it online (and check for commits, patches, etc.). The best option seems to be linux3.13-y.review or linux3.13-y.queue, but I am unable to find where these trees are marked for the release --- if I understand the policy correctly, in -review the patches are accumulated for testing, and in -queue accumulated for the next minor release/update --- but I am unable to find the exact release tree. I mean, a tag equivalent to 3.13.0-29 was cut here.

    Read the article

  • Android 2D terrain scrolling

    - by Nikola Ninkovic
    I want to make infinite 2D terrain based on my algorithm, then move it along the Y axis (to the left). This is how I did it:

      public class Terrain {
          Queue<Integer> _bottom;
          Paint _paint;
          Bitmap _texture;
          Point _screen;
          int _numberOfColumns = 100;
          int _columnWidth = 20;

          public Terrain(int screenWidth, int screenHeight, Bitmap texture) {
              _bottom = new LinkedList<Integer>();
              _screen = new Point(screenWidth, screenHeight);
              _numberOfColumns = screenWidth / 6;
              _columnWidth = screenWidth / _numberOfColumns;
              for (int i = 0; i <= _numberOfColumns; i++) {
                  // Generate terrain point and put it into _bottom queue
              }
              _paint = new Paint();
              _paint.setStyle(Paint.Style.FILL);
              _paint.setShader(new BitmapShader(texture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT));
          }

          public void update() {
              _bottom.remove();
              // Algorithm calculates next point
              _bottom.add(nextPoint);
          }

          public void draw(Canvas canvas) {
              Iterator<Integer> i = _bottom.iterator();
              int counter = 0;
              Path path = new Path();
              path.moveTo(0, _screen.y);
              while (i.hasNext()) {
                  path.lineTo(counter, _screen.y - i.next());
                  counter += _columnWidth;
              }
              path.lineTo(_screen.x, _screen.y);
              path.lineTo(0, _screen.y);
              canvas.drawPath(path, _paint);
          }
      }

    The problem is that the game is too 'fast', so I tried pausing the thread with Thread.sleep(50) in the run() method of my game thread, but then it looks too torn. Well, is there any way to slow down the drawing of my terrain?
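
    One common way to decouple scroll speed from frame rate (not from the original post; the class name, the columnsPerSecond constant and the method names are illustrative) is to advance the terrain based on elapsed time instead of once per draw:

      // Time-based scrolling sketch, meant to sit next to the Terrain class above.
      public class ScrollClock {
          private final double columnsPerSecond;   // illustrative tuning knob
          private long lastTimeNanos = System.nanoTime();
          private double accumulator = 0.0;         // fractional columns owed

          public ScrollClock(double columnsPerSecond) {
              this.columnsPerSecond = columnsPerSecond;
          }

          // Returns how many columns the terrain should advance on this frame.
          public int columnsToAdvance() {
              long now = System.nanoTime();
              double elapsedSeconds = (now - lastTimeNanos) / 1_000_000_000.0;
              lastTimeNanos = now;
              accumulator += elapsedSeconds * columnsPerSecond;
              int whole = (int) accumulator;
              accumulator -= whole;
              return whole;
          }
      }

    In the game loop you would call columnsToAdvance() once per frame and call update() that many times; at, say, 5 columns per second the terrain then moves at the same speed whether the device renders at 30 or 60 fps, without any Thread.sleep in the render loop.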

    Read the article

  • multi-thread in mmorpg server

    - by jean
    In an MMORPG there is a tick function to update every object's state in a map. The function is triggered by a timer at a fixed interval, so each map's update can be dispatched to a different thread. On the other side, the server has its own threads for handling incoming player packets: the I/O threads. Generally, the handler for an incoming packet runs on an I/O thread. So there is a problem: thread synchronization. I have considered two methods:

    1. Synchronize with a mutex. An I/O thread locks a mutex before executing a handler function, and the map thread locks the same mutex before it executes the map's update.
    2. Execute all handler functions in the map's thread. The I/O thread only queues the incoming handler and lets the map thread pop the queue and then call the handler function.

    These two have a disadvantage: delay. For method 1, if the map's tick function is running, then all client requests have to wait for the lock to be released. For method 2, if the map's tick function is running, all client requests have to wait for the next tick to be handled. Of course, there is another method: add a lock to every function that uses data accessed by both the I/O threads and the map thread. But this is hard to maintain and easy to get wrong; it requires carefully checking whether each variable is accessed by both kinds of thread.

    My problem is: is there a better way to do this? Notice that when I say map I mean a logical concept: no interactions can happen between two maps except transport. An I/O thread means a thread in the 3rd-party network library which is used to handle client requests.
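
    As a sketch of the second method (queue the handlers, run them on the map's tick thread), here is a minimal Java example; the class and method names are illustrative, not from any particular server:

      import java.util.concurrent.ConcurrentLinkedQueue;

      // I/O threads never touch map state directly; they only enqueue handler tasks.
      // The map's tick thread drains the queue and then runs the world update, so all
      // game state for this map is mutated from a single thread.
      public class MapLoop {
          private final ConcurrentLinkedQueue<Runnable> pendingHandlers = new ConcurrentLinkedQueue<>();

          // Called from I/O threads when a client packet arrives.
          public void post(Runnable handler) {
              pendingHandlers.offer(handler);
          }

          // Called by the timer on the map's tick thread at a fixed interval.
          public void tick() {
              Runnable handler;
              while ((handler = pendingHandlers.poll()) != null) {
                  handler.run();          // apply client requests queued since the last tick
              }
              updateObjects();            // then advance every object's state
          }

          private void updateObjects() {
              // world simulation for this map goes here
          }
      }

    Queued handlers run at most one tick late, which bounds the delay described for method 2, and because all map state is touched from one thread per map, no further locking is needed on that state.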

    Read the article

  • “Why do transactional messages all have the same priority?”

    - by John Breakwell
    I answered this question on the MSMQ forum on MSDN and thought it worth sharing here. The poster wanted to know why all transactional messages have a fixed priority of zero (instead of 0 through 7). They wanted the guaranteed delivery of messages to a queue but wanted to assign different levels of priority.

    Some aspects of MSMQ were defined way back in the last century and this is one of them. If I remember right, the reason was to avoid the following scenario:

    - You have a single transaction of 3 messages (a, b and c) with priorities 5, 3 and 4 respectively.
    - The messages are sent in order a, b, c.
    - The messages arrive in the queue and are arranged in order a, c, b to reflect priority order.
    - This breaks the guaranteed-order part of the transaction.

    I know that very few people send more than one message in a transaction, but that is a scenario that MSMQ has to be able to handle; for the majority, including yourself, this scenario is irrelevant, which is why you are surprised by the absence of transactional priorities. The options, therefore, available to the Microsoft developers were to:

    - Implement code that allowed you to send messages with variable priority as long as any messages within the same transaction were the same priority, or
    - Define a set priority for all transactional messages.

    As you can understand, option 1 would be a complicated arrangement with all the necessary enforcement, error handling, user education and documentation, etc. Sure, it would have been possible to implement option 1, but I expect the product group decided to invest the development time in some other aspect of MSMQ. Now, with five versions out there, it would be confusing to change how the product operates, in addition to potentially breaking existing systems that have been working fine for years. So, balancing cost and risk against customer demand, I would not expect this feature to ever change.

    Read the article

  • Undefined symbols after installing new xcode 3.2.3 build

    - by toofah
    I want to move to the new XCode 3.2.3 GM Seed build for development, but when I bring up my project I get 'base sdk missing' because my project is set to use iPhone SDK 3.0. If I change 'base SDK' to iPhone 3.2 or 4.0 and then compile I get a lot of errors that I don't understand. I dumped a few of them below. Can anyone tell me what I am missing? Also, can someone confirm that if I choose 'base sdk' of iPhone 3.2 or 4.0 that I can still choose 'target device' of iPhone 3.0 and not force my customers to install the new SDK. I really don't want to be the app that forces my customers to upgrade their OS. Thanks! Undefined symbols: ".objc_class_name_NSObject", referenced from: .objc_class_name_FlurryAPI in libFlurry.a(FlurryAPI.o) .objc_class_name_FlurrySession in libFlurry.a(FlurrySession.o) .objc_class_name_FlurryHTTPEater in libFlurry.a(FlurryHTTPEater.o) .objc_class_name_FlurryHTTPResponse in libFlurry.a(FlurryHTTPResponse.o) .objc_class_name_FlurryConnectionDelegate in libFlurry.a(FlurryConnectionDelegate.o) .objc_class_name_FlurryAd in libFlurry.a(FlurryAd.o) .objc_class_name_FlurryAdParser in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@NSObject in libFlurry.a(FlurryAdView.o) .objc_class_name_FlurryAdImage in libFlurry.a(FlurryAdImage.o) .objc_class_name_FlurryAdImpression in libFlurry.a(FlurryAdImpression.o) .objc_class_name_FlurryPageViewDelegate in libFlurry.a(FlurryPageViewDelegate.o) .objc_class_name_FlurryAdTheme in libFlurry.a(FlurryAdTheme.o) .objc_class_name_FlurryAdHook in libFlurry.a(FlurryAdHook.o) .objc_class_name_FlurryAdProperties in libFlurry.a(FlurryAdProperties.o) .objc_class_name_FlurryFileCache in libFlurry.a(FlurryFileCache.o) .objc_class_name_FlurryEvent in libFlurry.a(FlurryEvent.o) .objc_class_name_FlurryProtocolData in libFlurry.a(FlurryProtocolData.o) .objc_class_name_FlurryAdAssignment in libFlurry.a(FlurryAdAssignment.o) .objc_class_name_FlurryAdAppStoreConnectionDelegate in libFlurry.a(FlurryAdAppStoreConnectionDelegate.o) .objc_class_name_FlurryHeartBeater in libFlurry.a(FlurryHeartBeater.o) .objc_class_name_FlurryImageCache in libFlurry.a(FlurryImageCache.o) .objc_class_name_FlurryUtil in libFlurry.a(FlurryUtil.o) .objc_class_name_FlurryAdNavigationDelegate in libFlurry.a(FlurryAdNavigationDelegate.o) .objc_class_name_FlurryAdLocation in libFlurry.a(FlurryAdLocation.o) .objc_class_name_FlurryAdDimension in libFlurry.a(FlurryAdDimension.o) .objc_class_name_FlurryAdTextStyle in libFlurry.a(FlurryAdTextStyle.o) ".objc_class_name_NSFileManager", referenced from: literal-pointer@_OBJC@_cls_refs@NSFileManager in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSFileManager in libFlurry.a(FlurryFileCache.o) ".objc_class_name_NSString", referenced from: literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryHTTPEater.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryHTTPResponse.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryAd.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryAdCanvasViewController.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryFileCache.o) literal-pointer@_OBJC@_cls_refs@NSString in libFlurry.a(FlurryImageCache.o) ".objc_class_name_NSError", referenced from: literal-pointer@_OBJC@_cls_refs@NSError in libFlurry.a(FlurryUtil.o) "_OBJC_METACLASS_$_FlurryAPI", referenced from: 
_OBJC_METACLASS_$_NFlurryAPI in NFlurryAPI.o ".objc_class_name_UIWindow", referenced from: literal-pointer@_OBJC@_cls_refs@UIWindow in libFlurry.a(FlurryAdCanvasViewController.o) ".objc_class_name_NSException", referenced from: literal-pointer@_OBJC@_cls_refs@NSException in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSException in libFlurry.a(FlurryUtil.o) ".objc_class_name_UIColor", referenced from: literal-pointer@_OBJC@_cls_refs@UIColor in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@UIColor in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@UIColor in libFlurry.a(FlurryAdCanvasViewController.o) literal-pointer@_OBJC@_cls_refs@UIColor in libFlurry.a(FlurryAdCanvasView.o) "_OBJC_CLASS_$_FlurryAPI", referenced from: _OBJC_CLASS_$_NFlurryAPI in NFlurryAPI.o ".objc_class_name_NSMutableSet", referenced from: literal-pointer@_OBJC@_cls_refs@NSMutableSet in libFlurry.a(FlurryAdAssignment.o) ".objc_class_name_UIFont", referenced from: literal-pointer@_OBJC@_cls_refs@UIFont in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@UIFont in libFlurry.a(FlurryAdCanvasView.o) ".objc_class_name_UIImage", referenced from: literal-pointer@_OBJC@_cls_refs@UIImage in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@UIImage in libFlurry.a(FlurryAdImage.o) ".objc_class_name_UIApplication", referenced from: literal-pointer@_OBJC@_cls_refs@UIApplication in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@UIApplication in libFlurry.a(FlurryAdCanvasViewController.o) literal-pointer@_OBJC@_cls_refs@UIApplication in libFlurry.a(FlurryAdAppStoreConnectionDelegate.o) ".objc_class_name_UILabel", referenced from: literal-pointer@_OBJC@_cls_refs@UILabel in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@UILabel in libFlurry.a(FlurryAdCanvasViewController.o) literal-pointer@_OBJC@_cls_refs@UILabel in libFlurry.a(FlurryAdCanvasView.o) ".objc_class_name_UIView", referenced from: literal-pointer@_OBJC@_cls_refs@UIView in libFlurry.a(FlurryAdView.o) .objc_class_name_FlurryAdView in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@UIView in libFlurry.a(FlurryAdCanvasViewController.o) .objc_class_name_FlurryAdListView in libFlurry.a(FlurryAdListView.o) ".objc_class_name_NSMutableString", referenced from: literal-pointer@_OBJC@_cls_refs@NSMutableString in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSMutableString in libFlurry.a(FlurryHTTPEater.o) literal-pointer@_OBJC@_cls_refs@NSMutableString in libFlurry.a(FlurryAdView.o) ".objc_class_name_NSTimer", referenced from: literal-pointer@_OBJC@_cls_refs@NSTimer in libFlurry.a(FlurryHeartBeater.o) ".objc_class_name_NSMutableData", referenced from: literal-pointer@_OBJC@_cls_refs@NSMutableData in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSMutableData in libFlurry.a(FlurryConnectionDelegate.o) literal-pointer@_OBJC@_cls_refs@NSMutableData in libFlurry.a(FlurryAdImpression.o) literal-pointer@_OBJC@_cls_refs@NSMutableData in libFlurry.a(FlurryEvent.o) ".objc_class_name_NSNumber", referenced from: literal-pointer@_OBJC@_cls_refs@NSNumber in libFlurry.a(FlurryAPI.o) literal-pointer@_OBJC@_cls_refs@NSNumber in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSNumber in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@NSNumber in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@NSNumber in libFlurry.a(FlurryAdImpression.o) literal-pointer@_OBJC@_cls_refs@NSNumber in libFlurry.a(FlurryAdCanvasViewController.o) 
"_objc_exception_match", referenced from: +[FlurrySession createActiveFlurrySession:] in libFlurry.a(FlurrySession.o) +[FlurrySession dataForSessions:requestAds:] in libFlurry.a(FlurrySession.o) +[FlurrySession initialTimestamp] in libFlurry.a(FlurrySession.o) ".objc_class_name_UINavigationItem", referenced from: literal-pointer@_OBJC@_cls_refs@UINavigationItem in libFlurry.a(FlurryAdCanvasViewController.o) ".objc_class_name_UIViewController", referenced from: literal-pointer@_OBJC@_cls_refs@UIViewController in libFlurry.a(FlurryAdView.o) .objc_class_name_FlurryAdCanvasViewController in libFlurry.a(FlurryAdCanvasViewController.o) ".objc_class_name_NSMutableArray", referenced from: literal-pointer@_OBJC@_cls_refs@NSMutableArray in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSMutableArray in libFlurry.a(FlurryHTTPEater.o) literal-pointer@_OBJC@_cls_refs@NSMutableArray in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@NSMutableArray in libFlurry.a(FlurryImageCache.o) literal-pointer@_OBJC@_cls_refs@NSMutableArray in libFlurry.a(FlurryAdNavigationDelegate.o) ".objc_class_name_UIScreen", referenced from: literal-pointer@_OBJC@_cls_refs@UIScreen in libFlurry.a(FlurryAdCanvasViewController.o) ".objc_class_name_NSURLCache", referenced from: literal-pointer@_OBJC@_cls_refs@NSURLCache in libFlurry.a(FlurryHTTPEater.o) ".objc_class_name_NSNotificationCenter", referenced from: literal-pointer@_OBJC@_cls_refs@NSNotificationCenter in libFlurry.a(FlurryAPI.o) literal-pointer@_OBJC@_cls_refs@NSNotificationCenter in libFlurry.a(FlurryAdParser.o) literal-pointer@_OBJC@_cls_refs@NSNotificationCenter in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@NSNotificationCenter in libFlurry.a(FlurryHeartBeater.o) ".objc_class_name_NSInvocation", referenced from: literal-pointer@_OBJC@_cls_refs@NSInvocation in libFlurry.a(FlurryPageViewDelegate.o) ".objc_class_name_NSURL", referenced from: literal-pointer@_OBJC@_cls_refs@NSURL in libFlurry.a(FlurrySession.o) literal-pointer@_OBJC@_cls_refs@NSURL in libFlurry.a(FlurryHTTPEater.o) literal-pointer@_OBJC@_cls_refs@NSURL in libFlurry.a(FlurryAdView.o) literal-pointer@_OBJC@_cls_refs@NSURL in libFlurry.a(FlurryAdCanvasViewController.o) "_objc_exception_extract", referenced from: +[FlurryAPI startSession:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI startSession:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI endSession] in libFlurry.a(FlurryAPI.o) +[FlurryAPI endSession] in libFlurry.a(FlurryAPI.o) +[FlurryAPI pauseSession] in libFlurry.a(FlurryAPI.o) +[FlurryAPI pauseSession] in libFlurry.a(FlurryAPI.o) +[FlurryAPI resumeSession] in libFlurry.a(FlurryAPI.o) +[FlurryAPI resumeSession] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:withParameters:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:withParameters:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:timed:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:timed:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:withParameters:timed:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logEvent:withParameters:timed:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI endTimedEvent:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI endTimedEvent:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logError:message:exception:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logError:message:exception:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI logError:message:error:] in libFlurry.a(FlurryAPI.o) +[FlurryAPI 
[linker output abbreviated — the original listing repeats every entry twice and runs on for several pages] The build fails with "Undefined symbols" errors against libFlurry.a. "_objc_exception_try_enter" and "_objc_exception_try_exit" are reported as undefined, referenced from methods throughout the library (+[FlurryAPI startSession:], +[FlurryAPI endSession], +[FlurryAPI logEvent:withParameters:timed:], +[FlurryAPI logError:message:error:], +[FlurrySession createActiveFlurrySession:], +[FlurrySession sendSessionsToServerWithTimeout:useWebView:requestAds:], -[FlurryAdView refreshWithAd], -[FlurryPageViewDelegate tabBarController:didSelectViewController:], -[FlurryAdCanvasViewController dealloc], +[FlurryFileCache createInstanceWithApiKey:], +[FlurryAdAssignment createInstance], +[FlurryHeartBeater createAndStartInstance:], +[FlurryImageCache createInstanceWithFileCache:], and so on). In addition, ".objc_class_name_*" symbols for NSMutableURLRequest, NSRunLoop, NSKeyedUnarchiver, NSData, NSDate, NSDateFormatter, NSURLRequest, UIBarButtonItem, UIDevice and UIImageView are reported as undefined, referenced from literal-pointer@_OBJC@_cls_refs entries in FlurryHTTPEater.o, FlurrySession.o, FlurryFileCache.o, FlurryAdParser.o, FlurryAdView.o, FlurryAdImpression.o, FlurryEvent.o, FlurryAdCanvasViewController.o, FlurryAdAppStoreConnectionDelegate.o and FlurryAdCanvasView.o.

    Read the article

  • How do I send automated e-mails from Drupal using Messaging and Notifications?

    - by Adrian
    I am working on a Notifications plugin, and after starting to write my notes down about how to do this, decided to just post them here. Please feel free to come make modifications and changes. Eventually I hope to post this on the Drupal handbook as well. Thanks. --Adrian Sending automated e-mails from Drupal using Messaging and Notifications To implement a notifications plugin, you must implement the following functions: Use hook_messaging, hook_token_list and hook_token_values to create the messages that will be sent. Use hook_notifications to create the subscription types Add code to fire events (eg in hook_nodeapi) Add all UI elements to allow users to subscribe/unsubscribe Understanding Messaging The Messaging module is used to compose messages that can be delivered using various formats, such as simple mail, HTML mail, Twitter updates, etc. These formats are called "send methods." The backend details do not concern us here; what is important are the following concepts: TOKENS: tokens are provided by the "tokens" module. They allow you to write keywords in square brackets, [like-this], that can be replaced by any arbitrary value. Note: the token groups you create must match the keys you add to the $events-objects[$key] array. MESSAGE KEYS: A key is a part of a message, such as the greetings line. Keys can be different for each send method. For example, a plaintext mail's greeting might be "Hi, [user]," while an HTML greeing might be "Hi, [user]," and Twitter's might just be "[user-firstname]: ". Keys can have any arbitrary name. Keys are very simple and only have a machine-readable name and a user-readable description, the latter of which is only seen by admins. MESSAGE GROUPS: A group is a bunch of keys that often, but not always, might be used together to make up a complete message. For example, a generic group might include keys for a greeting, body, closing and footer. Groups can also be "subclassed" by selecting a "fallback" group that will supply any keys that are missing. Groups are also associated with modules; I'm not sure what these are used for. Understanding Notifications The Notifications module revolves around the following concepts: SUBSCRIPTIONS: Notifications plugins may define one or more types of subscriptions. For example, notifications_content defines subscriptions for: Threads (users are notified whenever a node or its comments change) Content types (users are notified whenever a node of a certain type is created or is changed) Users (users are notified whenever another user is changed) Subscriptions refer to both the user who's subscribed, how often they wish to be notified, the send method (for Messaging) and what's being subscribed to. This last part is defined in two steps. Firstly, a plugin defines several "subscription fields" (through a hook_notifications op of the same name), and secondly, "subscription types" (also an op) defines which fields apply to each type of subscription. For example, notifications_content defines the fields "nid," "author" and "type," and the subscriptions "thread" (nid), "nodetype" (type), "author" (author) and "typeauthor" (type and author), the latter referring to something like "any STORY by JOE." Fields are used to link events to subscriptions; an event must match all fields of a subscription (for all normal subscriptions) to be delivered to the recipient. The $subscriptions object is defined in subsequent sections. 
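    For orientation, a hand-built "thread" subscription might look roughly like the sketch below. This is a hedged illustration only — the property names ($subscription->fields, send_interval, send_method, and so on) are assumptions pieced together from the description above, not a verified API — and, as noted next, the module would rather you did not build these objects by hand at all.

        // Hypothetical sketch: subscribe the current user to one node's thread,
        // i.e. the "thread" subscription type with its single 'nid' field.
        // Assumes a loaded $node and the global $user; property names are assumptions.
        global $user;
        $subscription = new stdClass();
        $subscription->type = 'thread';                      // subscription type defined by the plugin
        $subscription->event_type = 'node';                  // only 'node' events can match it
        $subscription->uid = $user->uid;                     // who is subscribed
        $subscription->send_interval = 0;                    // deliver immediately
        $subscription->send_method = 'mail';                 // a Messaging send method
        $subscription->fields = array('nid' => $node->nid);  // an event must match every field
        notifications_save_subscription($subscription);      // the manual path mentioned below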
Notifications prefers that you don't create these objects yourself, preferring you to call the notifications_get_link() function to create a link that users may click on, but you can also use notifications_save_subscription and notifications_delete_subscription to do it yourself. EVENTS: An event is something that users may be notified about. Plugins create the $event object then call notifications_event($event). This either sends out notifications immediately, queues them to send out later, or both. Events include the type of thing that's changed (eg 'node', 'user'), the ID of the thing that's changed (eg $node-nid, $user-uid) and what's happened to it (eg 'create'). These are, respectively, $event-type, $event-oid (object ID) and $event-action. Warning: notifications_content_nodeapi also adds a $event-node field, referring to the node itself and not just $event-oid = $node-nid. This is not used anywhere in the core notifications module; however, when the $event is passed back to the 'query' op (see below), we assume the node is still present. Events do not refer to the user they will be referred to; instead, Notifications makes the connection between subscriptions and events, using the subscriptions' fields. MATCHING EVENTS TO SUBSCRIPTIONS: An event matches a subscription if it has the same type as the event (eg "node") and if the event matches all the correct fields. This second step is determined by the "query" hook op, which is called with the $event object as a parameter. The query op is responsible for giving Notifications a value for all the fields defined by the plugin. For example, notifications_content defines the 'nid', 'type' and 'author' fields, so its query op looks like this (ignore the case where $event_or_user = 'user' for now): $event_or_user = $arg0; $event_type = $arg1; $event_or_object = $arg2; if ($event_or_user == 'event' && $event_type == 'node' && ($node = $event_or_object->node) || $event_or_user == 'user' && $event_type == 'node' && ($node = $event_or_object)) { $query[]['fields'] = array( 'nid' => $node->nid, 'type' => $node->type, 'author' => $node->uid, ); return $query; After extracting the $node from the $event, we set $query[]['fields'] to a dictionary defining, for this event, all the fields defined by the module. As you can tell from the presence of the $query object, there's way more you can do with this op, but they are not covered here. DIGESTING AND DEDUPING: Understanding the relationship between Messaging and Notifications Usually, the name of a message group doesn't matter, but when being used with Notifications, the names must follow very strict patterns. Firstly, they must start with the name "notifications," and then are followed by either "event" or "digest," depending on whether the message group is being used to represent either a single event or a group of events. For 'events,' the third part of the name is the "type," which we get from Notification's $event-type (eg: notifications_content uses 'node'). The last part of the name is the operation being performed, which comes from Notification's $event-action. For example: notifications-event-node-comment might refer to the message group used when someone comments on a node notifications-event-user-update to a user who's updated their profile Hyphens cannot appear anywhere other than to separate the parts of these words. 
For 'digest' messages, the third and fourth part of the name come from hook_notification's "event types" callback, specifically this line: $types[] = array( 'type' => 'node', 'action' => 'insert', ... 'digest' => array('node', 'type'), ); $types[] = array( 'type' => 'node', 'action' => 'update', ... 'digest' => array('node', 'nid'), ); In this case, the first event type (node insertion) will be digested with the notifications-digest-node-type message template providing the header and footer, likely saying something like "the following [type] was created." The second event type (node update) will be digested with the notifications-digest-node-nid message template. Data Structure and Callback Reference $event The $event object has the following members: $event-type: The type of event. Must match the type in hook_notification::"event types". {notifications_event} $event-action: The action the event describes. Most events are sorted by [$event-type][$event-action]. {notifications_event}. $event-object[$object_type]: All objects relevant to the event. For example, $event-object['node'] might be the node that the event describes. $object_type can come from the 'event types' hook (see below). The main purpose appears to be to be passed to token_replace_multiple as the second parameter. $event-object[$event-type] is assumed to exist in the short digest processing functions, but this doesn't appear to be used anywhere. Not saved in the database; loaded by hook_notifications::"event load" $event-oid: apparently unused. The id of the primary object relevant to this event (eg the node's nid). $event-module: apparently unused $event-params[$key]: Mainly a place for plugins to save random data. The main module will serialize the contents of this array but does not use it in any way. However, notifications_ui appears to do something weird with it, possibly by using subscriptions' fields as keys into this array. I'm not sure why though. hook_notifications op 'subscription types': returns an array of subscription types provided by the plugin, in the form $key = array(...) with the following members: event_type: this subscription can only match events whose $event-type has this value. Stored in the database as notifications.event_type for every individual subscription. Apparently, this can be overiden in code but I wouldn't try it (see notifications_save_subscription). fields: an unkeyed array of fields that must be matched by an event (in addition to the event_type) for it to match this subscription. Each element of this array must be a key of the array returned by op 'subscription fields' which in turn must be used by op 'query' to actually perform the matching. title: user-readable title for their subscriptions page (eg the 'type' column in user/%uid/notifications/subscriptions) description: a user-readable description. page callback: used to add a supplementary page at user/%uid/notifications/blah. This and the following are used by notifications_ui as a part of hook_menu_alter. Appears to be partially deprecated. user page: user/%uid/notifications/blah. op 'event types': returns an array of event types, with each event type being an array with the following members: type: this will match $event-type action: this will match $event-action digest: an array with two ordered (non-keyed) elements, "type" and "field." 'type' is used as an index into $event-objects. 'field' is also used to group events like so: $event-objects[$type]-$field. 
For example, 'field' might be 'nid' - if the object is a node, the digest lines will be grouped by node ID. Finally, both are used to find the correct Messaging template; see discussion above. description: used on the admin "Notifications-Events" page name: unused, use Messaging instead line: deprecated, use Messaging instead Other Stuff This is an example of the main query that inserts an event into the queue: INSERT INTO {notifications_queue} (uid, destination, sid, module, eid, send_interval, send_method, cron, created, conditions) SELECT DISTINCT s.uid, s.destination, s.sid, s.module, %d, // event ID s.send_interval, s.send_method, s.cron, %d, // time of the event s.conditions FROM {notifications} s INNER JOIN {notifications_fields} f ON s.sid = f.sid WHERE (s.status = 1) AND (s.event_type = '%s') // subscription type AND (s.send_interval >= 0) AND (s.uid <> %d) AND ( (f.field = '%s' AND f.intval IN (%d)) // everything from 'query' op OR (f.field = '%s' AND f.intval = %d) OR (f.field = '%s' AND f.value = '%s') OR (f.field = '%s' AND f.intval = %d)) GROUP BY s.uid, s.destination, s.sid, s.module, s.send_interval, s.send_method, s.cron, s.conditions HAVING s.conditions = count(f.sid)
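    Pulling the event side together, the firing step described above (build an $event, then call notifications_event()) might look roughly like this from a plugin's hook_nodeapi(). It is a hedged sketch: the property names mirror the description in this post ($event->type, $event->action, $event->oid, $event->objects, $event->params), and 'mymodule' is a placeholder module name.

        // Hedged sketch of firing a notifications event when a node is updated.
        function mymodule_nodeapi(&$node, $op, $a3 = NULL, $a4 = NULL) {
          if ($op == 'update') {
            $event = new stdClass();
            $event->module = 'node';
            $event->type   = 'node';            // matched against subscriptions' event_type
            $event->action = 'update';          // with type, selects notifications-event-node-update
            $event->oid    = $node->nid;        // id of the primary object
            $event->objects['node'] = $node;    // used for token replacement and digest grouping
            $event->params = array();           // free-form extra data, serialized by the module
            notifications_event($event);        // sends now, queues for digest, or both
          }
        }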

    Read the article

  • Creating Item Templates as Visual Studio 2010 Extensions

    - by maziar
    Technorati Tags: Visual Studio 2010 Extension,T4 Template,VSIX,Item Template Wizard This blog post briefly introduces creation of an item template as a Visual studio 2010 extension. Problem specification Assume you are writing a Framework for data-oriented applications and you decide to include all your application messages in a SQL server database table. After creating the table, your create a class in your framework for getting messages with a string key specified.   var message = FrameworkMessages.Get("ChangesSavedSuccess");   Everyone would say this code is so error prone, because message keys are not strong-typed, and might create errors in application that are not caught in tests. So we think of a way to make it strong-typed, i.e. create a class to use it like this:   var message = Messages.ChangesSavedSuccess; in Messages class the code looks like this: public string ChangesSavedSuccess {     get { return FrameworkMessages.Get("ChangesSavedSuccess"); } }   And clearly, we are not going to create the Messages class manually; we need a class generator for it.   Again assume that the application(s) that intend to use our framework, contain multiple sub-systems. So each sub-system need to have it’s own strong-typed message class that call FrameworkMessages.Get method. So we would like to make our code generator an Item Template so that each developer would easily add the item to his project and no other works would be necessary.   Solution We create a T4 Text Template to generate our strong typed class from database. Then create a Visual Studio Item Template with this generator and publish it.   What Are T4 Templates You might be already familiar with T4 templates. If it’s so, you can skip this section. T4 Text Template is a fine Visual Studio file type (.tt) that generates output text. This file is a mixture of text blocks and code logic (in C# or VB). For example, you can generate HTML files, C# classes, Resource files and etc with use of a T4 template.   Syntax highlighting In Visual Studio 2010 a T4 Template by default will no be syntax highlighted and no auto-complete is supported for it. But there is a fine visual studio extension named ‘Visual T4’ which can be downloaded free from VisualStudioGallery. This tool offers IntelliSense, syntax coloring, validation, transformation preview and more for T4 templates.     How Item Templates work in Visual Studio Visual studio extensions allow us to add some functionalities to visual studio. In our case we need to create a .vsix file that adds a template to visual studio item templates. Item templates are zip files containing the template file and a meta-data file with .vstemplate extension. This .vstemplate file is an XML file that provides some information about the template. A .vsix file also is a zip file (renamed to .vsix) that are open with visual studio extension installer. (Re-installing a vsix file requires that old one to be uninstalled from VS: Tools > Extension Manager.) Installing a vsix will need Visual Studio to be closed and re-opened to take effect. Visual studio extension installer will easily find the item template’s zip file and copy it to visual studio’s template items folder. You can find other visual studio templates in [<VS Install Path>\Common7\IDE\ItemTemplates] and you can edit them; but be very careful with VS default templates.   How Can I Create a VSIX file 1. Visual Studio SDK depending on your Visual Studio’s version, you need to download Microsoft Visual Studio SDK. 
Note that if you have VS 2010 SP1, you will need to download and install VS 2010 SP1 SDK; VS 2010 SDK will not be installed (unless you change registry value that indicated your service pack number). Here is the link for VS 2010 SP1 SDK. After downloading, Run it and follow the wizard to complete the installation.   2. Create the file you want to make it an Item Template Create a project (or use an existing one) and add you file, edit it to make it work fine.   Back to our own problem, we need to create a T4 (.tt) template. VS-Prok: Add > New Item > General > Text Template Type a file name, ex. Message.tt, and press Add. Create the T4 template file (in this blog I do not intend to include T4 syntaxes so I just write down the code which is clear enough for you to understand)   <#@ template debug="false" hostspecific="true" language="C#" #> <#@ output extension=".cs" #> <#@ Assembly Name="System.Data" #> <#@ Import Namespace="System.Data.SqlClient" #> <#@ Import Namespace="System.Text" #> <#@ Import Namespace="System.IO" #> <#     var connectionString = "server=Maziar-PC; Database=MyDatabase; Integrated Security=True";     var systemName = "Sys1";     var builder = new StringBuilder();     using (var connection = new SqlConnection(connectionString))     {         connection.Open();         var command = connection.CreateCommand();         command.CommandText = string.Format("SELECT [Key] FROM [Message] WHERE System = '{0}'", systemName);         var reader = command.ExecuteReader();         while (reader.Read())         {             builder.AppendFormat("        public static string {0} {{ get {{ return FrameworkMessages.Get(\"{0}\"); }} }}\r\n", reader[0]);         }     } #> namespace <#= System.Runtime.Remoting.Messaging.CallContext.LogicalGetData("NamespaceHint") #> {     public static class <#= Path.GetFileNameWithoutExtension(Host.TemplateFile) #>     { <#= builder.ToString() #>     } } As you can see the T4 template connects to a database, reads message keys and generates a class. Here is the output: namespace MyProject.MyFolder {     public static class Messages     {         public static string ChangesSavedSuccess { get { return FrameworkMessages.Get("ChangesSavedSuccess"); } }         public static string ErrorSavingChanges { get { return FrameworkMessages.Get("ErrorSavingChanges"); } }     } }   The output looks fine but there is one problem. The connectionString and systemName are hard coded. so how can I create an flexible item template? One of features of item templates in visual studio is that you can create a designer wizard for your item template, so I can get connection information and system name there. now lets go on creating the vsix file.   3. Create Template In visual studio click on File > Export Template a wizard will show up. if first step click on Item Template on in the combo box select the project that contains Messages.tt. click next. Select Messages.tt from list in second step. click next. In the third step, you should choose References. For this template, System and System.Data are needed so choose them. click next. write down template name, description, if you like add a .ico file as the icon file and also preview image. Uncheck automatically add the templare … . Copy the output location in clip board. click finish.     4. Create VSIX Project In VS, Click File > New > Project > Extensibility > VSIX Project Type a name, ex. FrameworkMessages, Location, etc. The project will include a .vsixmanifest file. 
Fill in fields like Author, Product Name, Description, etc.   In Content section, click on Add Content. choose content type as Item Template. choose source as file. remember you have the template file address in clipboard? now paste it in front of file. click OK.     5. Build VSIX Project That’s it, build the project and go to output directory. You see a .vsix file. you can run it now. After restarting VS, if you click on a project > Add > New Item, you will see your item in list and you can add it. When you add this item to a project, if it has not references to System and System.Data they will be added. but the problem mentioned in step 2 is seen now.     6. Create Design Wizard for your Item Template Create a project i.e. Windows Application named ‘Framework.Messages.Design’, but better change its output type to Class Library. Add References to Microsoft.VisualStudio.TemplateWizardInterface and envdte Add a class Named MessagesDesigner in your project and Implement IWizard interface in it. This is what you should write: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TemplateWizard; using EnvDTE; namespace Framework.Messages.Design {     class MessageDesigner : IWizard     {         private bool CanAddProjectItem;         public void RunStarted(object automationObject, Dictionary<string, string> replacementsDictionary, WizardRunKind runKind, object[] customParams)         {             // Prompt user for Connection String and System Name in a Windows form (ShowDialog) here             // (try to provide good interface)             // if user clicks on cancel of your windows form return;             string connectionString = "connection;string"; // Set value from the form             string systemName = "system;name"; // Set value from the form             CanAddProjectItem = true;             replacementsDictionary.Add("$connectionString$", connectionString);             replacementsDictionary.Add("$systemName$", systemName);         }         public bool ShouldAddProjectItem(string filePath)         {             return CanAddProjectItem;         }         public void BeforeOpeningFile(ProjectItem projectItem)         {         }         public void ProjectFinishedGenerating(Project project)         {         }         public void ProjectItemFinishedGenerating(ProjectItem projectItem)         {         }         public void RunFinished()         {         }     } }   before your code runs  replacementsDictionary contains list of default template parameters. After that, two other parameters are added. Now build this project and copy the output assembly to [<VS Install Path>\Common7\IDE] folder.   your designer is ready.     The template that you had created is now added to your VSIX project. In windows explorer open your template zip file (extract it somewhere). open the .vstemplate file. first of all remove <ProjectItem SubType="Code" TargetFileName="$fileinputname$.cs" ReplaceParameters="true">Messages.cs</ProjectItem> because the .cs file is not to be intended to be a part of template and it will be generated. change value of ReplaceParameters for your .tt file to true to enable parameter replacement in this file. now right after </TemplateContent> end element, write this: <WizardExtension>   <Assembly>Framework.Messages.Design</Assembly>   <FullClassName>Framework.Messages.Design.MessageDesigner</FullClassName> </WizardExtension>   one other thing that you should do is to edit your .tt file and remove your .cs file. 
Lines 8 and 9 of your .tt file should be:     var connectionString = "$connectionString$";     var systemName = "$systemName$"; this parameters will be replaced when the item is added to a project. Save the contents to a zip file with same file name and replace the original file.   now again build your VSIX project, uninstall your extension. close VS. now run .vsix file. open vs, add an item of type messages to your project, bingo, your wizard form will show up. fill the fields and click ok, values are replaced in .tt file added.     that’s it. tried so hard to make this post brief, hope it was not so long…   Cheers Maziar
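    As a small follow-up to step 6, the prompt left as a comment in RunStarted ("Prompt user for Connection String and System Name in a Windows form") could be filled in roughly as below. ConnectionDialog and its two properties are hypothetical — any modal dialog that hands back the two strings will do — and the wizard project would need a reference to System.Windows.Forms.

        public void RunStarted(object automationObject,
            Dictionary<string, string> replacementsDictionary,
            WizardRunKind runKind, object[] customParams)
        {
            // ConnectionDialog is a hypothetical WinForms dialog with two text boxes.
            using (var dialog = new ConnectionDialog())
            {
                if (dialog.ShowDialog() != DialogResult.OK)
                {
                    CanAddProjectItem = false;   // user cancelled: don't add the item
                    return;
                }
                CanAddProjectItem = true;
                replacementsDictionary.Add("$connectionString$", dialog.ConnectionString);
                replacementsDictionary.Add("$systemName$", dialog.SystemName);
            }
        }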

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing.  Some applications of this could include logging libraries, or possibly even something more advanced that may server up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time. Determining the caller using the stack One of the ways of doing this is to examine the call stack.  The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace.  Both involve examining the call stack, which is a non-trivial task, so care should be done not to do this in a performance-intensive situation. For our simple example let's say we are going to recreate the wheel and construct our own logging framework.  Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller.  We could easily do this as follows: 1: static void Log(object message) 2: { 3: // frame 1, true for source info 4: StackFrame frame = new StackFrame(1, true); 5: var method = frame.GetMethod(); 6: var fileName = frame.GetFileName(); 7: var lineNumber = frame.GetFileLineNumber(); 8: 9: // we'll just use a simple Console write for now 10: Console.WriteLine("{0}({1}):{2} - {3}", 11: fileName, lineNumber, method.Name, message); 12: } So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame.  This works fine, and if we tested it out by calling from a file such as this: 1: // File c:\projects\test\CallerInfo\CallerInfo.cs 2:  3: public class CallerInfo 4: { 5: Log("Hello Logger!"); 6: } We'd see this: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! This works well, and in fact CallStack and StackFrame are still the best ways to examine deeper into the call stack.  But if you only want to get information on the caller of your method, there is another option… Determining the caller at compile-time In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information.  These attributes can only be applied to methods with optional parameters with explicit defaults.  Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. 
These are the currently supported attributes available in the  System.Runtime.CompilerServices namespace": CallerFilePathAttribute – The path and name of the file that is calling your method. CallerLineNumberAttribute – The line number in the file where your method is being called. CallerMemberName – The member that is calling your method. So let’s take a look at how our Log method would look using these attributes instead: 1: static int Log(object message, 2: [CallerMemberName] string memberName = "", 3: [CallerFilePath] string fileName = "", 4: [CallerLineNumber] int lineNumber = 0) 5: { 6: // we'll just use a simple Console write for now 7: Console.WriteLine("{0}({1}):{2} - {3}", 8: fileName, lineNumber, memberName, message); 9: } Again, calling this from our sample Main would give us the same result: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member.  Thus, it does not give you as rich of detail as a StackFrame (which can give you the calling type as well and deeper frames, for example).  Also, these are supported through optional parameters, which means we could call our new Log method like this: 1: // They're defaults, why not fill 'em in 2: Log("My message.", "Some member", "Some file", -13); In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed!  How much greater?  Well lets crank through 1,000,000 iterations of each.  instead of logging to console, I’ll return the formatted string length of each.  Doing this, we get: 1: Time for 1,000,000 iterations with StackTrace: 5096 ms 2: Time for 1,000,000 iterations with Attributes: 196 ms So you see, using the attributes is much, much faster!  Nearly 25x faster in fact.  Summary There are a few ways to get caller information for a method.  The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost.  On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it. Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName
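    For anyone who wants to reproduce the 1,000,000-iteration comparison quoted above, a rough benchmark harness is sketched below. It follows the post's approach of returning the formatted string's length rather than writing to the console; the exact numbers will of course vary by machine and build configuration.

        using System;
        using System.Diagnostics;
        using System.Runtime.CompilerServices;

        static class CallerInfoBenchmark
        {
            static int LogWithStackFrame(object message)
            {
                var frame = new StackFrame(1, true);   // caller's frame, with source info
                return string.Format("{0}({1}):{2} - {3}",
                    frame.GetFileName(), frame.GetFileLineNumber(),
                    frame.GetMethod().Name, message).Length;
            }

            static int LogWithAttributes(object message,
                [CallerMemberName] string memberName = "",
                [CallerFilePath] string fileName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                return string.Format("{0}({1}):{2} - {3}",
                    fileName, lineNumber, memberName, message).Length;
            }

            static void Main()
            {
                const int iterations = 1000000;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++) LogWithStackFrame("My message.");
                Console.WriteLine("StackFrame: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < iterations; i++) LogWithAttributes("My message.");
                Console.WriteLine("Attributes: {0} ms", sw.ElapsedMilliseconds);
            }
        }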

    Read the article

  • Where would a spam bot be located?

    - by Tim
    I have a hosted website using a free hosting service, I received an email this afternoon saying that I have been suspended because my account has been compromised. Basically, someone is using my email account to mass send spam. I've changed all the passwords and everything but when my Gmail pulls the emails from the host it's still downloading loads of spam messages that show like this: This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: [email protected] SMTP error from remote mail server after end of data: host 198.91.80.251 [198.91.80.251]: 554 5.6.0 id=23634-03 - Rejected by MTA on relaying, from MTA([127.0.0.1]:10030): 554 Error: This email address has lost rights to send email from the system ------ This is a copy of the message, including all the headers. ------ Return-path: <[email protected]> Received: from keenesystems.com ([66.135.33.211]:2370 helo=server211) by absolut.x10hosting.com with esmtpsa (TLSv1:RC4-MD5:128) (Exim 4.77) (envelope-from <[email protected]>) id 1TGwSW-002hHe-Lc for [email protected]; Wed, 26 Sep 2012 13:35:44 -0500 MIME-Version: 1.0 Date: Wed, 26 Sep 2012 13:35:43 -0500 X-Priority: 3 (Normal) X-Mailer: Ximian Evolution 3.9.9 (8.5.3-6) Subject: New staff members wanted at Auction It Online From: [email protected] Reply-To: [email protected] To: "Nadia Monti" <[email protected]> Content-Type: text/plain Content-Transfer-Encoding: quoted-printable Message-ID: <OUTLOOK-IDM-9aed7054-6a3e-e1a4-1d5c-3e73377652a6@server211> Date : 26 September 2012=0ATime : 13:35=0ASender : Dennise Halcomb Head = Office Manager of RJ Auction Drop-Off Int.=0A=0ANice to meet you Nadia M= onti=0A=0ARJ ADO Ltd., a USA based company, offers a significant amount = of goods worldwide for our customers on eBay and other auction venues. = Our company's main target is to provide a suitable and cost-effective se= rvice for any person, company or fundraising company. The main purpose o= f the administrative assistant / sales support representative is to cont= ribute to the sales force and add convenience to our cost-effective serv= ice dedicated to individuals, businesses, and organizations worldwide. O= ur HR department obtained your resume from one of the various job-orient= ed websites just to offer you this post.=0A=0AWorking Schedule: This is = a part time and home-based offer. You won't need to spend more than 3 ho= urs each day. Your =0Aschedule will be flexible.=0A=0ASalary: At the end= of the trial period (it lasts for 1 month) you will be paid 1,800 EUR. = With the average volume of clients your overall income will raise up to = 3,000 EUR per month. After the trial period is over your base salary wil= l grow up to 2,500 EUR per month, so you will earn 5% commission from th= e transactions completed.=0A=0AWhere?: Italy Wide. As it is a stay at ho= me position all the communication will be carried out via email and via = phone.=0A=0ARequirements: Access to the internet during the workday and = basic microsoft office skills are needed. Basic knowledge of English is = required (most of the contacts will be in English).=0A=0ACosts and Fees:= There are NO costs at any time for our employees. All fees related to t= his position are covered by the RJ ADO Co. 
Ltd..=0A=0AFurther Hiring Pro= cess: If you are interested in position we offer, please reply to this e= mail and send us the copy of your resume for verification.=0A=0AAfter re= viewing all of the received applications we will reply to successful app= licants only. Then we'll offer to these successful applicants a position= within our firm on a trial period basis for one month beginning from th= e date you sign a trial agreement. During this trial period you will rec= eive full guidance and support. Employees on a one monthly trial period = are evaluated at least one week prior to the end of their trial. During = the trial, your supervisor can recommend termination. At the end of the = trial period, the supervisor can offer continued employment, extension o= f trial period, or termination. After the trial period you may ask for m= ore hours or continue full-time.=0A=0AIf you are interested in this posi= tion, just reply to this email and send any questions you have and the c= opy of your resume for verification.=0A=0AThank You,=0AHR-Manager of RJ = ADO Co. Ltd.=0A=0APermission Settings=0AYou have been referred to RJ Auc= tion Drop-Off If you feel you received this email in error or do not wis= h to receive future messages, please reply to this message with "remove"= in the subject field. We will immediately update our database according= ly. =0AWe apologize for any inconvenience caused.=0A=0ARJ Auction Drop-O= ff Co. Ltd. I'm not aware of how this has happened. I'm not sure how anyone could have got hold of my password. It's a simple wordpress install, at some point recently my host went down and there was a fresh install of wordpress with default admin accounts, I have a feeling it could be something to do with this. My question is, even though I've changed all my passwords it's all still happening, is there annywhere in paticular this script would be stored on my host. I really can't deal with having my hosting account suspended and my email account sending all this spam.

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >