Search Results

Search found 74550 results on 2982 pages for 'wcf data service'.

Page 255/2982

  • Windows Service is not Working.

    - by prateeksaluja20
    I made a Windows service in Visual Studio 2008 in C#. Inside the service I wrote only a single line of code: try { System.Diagnostics.Process.Start(@"E:\Users\Sk\Desktop\category.txt"); } catch { }. Then I added the project installer, set the serviceProcessInstaller1 Account property to LocalSystem, and set the serviceInstaller1 Start Type property to Automatic. I built the project and the build succeeded. After that I added another project, a setup project, to which I added the primary project output and the custom action "Primary output from DemoWindowsService (Active)". The setup built successfully. I installed it, went to Services, and started the service. The service started properly, but it is not performing the task. I checked that the path is correct, and I also tried System.Diagnostics.Process.Start(@"E:\Windows\system32\notepad.exe"), but the result is the same. I have tried a lot but have not found the answer, so please help me solve this problem.
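    For illustration, a minimal sketch of the service code as described in the question might look like the following (the class name and the assumption that the call lives in OnStart are mine, not the poster's):

        using System.ServiceProcess;

        public partial class DemoWindowsService : ServiceBase   // hypothetical class name
        {
            protected override void OnStart(string[] args)
            {
                try
                {
                    // Opens the file with its associated application (Notepad for .txt).
                    // Note: on Windows Vista and later, services run in session 0, so a
                    // process started this way has no visible window on the user's desktop,
                    // which can make the service appear to "do nothing" even though it ran.
                    System.Diagnostics.Process.Start(@"E:\Users\Sk\Desktop\category.txt");
                }
                catch
                {
                    // Swallowing the exception hides any failure; logging it (e.g. to the
                    // Windows event log) would make the problem much easier to diagnose.
                }
            }

            protected override void OnStop()
            {
            }
        }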

    Read the article

  • Web Service Security in Java

    - by WhoAmI
    My web service was created some time back using IBM JAX-RPC. As part of an enhancement, I need to add some security to the existing service. One way is to provide a handler: all requests and responses pass through that handler, and in the request I can apply authentication rules for each application/user accessing the service. Other than this, what are the possible ways of securing it? I have heard of something called WS-Security (WSSE) for web services. Is it possible to implement it for JAX-RPC, or can it be implemented only for JAX-WS? I need some helpful pointers on WS-Security so that I can start learning it. Other than a handler and WS-Security, is there any other way to make a service secure? Please help.

    Read the article

  • Windows Service on NetworkService account can't access remote (shared) directory

    - by Aetius
    I'm trying to access a remote shared folder from a Windows service running under the NetworkService account. However, I get failures when I try to do this: for example, Directory.Exists(servicePath) returns false, and FileSystemWatcher doesn't recognize any activity in the directory. If I change the service's account to LocalSystem, these methods work, but I don't want to give the service root-level access. It seems to be a permissions problem, so how can I give the service permission to access the directory and monitor it?
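    As a rough sketch of the checks described (the UNC path below is hypothetical): under NetworkService the service authenticates to the remote share as the computer account, so the share and NTFS permissions typically need to grant access to DOMAIN\MachineName$ for these calls to succeed.

        using System;
        using System.IO;

        class ShareAccessCheck
        {
            static void Main()
            {
                // Hypothetical UNC path to the shared folder the service should monitor.
                string servicePath = @"\\fileserver\shared\incoming";

                // Returns false both when the directory is missing and when access is denied,
                // so an ACL problem looks the same as a missing folder.
                Console.WriteLine(Directory.Exists(servicePath));

                var watcher = new FileSystemWatcher(servicePath);
                watcher.Created += (sender, e) => { /* handle new files */ };
                watcher.EnableRaisingEvents = true;   // throws if the account cannot read the directory
            }
        }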

    Read the article

  • where to store web service exceptions?

    - by ICoder
    Hello all, I am working on building a web service (using C#), and this web service will use a MS SQL Server database. Now I am trying to build an exception logging system for this web service. Simply put, I want to save every exception raised by the web service for future use (bug tracing). So where is the best place to save these exceptions? Is it a good idea to save them in the database? What if the exception is in the connection to the database itself? I really appreciate your help and your ideas. Thanks
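    One hedged sketch of the pattern hinted at in the question is to try the database first and fall back to a local file when the database connection itself is the problem (the table, connection string, and file path below are purely illustrative). A logging library would normally handle this, but the sketch shows why a single destination can be a blind spot.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        public static class ExceptionLogger
        {
            private const string ConnectionString = "Server=.;Database=ServiceLog;Integrated Security=true"; // illustrative
            private const string FallbackFile = @"C:\Logs\webservice-errors.log";                            // illustrative

            public static void Log(Exception ex)
            {
                try
                {
                    using (var connection = new SqlConnection(ConnectionString))
                    using (var command = new SqlCommand(
                        "INSERT INTO ExceptionLog (LoggedAtUtc, Message, Detail) VALUES (@at, @msg, @detail)",
                        connection))
                    {
                        command.Parameters.AddWithValue("@at", DateTime.UtcNow);
                        command.Parameters.AddWithValue("@msg", ex.Message);
                        command.Parameters.AddWithValue("@detail", ex.ToString());
                        connection.Open();
                        command.ExecuteNonQuery();
                    }
                }
                catch
                {
                    // If the exception is in the database connection itself, keep a local
                    // record so the error is not lost.
                    File.AppendAllText(FallbackFile, DateTime.UtcNow + "  " + ex + Environment.NewLine);
                }
            }
        }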

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any action. This blog post discusses SHRINKFILE and TRUNCATE Log File in SQL Server 2008. The email I received from a reader contained the following questionable code:

        "Hi Pinal,

        If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed because it was using the following code. The error was:

        Msg 155, Level 15, State 1, Line 1
        'TRUNCATE_ONLY' is not a recognized BACKUP option.

        The code was:

        DBCC SHRINKFILE(TestDBLog, 1)
        BACKUP LOG TestDB WITH TRUNCATE_ONLY
        DBCC SHRINKFILE(TestDBLog, 1)
        GO

        I have modified that code to the following and it works fine. But do you have any other suggestions at the moment?

        USE [master]
        GO
        ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT
        DBCC SHRINKFILE(TestDbLog, 1)
        ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT
        GO

        Configuration of our server and system is as follows: [Removed not relevant data]"

    An email like this popping up early in the morning is alarming. Because my mind was dead busy, I had only one minute to reply, so I quickly wrote down the following note. (As I said, it was a single-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you.

        "Hi Mr. DBA [removed the name],

        Thanks for your email. I suggest you stop this practice. There are many issues here, but I would list two major ones:

        1) By setting the database to simple recovery, shrinking the file, and then setting it back to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups (until a full or differential backup re-establishes the log chain).

        2) Shrinking a file or database adds fragmentation.

        There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating the log and losing it frequently:

        BACKUP LOG [TestDb]
        TO DISK = N'C:\Backup\TestDb.bak'
        GO

        Remove the code that shrinks the file. If you are taking proper log backups, your log file usually (again, usually; special cases are excluded) does not grow very big. There are so many things to add here, but you can call me on my [phone number]. Before you call me, I suggest, for accuracy, that you read Paul Randal's two posts here and here and Brent Ozar's post here.

        Kind Regards,
        Pinal Dave"

    I guess this post is quite clear to you. Please leave your comments here. As mentioned, this is a very big subject; I have just touched the tip of the iceberg and have tried to point you to authentic knowledge.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.

    A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process, it will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins), and hence that build server requires all the pre-requisite software that understands how to build an SSDT project.

    The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don't like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn't even need a GUI. The phrase "headless build server" is often used to describe a build server that doesn't contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.

    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT:

        "This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE."
        http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA, and folks like myself have long wished for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread "Building a VS2013 headless build server - it's sooo hard", Mike Hingley complained about this very thing, and it prompted a response from Kevin Cunnane of the SSDT product team:

        "The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine."

    I, like many others, would rather not have to install full-blown Visual Studio, and so I asked: is there any chance you'll ever support any of these scenarios?

    - Installation of all build/deploy pre-requisites without installing the VS shell
    - TFS shipping with all of the pre-requisites for doing SSDT project build/deploys
    - 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project build/deploys

    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that? Kevin replied again:

        "The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers.
        I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers, we've no problem with that."

    Again, here's the link to the thread, and it's worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead.

    @Jamiet

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get familiar with how it works and with the important areas of NuoDB that you should learn. As we have already installed NuoDB, we will quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works.

    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/

    It will bring you to the following screen. On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the next screen, where you will find very important information about Domain and Database Settings. It is a common habit not to read what is written on the screen and to keep clicking Continue without reading; because we are familiar with most wizards, we can easily miss a very important message on the screen. Please note the Domain Settings and Database Settings information on the following screen before clicking on Create Database.

    Domain Settings - User: quickstart, Password: quickstart
    Database Settings - User: dba, Password: goalie, Database: test, Schema: HOCKEY

    Once you click on the Create Database button it will immediately start creating the sample database. First, it will start a Storage Manager and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. When the sample database has been created successfully, it will show the following screen.

    Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show the following login screen. Enter "domain" for the username and "bird" for the password; alternatively, you can enter "quickstart" twice, for both username and password - that works as well. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:

    - Create a view of the entire domain
    - Add and remove databases
    - Start and stop NuoDB Transaction Engines and Storage Managers
    - Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see various processes, CPU usage and other information.

    In the Processes section you can see that there are two different types of processes. The first process (the one with the floppy drive icon) represents a running Storage Manager process, and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.

    I hope today's tutorial gives you enough confidence to try out NuoDB and check out various administrative activities with the database. I am personally impressed with their dashboard of various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor.

    In the next blog post we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • OTP Bank uses Oracle Warehouse Builder

    - by Fekete Zoltán
    The following document has just appeared among the customer success stories on Oracle.com: OTP Bank Data Warehouse Development Team Improves Service Level and Lowers Reporting Lead Time for Business Fields by 80%. In other words, OTP Bank uses the Oracle Warehouse Builder ETL-ELT tool for its data warehouse development. OTP Bank's Transactional Data Warehouse development team has raised the quality of the services it provides to internal customers; one result is an 80% reduction in the lead time of cross-business-line reporting processes. The Hungarian-language success story can be downloaded from here.

    The most important results related to OWB:
    - improved data quality achieved through the standardization of ETL processes (OWB)
    - Oracle Business Intelligence EE: more effective cooperation between the business areas and IT development
    - standardized ETL and reporting processes:
    - deliberate business metadata management as a result of fixed report sets
    - unified terminology
    - accurate knowledge of complex banking processes, for both the business areas and IT developers
    - effective cooperation within the bank
    - reduced duration of the processes from request to data publication
    - ad-hoc report preparation time reduced by 80%, from the previous 1.5 weeks to an average of 2 working days

    Read the article

  • SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables

    - by pinaldave
    I love interesting conversations related to SQL Server. One of my friends, Madhivanan, always comes up with an interesting point of conversation. Here is one of the conversations between us. I am very confident this blog post will give you some new knowledge.

    Madhi: How do I know if any table has a uniqueidentifier column in it?
    Pinal: I am sure you know that you can do it through some DMVs or catalogue views.
    Madhi: I know that, but how can we do it without using DMVs or catalogue views?
    Pinal: Hm... what can I use?
    Madhi: You can use the table name.
    Pinal: Easy, just say SELECT YourUniqueIdentCol FROM Table.
    Madhi: Hold on, the question does not seem clear to you – you do not know the name of the column. As a matter of fact, you do not know whether the table has a uniqueidentifier column at all. The only information you have is the table name.
    Pinal: Madhi, it seems like you are changing the question when I am close to the answer.
    Madhi: Well, are you clear now? Let me say it again – how do I know if any table has a uniqueidentifier column, and what is its value, without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column.
    Pinal: Do you know the answer?
    Madhi: Yes. I just wanted to test your knowledge about SQL.
    Pinal: I will have to think. Let me accept I do not know it right away. Can you share the answer please?
    Madhi: I won! Here it goes!
    Pinal: When I have friends like you – who needs enemies?
    Madhi: (laughter which did not stop for a minute)

        CREATE TABLE t
        (
            GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL,
            data VARCHAR(60)
        )
        INSERT INTO t (data) SELECT 'test'
        INSERT INTO t (data) SELECT 'test1'
        SELECT $rowguid FROM t
        DROP TABLE t

    This is indeed very interesting to me. Please note that this is not the optimal way, and there will be many other ways to retrieve the uniqueidentifier column name and value. What I learned from this was that if I am in a rush to check whether a table has a uniqueidentifier column and I do not know its name, I can use SELECT TOP (1) $rowguid and quickly find the name of the column. I can later use the same column name in my query. Madhi did teach me this new trick. Did you know this? What other ways are there to check for the existence of a uniqueidentifier column in a database?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Where do service implementations fit into the Microsoft Application Architecture guidelines?

    - by tuespetre
    The guidelines discuss the service layer with its service interfaces and data/message/fault contracts. They also discuss the business layer with its logic/workflow components and entities, as well as the 'optional' application facade. What is still unclear to me after studying this guide is where the implementations of the service interfaces belong. Does the application facade in the business layer implement these interfaces, or does a separate 'service facade' exist to make calls to the business layer and its facade/raw components? (With the former, there would be fewer seemingly trivial calls to yet another layer; with the latter, I could see how the service layer could take the concern of translating business entities to data contracts off the business layer.)
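    To make the second option concrete, here is a small hypothetical sketch (all type names are invented) of a service-layer implementation that owns the contract translation and delegates the work to the business layer's application facade:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Service layer: data contract and service interface.
        [DataContract]
        public class OrderDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public decimal Total { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            OrderDto GetOrder(int id);
        }

        // Service-layer implementation ("service facade"): maps entities to data contracts
        // and delegates to the business layer, keeping that concern out of the business layer.
        public class OrderService : IOrderService
        {
            private readonly OrderApplicationFacade _facade = new OrderApplicationFacade();

            public OrderDto GetOrder(int id)
            {
                Order entity = _facade.GetOrder(id);
                return new OrderDto { Id = entity.Id, Total = entity.Total };
            }
        }

        // Business layer: application facade over the logic/workflow components and entities.
        public class OrderApplicationFacade
        {
            public Order GetOrder(int id)
            {
                // Orchestrate business components here.
                return new Order { Id = id, Total = 0m };
            }
        }

        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

    With the first option, the IOrderService implementation would instead sit on the application facade itself, removing one hop at the cost of mixing contract translation into the business layer.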

    Read the article

  • OnLive video game cloud-computing service announces its dates and price at the Game Developers Conference

    Update of 11/03/10: The OnLive video game cloud-computing service announces its dates and price at the Game Developers Conference 2010. Currently under way, the Game Developers Conference 2010 gave Mike McGarvey, head of the OnLive project, the opportunity to make official several points about his cloud-computing project for video games. We learn that the service will be available from June 17 on US soil; nothing has been said about the availability of the service in Europe. The service will require customers to take out a monthly subscription at a price of $14.95 (roughly €11 per month). The service will be in a p...

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation. Records will have a combination of these functions run: Sum, Aggregate, Sum over the last 90 seconds, or Ignore. A data record looks like this: Date;Data;ID

    Question: Assuming that ID is an int of some kind, and that the int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print), but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample).

        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");
                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();
                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }
                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    //Thread.Sleep(1000);
                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                //Console.WriteLine("Callback is delayed and returned"); //d.EndInvoke(ar));
            }
        }
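    As a hedged sketch (separate from the sample above), one way to build the launch map is a dictionary keyed by record ID whose values are the delegates to run for that ID; here the even IDs print and the odd IDs sum, as suggested in the question. The same idea extends to registering the sample's Sum/Count/PrintOut methods and invoking them via BeginInvoke if asynchronous execution is needed.

        using System;
        using System.Collections.Generic;

        internal static class LaunchMapSketch
        {
            private static void Main()
            {
                int runningSum = 0;

                // Build the "launch map": record ID -> delegates to run for that ID.
                var launchMap = new Dictionary<int, List<Func<int, int>>>();
                for (int id = 0; id < 10; id++)
                {
                    var operations = new List<Func<int, int>>();
                    if (id % 2 == 0)
                        operations.Add(x => { Console.WriteLine("print " + x); return x; }); // print the evens
                    else
                        operations.Add(x => runningSum += x);                                // sum the odds
                    launchMap[id] = operations;
                }

                // Dispatch an incoming "Date;Data;ID" record through the delegates mapped to its ID.
                string record = "2010-05-27;42;3";
                string[] fields = record.Split(';');
                int data = int.Parse(fields[1]);
                int recordId = int.Parse(fields[2]);

                foreach (var operation in launchMap[recordId])
                    operation(data);

                Console.WriteLine("Running sum: " + runningSum);
            }
        }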

    Read the article

  • in memory datastore in haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell, and I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and the stated complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?

    Read the article

  • Write data to SQL Server directly from BizTalk or use external service?

    - by dlongest
    An external source will be sending us XML data that BizTalk will pick up and transform into an internal schema. We need this data to be loaded into a SQL Server database as we're going to expose some of the data to our web front-end via a custom WCF service. The question is: what is the recommended approach for doing something like this? Options we're considering are having BizTalk write to the database directly or having BizTalk call a custom WCF service which would handle the save operation. Another briefly considered idea was having BizTalk write to an MSMQ and have a custom service pull from there and store it in the database. What are some of the guidelines or questions that should be asked in assessing these options? There are concerns related to overhead from calling the extra service, duplication of efforts if the schema is modified in the future (which it will be to some extent), and simply the best way to design within a service-oriented architecture that we're struggling with.
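    For illustration, if the custom WCF service route were chosen, the save operation's contract might look roughly like the following (the names and data contract are invented; nothing here is prescribed by BizTalk). The alternative is for BizTalk to write to the database directly (for example via the WCF-SQL adapter), which avoids the extra hop but couples the BizTalk map more tightly to the table shapes.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class InboundRecord
        {
            [DataMember] public string ExternalId { get; set; }
            [DataMember] public string PayloadXml { get; set; }
        }

        [ServiceContract]
        public interface IRecordSaveService
        {
            // BizTalk (e.g. through a WCF send port) would call this after mapping the
            // external XML into the internal schema.
            [OperationContract]
            void Save(InboundRecord record);
        }

        public class RecordSaveService : IRecordSaveService
        {
            public void Save(InboundRecord record)
            {
                // Persist to SQL Server here (ADO.NET, an ORM, or a stored procedure);
                // the web front-end's own WCF service then reads from the same tables.
            }
        }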

    Read the article

  • Hitachi Data Systems definition of cloud

    - by llaszews
    1. The ability to rapidly provision and de-provision a service (aka: provisioning).
    2. A consumption model where users pay for what they use (aka: chargeback and showback).
    3. The agility to flexibly scale - 'flex up' or 'flex down' - the services without extensive pre-planning (aka: elasticity).
    4. Secure, direct connection to the cloud without having to recode applications (aka: internet-based).
    5. Multi-tenancy capabilities that segregate and protect the data (as the name says, multi-tenancy).

    As it happens, I have been talking about four of the five. I did not mention the connection over the internet, as I assumed it.

    Read the article

  • In choosing a service-oriented architecture framework that needs to work with .NET and with Java, what to look for?

    - by cm007
    I am planning to write an application in which there will be a service (call it A) listening for particular commands. This service will then relay those commands to other services (call them B and C), which are written in .NET and Java respectively; service A chooses whether to relay to service B or C depending on the contents of the request to service A. I am looking for a framework that will allow interoperability with both .NET and Java, for example WCF or JAX-WS, or writing a custom framework (e.g., JSON REST commands over HTTP, similar to http://code.google.com/p/selenium/wiki/JsonWireProtocol). What questions/aspects should I consider in deciding?
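    For illustration only, the custom JSON-over-HTTP approach mentioned could reduce, on A's side, to something like the following sketch (the URLs and the routing rule are made up). With WCF or JAX-WS the same relay would instead be a SOAP client generated from each target's contract; the deciding factors are usually contract style, tooling on both platforms, and how much of the security stack is needed.

        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        public class CommandRelay
        {
            private static readonly HttpClient Http = new HttpClient();

            // Hypothetical endpoints for service B (.NET) and service C (Java).
            private const string ServiceB = "http://service-b.example.com/commands";
            private const string ServiceC = "http://service-c.example.com/commands";

            public async Task<string> RelayAsync(string commandJson)
            {
                // Choose the target from the contents of the request (illustrative rule only).
                string targetUrl = commandJson.Contains("\"target\":\"java\"") ? ServiceC : ServiceB;

                var content = new StringContent(commandJson, Encoding.UTF8, "application/json");
                HttpResponseMessage response = await Http.PostAsync(targetUrl, content);
                return await response.Content.ReadAsStringAsync();
            }
        }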

    Read the article

  • PostgreSQL service doesn't start on Windows 7

    - by Mehrdad
    (Not sure if this should be on Stack Overflow or Super User... please move if needed.)

    When I start the PostgreSQL service on Windows 7 x64, it immediately stops. When I check my log folder (C:\PostgreSQL\9.1\data\pg_log\), I see new but empty log files. The Event Viewer doesn't tell me anything other than the fact that the server did not respond. I've even tried turning off my firewall (I don't have any antivirus or anything else), but nothing helps. The setup works fine when I'm on Windows XP (32-bit) (same computer, different partition). I can't figure out what's wrong, even though I've tried tracing the system calls. Is PostgreSQL compatible with Windows 7 x64 at all? Any ideas what the issue might be?

    More info: This problem also happens at the end of installation -- the service starts, then stops immediately, before the installer can do anything.

    Installation log:

        Starting the database server...
        Executing cscript //NoLogo "C:\Program Files\PostgreSQL\9.1\installer\server\startserver.vbs" postgresql-x64-9.1
        Script exit code: 0
        Script output:
          Starting postgresql-x64-9.1
          Service postgresql-x64-9.1 started successfully   // <==== NOT REALLY!! It stops!
        startserver.vbs ran to completion
        Script stderr:

        Loading additional SQL modules...
        Executing cscript //NoLogo "C:\Program Files\PostgreSQL\9.1\installer\server\loadmodules.vbs" "postgres" "****" "C:\Program Files\PostgreSQL\9.1" "C:\Program Files\PostgreSQL\9.1\data" 5432
        Script exit code: 2
        Script output:
          Installing the adminpack module in the postgres database...
          Executing 'C:\Users\HOMEUS~1\AppData\Local\Temp\rad6C20D.bat'...
          psql: could not connect to server: Connection refused (0x0000274D/10061)
              Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?
          could not connect to server: Connection refused (0x0000274D/10061)
              Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
          Failed to install the 'adminpack' module in the 'postgres' database
        loadmodules.vbs ran to completion
        Script stderr:
          Program ended with an error exit code

        Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.1\installer\server\loadmodules.vbs" "postgres" "****" "C:\Program Files\PostgreSQL\9.1" "C:\Program Files\PostgreSQL\9.1\data" 5432 : Program ended with an error exit code

    Read the article

  • Email deliverability -- Whitelist solution or Email delivery service?

    - by JoefrshnJoeclean
    Hey folks -- our company keeps running into the same recurring problem: email deliverability. A lot of our emails are still getting trapped in Yahoo and Gmail spam filters. We followed Yahoo's best practices guide as well as tips I've found on Server Fault (setting up DKIM and SPF), and even took the Email Server Test (http://www.allaboutspam.com/email-server-test/). Now my question is: has anyone had success using whitelist solutions like Goodmail or EmailReach? Alternatively, I'm beginning to think that going with an email delivery service like MailChimp will save me the headache and future stress of managing our email lists. So: whitelist solution, or just fork over the money and send via an email delivery service? Thanks!

    Read the article

  • EC2 hosted service multi-tenant dynamic DNS solution

    - by accidental admin
    I want to change the model of my EC2-hosted service to use a separate subdomain for each tenant (i.e. tenant.example.com). My primary DNS is now with dnsmadeeasy.com, but their dynamic DNS offering seems pretty weak:

    - it requires the API to use my full dnsmadeeasy.com account credentials; I would rather have the API use a limited-privilege credential that can only add/remove/modify subdomain records
    - from what I gather, it only allows modifying existing records and does not allow me to dynamically add/remove records for new tenant subdomains

    My question: what are my alternatives? Is there something in the dnsmadeeasy API offering I misunderstood, so that I should just use them? Is there some other similar DNS service with a DDNS offering that satisfies my requirements? Or should I just bite the bullet and host my own DNS (my fear is not configuration/learning/know-how, my fear is reliability)? If you recommend the latter, can you detail the necessary steps or point me to a good tutorial?

    Read the article

  • 503 Service Unavailable - what does it really mean?

    - by pandiya chendur
    Possible Dup: http://stackoverflow.com/questions/2529244/503-service-unavailable-what-really-it-means

    I am asking on behalf of the original question poster because we both work in the same place. I developed a website and it loads on every other system, but not on mine. When I used Firebug, my request showed 503 Service Unavailable. The Firebug response headers were:

        Server: squid/2.6.STABLE21
        Date: Sat, 27 Mar 2010 12:25:18 GMT
        Content-Type: text/html
        Content-Length: 1163
        Expires: Sat, 27 Mar 2010 12:25:18 GMT
        X-Squid-Error: ERR_DNS_FAIL 0
        X-Cache: MISS from xavy
        X-Cache-Lookup: MISS from xavy:3128
        Via: 1.0 xavy:3128 (squid/2.6.STABLE21)
        Proxy-Connection: close

    For reference, please visit the original question, look at the answers and comments, and help us out.

    Read the article
