Search Results

Search found 24353 results on 975 pages for 'test coverage'.


  • Create a device to receive an SMS and parse the text (SMS Gateway)

    - by Chris Okyen
    I want to use a server as a device that runs a script to parse an SMS text, in the following way:
    I. The person types in a specific, special cell phone number (similar to Facebook's 32556 number used to post on your wall).
    II. The user types a text message.
    III. The user sends the text message.
    IV. The message is sent to some kind of device (the server) or SMS gateway, which receives it.
    V. The device the message was sent to then parses the text message.
    I understand that these three questions mix programming and server topics and could reside here or at DBA.SE. How would I make such a cell phone number (described in step I) that routes to the device? How do I create the device that would then receive the message? Finally, how do I parse the text message? I don't want to pay for cloud space, server-side scripting services or server space; I want to just use a free webserver to do this totally free - meaning I will have to do more on my own... My question can be seen in more depth in this visual flowchart.


  • Destination host unreachable, but the errorlevel is 0 (from a win7)

    - by Doron
    From a Windows 7 machine, I ping a non-existent IP address:

        C:\>ping 192.168.1.222

        Pinging 192.168.1.222 with 32 bytes of data:
        Reply from 192.168.1.222: Destination host unreachable.
        Reply from 192.168.1.222: Destination host unreachable.
        Reply from 192.168.1.222: Destination host unreachable.

        Ping statistics for 192.168.1.222:
            Packets: Sent = 3, Received = 3, Lost = 0 (0% loss)

    Even though there is no reply, the errorlevel is set to 0. What I am trying to do is figure out whether a remote machine is replying to ping. One of my tests is to turn off the machine and ping it. For some reason, ping still sets errorlevel to 0.
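
    A common workaround (a minimal sketch, not from the original post): ping reports success whenever it gets any reply at all, including "Destination host unreachable", so instead of checking errorlevel you can test for the "TTL=" string that only appears in a genuine echo reply. The 192.168.1.222 address is just the example target from the question.

        @echo off
        rem Succeeds only if a real echo reply (containing "TTL=") came back.
        ping -n 1 192.168.1.222 | find "TTL=" >nul
        if errorlevel 1 (
            echo Host is down
        ) else (
            echo Host is up
        )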


  • Azure – Part 5 – Repository Pattern for Table Service

    - by Shaun
    In my last post I created a very simple WCF service with user registration functionality. I created an entity for the user data and a DataContext class which provides methods for operating on the entities, such as add, delete, etc. In the service method I utilized it to add a new entity to the table service. But I didn't have any validation before registering, which is not acceptable in a real project. So in this post I will first add some validation before performing the data creation, and show how to use LINQ with the table service.

    LINQ to Table Service

    Since the table service utilizes ADO.NET Data Services to expose the data, and the ADO.NET Data Services managed library supports LINQ, we can use it to query the data in the table service. Let me explain with my current example: I would like to ensure that when registering a new user the email address is unique, so I need to check the account entities in the table service before adding. If you remember, in my last post I mentioned that there's a method in the TableServiceContext class - CreateQuery - which creates an IQueryable instance for a given entity type. So here I create a method named Load under my AccountDataContext class that returns IQueryable<Account>.

        public class AccountDataContext : TableServiceContext
        {
            private CloudStorageAccount _storageAccount;

            public AccountDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                _storageAccount = storageAccount;

                var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                                                        _storageAccount.Credentials);
                tableStorage.CreateTableIfNotExist("Account");
            }

            public void Add(Account accountToAdd)
            {
                AddObject("Account", accountToAdd);
                SaveChanges();
            }

            public IQueryable<Account> Load()
            {
                return CreateQuery<Account>("Account");
            }
        }

    The method returns IQueryable<Account> so that I can perform LINQ operations on it. Back in my service class, I use it to implement my validation.

        public bool Register(string email, string password)
        {
            var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
            var accountToAdd = new Account(email, password) { DateCreated = DateTime.Now };
            var accountContext = new AccountDataContext(storageAccount);

            // validation
            var accountNumber = accountContext.Load()
                .Where(a => a.Email == accountToAdd.Email)
                .Count();
            if (accountNumber > 0)
            {
                throw new ApplicationException(string.Format("Your account {0} has already been used.", accountToAdd.Email));
            }

            // create entity
            try
            {
                accountContext.Add(accountToAdd);
                return true;
            }
            catch (Exception ex)
            {
                Trace.TraceInformation(ex.ToString());
            }
            return false;
        }

    I used the Load method to retrieve the IQueryable<Account> and the Where method to find accounts whose email address is the same as the one being registered. If one exists, I throw an exception back to the client side. Let's run it and test from my simple client application. Oops! Looks like we encountered an unexpected exception. It says the Count method is not supported by the ADO.NET Data Services LINQ managed library. That is because the table storage managed library (aka TableServiceContext) is based on ADO.NET Data Services, which supports only a very limited set of LINQ operations.
    Although I didn't find a full list documenting exactly which LINQ methods it supports, there is a page on MSDN that gives a rough summary of which query operations the ADO.NET Data Services managed library supports and which it doesn't. As you can see, the Count method is not in the supported list. And it's not only the query operators: the lambda expressions inside the Where method are limited when using the ADO.NET Data Services managed library as well. For example, if you add (a => !a.DateDeleted.HasValue) to the Where method to exclude deleted accounts, it raises an exception saying "Invalid Input". Based on my experience you should stick to simple comparisons (such as ==, >, <=, etc.) on simple members (such as string, integer, etc.) and avoid shortcut methods (such as string.Compare, string.IsNullOrEmpty, etc.).

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == accountToAdd.Email)
            .ToList()
            .Count;
        if (accountNumber > 0)
        {
            throw new ApplicationException(string.Format("Your account {0} has already been used.", accountToAdd.Email));
        }

    We changed the code a bit and tried again. Since I had already created an account with my mail address, this time it gave me an exception saying the email had been used, which is correct.

    Repository Pattern for Table Service

    The AccountDataContext takes responsibility for saving and loading the account entity, but only that specific entity. Is it possible to have a dynamic or generic DataContext class which can operate on any kind of entity in my system? Of course. Although there's no typical database in the table service, we can treat the entities as records, similar to the data entities in an OR mapper. Since we can apply ORM patterns here, we should be able to adopt one of them - the Repository Pattern - in this example. We know that the base class, TableServiceContext, provides four methods for operating on table entities: CreateQuery, AddObject, UpdateObject and DeleteObject. And we can establish a relationship between the entity class, the table name and the entity set name. So it's really simple to build a generic base class for any kind of entity. Let's rename AccountDataContext to DynamicDataContext and make the Account type a type parameter of it.

        public class DynamicDataContext<T> : TableServiceContext where T : TableServiceEntity
        {
            private CloudStorageAccount _storageAccount;
            private string _entitySetName;

            public DynamicDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                _storageAccount = storageAccount;
                _entitySetName = typeof(T).Name;

                var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                                                        _storageAccount.Credentials);
                tableStorage.CreateTableIfNotExist(_entitySetName);
            }

            public void Add(T entityToAdd)
            {
                AddObject(_entitySetName, entityToAdd);
                SaveChanges();
            }

            public void Update(T entityToUpdate)
            {
                UpdateObject(entityToUpdate);
                SaveChanges();
            }

            public void Delete(T entityToDelete)
            {
                DeleteObject(entityToDelete);
                SaveChanges();
            }

            public IQueryable<T> Load()
            {
                return CreateQuery<T>(_entitySetName);
            }
        }

    I saved the name of the entity type at construction time for performance reasons. The table name and entity set name will be the same as the name of the entity class.
    The Load method returns a generic IQueryable instance which supports lazy loading. Then in my service class I changed the AccountDataContext to DynamicDataContext and that's all:

        var accountContext = new DynamicDataContext<Account>(storageAccount);

    Run it again and register another account. The DynamicDataContext can now be used for any entity. For example, suppose the account has a list of notes, each of which contains three custom properties: account email, title and content. We create the note entity class:

        public class Note : TableServiceEntity
        {
            public string AccountEmail { get; set; }
            public string Title { get; set; }
            public string Content { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }

            public Note()
                : base()
            {
            }

            public Note(string email)
                : base(email, string.Format("{0}_{1}", email, Guid.NewGuid().ToString()))
            {
                AccountEmail = email;
            }
        }

    With no need to tweak the DynamicDataContext, we can go directly to the service class to implement the logic. Notice that here I utilized two DynamicDataContext instances with different type parameters: Note and Account.

        public class NoteService : INoteService
        {
            public void Create(string email, string title, string content)
            {
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                var accountContext = new DynamicDataContext<Account>(storageAccount);
                var noteContext = new DynamicDataContext<Note>(storageAccount);

                // validate - the email must exist
                var accounts = accountContext.Load()
                    .Where(a => a.Email == email)
                    .ToList()
                    .Count;
                if (accounts <= 0)
                    throw new ApplicationException(string.Format("The account {0} does not exist in the system; please register and try again.", email));

                // save the note
                var noteToAdd = new Note(email) { Title = title, Content = content, DateCreated = DateTime.Now };
                noteContext.Add(noteToAdd);
            }
        }

    And I updated our client application to test the service. I didn't implement a list service to show all notes, but we can have a look at the local SQL database if we run it in the local development fabric.

    Summary

    In this post I explained a bit about the limited LINQ support in the table service, then demonstrated how to use the Repository Pattern in the table service data access layer and make the DataContext dynamic. The DynamicDataContext I created in this post is just a prototype. In practice we should create the relevant interfaces to make it testable, and for better structure we'd better separate the DataContext classes for each individual kind of entity. So it would have IDataContextBase<T> and DataContextBase<T>, and for each entity we would have:

        class AccountDataContext : DataContextBase<Account>, IDataContextBase<Account> { ... }
        class NoteDataContext : DataContextBase<Note>, IDataContextBase<Note> { ... }

    Besides saving and loading structured data, another common scenario is saving and loading binary data such as images and files. In my next post I will show how to use the Blob Service to store binary data - making the account able to upload a logo in my example.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Windows 2003 Server on a domain, XP client PCs on a workgroup - file share without authentication?

    - by Zach
    I have a Windows 2003 server on a domain and client PCs running XP on a workgroup. I have created a file share on the server that should be accessible by the client PCs. I even set both the security and the sharing permissions to 'Everyone' just to test. When I try to access the file share from any of the XP machines, I get a prompt asking for credentials, even though 'Everyone' currently has full control (just for testing purposes). Why is it asking to authenticate? I need it to not ask. I also made sure passwords were set on all the XP machines, since I found that could be one possible issue, and they all were. Any ideas? Thanks!
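
    One commonly suggested direction (a hedged sketch, not part of the original question): granting 'Everyone' permissions does not remove the authentication step, because the server still has to map the connecting user to some account before permissions are even checked. Since the XP machines are in a workgroup, their local accounts are unknown to the domain; enabling the server's Guest account is one way to let them in without a prompt, if you accept the security implications.

        rem On the 2003 server:
        net user guest /active:yes
        rem Then, in Local Security Policy (secpol.msc), check
        rem "Network access: Sharing and security model for local accounts"
        rem and make sure Guest is not listed under
        rem "Deny access to this computer from the network".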


  • Is it possible to tunnel ICMP over TCP?

    - by Robert Atkins
    I don't want to tunnel TCP over ICMP (as ptunnel does), I want to go the other way around. I'm in the situation where I have TCP (HTTP) connectivity to a machine but an internal firewall over which I have no control is swallowing pings. The monitoring software I'm using appears to determine connectivity by attempting to send a ping before it tries to just connect to the web service on the target machine. It's failing this ping test and giving up. I believe if I could fool my monitoring software into thinking pings were getting through, it would then connect to the web service and be on its merry way. Anyone know how I can do this? I have SSH and root access on the destination machine.
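
    One speculative workaround (a sketch under stated assumptions, not from the original question): if the monitoring host is a Linux box you control, you can NAT your own outbound echo requests for that one target to the loopback interface, so the local stack answers the ping and the monitor then proceeds to its TCP check. The address below is a placeholder for the target machine, and note this fakes reachability rather than testing it - the ping result no longer tells you anything real about the target.

        # On the monitoring host: answer pings to 198.51.100.10 locally.
        iptables -t nat -A OUTPUT -p icmp --icmp-type echo-request \
                 -d 198.51.100.10 -j REDIRECT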


  • Book Review: Professional ASP.Net MVC4

    - by Sam Abraham
    The past few weeks have been particularly busy as I continue to dedicate a bigger portion of my free time to refreshing my memory and enhancing my knowledge of best practices pertaining to technologies we plan on using for a major upcoming project. In this blog post, I will provide a brief overview of my latest reading, "Professional ASP.NET MVC 4" by Jon Galloway, Phil Haack, Brad Wilson and K. Scott Allen. This book is a must-read for web developers looking to enhance their MVC expertise with best practices and tips shared by recognized industry experts. It takes the reader on a 16-chapter journey toward becoming a better ASP.NET MVC developer, with chapter 16 putting all the information covered into practical context by dissecting the implementation of NuGet.org, a real-life, open-source ASP.NET MVC project. All code samples referenced in this book are conveniently accessible via NuGet, a free, open-source library package manager that installs as a Visual Studio extension. Chapters 2, 3 and 4 thoroughly cover MVC's various components: Controllers "C", Views "V" and Models "M" respectively. Chapter 5 covers additional extension methods (Helpers) provided to speed and ease the use of common HTML elements such as forms, textboxes and grids, to name a few. Chapter 6 tackles built-in validation while providing examples and use cases for implementing custom validation that plugs into the MVC framework. Chapters 7 through 13 discuss the latest on Membership, Ajax, Routing, NuGet and the ASP.NET Web API. Chapters 12 (Dependency Injection) and 13 (Unit Testing) demonstrate a big competitive advantage of MVC: its testability and pluggability. Chapters 14 and 15 target the advanced developer, showcasing how to extend MVC to customize and replace every piece of the framework. In conclusion, I strongly recommend Professional ASP.NET MVC 4 as an excellent read for both developers already using MVC and those getting started with the framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. You can access my reviews of books I recently read:
    - Professional ASP.NET Design Patterns
    - Professional WCF 4.0
    - Inside Windows Communication Foundation
    - Inside Microsoft SQL Server 2008 series


  • Subversion hooks no longer running

    - by Chris Lieb
    I don't know when this started happening, but, for some reason, none of my Subversion hooks are running anymore. I am running Subversion 1.6.9 on a Gentoo Linux machine, which has had working hooks in the past. I am running Subversion through the svn_dav module for Apache 2.2. I modified the hook scripts that I make use of to write into a file in the /tmp directory owned by apache:apache whenever they are executed, but after making a commit, there is nothing in the file that should be written to. The scripts are executable and owned by apache:apache, so I don't think that is the issue. Here is one of my test scripts (post-commit.sh) that isn't getting executed:

        #!/bin/sh
        /bin/echo post-commit >> /tmp/z_test
        exit 0

    After running a commit, I expect both the pre-commit.sh and post-commit.sh hooks to run, but neither of them appears to write into the desired file (/tmp/z_test). What's going on?
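
    A likely culprit (a suggestion, not part of the original post): on Unix, Subversion looks for hook scripts by exact name with no extension - pre-commit, post-commit, and so on - so scripts named post-commit.sh are never invoked. Renaming them is usually enough; the repository path below is hypothetical and should be adjusted to yours.

        # Hypothetical repository path; adjust to your own.
        cd /var/svn/myrepo/hooks
        mv pre-commit.sh pre-commit
        mv post-commit.sh post-commit
        chmod +x pre-commit post-commit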


  • Linux - How to run Firefox with AT command

    - by conualfy
    I am trying to run a specified command at a desired time. I found at for this purpose, and it seems to work fine if I run:

        echo "ls -al / > /home/florin/test.txt" | at 4:21am

    But I want to run a different thing:

        /usr/bin/firefox -new-tab http://google.ro

    I tried adapting the first line to my action (running it in a terminal opens a new Firefox tab with http://google.ro, so the command is correct), but with at it does not work:

        echo "firefox -new-tab http://google.ro" | at 4:23am

    The task seems to be scheduled, but it does not run. When running the previous line I get the default reply from at:

        warning: commands will be executed using /bin/sh

    Should my Firefox command be run differently under sh? Is there a way to do my action with at, or some other way? Thanks a lot!
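
    A common explanation (a hedged sketch, not from the original question): jobs started by at run outside the X session, with no DISPLAY variable set, so GUI programs fail silently. Exporting the display of the running session into the job often fixes it; :0 below is an assumption about which display the desktop is on.

        echo 'DISPLAY=:0 /usr/bin/firefox -new-tab http://google.ro' | at 4:23am
        # If it still fails, the job may also need X authorization, e.g.:
        # echo 'DISPLAY=:0 XAUTHORITY=$HOME/.Xauthority firefox -new-tab http://google.ro' | at 4:23am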


  • Web Farm Framework - Missing IIS features

    - by Buginator
    I'm trying to install a web farm using Microsoft's Web Farm Framework 2.2. The server is Windows 2008 R2 with IIS 7.5. I followed a tutorial and installed WFF from the Web Platform Installer. However, compared to the tutorial, I'm missing some key features in the "Server Farm" panel in IIS. How can I enable all the things the tutorial shows, like Load Balancer, Health Test, Server Affinity, etc.? Thanks. The tutorial I used was this: weblogs.asp.net/scottgu/archive/2010/09/08/introducing-the-microsoft-web-farm-framework.aspx


  • Snort [PFSense] is configured but not blocking or generating alerts!

    - by Chase Florell
    I've got pfSense v2.0-RC1 (i386) with the latest version of Snort installed. I've loaded up a bunch of rules from Oinkmaster, I've enabled all of the preprocessors, and I've ensured the service is started. When I let it sit for a while and then check my Alerts and Block lists, there are no entries. Even when I test it by logging into Skype (Skype is listed as a rule under P2P), I don't get any entries in the logs. If you need any further information, please let me know... I simply can't figure this one out.


  • No HTTP Response from Tomcat 7 EC2 instance

    - by David Kaczynski
    I am new to EC2 (and Tomcat, for that matter), and I am trying to deploy a vanilla Tomcat 7 server to an Ubuntu 12.04.1 EC2 instance and access the default test site over HTTP. My EC2 instance is running, and the Security Group includes port 80. My /etc/tomcat7/server.xml config has been edited to listen for HTTP requests on port 80, and I have restarted my Tomcat 7 server via sudo service tomcat7 restart. However, according to sudo netstat -lnp, Tomcat is not listed as listening on port 80. I am unable to get any response from going to the ...amazonaws.com public DNS in a web browser. What am I missing?
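
    A frequent cause (a hedged suggestion, not from the original question): on Ubuntu, Tomcat runs as the unprivileged tomcat7 user, which cannot bind to ports below 1024, so a port 80 connector fails quietly (catalina.out usually shows a permission error). A common workaround is to leave the Connector on 8080 and redirect port 80 in the kernel:

        # Leave the Connector on port 8080 in server.xml, then:
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

    Alternatively, Ubuntu's tomcat7 package supports authbind (AUTHBIND=yes in /etc/default/tomcat7) to let the service bind to port 80 directly.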


  • Connecting to SQL Server on Parallels Desktop with PHP

    - by Zen Savona
    Well, I recently bought a Mac and am using it as my primary computer. Because I am required to work with MSSQL via PHP, I have installed Parallels Desktop and run Server 2008 R2 on it. I am using the same mixed-mode authentication which I previously had on Windows. When I attempt to connect to the server with PHP, using either a new test file or my old code, it just doesn't find the server. I have tried running PHP on the XP install within Parallels, using hostnames such as COMPUTERNAME\SQLEXPRESS, LOCALIP\SQLEXPRESS, localhost, the local IP, etc.; PHP never finds the server. Also note that I can connect to the database server using Management Studio without problems, so SQL Server is running. Please note that both PHP and MSSQL are running within the virtualised environment. Any contribution is appreciated.
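
    One direction worth checking (a hedged sketch, not from the original question): Management Studio can connect over shared memory even when TCP/IP is disabled, while PHP needs TCP. Enabling TCP/IP for the instance in SQL Server Configuration Manager, fixing it to port 1433, and then connecting by IP and port is a common fix. The IP, credentials and database below are placeholders.

        <?php
        // Assumes the Microsoft SQLSRV driver is installed and TCP/IP is enabled on port 1433.
        $conn = sqlsrv_connect("192.168.1.50,1433", array(   // placeholder IP,port
            "Database" => "TestDb",                          // placeholder
            "UID"      => "sa",                              // placeholder
            "PWD"      => "secret",                          // placeholder
        ));
        if ($conn === false) {
            die(print_r(sqlsrv_errors(), true));
        }
        echo "Connected";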


  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:

    Separate editions for each library version (sketched below)
    I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests have definitely executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.

    One supported version, but others tested
    I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try and make this testing automatic by having some non-deployed Maven projects that import the software library, the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better or a suggestion for a different approach entirely. I'm leaning towards the second option.
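
    For the "separate editions" option, a hedged sketch of how the Maven side could look (all coordinates, property names and version numbers are hypothetical): one profile per supported company-software version, each pinning the dependency version, so CI can build and unit-test the library once per profile.

        <!-- pom.xml fragment; com.example:company-software is a placeholder -->
        <profiles>
          <profile>
            <id>company-v7</id>
            <properties>
              <company.version>7.2.0</company.version>
            </properties>
          </profile>
          <profile>
            <id>company-v8</id>
            <properties>
              <company.version>8.1.3</company.version>
            </properties>
          </profile>
        </profiles>

        <dependencies>
          <dependency>
            <groupId>com.example</groupId>
            <artifactId>company-software</artifactId>
            <version>${company.version}</version>
          </dependency>
        </dependencies>

    A CI job running mvn clean verify -P company-v7, and again with -P company-v8, would then exercise the test suite against each supported version.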


  • DFS replication initial step problem

    - by vn.
    I just set up DFS on my network and it's working fine; now I'm trying to set up DFS-R on a test folder, but at the end of the procedure (everything went fine: I selected my two folders, the primary folder, the replication topology and such) I get this error message (roughly translated from French): "Unable to define security on the replicated folder. The shared administration folder doesn't exist." I'm also wondering whether any particular security is required on the folders to replicate so that DFS-R can access them. I was trying to add SYSTEM in the security settings, but it won't find it/allow me. The folder has many, many files and folders on the primary DFS pointer, but none on the second; I just created it with roughly the same rights. Note that the primary DFS pointer is on a 2008 server, and the DFS service and the secondary DFS pointer are on a 2008 R2. Any help is very appreciated, thanks.
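
    Given that the translated message mentions a missing shared administration folder, one thing worth verifying (a hedged suggestion, not from the original post): DFS-R setup relies on the hidden administrative shares (C$, D$, ADMIN$) on both members, and these are occasionally disabled. You can check and re-enable them as below; the assumption is that the replicated folder lives on one of the automatically shared drives.

        rem On each replication member, list the shares:
        net share
        rem C$/D$ (for the replicated drive) and ADMIN$ should be present.
        rem If they are missing, re-enable automatic admin shares:
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v AutoShareServer /t REG_DWORD /d 1 /f
        rem ...then restart the Server service:
        net stop server && net start server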


  • Development environment for embedded system

    - by Howard Lee Harkness
    I need to develop software in C/C++ for an embedded system. I have Debian 6 running off of a USB hard drive. I would like to be able to generate a stripped-down kernel with modules and install them on either a CF card or a USB 'thumb' drive. I succeeded in building a Linux 3.6 kernel and running it in Debian off of the USB hard drive, but I am having trouble figuring out how to install it on the thumb drive. I would like a build cycle that looks like this:
    1) Build the module or kernel with the desired software
    2) Install it on the thumb drive
    3) Boot and test
    I would like to use the same system for both development and testing, if that is feasible. I am looking for resources and tutorials that would help me understand how to do this.
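
    A minimal sketch of steps 1 and 2 (assumptions: an x86 target, the thumb drive appears as /dev/sdb with a mountable first partition, and a bootloader such as GRUB is installed on it separately):

        # 1) Build the kernel and modules (from the kernel source tree)
        make -j4 bzImage modules

        # 2) Install onto the thumb drive
        mount /dev/sdb1 /mnt/thumb                     # device name is an assumption
        make modules_install INSTALL_MOD_PATH=/mnt/thumb
        cp arch/x86/boot/bzImage /mnt/thumb/boot/vmlinuz-custom
        cp .config /mnt/thumb/boot/config-custom       # optional, for reference
        umount /mnt/thumb

        # 3) Boot the thumb drive on the target and test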


  • Is there an equivalent of SU for Windows

    - by CodeSlave
    Is there a way (when logged in as an administrator, or as a member of the Administrators group) to masquerade as a non-privileged user? Especially in an AD environment. e.g., in the Unix world I could do the following (as root):

        # whoami
        root
        # su johnsmith
        johnsmith> whoami
        johnsmith
        johnsmith> exit
        # exit

    I need to test/configure something on a user's account, and I don't want to have to know their password or have to reset it.
    Edit: runas won't cut it. Ideally, my whole desktop would become the user's, etc., and not just in a cmd window.


  • EMC CX3-10c Fault Condition won't clear

    - by ITGuy24
    We have an old CX3-10c from Dell that had both Standby Power Supplies (SPS) fail. This obviously caused a fault on the system and disabled the cache. We have replaced the SPSs and they test fine, as do all other components. The problem is that there is still a fault on the "Enclosure SPE [SPE3]", despite all the components in the enclosure showing as good. I was on the line with Dell Gold support for 3 hours yesterday; they have had me restart the SPs multiple times, reseat the power supplies, and even shut down the system completely and power it back on. All to no avail. The fault remains, and the cache cannot be re-enabled as long as the fault is present. Any suggestions on clearing this erroneous fault?


  • Calling a .NET C# class from XSLT

    - by HanSolo
    If you've ever worked with XSLT, you know that it's pretty limited when it comes to its programming capabilities. Try writing a for loop in XSLT and you'll see what I mean. XSLT is not designed to be a programming language, so you should never put too much programming logic in your XSLT. That code can be a pain to write and maintain, so it should be avoided at all costs. Keep your XSLT simple and put any complex logic that your XSLT transformation requires in a class. Here is how you can create a helper class and call it from your XSLT. For example, this is my helper class:

        public class XsltHelper
        {
            public string GetStringHash(string originalString)
            {
                return originalString.GetHashCode().ToString();
            }
        }

    And this is my XSLT file (notice the namespace declaration that references the helper class):

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
                        xmlns:ext="http://MyNamespace">
            <xsl:output method="text" indent="yes" omit-xml-declaration="yes"/>
            <xsl:template match="/">The hash code of "<xsl:value-of select="stringList/string1" />" is "<xsl:value-of select="ext:GetStringHash(stringList/string1)" />".
            </xsl:template>
        </xsl:stylesheet>

    Here is how you include the helper class as part of the transformation:

        string xml = "<stringList><string1>test</string1></stringList>";
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);

        XslCompiledTransform xslCompiledTransform = new XslCompiledTransform();
        xslCompiledTransform.Load("XSLTFile1.xslt");

        XsltArgumentList xsltArgs = new XsltArgumentList();
        xsltArgs.AddExtensionObject("http://MyNamespace", Activator.CreateInstance(typeof(XsltHelper)));

        using (FileStream fileStream = new FileStream("TransformResults.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            // transform the xml and write the output to the file
            xslCompiledTransform.Transform(xmlDocument, xsltArgs, fileStream);
        }


  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .Net for a Workflow based solution

    - by Gineer
    We have a solution built in .NET that will be installed into a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add into a MOM pack (as an alert) to test a given scenario? Any thoughts or suggestions would be much appreciated. Thanks
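
    As a starting point, a hedged sketch of counters commonly watched for this kind of stack, sampled here with the built-in typeperf tool (the WCF category name in particular varies by .NET version, so treat it as an assumption to verify on the target machine):

        typeperf "\Processor(_Total)\% Processor Time" ^
                 "\Memory\Available MBytes" ^
                 "\ASP.NET\Requests Queued" ^
                 "\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec" ^
                 "\Web Service(_Total)\Current Connections" ^
                 "\ServiceModelService 3.0.0.0(*)\Calls Outstanding" -sc 10

    Any counter that proves useful here can then be wrapped in a MOM rule with an alert threshold.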


  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: we initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.        A     XXX.XXX.XXX.158
        mail.bigjaws.com.   A     XXX.XXX.XXX.158
        bigjaws.com.        MX    (10) mail.bigjaws.com.
        bigjaws.com.        TXT   "v=spf1 +a +mx -all"

    The above records are not(?) valid anymore though, because after using this dedicated server for a while our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as-is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set anything up from scratch. Of course, for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.bigjaws.com.   A     XXX.XXX.XXX.158
        bigjaws.com.        MX    (10) mail.bigjaws.com
        bigjaws.com.        SPF   "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF check should have passed: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there but I'm not sure if it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected].
        spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected].
        The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected.
        What should I do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent from one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk).
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?


  • Regulation of the software industry

    - by Flexo
    Every few years someone proposes tighter regulation for the software industry. This IEEE article has been getting some attention lately on the subject:

        If software engineers who write programs for systems that expose the public to physical or financial risk knew they would be tested on their competence, the thinking goes, it would reduce the flaws and failures in code - and maybe save a few lives in the bargain.

    I'm skeptical about the value and merit of this. To my mind it looks like a land grab by those that proposed it. The quote that clinches that for me is:

        The exam will test for basic knowledge, not mastery of subject matter

    because the big failures (e.g. THERAC-25) seem to be complex, subtle issues that "basic knowledge" would never be sufficient to prevent. Ignoring any local issues (such as existing protections of the title Engineer in some jurisdictions): the aims are noble - avoid the quacks/charlatans¹ and make that distinction more obvious to those that buy their software. Can tighter regulation of the software industry ever achieve its original goal?

    ¹ Exactly as regulation of the medical profession was intended to do.


  • Autofill Outlook 2010 Subject Line

    - by Eric
    Hi - I am hoping someone can tell me how to set a consistent subject line in Outlook 2010, if there is a way. I am trying to find this out for a client of mine, and since I do not have 2010 myself I cannot even test it, but I will send the info to him to set up. Thanks! FYI - what I have done in the past, before he upgraded, was set up a toolbar button with a hyperlink that contains a blank email address but with a subject entered - do you know if this will work in Outlook 2010? Thanks, Eric
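
    For reference, the hyperlink trick described above is just a standard mailto URL, which Outlook 2010 should interpret the same way earlier versions did; the subject text below is a placeholder:

        mailto:?subject=Weekly%20Status%20Report

    Spaces must be percent-encoded as %20, and a recipient can be prefilled too, e.g. mailto:[email protected]?subject=Weekly%20Status%20Report.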


  • Problems installing icinga-web

    - by Kungurov
    I'm using Ubuntu 10.04 LTS (64-bit, server) with Apache 2.2.14. Following the instructions from the official Icinga page (http://docs.icinga.org/latest/en/index.html), I installed icinga-web-1.7.1 on my machine and configured a few hosts for test purposes. The Classic interface runs as expected, but the new web interface does not show any data. When I try:

        ps aux | grep ido2db | grep -v grep

    I get:

        icinga  27425  0.0  0.0  41464  600 ?  Ss  Jul27  0:00  /usr/local/icinga/bin/ido2db -c /usr/local/icinga/etc/ido2db.cfg

    which might indicate a problem with idomod/ido2db, because according to the docs there should be at least two processes in the grep output. Any ideas how to fix that?
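
    One thing worth double-checking (a hedged suggestion based on the usual IDOUtils setup, with paths assumed from the post's /usr/local/icinga prefix): ido2db only receives data if the idomod broker module is loaded by the Icinga core itself, so icinga.cfg needs both lines below, followed by a core restart.

        # in /usr/local/icinga/etc/icinga.cfg (paths are assumptions)
        event_broker_options=-1
        broker_module=/usr/local/icinga/lib/idomod.so config_file=/usr/local/icinga/etc/idomod.cfg

    If idomod loads correctly, the Icinga log should report it at startup, and additional ido2db worker processes appear as connections come in.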


  • subversion: enforce TLS

    - by Daniel Marschall
    Hello, I am running Subversion on a Debian Squeeze system with Apache2 and mod_dav for viewing the contents with a web browser. I want to enforce the usage of TLS, so that the login data and the SVN contents cannot be read from the connection. I have tried the following in /etc/apache2/conf.d/subversion.conf:

        <Location /svn>
            DAV svn
            SVNParentPath /daten/subversion/

            # our access control policy
            AuthzSVNAccessFile /daten/subversion/access_control

            # try anonymous access first, resort to real
            # authentication if necessary.
            Satisfy Any
            Require valid-user

            # how to authenticate a user
            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /daten/subversion/.htpasswd

            # Test
            SSLRequireSSL
            RewriteEngine On
            RewriteCond %{SERVER_PORT} !443
            RewriteRule ^svn/(.)$ https://www.viathinksoft.de/svn/$1 [R,L]
        </Location>

    Alas, this does not work. There is no redirect, and a plain HTTP request at /svn/(projectname)/(somefolder) still works. This SSL-enforcement policy should work for:
    - viewing the contents with a web browser
    - retrieving contents with the TortoiseSVN client
    - committing contents with the TortoiseSVN client
    Can you please help me?
    Regards,
    Daniel Marschall
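
    A hedged sketch of a fix (not from the original post): inside <Location>, the per-directory rewrite does not see the URL the way the rule above expects, and (.) captures only a single character anyway; matching everything and reusing %{REQUEST_URI} sidesteps both issues. SSLRequireSSL is also worth removing while testing, since it can answer the plain-HTTP request with 403 Forbidden before any redirect fires.

        <Location /svn>
            # ... DAV/auth directives as before ...
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </Location>

    Once the redirect is confirmed working, SSLRequireSSL can go back in as a safety net for clients that ignore redirects.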


  • Generating the escape sequences for terminal by ahk

    - by obeliksz
    I am trying to make an AHK script for PuTTY that sends the keycodes I want for some key combinations, so that my program works over a terminal too. I already have an AHK script with some key combinations working properly, put together through an awful amount of experimentation with key tables and examples from here and there, but I still have not come up with a clear, logical method to calculate the escape sequence that I want. An example:

        ^F1::SendInput ^[O5P

    It gives 28 in my test program. I see that for ^[1 I get 377 and for ^[2 376... and I see that letters beyond the hex digits (A-F) can be used, and also ; and ~, or a double [[. Do you understand how this works? Any good descriptive material for this? Thanks a lot!
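
    For the calculation itself, a hedged summary of the xterm-style convention PuTTY largely follows (worth verifying against PuTTY's documentation): a modified function key is sent as ESC [ code ; modifier final-letter, where modifier = 1 + (1 for Shift) + (2 for Alt) + (4 for Ctrl). So Ctrl alone gives modifier 5, and Ctrl+F1 (plain F1 being ESC OP) becomes ESC [1;5P; some terminals instead splice the modifier into the SS3 form, giving ESC O5P as in the working example above. In AHK the same idea can be written as:

        ; Assumed xterm-style encoding: modifier = 1 + Shift(1) + Alt(2) + Ctrl(4).
        ; ^[ below is Ctrl+[, which the terminal receives as the ESC byte.
        ^F1::SendInput ^[[1;5P    ; Ctrl+F1  -> ESC [1;5P
        +F2::SendInput ^[[1;2Q    ; Shift+F2 -> ESC [1;2Q (plain F2 is ESC OQ)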

