Search Results

Search found 129542 results on 5182 pages for 'web development server'.


  • Benefits of Behavior Driven Development

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/26/benefits-of-behavior-driven-development.aspx

    Continuing my previous article on BDD, I wanted to point out some benefits of BDD; since BDD is an extension of Test Driven Development (TDD), you get those benefits as well. I'll add another article on some possible downsides of this approach. There are many articles about the benefits of TDD, and they apply to BDD. I've pointed out some here and copied the main points from each article, but there are many more, including the book The Art of Unit Testing by Roy Osherove.

    http://geekswithblogs.net/leesblog/archive/2008/04/30/the-benefits-of-test-driven-development.aspx (Lee Brandt): Stability, Accountability, Design Ability, Separated Concerns, Progress Indicator.

    http://tddftw.com/benefits-of-tdd/: Help maintainers understand the intention behind the code. Bring validation and proper data handling concerns to the forefront. Writing the tests first is fun. Better APIs come from writing testable code. TDD will make you a better developer.

    http://www.slideshare.net/dhelper/benefit-from-unit-testing-in-the-real-world (from Typemock): take a look at the slides, especially the extra time required for TDD (slide 10) and the next one on the bugs avoided by using TDD (slide 11). Fewer bugs (slide 11), about testing and development (slide 13), increase confidence in code (slide 14), fearlessly change your code (slide 14), document requirements (slide 14), discover usability issues early (slide 14). Also see http://visualstudiomagazine.com/articles/2013/06/01/roc-rocks.aspx.

    All these points and articles are great, and there are many more. The following are my additions to the benefits of BDD, from using it in real projects for my company. Also worth watching: July 2013 on MSDN - Behavior-Driven Design with SpecFlow; Scott Allen's very informative TDD and MVC module (though to me he is doing BDD); and Compile and Execute Requirements in Microsoft .NET, a video from TechEd 2012.

    Communication: I was working through a complicated task where the decision tree kept growing. After writing out the Given, When, Then of the scenario, I was able to tell QA what I had worked through for their initial test cases, and they were able to add from there. It is also useful to use this language with other developers, managers, or clients to help make informed decisions on whether something meets the requirements or can be simplified to save time (money).

    Thinking through solutions before starting to code: This was the biggest benefit to me. I like to jump into coding to figure out the problem, but many times I don't understand my path well enough and have to do some parts over. A past supervisor told me several times during reviews that I needed to get better at seeing "the forest for the trees". When I sit down and write out the behavior that I need to implement, I force myself to think things out further and catch scenarios before they get to QA. A co-worker who is new to BDD (we've been using it on our new project for the last six months) said, "It really clarifies things". It took him a while to understand it all, but now he's seeing the value of this approach (yes, there are some downsides, but that is a different issue).

    Developers' confidence: This is huge for me. With tests in place, my confidence grows that I won't break code that I'm not directly changing. In the past, I've worked on projects without tests and we would frequently find regression bugs (or worse, the users would find them). That isn't fun. We don't catch all problems with the tests, but when QA catches one, I can write a test to make sure it doesn't happen again. It's also good for releasing code and telling your manager that it's good to go. As time goes on and the code gets older, how confident are you that checking in code won't break something somewhere else?

    Merging code - pre-release confidence: If you're merging code a lot, it's nice to have the tests to help ensure you didn't merge incorrectly.

    Interrupted work: I had a task that I started and planned out, then was interrupted for a month because of different priorities. When I started it up again and un-shelved my changes, I had the BDD specs, and they helped me remember what I had figured out and what was left to do. It would have been much more difficult without the specs and tests.

    Testing and verifying complicated scenarios: Sometimes in the UI there are scenarios that get tricky because there are a lot of steps involved (click here to open the dialog, enter the information, make sure it's valid, when I click cancel it should do {x}, when I click ok it should close and do {y}, then do this, etc.). With BDD I can avoid some of the mouse clicking: I define the scenarios and re-run them quickly, without using a mouse. UI testing is still needed, but this helps a bunch. The same can be true for tricky server logic.

    Documentation of assumptions and specifications: The BDD spec tests (Jasmine or SpecFlow or another tool) also work as documentation and show what the original developer was trying to accomplish. It's not a separate Word document, so developers will keep it up to date instead of letting it become obsolete. What happens if you leave the project (consulting, new job, etc.) with no specs, or at least good comments in the code? Sometimes I think of a new scenario, so I add a failing spec and continue in the same stream of thought (rather than forgetting it because it was on a piece of paper or in a notepad). Then later I can come back, handle it, and have it documented.

    Jasmine tests and JavaScript -> help deal with the untyped system: I like JavaScript, but I also dislike working with JavaScript. I miss C# telling me at build time when a property doesn't actually exist. I like the idea of TypeScript and hope to use it more in the future. I also use KnockoutJs, which has observables that need to be called with a trailing (), since the observable is a function. It's hard to remember when to use () and when not to, and the Jasmine specs/tests help ensure correct usage.

    This should give you an idea of the benefits that I see in using the BDD approach; I'm sure there are more. It takes a lot of practice, investment, and experimentation to figure out how to approach this and to get comfortable with it. I agree with Scott Allen in the video I linked above: "Remember that TDD can take some practice. So if you're not doing test-driven design right now? You can start and practice and get better. And you'll reach a point where you'll never want to go back."
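    To make the Given/When/Then language concrete, here is a minimal made-up scenario of the kind I hand to QA (the feature and step wording are illustrative, not from a real project):

        Feature: Account lockout
          Scenario: Lock the account after repeated failures
            Given a registered user with three failed sign-in attempts
            When the user enters a wrong password a fourth time
            Then the account is locked
            And an unlock email is sent to the user

    SpecFlow binds each of these lines to a step definition, so the same sentence QA reads is the sentence the test runner executes.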

    Read the article

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find a reason to use EC2. What's the point of scaling on EC2? If you expect a huge burst in traffic, they say. OK, but what if you already have a couple of sites with good traffic, and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in the EU (Ireland) + traffic + optional expenses for databases and S3 if you use them. Of course, at some point, while you are under $56.60-$66.10, you can optimize your hosting costs with Amazon EC2. But if you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic. (Am I wrong?) CPU: i7-2600 Quadcore (3.4-3.8 GHz) RAM: 16 GB HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS. Traffic: 10 TiB per month included. This is what you get from Hetzner for $56 (- 19% VAT) or $66 for EU residents. Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner take that Amazon Auto Scaling will? Is the maintenance of a dedicated server vs. EC2 still the same? Or won't a hardware failure at Amazon ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether Amazon's infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the "code" was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different from developing for a CISC architecture. And moving to a distributed architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I've described, we need to change our code to follow advances in technology. There's no point in change for its own sake, but when a new paradigm offers benefits to our users, it's important for us to leverage those benefits where it makes sense. That's most often done in new development projects. It's a far simpler task to take a new project and adapt it to Windows Azure than to try to retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a distributed architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the "state") should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning path for Stateless Programming (although not unique to this language type) is to learn Functional Programming.

    Server-Side Processing - Along with developing using a stateless design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don't always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it's imperative to deliver only results and graphical elements where possible.

    Token-Based Authentication - Also called "Claims-Based Authorization", this coding practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A "token" or "claim", often represented as a certificate, is sent along for a series of requests, or even a single one. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources: See the references on "Nondistributed Deployment" and "Distributed Deployment" at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx

    Read the article

  • What Problems Are Better Solved By SOAP Over REST?

    In the battle for web service supremacy, SOAP and REST have been fighting for years. In my personal opinion this debate should never have existed. Yes, both forms can be used to create an interactive web service, but each was developed independently of the other to solve two different yet similar problems. Based on my research and experience, I would have to say that REST should be the preferred web service methodology and SOAP should only be used in specific situations. Note, I did not say that I am against SOAP; in fact I actually like to use SOAP when it is needed. Criteria for using SOAP: Does the service need a guaranteed level of reliability and security? Did the provider and consumer of the service agree on a standardized data exchange format? Does the service need data context and state management? If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service. Another way to look at the relationship between REST and SOAP is to look at the medical field. For most things, a general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST, in my opinion, because REST fulfills most projects' service requirements; but what happens if you need a more advanced examination? You would go to a specialist. A specialist already has experience dealing with the specific issues you are experiencing, giving them specific context for how best to treat you going forward. SOAP acts more like a specialist doctor, in that it understands the context of an issue and can treat it based on the state of other patients it has already treated. An example of where I would use SOAP over REST in real life would be a single sign-on application. In these cases I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticated a user and while it validated access to a web page on a subsequent request. This service must process every request for access and not allow caching, to ensure that every request is processed and the appropriate users are allowed to view selected web pages. References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies are still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible and, at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches originate from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your database. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter used to be impractical, but now there are plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and the success of the obfuscation has to be carefully monitored.
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.
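    As a minimal sketch of the obfuscation approach (the table and column names here are hypothetical), a masking pass over a development copy might look like this in T-SQL:

        -- Run against the DEVELOPMENT copy only, never production.
        -- Replace identifying values with deterministic placeholders keyed on the row ID.
        UPDATE dbo.Customers
        SET    FirstName  = 'First' + CAST(CustomerID AS VARCHAR(10)),
               LastName   = 'Last'  + CAST(CustomerID AS VARCHAR(10)),
               Email      = 'user'  + CAST(CustomerID AS VARCHAR(10)) + '@example.invalid',
               CardNumber = NULL;   -- drop payment data outright rather than masking it

    Deterministic placeholders keep joins and uniqueness intact, which is usually what the tests actually depend on; truly random values can break referential checks between masked tables.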

    Read the article

  • What is a reasonable workflow for designing webapps?

    - by Evan Plaice
    It has been a while since I have done any substantial web development, and I'd like to take advantage of the latest practices, but I'm struggling to visualize the workflow to incorporate everything. Here's what I'm looking to use: CakePHP framework, jsmin (JavaScript Minify), SASS (Syntactically Awesome StyleSheets), Git. CakePHP: pretty self-explanatory - make modifications and update the source. jsmin: When you modify a script, do you manually run jsmin to output the new minified code, or would it be better to run a pre-commit hook that automatically generates jsmin outputs of the JavaScript files that have changed? Assume that I have no knowledge of implementing commit hooks. SASS: I really like what SASS has to offer, but I'm also aware that SASS code isn't supported by browsers by default, so at some point the SASS code needs to be transformed to normal CSS. At what point in the workflow is this done? Git: I'm terrified to admit it, but the last time I did any substantial web development I didn't use SCM source control (i.e., I did use source control, but it consisted of a very detailed change log with backups). I have since had plenty of experience using Git (as well as Mercurial and SVN) for desktop development, but I'm wondering how to best implement it for web development. Is it common practice to implement a remote repository on the web host so I can push changes directly to the production server, or is there some cross-platform (Windows/Linux) tool that makes it easy to upload only changed files to the production server? Are there web hosting companies that make it easy to implement a remote repository? Do I need SSH access, etc.? I know how to accomplish this on my own testing server with a remote repository and a separate remote tracking branch, but I've never done it on a remote production web hosting server before, so I'm not aware of the options yet. Extra: I was considering implementing a JavaScript framework where the separate JavaScript files used on a page are compiled into a single file for each page on the production server, to limit the number of file downloads needed per page. Does something like this already exist? Is there already an open source project out in the wild that implements something similar that I could use and contribute to? Considering how paranoid web devs are about performance (and the fact that the number of file requests on a website is a big hit to performance), I'm guessing that some wizard hacker on the net has already addressed this issue.

    Read the article

  • Is it possible to make/translate a 3D engine to Ruby on Rails?

    - by user20529
    I am looking to make a 3D FPS that runs inside web browsers. I looked into using WebGL, but it didn't seem far enough along in development. I decided on RoR because Ruby was a language I knew. I realize this may seem like a ridiculous question, but is there any way I can port/rewrite/whatever a game engine (say, for instance, IrrLicht) to run inside Rails? Or, for that matter, any other language on the web.

    Read the article

  • sp_addlinkedserver on SQL Server 2005 giving problems

    - by Jit
    I am trying to create a linked server to a remote database (both servers are SQL Server 2005). I am able to connect to that remote server from my SQL Server Management Studio. I used the following syntax to create it: EXEC sp_addlinkedserver @server = N'LINKSQL2005', @srvproduct = N'', @provider = N'SQLNCLI', @provstr = N'SERVER=IP address of remote server;User ID=XXXXXX;Password=***' I have provided the IP address, user name, and password in the above syntax. The linked server gets created, but when I try to execute a query on it I get the error below. Query used: select * from LINKSQL2005.<DBName>.dbo.<TableName> OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Communication link failure". Msg 10054, Level 16, State 1, Line 0 TCP Provider: An existing connection was forcibly closed by the remote host. Msg 18456, Level 14, State 1, Line 0 Login failed for user 'sa'. OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Invalid connection string attribute". Please help me; where am I making a mistake?
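    One hedged observation, not a confirmed fix: the "Login failed for user 'sa'" message suggests the remote server is being handed a local login rather than the credentials embedded in @provstr. Mapping an explicit remote login with sp_addlinkedsrvlogin sidesteps that; the data source and credentials below are placeholders:

        -- Point the linked server at the data source; keep credentials out of the provider string
        EXEC master.dbo.sp_addlinkedserver
             @server = N'LINKSQL2005', @srvproduct = N'',
             @provider = N'SQLNCLI', @datasrc = N'<remote IP or name>';

        -- Map all local logins to one explicit remote SQL login (placeholder credentials)
        EXEC master.dbo.sp_addlinkedsrvlogin
             @rmtsrvname = N'LINKSQL2005', @useself = N'false',
             @locallogin = NULL, @rmtuser = N'XXXXXX', @rmtpassword = N'***';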

    Read the article

  • Deployment of SQL Server: installing a second instance?

    - by Workshop Alex
    Simple problem. I'm working on a Delphi 2007/Win32 application which now uses MS Access as a simple data store. I have to modify it to support SQL Server Express, which is easy. These modifications are working, so the application can be deployed using either SQL Server or MS Access, whichever the user prefers. I did consider deploying the whole application together with SQL Server Compact, but this is not practical. Using SQL Server Express 2008 instead of 2005 is an option, but it also has a few nasty side-effects which we don't want to resolve for now. The problem is deploying the whole project. The installation with SQL Server would need to be a quiet installation so the user won't notice it. SQL Server is mentioned in the documentation, so they know it's there; we just don't want to bother them with technical issues. In most cases, such an installation will go just fine. But what if the user already has a SQL Server (2005) installation which is used for something else? Personally, I would prefer to just install a second instance of SQL Server on their system so it won't conflict with the other installation. (Thus, if they uninstall the other app, the SQL instance will just stay installed.) While SQL Server 2005 and 2008 can be installed on the same system simply by using two different names for the instances, I wonder if it's also possible to install SQL Server 2005 twice on a single system to get two instances. And if possible, how?
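    For what it's worth, SQL Server 2005 instances on one machine are distinguished purely by instance name, so a second install with a unique INSTANCENAME is the usual route. A hedged sketch of an unattended command line (the instance name and password are placeholders; verify the exact parameters against the SQL Server 2005 unattended-installation documentation for your edition):

        start /wait setup.exe /qb INSTANCENAME=MYAPPSQL ADDLOCAL=SQL_Engine SECURITYMODE=SQL SAPWD=<StrongPassword>

    Because the instance name is unique, this should leave any existing default or named instance alone, and clients connect with MACHINENAME\MYAPPSQL afterwards.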

    Read the article

  • Is a web server (e.g. servlets) a good solution for an IM server?

    - by John
    I'm looking at a new app: broadly speaking, an IM application with a strong client-server model - all communications go through a server so they can be logged centrally. The server will be Java in some form; clients could at this point be anything from a .NET desktop app to Flex/Silverlight, to a simple web interface using JS/AJAX. I had anticipated doing the server using standard J2EE so I get a thread-safe, multi-user server for 'free'... to make things simple, let's say using Servlets (but in practice SpringMVC would be likely). This all seemed very neat, but I'm concerned whether the stateless nature of Servlets is the best approach. If my memory of servlets is right (it's been a year or two), each time a client sends an HTTP request, typically a new message entered by the user, the servlet cannot assume it has the user/chat in memory and might have to get it from the DB; regardless, it has to look it up. Then it either has to use some PUSH system to inform other members of the chat, or cache that there are new messages for the other clients, who poll the server using AJAX or similar - and when they poll, it again has to look up the chat, including new messages, and send the new data. I'm wondering if a better system would be for the server to run core Java and implement socket-based communication with clients. This allows much more immediate data transfer and is more flexible if, say, the IM client included some game you could play. But then you're writing a custom server, and sockets don't sound very friendly to a browser-based client on current browsers. Am I missing some big piece of the puzzle here? It kind of feels like I am. Perhaps a better way to ask the question would simply be: "if the client was browser-based using HTML/JS and had to run on IE7+, FF2+ (i.e. no HTML5), how would you implement the server?" edit: if you are going to suggest using XMPP, I have been trying to get my head around this in another question, so please consider if that's a more appropriate place to discuss this specifically.

    Read the article

  • Unable to Connect to Management Studio Server

    - by Phil Hilliard
    I have a nasty situation. I am using Microsoft SQL Server Management Studio Express edition locally on my PC for testing, and once tested I upload database changes to a remote server. I deleted the default database on my local machine, and instead of searching hard enough to find an answer to that problem, I uninstalled and reinstalled Management Studio. Since then, Management Studio has not been able to connect to the server. Is there any help (or hope for me, for that matter) out there? The following is the detailed error message: =================================== Cannot connect to LENOVO-E7A54767\SQLEXPRESS. =================================== A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider) ------------------------------ For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476 ------------------------------ Error Number: -1 Severity: 20 State: 0 ------------------------------ Program Location: at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.ValidateConnection(UIConnectionInfo ci, IServerType server) at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()
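    A hedged first check, since error 26 usually means the instance could not be located at all rather than that a login failed: confirm the SQL Server (SQLEXPRESS) service survived the reinstall and is running. From a command prompt (the service and instance names below assume a default SQL Server Express install):

        net start | find "SQL"
        net start MSSQL$SQLEXPRESS
        sqlcmd -S LENOVO-E7A54767\SQLEXPRESS -E

    If sqlcmd connects but Management Studio still cannot, the problem is in the saved connection settings rather than the server itself.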

    Read the article

  • Are there any problems migrating users from OSX 10.4.11 server to 10.6.6 server using WGM?

    - by Geoff Hardman
    I have just set up a new Mac server running 10.6.6. Installation went smoothly and all appears OK. I can create a new user through WGM and log on to the server from one of our local client systems. I imported the old users from the 10.4 system using WGM, but the new system will not create home directories for these users and will not let them log on. I have read that there are issues using this method but cannot find any detail as to why. Not wishing to recreate over 350 users from scratch, I am looking for an easier solution. Can anyone help? The new server is 10.6.6. We use Open Directory and AFP. The paths are the same as on the old server.

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    Quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side. The code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally I want to do this processing on the server side of the app. For example, what if a user supplies a URL but has JS disabled, or a non-standards-compliant browser, etc.? Ideally I'd like to be able to process these URLs in Ruby on the back end (in asynchronous background jobs, perhaps) using the same logic that our JS parsers use, WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend, like execjs, as well as Ruby-to-JavaScript compilers like OpalRB that would hopefully allow "write once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business logic duplication for apps that need to do both client-side and server-side processing of similar data?
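    If execjs turns out to be the route, the core usage is small; a hedged sketch (the parser path and the exported MetadataParser.parse function are hypothetical stand-ins for your own scripts):

        require "execjs"

        # Load the same parser source the browser uses (hypothetical path)
        source  = File.read("app/assets/javascripts/metadata_parser.js")
        context = ExecJS.compile(source)

        # Invoke an exported function from Ruby, e.g. inside a background job
        html     = "<html><head><title>Example</title></head></html>"
        metadata = context.call("MetadataParser.parse", html)

    The catch to verify up front is that the script must be self-contained: no DOM and no window object, since the embedded runtime only provides a bare JS engine.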

    Read the article

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there, Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/Coldfusion) and our database server (SQL Server 2008). We lock down the SQL server by allowing only specific IPs to connect to it. Not ideal but it's worked thus far. We're moving up to two distinct servers and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is to allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I: Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or, Set up a VPN server on the database server as the only public-facing server connection so that there aren't any routing issues to deal with? I know this is Network 101 stuff but I thought I'd ask before just blundering through it since it could affect the company a bit. Thanks very much!
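    Not an answer to the either/or, but for scale: the server side of a routed OpenVPN setup is a small config file. A hedged sketch (certificate file names are the conventional placeholders from the OpenVPN examples):

        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        keepalive 10 120

    Hosting this on the database server (your second option) means admin connections land on its tunnel address (10.8.0.1 here), so SQL Server can be allowed only on that address and the public interface never exposes the database port.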

    Read the article

  • ADFS Relying Party

    - by user49607
    I'm trying to set up an Active Directory Federation Services relying party and I get the following error. I've tried adding <pages validateRequest="false"> to web.config and it doesn't make a difference. Can someone help me out? Server Error in '/test' Application. A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo..."). Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. To allow pages to override application request validation settings, set the requestValidationMode attribute in the httpRuntime configuration section to requestValidationMode="2.0". Example: <httpRuntime requestValidationMode="2.0" />. After setting this value, you can then disable request validation by setting validateRequest="false" in the Page directive or in the <pages> configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. For more information, see http://go.microsoft.com/fwlink/?LinkId=153133. Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo..."). Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").] System.Web.HttpRequest.ValidateString(String value, String collectionKey, RequestValidationSource requestCollection) +11309476 System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, RequestValidationSource requestCollection) +82 System.Web.HttpRequest.get_Form() +186 Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.IsSignInResponse(HttpRequest request) +26 Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.CanReadSignInResponse(HttpRequest request, Boolean onPage) +145 Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.OnAuthenticateRequest(Object sender, EventArgs args) +108 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +266
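    A hedged note, since the error text itself names the switch: on .NET 4.0, validateRequest="false" in the pages section is ignored unless request validation is rolled back to 2.0 mode. The web.config fragment would look something like this (assuming the application targets .NET 4.0; the httpRuntime attribute does not exist on earlier runtimes):

        <system.web>
          <httpRuntime requestValidationMode="2.0" />
          <pages validateRequest="false" />
        </system.web>

    The wresult form field that trips the validator is the WS-Federation sign-in response, which is XML by design, so this is the commonly documented workaround for WIF relying parties on .NET 4.0.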

    Read the article

  • Visual Studio 2010 Web Deployment

    - by Cranialsurge
    I am trying to use VS2010's 1-Click Publish feature to deploy a test site from my laptop to my server. I have the firewall turned off on both machines, and the MS Deployment Service is up and running on both my laptop and the server. However, when I try to publish from VS2010 on my laptop I get the following error: Error 1 Web deployment task failed. (Remote agent (URL https://192.168.1.181/:8172/msdeploy.axd?site=LocationsTest) could not be contacted. Make sure the remote agent service is installed and started on the target computer.) The requested resource does not exist, or the requested URL is incorrect. Error details: Remote agent (URL https://192.168.1.181/:8172/msdeploy.axd?site=LocationsTest) could not be contacted. Make sure the remote agent service is installed and started on the target computer. An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. The remote server returned an error: (404) Not Found. 0 0 Test.Web Any idea what I am doing wrong here?

    Read the article

  • sp_addlinkedserver in a trigger

    - by Nanda
    I have the following trigger, which causes an error when it runs: CREATE TRIGGER ... ON ... FOR INSERT, UPDATE AS IF UPDATE(STATUS) BEGIN DECLARE @newPrice VARCHAR(50) DECLARE @FILENAME VARCHAR(50) DECLARE @server VARCHAR(50) DECLARE @provider VARCHAR(50) DECLARE @datasrc VARCHAR(50) DECLARE @location VARCHAR(50) DECLARE @provstr VARCHAR(50) DECLARE @catalog VARCHAR(50) DECLARE @DBNAME VARCHAR(50) SET @server=xx SET @provider=xx SET @datasrc=xx SET @provstr='DRIVER={SQL Server};SERVER=xxxxxxxx;UID=xx;PWD=xx;' SET @DBNAME='[xx]' SET @newPrice = (SELECT STATUS FROM Inserted) SET @FILENAME = (SELECT INPUT_XML_FILE_NAME FROM Inserted) IF @newPrice = 'FAIL' BEGIN EXEC master.dbo.sp_addlinkedserver @server, '', @provider, @datasrc, @provstr EXEC master.dbo.sp_addlinkedsrvlogin @server, 'true' INSERT INTO [@server].[@DBNAME].[dbo].[maildetails] ( 'to', 'cc', 'from', 'subject', 'body', 'status', 'Attachment', 'APPLICATION', 'ID', 'Timestamp', 'AttachmentName' ) VALUES ( 'P23741', '', '', 'XMLFAILED', @FILENAME, '4', '', '8', '', GETDATE(), '' ) EXEC sp_dropserver @server END END The error is: Msg 15002, Level 16, State 1, Procedure sp_MSaddserver_internal, Line 28 The procedure 'sys.sp_addlinkedserver' cannot be executed within a transaction. Msg 15002, Level 16, State 1, Procedure sp_addlinkedsrvlogin, Line 17 The procedure 'sys.sp_addlinkedsrvlogin' cannot be executed within a transaction. Msg 15002, Level 16, State 1, Procedure sp_dropserver, Line 12 The procedure 'sys.sp_dropserver' cannot be executed within a transaction. How can I prevent this error from occurring?
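    The error spells out the constraint: the sp_addlinkedserver family cannot run inside the trigger's implicit transaction. A hedged restructuring is to create the linked server and its login mapping once, outside the trigger, and leave only the insert in the trigger body (the linked server name MAILSRV and database MailDB below are placeholders):

        -- One-time setup, run outside any transaction:
        EXEC master.dbo.sp_addlinkedserver @server = N'MAILSRV', @srvproduct = N'',
             @provider = N'SQLNCLI', @datasrc = N'<remote server>';
        EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'MAILSRV', @useself = N'true';

        -- Inside the trigger, only the insert remains; note the four-part name must be
        -- a literal (a bracketed variable like [@server] is read as a server named "@server"),
        -- and the column list takes identifiers, not quoted strings:
        INSERT INTO MAILSRV.MailDB.dbo.maildetails
            ([to], cc, [from], subject, body, status, Attachment,
             APPLICATION, ID, [Timestamp], AttachmentName)
        VALUES ('P23741', '', '', 'XMLFAILED', @FILENAME, '4', '', '8', '', GETDATE(), '');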

    Read the article

  • SQL Server 2008, Kerberos and SPN

    - by andrew007
    Hi, I installed SQL Server 2008 on a Win XP SP2 workstation in an AD domain and configured it to run under the "Network Service" account. In my error log I have the following message (Event ID 26037): The SQL Server Network Interface library could not register the Service Principal Name (SPN) for the SQL Server service. Error: 0xd, state: 13. Failure to register an SPN may cause integrated authentication to fall back to NTLM instead of Kerberos. This is an informational message. Further action is only required if Kerberos authentication is required by authentication policies. The strange thing is that I have another SQL Server 2008 installation on a Win 2003 server configured in the same way, and there I do not have this message. My questions are: Does anybody know if there are limitations with Kerberos on Windows XP and SQL Server? Why is the SPN not automatically registered on Win XP when I use the "Network Service" account, but it works on Windows 2003 Server? THANKS!
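    If Kerberos is actually required, one hedged workaround is to register the SPN manually with setspn (from the Windows Support Tools). Under the Network Service account the SPN belongs on the computer account, so something like the following, with placeholder host and domain names:

        setspn -A MSSQLSvc/myxpbox.mydomain.com:1433 MYDOMAIN\myxpbox$

    Running setspn -L MYDOMAIN\myxpbox$ afterwards lists the registered SPNs so you can confirm the entry took.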

    Read the article

  • Connecting to SQL Server with Visual Studio Express Editions

    - by tlianza
    I find it odd that in Visual C# 2008 Express edition, when you use the database explorer, your options are: 1) Microsoft Access 2) SQL Server Compact 3.5, and 3) SQL Server Database File. BUT if you use Visual Web Developer 2008 Express, you can connect to a regular SQL Server, Oracle, ODBC, etc. For people developing command-line or other C# apps that need to talk to a SQL Server database, do you really need to build your LINQ/Data Access code with one IDE (Visual Web Developer) and your program in another (Visual C#)? It's not a hard workaround, but it seems weird. If Microsoft wanted to force you to upgrade to Visual Studio to connect to SQL Server, why would they include that feature in one of their free IDEs but not the other? I feel like I might be missing something (like how to do it all in Visual C#). Thanks! Tom

    Read the article

  • Subdomains for different applications on Windows Server 2008 R2 with Apache and IIS 7 installed

    - by Yusuf
    Hi, I have a home server on which I have installed Apache and several other applications that have a web GUI (JDownloader, Free Download Manager). In order to access each of these apps (whether from the local network or the Internet), I have to enter a different port, e.g., http://server:8085 or http://xxxx.dyndns.org:8085 for Apache, http://server:90 or http://xxxx.dyndns.org:90 for FDM, http://server:8081 or http://xxxx.dyndns.org:8081 for JDownloader. I would like to be able to access them using subdomains, e.g., http://apache.server or http://apache.xxxx.dyndns.org for Apache, http://fdm.server or http://fdm.xxxx.dyndns.org for FDM, http://jdownloader.server or http://jdownloader.xxxx.dyndns.org for JDownloader. First of all, would it be possible the way I want it, i.e., both from the LAN and the Internet, and if yes, how? And even if it's possible only from the Internet, I would still like to know how to do it, if there's a way. Thanks in advance, Yusuf
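    Assuming Apache ends up as the only listener on port 80 (the title mentions IIS 7 is also installed, so one of them has to bind elsewhere), a hedged sketch using name-based virtual hosts with mod_proxy would look like this; the host names come from the question, and each further app follows the same pattern:

        # Requires mod_proxy and mod_proxy_http to be enabled
        <VirtualHost *:80>
            ServerName jdownloader.xxxx.dyndns.org
            ProxyPass        / http://localhost:8081/
            ProxyPassReverse / http://localhost:8081/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName fdm.xxxx.dyndns.org
            ProxyPass        / http://localhost:90/
            ProxyPassReverse / http://localhost:90/
        </VirtualHost>

    For the LAN-side names (http://jdownloader.server and so on) you would also need local DNS or hosts-file entries pointing each name at the server, since plain single-label names don't carry subdomains.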

    Read the article

  • MS Query Analyzer / Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5, and I've always been a bit amazed that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tool is even worse. Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way the Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless. The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years, but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support, and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim+sqlcmd and a console... I'm that desperate :) Those of you who work day in and day out with SQL Server and Visual Studio... do you find the tools to be adequate? Have you ever wished they were better, and if you have found something better, could you share please? Thanks!

    Read the article

  • Make SQL Server 2005 accessible via Internet

    - by Gary Joynes
    I have an application that runs on a client's server, built on a SQL Server 2005 database. We have now developed an ASP.NET v2 application which connects to this database. This web application will be hosted on an ISP's server but needs to access the SQL Server database on the client's server. The client's server has a firewall and so forth, so I assume it should be possible to make the SQL Server accessible via the Internet, but of course I am worried about security. Can someone point me to some best practices to achieve this?

    Read the article

  • MS Build Server 2010 - Buffer Overflow

    - by user329005
    Hey everybody, I am trying to build a solution on MS Build Server (MS Visual Studio 2010 ver 10.0.30319.1): ServerTasks - Builds - Server Task Builder - Queue new Build, and go; 47 seconds later I get this error output: CSC: Unexpected error creating debug information file 'c:\Builds\1\ServerTasks\Server-Tasks Builder\Sources\ThirdParty\Sources\samus-mongodb-csharp-2b8934f\MongoDB.Linq\obj\Debug\MongoDB.Linq.PDB' -- 'c:\Builds\1\ServerTasks\Server-Tasks Builder\Sources\ThirdParty\Sources\samus-mongodb-csharp-2b8934f\MongoDB.Linq\obj\Debug\MongoDB.Linq.pdb: Access denied I checked the permissions of the directory and set them (for debug purposes only) to grant access to all users, but I am still having the issue. Running Procmon and filtering file access for the directory 'c:\Builds\1\ServerTasks\Server-Tasks Builder\Sources\ThirdParty\Sources\samus-mongodb-csharp-2b8934f\MongoDB.Linq\obj\Debug\' tells me: 16:41:00,5449813 TFSBuildServiceHost.exe 3528 QuerySecurityFile C:\Builds\1\ServerTasks\Server-Tasks Builder\Sources\ThirdParty\Sources\samus-mongodb-csharp-2b8934f\MongoDB.Linq\obj\Debug BUFFER OVERFLOW Information: DACL, 0x20000000 and 16:41:00,5462119 TFSBuildServiceHost.exe 3528 QueryOpen C:\Builds\1\ServerTasks\Server-Tasks Builder\Sources\ThirdParty\Sources\samus-mongodb-csharp-2b8934f\MongoDB.Linq\obj\Debug FAST IO DISALLOWED Any ideas?

    Read the article
