Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • Git branch strategy for small dev team

    - by Bilal Aslam
    We have a web app that we update and release almost daily. We use git as our VCS, and our current branching strategy is very simple and broken: we have a master branch and we check changes that we 'feel good about' into it. This works, but only until we check in a breaking change. Does anyone have a favorite git branch strategy for small teams which meets the following requirements:
      - Works well for teams of 2 to 3 developers
      - Lightweight, and not too much process
      - Allows devs to isolate work on bug fixes and larger features with ease
      - Allows us to keep a stable branch (for those 'oh crap' moments when we have to get our production servers working)
    Ideally, I'd love to see your step-by-step process for a dev working on a new bug.

    Read the article

  • Http Geocoder (Google) Accuracy level

    - by sushruth
    I am geocoding a large number of user-entered addresses and am interested in the accuracy levels returned. My GOAL is to get the BEST POSSIBLE ACCURACY score for a given address. I call the geocoder API the following way: http://maps.google.com/maps/geo?q={address}&output=csv&sensor=false&key=xx Now, the accuracy levels returned for the same address with/without the premise name are:
      - q = Key Arena, 305 Harrison Street, Seattle, WA 98109 (Accuracy is 5)
      - q = 305 Harrison Street Seattle, WA 98109 (Accuracy is 8)
      - q = Key Arena, Seattle, WA 98109 (Accuracy is 9)
    It's obvious from the above that the Google servers do not return the best accuracy when the street address is combined with a premise/venue name. So the question is :) is there a way to pass the complete address (with premise name, i.e. case 1) and get the maximum accuracy? (Or, how can I tell the Google server that the address is passed with both a premise/building name and a street name?) If you are thinking "why not just use case 3", the answer is that these are user-entered addresses; they could enter "my mom's house" for the premise, with an accurate street address, in which case I want the accuracy to be 8, not 5.
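
    A minimal Python sketch of the comparison described above, using the same CSV endpoint quoted in the question; the API key is a placeholder and the CSV output is assumed to be "status,accuracy,lat,lng":

        import urllib.parse
        import urllib.request

        ENDPOINT = "http://maps.google.com/maps/geo"

        def accuracy(address, key="YOUR_KEY"):
            # Build the same q/output/sensor/key query string as in the question.
            query = urllib.parse.urlencode(
                {"q": address, "output": "csv", "sensor": "false", "key": key}
            )
            with urllib.request.urlopen(ENDPOINT + "?" + query) as response:
                status, acc, lat, lng = response.read().decode().split(",")
            return int(acc)

        for addr in (
            "Key Arena, 305 Harrison Street, Seattle, WA 98109",  # premise + street
            "305 Harrison Street Seattle, WA 98109",              # street only
            "Key Arena, Seattle, WA 98109",                       # premise only
        ):
            print(addr, "->", accuracy(addr))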

    Read the article

  • EJB3.1 Remote invocation - is it distributed automatically? is it expensive?

    - by Hank
    I'm building a JEE6 application with performance and scalability in the forefront of my mind. Business logic and the JPA2 facade are held in stateless session beans (EJB3.1). As of right now, the SLSBs implement only @Remote interfaces. When a bean needs to access another bean, it does so via RMI. My reasoning behind this is the assumption that, once the application runs on a bunch of clustered application servers, the RMI part allows the execution to be distributed across the whole cluster automagically. Is that a correct assumption? I'm fine with dealing with the downsides of that (objects lose the entityManager session, pass-by-value), at least I think so. But I am wondering if constant remote invocation isn't adding more load than necessary.

    Read the article

  • ObjectDataSource cannot find type when deployed to SharePoint

    - by Sean
    I'm receiving the following error when deploying a feature containing ASP.NET pages to our development SharePoint servers: System.InvalidOperationException: The type specified in the TypeName property of ObjectDataSource 'odsYears' could not be found. Our .dll is being deployed to the GAC and our pages are being deployed to the respective Features directory in the 12 hive. We are not receiving this error on our sandbox SharePoint server. I disassembled the .dll to be sure the class was being deployed and everything looked OK. Does anyone have any ideas on why this would not work on one of our SharePoint environments? Thanks.

    Read the article

  • How to convert a .NET WebService-Method-Result (Soap) into its original datatype?

    - by Marc
    Hello everyone. I have two "identical" web services (SOAP) on two different servers. Don't ask why :-) WebService-1 decides whether it handles the request itself or passes the request on to WebService-2. If it passes it on, the response of WebService-2 should be returned directly from WebService-1. The response datatype is complex and self-defined. With simple datatypes like 'int' or 'string' there would be no problem. The response of WebService-2 is a serialized object (I think it is called a "stub"), and therefore it is not possible to pass this object straight through as the response of WebService-1, because the types of the objects don't match. Is there a simple way to convert the serialized datatype into its original type without building a complex converter?

    Read the article

  • Getting "stack level too deep" error when deploying with Capistrano, Rails 3.1 ruby 1.9.2

    - by Victor S
    Here is the log for the cap deploy script output around where the error occurs. Any suggestions why this might be happening? Thanks!
        [yup.la] executing command
        [yup.la] sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'
        ** [out :: yup.la] rake aborted!
        ** [out :: yup.la]
        ** [out :: yup.la] stack level too deep
        ** [out :: yup.la] (in /srv/www/portrait/releases/20120406051647/app/assets/stylesheets/mobile.css.scss)
        ** [out :: yup.la]
        ** [out :: yup.la] Tasks: TOP => assets:precompile:primary
        ** [out :: yup.la] (See full trace by running task with --trace)
        ** [out :: yup.la] command finished in 30868ms
        *** [deploy:update_code] rolling back
        * executing "rm -rf /srv/www/portrait/releases/20120406051647; true"
        servers: ["yup.la"]
        [yup.la] executing command
        [yup.la] sh -c 'rm -rf /srv/www/portrait/releases/20120406051647; true'
        command finished in 288ms
        failed: "sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'" on yup.la
        /Users/victorstan/Sites/portrait ?

    Read the article

  • How to flush data in php and disconnect user but keep the script alive

    - by Rodrigo
    This is a tricky question: while developing a PHP + Ajax application I ran into some long queries. There is nothing wrong with them, but they could be done in the background. I know that there's a way to just send a reply to the user while handing the real processing off to another process via exec(); however, that doesn't feel right to me, since it might open up exploits and it's not practical to make it compatible with virtual servers and cross-platform. PHP offers the ob_* functions, which help with flushing the output buffer, but the user stays connected while the script is running. I'm wondering if there's an alternative to exec() for keeping a script running after sending the data to the user and closing the connection/thread with Apache, or a less "dirty" way to hand the processing off to another script.

    Read the article

  • Understand ACTV mode and the PORT command

    - by Ramy
    Hello, I'm the part-time FTP server administrator (we have no real full-time admin). We currently only allow ACTV mode connections. Some of our clients have had issues with this, but for the most part they've been OK using ACTV. For the few who aren't, we've been able to push the data over to their servers from ours. There is one client in particular, however, who is currently having trouble. He is using FileZilla and issuing a PORT command. First, does using the PORT command imply that you are in ACTV mode? Second, is there a way in FileZilla to explicitly change to ACTV mode? Thanks for the help, _Ramy
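
    For reference, a minimal Python sketch of the distinction being asked about: with the standard ftplib module, passive mode is the default, and calling set_pasv(False) switches to active (ACTV) transfers, where the client advertises a local port with a PORT/EPRT command and the server connects back to it for data. The host and credentials below are placeholders.

        from ftplib import FTP

        ftp = FTP("ftp.example.com")      # placeholder host
        ftp.login("user", "password")     # placeholder credentials

        # Active mode: the client opens a listening socket and announces it to
        # the server with PORT/EPRT; the server then connects back for the data
        # transfer (as opposed to passive mode, where the client connects out).
        ftp.set_pasv(False)
        ftp.retrlines("LIST")             # directory listing over the active data connection

        ftp.quit()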

    Read the article

  • Rewrite rules don't work on Apache 1.3

    - by Sander Versluys
    I'm using a couple of rewrite directives that always worked before on Apache 2, but now I've uploaded to shared hosting and the rewrite rules do not seem to get applied. I've reduced my .htaccess file to the following essential rules:
        RewriteEngine On
        RewriteBase /demo/
        RewriteRule ^(.*)$ index.php/$1 [L]
    As you can see, I want to rewrite every request to my index.php file in the demo folder under the root. So everything like http://www.example.com/demo/albums/show/1 should be processed by http://www.example.com/demo/index.php for a standard MVC setup. (I'm using CodeIgniter, by the way.) The directives above result in a 500 error, so I thought it might be because of some syntax differences between 1.3 and 2.x. After some trial-and-error editing, I've found the rewrite rule itself to be at fault, but I really don't understand why. Any ideas why my rewrite rule doesn't work? It did before on lots of different servers. Any suggestions on how to fix it? Note: mod_rewrite does work; I've written a small test to be sure.

    Read the article

  • PHP float bug: PHP Hangs On Numeric Value

    - by jeroen
    I just read an interesting article about PHP hanging on a certain float value; see The Register and Exploring Binary. I never explicitly use floats; I use number_format() to clean my input and display, for example, prices. Also, as far as I am aware, all input from, say, forms arrives as strings until I tell PHP otherwise, so I am assuming that this problem does not affect me. Am I right, or do I need to check, for example, the WordPress and SquirrelMail installations on my server to see if they cast anything to float? Or, better, grep all PHP files on my servers for float?
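
    A minimal Python sketch of the "grep all PHP files for float" idea from the question; it only flags explicit conversions ((float), (double), (real), floatval()), not implicit string-to-float coercion, and the scan root is a placeholder argument:

        import os
        import re
        import sys

        # Constructs that explicitly convert a value to a float in PHP source.
        FLOAT_PATTERN = re.compile(r"\(float\)|\(double\)|\(real\)|floatval\s*\(", re.IGNORECASE)

        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(".php"):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as handle:
                    for lineno, line in enumerate(handle, start=1):
                        if FLOAT_PATTERN.search(line):
                            print(f"{path}:{lineno}: {line.rstrip()}")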

    Read the article

  • Retrieving data from MySQL with HTML/JavaScript on one domain and the PHP file on the other

    - by Mike
    I need to retrieve data from a MySQL database and have it work the same way for all types of servers. For example, it should work on a server that runs no server-side language; it should also work on LAMP and IIS. I was thinking about using Ajax and XMLHttpRequest, but learned of the cross-domain limitation. I also tried to just include the PHP in a tag, but it comes back with a syntax error in the HTML code created by the PHP file, even though it looks correct. Does anyone know how to fix either of these issues, or have a different way to go about it?

    Read the article

  • Manage DNS Zone in "slave" Mode with MS Windows 2008 R2

    - by kockiren
    Hello all, I have the following issue: I have configured a DNS zone "location.domain.tld" for my internal network and it works well, but now I also want to manage domain.tld for my internal network, while domain.tld itself is managed by an external DNS provider. location.domain.tld contains all the clients and servers that are for internal use only (with local IPs); all these clients resolve the global mail server (for example) via its external address, but now I want to catch individual domain names and resolve them my own way. I have not found a way to solve this issue. Any ideas? Regards, Rene

    Read the article

  • Create view or SP, only if the DB contains a pattern

    - by Randall Salas
    Hi all: I am working on a script that needs to be run on many different SQL servers. Some of them share the same structure (in other words, they are identical), but the filegroups and the DB names are different, because there is one per client. Anyway, when running a script, if I chose the wrong DB, I would like it not to be executed. I am trying to maintain a clean DB. Here is my example, which only works for dropping a view if it exists, but does not work for creating a new one. I also wonder how this would look for creating a stored procedure. Thanks a lot.
        if exists (select * from dbo.sysobjects
                   where id = object_id(N'[dbo].[ContentModDate]')
                     and OBJECTPROPERTY(id, N'IsView') = 1)
           AND CHARINDEX('Content', DB_NAME()) > 0
            drop view [dbo].[ContentModDate]
        GO
        IF (CHARINDEX('Content', DB_NAME()) > 0) BEGIN
            CREATE VIEW [dbo].[Rx_ContentModDate] AS
                SELECT 'Table1' AS TableName, MAX(ModDate) AS ModDate FROM Tabl1 WHERE ModDate IS NOT NULL
                UNION
                SELECT 'Table2', MAX(ModDate) AS ModDate FROM Table2 WHERE ModDate IS NOT NULL
            END
        END
        GO

    Read the article

  • SQL Server mirroring connection doesn't work

    - by StNickolas
    I have two servers, srv-erp1 and srv-erp3, and I set them up to mirror each other. The whole setup was done following lots of tutorials and examples. But when I call
        ALTER DATABASE MIRROR_TEST SET PARTNER = 'TCP://srv-erp3:5022'
    the response is: The server network address "TCP://srv-erp3:5022" can not be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. I go to cmd on srv-erp3 and use netstat -an: the port is listening. I go to cmd on srv-erp1 and use telnet srv-erp3 5022: it connects fine. All firewalls are turned off. The only difference in the servers' configuration is that srv-erp1 runs Windows Server 2003 R2 x64 and srv-erp3 runs Windows 2008 R2 x64. What could be the reason for this problem? Regards, Dmitry.

    Read the article

  • Galera install failure on Fedora 18

    - by ehime
    I've been trying to reinstall MariaDB and have been encountering multiple issues.
        $ yum install Mariadb-Galera-server
        Error: Package: MariaDB-Galera-server-5.5.29-1.i386 (mariadb)
               Requires: galera
               Available: galera-23.2.4-1.rhel5.i386 (mariadb)
                   galera
                   galera = 23.2.4-1.rhel5
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest
    There is a requirement that libssl.so.6 and libcrypto.so.6 are installed; these DO show up in my /lib64 and /lib, though, as linked items.
        /usr/lib
        -rwxr-xr-x 1 root root 1356700 Nov 23 2010 libcrypto.so.0.9.8e
        lrwxrwxrwx 1 root root 19 Jun 28 12:03 libcrypto.so.6 -> libcrypto.so.0.9.8e
        -rwxr-xr-x. 1 root root 394272 Mar 18 14:22 libssl.so.1.0.1e
        lrwxrwxrwx 1 root root 16 Jun 28 12:03 libssl.so.6 -> libssl.so.0.9.8e
        /usr/lib64
        -rwxr-xr-x 1 root root 1849680 Mar 18 14:21 libcrypto.so.1.0.1e
        lrwxrwxrwx 1 root root 26 Jun 28 11:54 libcrypto.so.6 -> /lib64/libcrypto.so.1.0.1e
        -rwxr-xr-x 1 root root 421712 Mar 18 14:21 libssl.so.1.0.1e
        lrwxrwxrwx 1 root root 23 Jun 28 11:54 libssl.so.6 -> /lib64/libssl.so.1.0.1e
    So the deps SHOULD be met. Trying yum install galera returns this:
        Resolving Dependencies
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Restarting Dependency Resolution with new changes.
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Finished Dependency Resolution
    No errors, but no install either...? Let's try wget and rpm'ing the package instead, I guess?
        $ wget https://launchpad.net/galera/2.x/23.2.4/+download/galera-23.2.4-1.rhel5.x86_64.rpm
        $ rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm
    This issues the dreaded error:
        Failed dependencies:
            libcrypto.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64
            libssl.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64
    But we saw above that these libraries are there =( What's going on?? Is openssl not installed?
        $ yum install openssl
        Loaded plugins: langpacks, presto, refresh-packagekit
        Package 1:openssl-1.0.1e-4.fc18.x86_64 already installed and latest version
        Nothing to do
    It's there.... ??? wth Fedora?

    Read the article

  • What processor is javax.xml.transform Using?

    - by Jeremy Witmer
    I've implemented a simple webapp that transforms XML based on an XSLT stylesheet. It works fine on all the Windows servers I've deployed it to (on Tomcat), but on all Linux systems I get a compile error on the XSLT. As best I can tell, it's because Java 1.6 isn't using the same processor behind javax.xml.transform. On the one Linux system, it's org.apache.xalan.xslt, version 2.4. What I can't figure out is how to determine, generically, what any given system is using behind javax.xml.transform. Or, if anyone has any hints on what else I might do to figure out the problem, that'd be good, too.

    Read the article

  • Node.js and wss://

    - by CNelson
    I'm looking to start using JavaScript on the server, most likely with node.js, as well as using WebSockets to communicate with clients. However, there doesn't seem to be a lot of information about encrypted WebSocket communication using TLS and the wss:// scheme. In fact, the only server that I've seen explicitly support wss:// is Kaazing. This TODO is the only reference I've been able to find in the various node implementations. Am I missing something, or are the WebSocket JS servers just not ready for encrypted communication yet? Another option could be using something like lighttpd or Apache to proxy to a node listener; has anyone had success there?

    Read the article

  • Why is capistrano acting up like this?

    - by Matt
    I am having an issue with my deploy. I ran cap deploy and got this:
        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        ** [174.143.150.79 :: out] Permission denied (publickey).
        ** fatal: The remote end hung up unexpectedly
        command finished
        *** [deploy:update_code] rolling back
        * executing "rm -rf /home/deploy/transprint/releases/20110105034446; true"
        servers: ["174.143.150.79"]
        [174.143.150.79] executing command
    Here is my deploy.rb:
        set :application, "transprint"
        set :domain, "174.149.150.79"
        set :user, "deploy"
        set :use_sudo, false
        set :scm, :git
        set :deploy_via, :remote_cache
        set :app_path, "production"
        set :rails_env, 'production'
        set :repository, "[email protected]:myname/something.git"
        set :scm_username, 'deploy'
        set :deploy_to, "/home/deploy/#{application}"
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true
    Please help.

    Read the article

  • Self-signed certificates for many users/browsers/sites

    - by Demiurg
    Here is my problem: I have a lot of users, using different browsers, accessing many internal web sites over HTTPS. I can create my own Certificate Authority, then create a certificate for each server, and after that have all the users import it. Obviously, that cannot work in reality: there are too many users and too many sites, and more sites will be added in the future. I'm looking for a way to automate this. Is there a way to create a certificate so that all major browsers (IE, FF, Opera, Chrome and Safari) would trust it for all servers? If so, what is the best way to install it automatically in all major browsers?
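
    For illustration, a minimal Python sketch of the single-CA model described above (one internal CA signs a certificate per server, and clients only need to trust that one CA); the CA bundle path and hostnames are placeholders:

        import socket
        import ssl

        CA_BUNDLE = "/etc/pki/internal-ca.pem"                     # placeholder path to the internal CA certificate
        SITES = ["intranet.example.local", "wiki.example.local"]   # placeholder internal hosts

        # A context that trusts only the internal CA; any server certificate
        # issued by that CA will verify, no matter how many sites are added later.
        context = ssl.create_default_context(cafile=CA_BUNDLE)

        for host in SITES:
            try:
                with socket.create_connection((host, 443), timeout=5) as sock:
                    # wrap_socket performs the TLS handshake and verifies the
                    # server certificate against the internal CA.
                    with context.wrap_socket(sock, server_hostname=host) as tls:
                        print(host, "trusted, subject:", tls.getpeercert()["subject"])
            except (ssl.SSLError, OSError) as error:
                print(host, "NOT trusted:", error)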

    Read the article

  • Clean install of IIS 6 on Windows Server 2003 ignoring 'web.config'?

    - by Vario
    Hi, any help with this would be really appreciated! As the title suggests, I'm running a brand new install of Windows Server 2003 and IIS 6, and I'm basically attempting to mirror a live web server onto a new internal development server which runs the same setup. It's an ASP.NET site that relies heavily on URL rewriting (using Intelligencia). ASP.NET is set to run on v2.0.50727 on both servers. I've tried intentionally introducing syntax errors into the web.config and IIS just appears to ignore them completely; so, given that IIS 6 isn't reading the web.config, the rest of the site doesn't work at all (I get a 404 error, as a 'Default.aspx' doesn't exist, since the web.config handles the default page rewriting). Having looked at the application mappings, '.config' files are set to use the default 'c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll', which exists. Is there anything else I may be missing? Thanks in advance.

    Read the article

  • Robocopy for Windows 2003 doesn't support /DST option

    - by Jon
    Does anyone know if it is possible to download the latest robocopy for Windows 2003? The latest version provides the /DST option, which ignores timestamp differences caused by BST (British Summer Time). Every time we do a build and sync our servers after the clocks go +1/-1 hour, it takes hours instead of minutes because robocopy sees everything as changed. I noticed the new version is included automatically with Vista/Win7, but the Resource Kit that I downloaded doesn't include a new version of robocopy for Windows Server 2003. Is there a place to download it from, and will it also work on Windows Server 2003? Thanks.

    Read the article

  • ASP.NET Granting access to local resources

    - by Mina Samy
    Hi all, I have an ASP.NET web application that runs on a Windows Server 2003 server. There is a form that reads and writes data to an XML file inside the application's directory. I always grant the NETWORK SERVICE user full control on my application folder so that the application can read and write the XML file. I put the application on another Windows Server 2003 server and did the same steps as above, but I was getting an access denied exception on the form that reads and writes the XML. I did some searching and found that if you grant the ASPNET user full control on the directory it would work; I did that and it worked fine. My question is: what is the difference between granting full control permissions to the NETWORK SERVICE and ASPNET users? And what difference between the two servers could have caused this issue? Thanks.

    Read the article

  • Plastic SCM vs. Mercurial? Need Source Control for Visual Studio 2005 on Windows 7

    - by Pete Alvin
    1) Has anyone used Plastic SCM? Is it reliable?
    2) How does it compare with Mercurial? (It seems like this is a good candidate for DVCS on Windows. I tried Git and really didn't like it.)
    3) I really like TortoiseSVN. I like a central model because of the peace of mind that if it's in the repository it's "safe" and tracked. Here is the question: is the excitement over distributed version control (DVCS) worth the hype?
    My environment:
      - Windows 7
      - Windows development (Dev. Studio 2005, SQL Server 2003); integration would be nice
      - Two developers sharing the same code
      - Push code to production servers almost daily

    Read the article

  • Symfony2 Syntax Errors (in vendor files)

    - by user1665246
    To maintain code integrity across our servers, we'd like to keep the /vendor/* directory under source control, rather than use Composer to download files each time we roll out onto another server - i.e. we can be certain that the /vendor/* files are identical. We run a syntax checker against all files committed to source control and ran across the following error:
        File '/vendor/sensio/generator-bundle/Sensio/Bundle/GeneratorBundle/Resources/skeleton/bundle/Bundle.php' failed the PHP syntax check with the following error:
        PHP Parse error: syntax error, unexpected '}', expecting T_NS_SEPARATOR in /vendor/sensio/generator-bundle/Sensio/Bundle/GeneratorBundle/Resources/skeleton/bundle/Bundle.php on line 3
    Is the "error" in this file intentional? Any help appreciated. File contents below:
        <?php

        namespace {{ namespace }};

        use Symfony\Component\HttpKernel\Bundle\Bundle;

        class {{ bundle }} extends Bundle
        {
        }

    Read the article

  • Is it possible to use Integrated Windows Auth when Server isn't on the domain?

    - by jskentzos
    Our production web servers ARE NOT part of the domain, but we'd like people to be able to log in automatically since they are logged into the domain on their PCs. Is there any way to get the browser (IE7+) to send the appropriate information to the server (IIS6) so I can retrieve ServerVariables["AUTH_USER"] or ServerVariables["LOGON_USER"]? I presume the answer is no, since if I set the security for Windows auth to "on" and anonymous access to "off", the server wouldn't know what to do with user information for a domain it has no knowledge of. I just want to know for sure before I give the SSO team a "not possible" answer.

    Read the article
