Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • How to flush data in PHP and disconnect the user but keep the script alive

    - by Rodrigo
    This is a tricky one. While developing a PHP+Ajax application I ran into some long-running queries; nothing is wrong with them, but they could be done in the background. I know there's a way to send the reply to the user and hand the real processing off to another process via exec(), but that doesn't feel right to me: it might open up exploits, and it isn't practical if the code has to stay compatible with virtual servers and remain cross-platform. PHP offers the ob_* functions, which help with flushing the output buffer, but the user stays connected for as long as the script is running. I'm wondering whether there's an alternative to exec() that keeps the script running after sending its data to the user and closing the connection/thread with Apache, or a less "dirty" way to hand the remaining processing to another script.
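
    A minimal sketch of the flush-and-continue approach I have been experimenting with (assuming Apache with mod_php; fastcgi_finish_request() only exists under PHP-FPM, and do_long_queries() is just a placeholder for the slow part):

        <?php
        ignore_user_abort(true);        // keep running even after the client disconnects
        set_time_limit(0);

        ob_start();
        echo json_encode(array('status' => 'queued'));   // the quick reply the browser sees

        header('Connection: close');
        header('Content-Length: ' . ob_get_length());
        ob_end_flush();
        flush();

        if (function_exists('fastcgi_finish_request')) {
            fastcgi_finish_request();   // under PHP-FPM this really closes the connection
        }

        // ...the long-running work continues here, invisible to the user...
        do_long_queries();              // hypothetical function for the slow queries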

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but it pertains to NFSv4 (Linux) and the underlying ZFS (OpenIndiana) we are using. The ZFS is shared via NFSv4 and CIFS, for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece is this: each user has a home, where he sets a top-level, inherited ACL. He can later refine permissions for the contained files/folders iteratively. Over time, permissions sometimes need to be generalized again to avoid an ever-growing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the desired permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared, as in the question linked above? I have found nothing about what a blank, inherited ACL should look like; this use case simply does not seem to exist. In fact, the Solaris chmod manpage clearly states: "A- Removes all ACEs for current ACL on file and replaces current ACL with new ACL that represents only the current mode of the file." That is, we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get: chmod A0- <file> -> chmod: ERROR: Can't remove all ACL entries from a file. Which, by the way, makes me think: why not? I really do want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting at 1(!) and verbalizes its woes less diligently: nfs4_setacl -x 1 <file> -> Failed setxattr operation: Unknown error 524. So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option of the ACL-setting commands pollute all children instead of setting a single ACL and letting the children inherit it? Is this really the designers' intention? I can clean up the ACLs perfectly well using a Windows client, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?
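
    For the record, the reset-and-reapply sequence I would expect to use on the Solaris side, if truly clearing is impossible: drop back to the trivial ACL and then set a single inheritable ACE on the home directory (syntax as I read the Solaris chmod ACL extensions; the user name, path and permission set are just placeholders):

        chmod A- /tank/home/alice
        chmod A+user:alice:full_set:file_inherit/dir_inherit:allow /tank/home/alice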

    Read the article

  • Restart nginx without sudo?

    - by tesmar
    I want to be able to run cap deploy without having to type any passwords. I have set up private keys so I can reach the remote servers fine, and I am now using svn over ssh, so no passwords there. I have one last problem: I need to be able to restart nginx. Right now I run sudo /etc/init.d/nginx reload, which is a problem because it prompts for the Capistrano user's password, the one I just stopped using in favour of keys. Any ideas on how to restart nginx without a password?
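
    The approach I am leaning towards is a passwordless sudo rule scoped to just the nginx init script (a sketch; it assumes the deploy user and the stock init script path, and should be added with visudo):

        # /etc/sudoers -- allow the deploy user to reload/restart nginx without a password
        deploy ALL=(root) NOPASSWD: /etc/init.d/nginx reload, /etc/init.d/nginx restart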

    Read the article

  • Increasing TCP/IP Window size

    - by Lior
    I am trying to send messages over TCP/IP between two servers. I want to send a message that is 30 KB, and I want to send it as a whole; I don't want the TCP protocol to break it into segments. The two machines are Windows Server 2008 R2, and the client and the server are written in C#. I tried tcpclnt.SendBufferSize = 100000; and tcpclnt.Client.DontFragment = true; and the same on the server. I also tried configuring the window size on the server (by editing the registry).

    Read the article

  • Manage DNS Zone in "slave" Mode with MS Windows 2008 R2

    - by kockiren
    Hello all, I have the following issue: I configured a DNS zone "location.domain.tld" for my internal network and it works well. Now I also want to manage domain.tld for the internal network, but domain.tld is hosted by an external DNS provider. location.domain.tld contains all the clients and servers that are for internal use only (with local IPs). All these clients currently resolve the public mail server (for example) via its external address, but now I want to intercept individual domain names and resolve them in my own way. I have not found a way to do this. Any ideas? Regards, Rene
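
    The closest thing I have found so far is the idea of pin-point zones: create a tiny authoritative zone for just the one name to override, so every other name in domain.tld keeps resolving at the external provider. A sketch with dnscmd, in case that is the right direction (host name and address are placeholders):

        dnscmd /ZoneAdd mail.domain.tld /DsPrimary
        dnscmd /RecordAdd mail.domain.tld @ A 192.168.10.25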

    Read the article

  • intelligent thin start with port alias for bash

    - by seaofclouds
    I would like a single alias (ts) that starts my local development server. The script should test for an open port starting at 3000 and use the first available one. Additionally, some sites require a rackup file, making -R config.ru necessary, so the script should check the current directory for config.ru and add that option if it is present. Currently, to start my local development environment, I run: alias ts="thin -R config.ru -p 3000 start" Often I need to run several servers to test different sites, so I've created additional aliases, e.g.: alias ts1="thin -R config.ru -p 3001 start"
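
    A rough sketch of the shell function I am imagining instead of the aliases (untested; it assumes lsof is available for the port check):

        ts() {
          local port=3000
          # walk up from 3000 until we find a port nothing is listening on
          while lsof -i tcp:"$port" >/dev/null 2>&1; do
            port=$((port + 1))
          done
          local rackup=""
          # only pass -R config.ru when the current directory actually has one
          [ -f config.ru ] && rackup="-R config.ru"
          thin $rackup -p "$port" start
        }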

    Read the article

  • Understand ACTV mode and the PORT command

    - by Ramy
    Hello, I'm the part-time FTP server administrator (there is no real full-time admin). We currently only allow active (ACTV) mode connections. Some of our clients have had issues with this, but for the most part they've been OK using active mode. For the few who aren't, we've been able to push the data over to their servers from ours. There is one client in particular, however, who is currently having trouble. He is using FileZilla and issuing a PORT command. First, does using the PORT command imply that you are in active mode? Second, is there a way in FileZilla to explicitly switch to active mode? Thanks for the help, _Ramy
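
    For my own reference, a worked example of what the PORT command carries (addresses made up): the client tells the server which address and port to connect back to for the data channel, which is exactly what active mode means.

        PORT 192,168,1,42,14,178
             -> client address 192.168.1.42, data port 14*256 + 178 = 3762
        200 PORT command successful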

    Read the article

  • Http Geocoder (Google) Accuracy level

    - by sushruth
    I am geocoding a large number of user-entered addresses and am interested in the accuracy levels returned. My goal is to get the best possible accuracy score for a given address. I call the geocoder API in the following way: http://maps.google.com/maps/geo?q={address}&output=csv&sensor=false&key=xx Now, the accuracy levels returned for the same address with and without the premise name: q = Key Arena, 305 Harrison Street, Seattle, WA 98109 (accuracy is 5) q = 305 Harrison Street Seattle, WA 98109 (accuracy is 8) q = Key Arena, Seattle, WA 98109 (accuracy is 9) It is obvious from the above that the Google servers do not return the best accuracy when the street address is combined with a premise/venue name. So the question is: is there a way to pass the complete address (with the premise name, i.e. case 1) and get the maximum accuracy? Or how can I tell the Google server that the address includes a premise/building name as well as a street name? (If you are thinking "why not just use case 3": these are user-entered addresses, and someone could enter "my mom's house" as the premise along with an accurate street address, in which case I want the accuracy to be 8, not 5.)

    Read the article

  • Retrieving data from MySQL with HTML/JavaScript on one domain and the PHP file on the other

    - by Mike
    I need to retrieve data from a MySQL database and have it work the same way on all types of servers: for example, it should work on a server that runs no server-side language, and it should also work on LAMP and on IIS. I was thinking about using Ajax and XMLHttpRequest, but learned of the cross-domain limitation. I also tried to just include the PHP in a tag, but it comes back with a syntax error in the HTML code created by the PHP file, even though it looks correct. Does anyone know how to fix either of these issues, or have a different way to go about it?
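
    One workaround I am considering is JSONP-style loading: keep one PHP endpoint on the server that owns the database, have it wrap the JSON in a callback, and load it from any page with a plain script tag, which is not subject to the cross-domain restriction. A rough sketch, with made-up file and function names:

        <?php
        // jsondata.php -- hypothetical endpoint on the LAMP host that can reach MySQL
        $rows = fetch_rows_from_mysql();                              // placeholder for the real query
        $callback = preg_replace('/[^\w]/', '', $_GET['callback']);  // sanitize the callback name
        header('Content-Type: application/javascript');
        echo $callback . '(' . json_encode($rows) . ');';

    The consuming page, wherever it is hosted, would then include something like <script src="http://data.example.com/jsondata.php?callback=showRows"></script> and define a showRows(rows) function to render the data.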

    Read the article

  • What processor is javax.xml.transform Using?

    - by Jeremy Witmer
    I've implemented a simple webapp that transforms XML based on an XSLT stylesheet. It works fine on all the Windows servers I've deployed it to (under Tomcat), but on all Linux systems I get a compile error on the XSLT. As best I can tell, it's because Java 1.6 isn't using the same processor behind javax.xml.transform. On the one Linux system, it's org.apache.xalan.xslt, version 2.4. What I can't figure out is how to determine, generically, what any given system is using behind javax.xml.transform. Or, if anyone has any hints on what else I might do to track down the problem, that would be good too.
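
    A minimal check I am planning to drop into the webapp (or run standalone) to see which factory each JVM actually picks; as far as I know this is plain JAXP, nothing container-specific:

        import javax.xml.transform.TransformerFactory;

        public class WhichXsltProcessor {
            public static void main(String[] args) {
                // JAXP picks the implementation at runtime; print the concrete class it chose
                TransformerFactory factory = TransformerFactory.newInstance();
                System.out.println(factory.getClass().getName());
            }
        }

    My understanding is that the choice can also be pinned via the javax.xml.transform.TransformerFactory system property, but I would like to confirm what each box is loading first.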

    Read the article

  • Create a view or SP only if the DB name contains a pattern

    - by Randall Salas
    Hi all: I am working on a script that needs to be run on many different SQL Servers. Some of them share the same structure; in other words, they are identical, but the filegroups and the database names are different, because there is one database per client. When running the script, if I choose the wrong database it should not be executed; I am trying to maintain a clean DB. Here is my example, which only works for dropping a view if it exists, but does not work for creating a new one. I also wonder what this would look like for creating a stored procedure. Thanks a lot.

        if exists (select * from dbo.sysobjects
                   where id = object_id(N'[dbo].[ContentModDate]')
                   and OBJECTPROPERTY(id, N'IsView') = 1)
           AND CHARINDEX('Content', DB_NAME()) > 0
            drop view [dbo].[ContentModDate]
        GO
        IF (CHARINDEX('Content', DB_NAME()) > 0) BEGIN
            CREATE VIEW [dbo].[Rx_ContentModDate] AS
            SELECT 'Table1' AS TableName, MAX(ModDate) AS ModDate
            FROM Tabl1 WHERE ModDate IS NOT NULL
            UNION
            SELECT 'Table2', MAX(ModDate) AS ModDate
            FROM Table2 WHERE ModDate IS NOT NULL
        END
        END
        GO
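
    For reference, the workaround I am experimenting with: CREATE VIEW has to be the first statement in its batch, so wrapping it in EXEC lets it sit inside the IF block (a sketch using the names from my example; I assume the same trick would apply to CREATE PROCEDURE):

        IF CHARINDEX('Content', DB_NAME()) > 0
        BEGIN
            EXEC('CREATE VIEW [dbo].[Rx_ContentModDate] AS
                  SELECT ''Table1'' AS TableName, MAX(ModDate) AS ModDate
                  FROM Table1 WHERE ModDate IS NOT NULL
                  UNION
                  SELECT ''Table2'', MAX(ModDate) FROM Table2 WHERE ModDate IS NOT NULL');
        END
        GO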

    Read the article

  • Why is capistrano acting up like this?

    - by Matt
    I am having an issue with my deploy. I ran cap deploy and got this:

        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        ** [174.143.150.79 :: out] Permission denied (publickey).
        ** fatal: The remote end hung up unexpectedly
        command finished
        *** [deploy:update_code] rolling back
        * executing "rm -rf /home/deploy/transprint/releases/20110105034446; true"
        servers: ["174.143.150.79"]
        [174.143.150.79] executing command

    Here is my deploy.rb:

        set :application, "transprint"
        set :domain, "174.149.150.79"
        set :user, "deploy"
        set :use_sudo, false
        set :scm, :git
        set :deploy_via, :remote_cache
        set :app_path, "production"
        set :rails_env, 'production'
        set :repository, "[email protected]:myname/something.git"
        set :scm_username, 'deploy'
        set :deploy_to, "/home/deploy/#{application}"
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

    Please help.
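
    In case it is relevant, these are the agent-forwarding settings I was planning to try next in deploy.rb, on the theory that the GitHub key lives on my workstation rather than on the server (a sketch, not yet verified):

        # forward my local ssh-agent to the server so the git clone from GitHub
        # can use my workstation's key instead of one stored on the server
        ssh_options[:forward_agent] = true
        default_run_options[:pty] = true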

    Read the article

  • PHP float bug: PHP Hangs On Numeric Value

    - by jeroen
    I just read an interesting article about PHP hanging on certain float values; see The Register and Exploring Binary. I never explicitly use floats; I use number_format() to clean my input and to display, for example, prices. Also, as far as I am aware, all input from, say, forms arrives as strings until I tell PHP otherwise, so I am assuming this problem does not affect me. Am I right, or do I need to check, for example, the WordPress and SquirrelMail installations on my server to see whether they cast anything to float? Or better, grep all the PHP files on my servers for float?
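
    If it comes down to the grep option, this is roughly the one-liner I had in mind (the path is just an example):

        # look for explicit casts and floatval() calls in every .php file under the web root
        grep -rn --include='*.php' -E '\(float\)|\(double\)|floatval\(' /var/www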

    Read the article

  • Self-signed certificates for many users/browsers/sites

    - by Demiurg
    Here is my problem: I have a lot of users, on different browsers, accessing many internal web sites over https. I can create my own Certificate Authority, then create a certificate for each server, and after that have all the users import it. Obviously that cannot work in reality: there are too many users and too many sites, and more sites will be added in the future. I'm looking for a way to automate this. Is there a way to create a certificate that all major browsers (IE, FF, Opera, Chrome and Safari) would trust for all servers? If so, what is the best way to install it automatically in all the major browsers?
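
    For context, the CA-based setup I have in mind is roughly the following (openssl commands as a sketch; file names, subjects and lifetimes are arbitrary), so that only the single CA certificate would ever need to be trusted by the browsers:

        # one-time: create the internal CA
        openssl genrsa -out internal-ca.key 4096
        openssl req -x509 -new -key internal-ca.key -sha256 -days 3650 \
                -subj "/CN=Internal CA" -out internal-ca.crt

        # per site: key, CSR, and a certificate signed by the internal CA
        openssl genrsa -out site1.key 2048
        openssl req -new -key site1.key -subj "/CN=site1.internal" -out site1.csr
        openssl x509 -req -in site1.csr -CA internal-ca.crt -CAkey internal-ca.key \
                -CAcreateserial -out site1.crt -days 825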

    Read the article

  • Is it possible to use Integrated Windows Auth when Server isn't on the domain?

    - by jskentzos
    Our production web servers ARE NOT part of the domain, but we'd like people to be able to log in automatically, since they are logged into the domain on their PCs. Is there any way to get the browser (IE7+) to send the appropriate information to the server (IIS 6) so I can retrieve ServerVariables["AUTH_USER"] or ServerVariables["LOGON_USER"]? I presume the answer is no, since if I set the security for Windows auth to "on" and anonymous access to "off", the server wouldn't know what to do with user information for a domain it has no knowledge of. I just want to know for sure before I give the SSO team a "not possible" answer.

    Read the article

  • Node.js and wss://

    - by CNelson
    I'm looking to start using JavaScript on the server, most likely with node.js, as well as WebSockets to communicate with clients. However, there doesn't seem to be a lot of information about encrypted WebSocket communication using TLS and the wss:// scheme. In fact, the only server that I've seen explicitly support wss:// is Kaazing. This TODO is the only reference I've been able to find in the various Node implementations. Am I missing something, or are the JavaScript WebSocket servers simply not ready for encrypted communication yet? Another option could be using something like lighttpd or Apache to proxy to a Node listener; has anyone had success there?

    Read the article

  • Clean install of IIS 6 on Windows Server 2003 ignoring 'web.config'?

    - by Vario
    Hi, any help with this would be really appreciated! As the title suggests, I'm running a brand-new install of Windows Server 2003 and IIS 6, and I'm basically attempting to mirror a live web server onto a new internal development server, which runs the same setup. It's an ASP.NET site that relies heavily on URL rewriting (using Intelligencia). ASP.NET is set to run on v2.0.50727 on both servers. I've tried intentionally introducing syntax errors into the web.config and it just appears to ignore them completely, so given that IIS 6 isn't reading the web.config, the rest of the site doesn't work at all (I get a 404 error, since a Default.aspx doesn't exist and the web.config handles the default-page rewriting). Looking at the application mappings, '.config' files are set to use the default 'c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll', which exists. Is there anything else I may be missing? Thanks in advance.

    Read the article

  • Rewrite rules don't work on Apache 1.3

    - by Sander Versluys
    I'm using a couple of rewrite directives that have always worked before on Apache 2, but now that I've uploaded to shared hosting the rewrite rules don't seem to be applied. I've reduced my .htaccess file to the following essential rules:

        RewriteEngine On
        Rewritebase /demo/
        RewriteRule ^(.*)$ index.php/$1 [L]

    As you can see, I want to rewrite every request to my index.php file in the demo folder under the document root, so anything like http://www.example.com/demo/albums/show/1 should be processed by http://www.example.com/demo/index.php, for a standard MVC setup. (I'm using CodeIgniter, by the way.) The directives above result in a 500 error, so I thought it might be due to some syntax differences between 1.3 and 2.x. After some trial-and-error editing, I've found the rewrite rule itself to be at fault, but I really don't understand why. Any ideas as to why my rewrite rule doesn't work? It did before, on lots of different servers. Suggestions on how to fix it? Note: mod_rewrite does work; I've written a small test to be sure.
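
    For completeness, the variant I am going to try next, which stops the rule from rewriting its own target in a loop (a common cause of a 500 with this pattern); I do not know yet whether this host's Apache 1.3 behaves differently:

        RewriteEngine On
        RewriteBase /demo/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]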

    Read the article

  • Getting "stack level too deep" error when deploying with Capistrano, Rails 3.1 ruby 1.9.2

    - by Victor S
    Here is the log of the cap deploy output around where the error occurs. Any suggestions as to why this might be happening? Thanks!

        [yup.la] executing command
        [yup.la] sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'
        ** [out :: yup.la] rake aborted!
        ** [out :: yup.la]
        ** [out :: yup.la] stack level too deep
        ** [out :: yup.la] (in /srv/www/portrait/releases/20120406051647/app/assets/stylesheets/mobile.css.scss)
        ** [out :: yup.la]
        ** [out :: yup.la] Tasks: TOP => assets:precompile:primary
        ** [out :: yup.la] (See full trace by running task with --trace)
        ** [out :: yup.la] command finished in 30868ms
        *** [deploy:update_code] rolling back
        * executing "rm -rf /srv/www/portrait/releases/20120406051647; true"
        servers: ["yup.la"]
        [yup.la] executing command
        [yup.la] sh -c 'rm -rf /srv/www/portrait/releases/20120406051647; true'
        command finished in 288ms
        failed: "sh -c 'cd /srv/www/portrait/releases/20120406051647 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'" on yup.la

    /Users/victorstan/Sites/portrait ?

    Read the article

  • How to Test Individual Front End Web Server

    - by ChiliYago
    My farm consists of two front-end (FE) web servers that are managed by a load balancer. One FE went down, so we configured the load balancer to send traffic only to the other FE. We rebuilt the failed FE and rejoined it to the farm, which appears to have worked successfully (looking at IIS). I want to test the new FE before configuring the load balancer to use the new server. The approach I took was to add an entry to my hosts file pointing the site's URL at the new server's IP, but nothing comes up. Any advice would be great. Thanks
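
    For reference, the hosts-file entry I am testing with looks roughly like this (address and host name are made up), followed by a DNS cache flush before retrying the browser:

        # C:\Windows\System32\drivers\etc\hosts
        10.1.2.21    www.example.com

        # then, at a command prompt
        ipconfig /flushdns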

    Read the article

  • Solr security question

    - by Camran
    I have a Linux server, and I am about to upload a classifieds website to it. The website is PHP based, meaning PHP code adds and removes classifieds with the help of the users, of course. The PHP code then adds/removes each classified to/from a database index called Solr (similar in role to MySQL). The problem is that currently anybody can access the database, but I only want the website to be able to access it (Solr). Solr is on port 8983 as standard, by the way. My question is: if I add a rule to my firewall (iptables) to only allow connections coming from the server's own IP to the Solr port number, would this solve my issue? Thanks
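
    The kind of rules I have in mind (a sketch; the address assumes the PHP site runs on the same box as Solr and talks to it over localhost, otherwise substitute the web server's IP):

        # allow the web application's host, drop everyone else on the Solr port
        iptables -A INPUT -p tcp --dport 8983 -s 127.0.0.1 -j ACCEPT
        iptables -A INPUT -p tcp --dport 8983 -j DROP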

    Read the article

  • ASP.NET Granting access to local resources

    - by Mina Samy
    Hi all, I have an ASP.NET web application that runs on a Windows Server 2003 machine. There is a form that reads and writes data to an XML file inside the application's directory. I always grant the NETWORK SERVICE user full control on my application folder so that it can read and write the XML file. I put the application on another Windows Server 2003 machine and did the same steps as above, but I was getting an access-denied exception on the form that reads and writes the XML. I did some searching and found that granting the ASPNET user full control over the directory would work; I did that and it worked fine. My question is: what is the difference between granting full control permissions to the NETWORK SERVICE and ASPNET users? And what difference between the two servers could have caused this issue? Thanks

    Read the article

  • Sending message from one server to another in Twisted

    - by Casey Patton
    I've implemented my servers in the following way:

        def makeServer(application, port):
            factory = protocol.ServerFactory()
            factory.protocol = MyChat
            factory.clients = []
            internet.TCPServer(port, factory).setServiceParent(application)

        application = service.Application("chatserver")
        server1 = makeServer(application, port=1025)
        server2 = makeServer(application, port=1026)
        server3 = makeServer(application, port=1027)

    Note that MyChat is an event-handling class that has a "receiveMessage" action:

        def lineReceived(self, line):
            print "received", repr(line)
            for c in self.factory.clients:
                c.transport.write(message + '\n')

    I want server1 to be able to pass messages to server2. Rather, I want server1 to be treated as a client of server2. If server1 receives the message "hi" then I want it to send that same exact message to server2. How can I accomplish this?
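
    The sort of thing I have been sketching so far (untested; the hard-coded port and the names are placeholders) is a small client factory that server1 would use from lineReceived to push the same line on to server2:

        from twisted.internet import protocol, reactor
        from twisted.protocols.basic import LineReceiver

        class RelaySender(LineReceiver):
            def connectionMade(self):
                # once connected to the other server, send the line and hang up
                self.sendLine(self.factory.line)
                self.transport.loseConnection()

        class RelayFactory(protocol.ClientFactory):
            protocol = RelaySender

            def __init__(self, line):
                self.line = line

        def relay(line, host="127.0.0.1", port=1026):
            # called from MyChat.lineReceived on server1 to forward the line to server2
            reactor.connectTCP(host, port, RelayFactory(line))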

    Read the article

  • Plastic SCM vs. Mercurial? Need Source Control for Visual Studio 2005 on Windows 7

    - by Pete Alvin
    1) Has anyone used Plastic SCM? Is it reliable? 2) How does it compare with Mercurial? (It seems like this is a good candidate for DVCS on Windows. I tried Git and really didn't like it.) 3) I really like TortoiseSVN. I like a central model because of the peace of mind that if it's in the repository it's "safe" and tracked. Here is the question: is the excitement over distributed version control (DVCS) worth the hype? My environment: Windows 7; Windows development (Dev. Studio 2005, SQL Server 2003), where integration would be nice; two developers sharing the same code; code pushed to production servers almost daily.

    Read the article

  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network, but accesses the Internet just fine. The set-up is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue. The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders) and HTTP. The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the log-in credentials), but when it tries to open any of the folders Explorer just hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available." It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs. This is the same with IE, Firefox and Chrome. (There is no proxy.) I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs. I set up a website on the MacBook. The Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss: images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to an image it did load (though several attempts were sometimes needed). I tried all of this with the Windows Firewall turned off, and with AVG turned off; that made no difference. I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.

    Read the article
