Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 444/683

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client is finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it to work in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command in Ubuntu, and in both cases, I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means that I have to allocate a big block of passive ports for the FTP server to use so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused. However, a long-running connection could use up a lot of ports.
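
    On the protocol side: in RFC 959 stream mode the server closes each data connection when a transfer completes (that is how the client knows a listing or file has finished), while the control connection stays open, so a freed port should become usable again once its data socket fully closes (TCP TIME_WAIT can delay reuse briefly). With pyftpdlib you can also pin the passive range to a known block; a minimal sketch, where the share directory and port range are assumptions:

        from pyftpdlib.authorizers import DummyAuthorizer
        from pyftpdlib.handlers import FTPHandler
        from pyftpdlib.servers import FTPServer

        authorizer = DummyAuthorizer()
        authorizer.add_anonymous("/srv/ftp")  # hypothetical share directory

        handler = FTPHandler
        handler.authorizer = authorizer
        # Restrict PASV replies to a known block so the firewall can match it;
        # each port goes back into the pool when its data connection closes.
        handler.passive_ports = range(60000, 60100)

        server = FTPServer(("0.0.0.0", 21), handler)
        server.serve_forever()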

    Read the article

  • Network Block Device (NBD) clients for Windows or similar solutions

    - by przemoc
    Are there any NBD clients for Windows? Strangely, I cannot find any, or I may be searching for them the wrong way. Such a client would presumably be a driver plus a front-end tool (possibly command-line) that lets you create virtual drives and associate them with given hosts (or simply localhost) and the ports where NBD servers are listening. From the user's perspective, a virtual drive should be close to a physical one, so it should be accessible as something like \\.\PhysicalDriveX (maybe \\.\VirtualDriveX?) and be visible at least in Disk Management (diskmgmt.msc) and the mountvol tool. (The only thing I have found remotely close to NBD on Windows is ImDisk's proxy mode and its companion tool devio, but AFAIK ImDisk works only at the partition level (so no virtual drive) and devio uses a different protocol.) A secondary question: are there any (preferably simple) Windows-specific solutions that allow creating a virtual drive which delegates read/write requests to user space through some explicit channel (TCP, IPC, a DLL implementing a given API, etc.)?

    Read the article

  • Creating an in-house single source software development team

    - by alphadogg
    Our company wants to create a single department for all software development efforts (rather than having software development managed by each business unit). Business units would then "outsource" their software needs to this department. What concerns and expectations would you want settled before doing this? For example: agreement between units on how much actual time (FTE) is allocated to each unit; agreement on scheduling of staff; agreement on the request procedure when extra time is required by one party; etc. Have you been in a situation like this as the manager of a unit destined to use this? If so, what problems did you experience? What did you implement, or what would you have implemented? The same questions apply if you were the manager of the shared team. Please assume, for discussion, that the people concerned know you can't swap devs in and out on a whim. I don't want to know the disadvantages of this approach; I know them. I want to anticipate issues and know how to mitigate the fallout.

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project on a newly registered generic-TLD domain. As soon as Google started indexing the site, it displayed a "Translate this page" link in SERPs, offering to translate the page into the language of a small Eastern European country from the language the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country and pointing to sub-pages that are no longer active (they either return 404s or 301 to the main page) and had been written in that other language. So the domain had been registered before and, by the looks of it, picked up a lot of possibly spammy links in that language. I can't even ask anyone to remove the links, since the pages they pointed to no longer physically exist; they live on only in Google Webmaster Tools and/or Google's internal data. Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing to it in my target language, so those are probably not enough to convince Google to attach the right language to it, and Google ignores all the other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I have no experience with them. Somehow I have to convince Google of the page's actual language, and also get it to discount all those historical links from the previous owner. (The domain had been deleted without any trace in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject-matter manager who wants some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Or is it more appropriate for a program manager (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in total). Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/". Note: this has nothing to do with sensitive data access, unless a particular subject-matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, since the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s). Basically: should specifications be transparent to programmers? A history of requirements presumably exists; shouldn't a programmer be able to see that history when they can tell something is hinky in the spec? This isn't a question about organizing requirements. It is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • Lost connectivity after configuring multiple network adapters on separate networks

    - by Dave Long
    I am trying to set up an Ubuntu hosting server, currently just for development. The server has two NICs, each on a different network: eth0 is on 192.168.200.* and eth1 is on 192.168.101.*, and each has a static IP. eth0 is the public-facing NIC and eth1 is strictly for internal access to the server. I initially set up only eth0 and added eth1 when I needed it. eth0 was working fine until I added eth1; now I can't get any connectivity on eth0 unless I pull eth1 out of the box. The configuration is as follows:

        auto eth0
        iface eth0 inet static
            address 192.168.200.94
            netmask 255.255.255.0
            network 192.168.200.0
            broadcast 192.168.200.255
            gateway 192.168.200.253

        auto eth1
        iface eth1 inet static
            address 192.168.101.64
            netmask 255.255.255.0
            network 192.168.101.0
            broadcast 192.168.101.255
            gateway 192.168.101.254

    Again, eth0 worked fine until I added eth1. I have seen this happen with Windows servers when a default gateway is set on both NICs, but I am not sure whether the same applies to Ubuntu. My resolv.conf looks like this:

        nameserver 192.168.101.59
        nameserver 192.168.101.58
        domain domain.local
        search domain.local

    Per request, here is the routing table:

        192.168.101.0   *                255.255.255.0   U   0    0   0   eth1
        192.168.200.0   *                255.255.255.0   U   0    0   0   eth0
        default         192.168.101.254  0.0.0.0         UG  100  0   0   eth1
        default         192.168.200.253  0.0.0.0         UG  100  0   0   eth0
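
    The routing table above shows the underlying problem: two default routes. As on Windows, only one interface should own the default gateway; the kernel uses the first matching default route, so replies to traffic arriving on eth0 can be sent out via eth1 and get dropped. A minimal sketch of /etc/network/interfaces with a single default route, assuming eth0 (the public-facing card) should own it:

        # /etc/network/interfaces -- sketch assuming eth0 keeps the default route
        auto eth0
        iface eth0 inet static
            address 192.168.200.94
            netmask 255.255.255.0
            gateway 192.168.200.253

        auto eth1
        iface eth1 inet static
            address 192.168.101.64
            netmask 255.255.255.0
            # no gateway line here: only one interface may own the default route

    If connections arriving on eth1 must also be answered via eth1, source-based policy routing (ip rule / ip route with a second routing table) is the usual next step.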

    Read the article

  • Some $_SERVER parameters missing when accessing PHP script via Cron

    - by Jakobud
    I have a script that I need to run with PHP via cron. The original author made heavy use of certain $_SERVER parameters (like REQUEST_URI), but those variables don't exist when running PHP from the command line or from cron. For example, there is no request URI, so it makes sense that REQUEST_URI wouldn't be available. Is there any way around this other than completely rewriting the script to avoid the $_SERVER parameters that aren't universally available?
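
    One low-effort workaround is a small wrapper that stubs in the web-only keys before including the unmodified script. This is only a sketch; the values and the script path are assumptions to adjust:

        <?php
        // Sketch: fill in the web-only $_SERVER keys before including the
        // original script, so it can run unchanged from cron.
        // All values and the script path below are hypothetical.
        if (PHP_SAPI === 'cli') {
            $_SERVER += array(
                'REQUEST_URI'    => '/cron-job',
                'HTTP_HOST'      => 'example.com',
                'REMOTE_ADDR'    => '127.0.0.1',
                'REQUEST_METHOD' => 'GET',
            );
        }
        require __DIR__ . '/original_script.php';

    Alternatively, have cron fetch the page over HTTP (e.g. with wget or curl), so the script runs under the web SAPI and PHP populates those keys itself.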

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11.) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>

    (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all.) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens: I browse to http://www.cnn.com; I see via a sniffer that the request is sent to Apache on the Mac; Apache responds with its default home page ("It works!" is all this page says). So Apache is not doing as expected: it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the log file indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
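
    One thing worth checking (an educated guess, not a confirmed diagnosis): in Apache 2.2 an <IfModule> test matches either the module file name ("mod_proxy.c") or its identifier ("proxy_module"). A bare <IfModule mod_proxy> matches neither, so the whole block is silently skipped, which would produce exactly this behavior: the proxy directives never take effect and Apache serves its default page. A sketch with the modules loaded explicitly and the test corrected (module paths vary by platform):

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        <IfModule mod_proxy.c>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </IfModule>

    Note that mod_proxy_http is required in addition to mod_proxy for proxying http:// URLs; without it there is nothing to handle the forwarded request.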

    Read the article

  • Working with Google Webmaster Tools

    - by com
    My first question is about crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections, one of which is HTTP. I assume all the broken links under HTTP were found by the crawler somehow and are not links from the sitemap. If they were found by scanning all sitemap pages for links, why doesn't it mention the source page, like the Linked From column in the sitemap section? And what is the meaning of Linked From there anyway? I thought that if the section is named "sitemap", all its URLs should be taken from the sitemap, so why is there a Linked From at all? The second question: what is the best way to treat searching on the site? How do search result pages end up indexed at all? Because all the search result pages are getting indexed, I have too many pages in Linked From. What's the right practice? Question three: in order to improve response time in WMT, can I redirect all the crawler's requests to a designated free web server? Is this good practice? Question four: how should I treat the Google Analytics code (with the PageView and PageLoadTime parameters) when a user requests a non-existent page; should I render the Google code or not? Right now the Google Analytics code sits on the common template page, so every page, including non-existent pages with an error message, contains it, and it seems to have an influence on WMT.
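
    On the second question, the usual practice is to keep internal search result pages out of the index entirely, either with a robots.txt rule or a noindex meta tag on the results template. A sketch, where the /search path is an assumption to adjust to the real search URL:

        # robots.txt -- sketch; assumes search results live under /search
        User-agent: *
        Disallow: /search

    A <meta name="robots" content="noindex"> on the results template achieves the same for result pages Google has already discovered.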

    Read the article

  • What can be reasons for localhost responding super-slow the first time a page is requested?

    - by frequent
    Still learning my server ways with Apache 2.2 / MySQL 5.2 / ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that, everything seems to run at normal loading times. I'm using require.js to pull in jQuery, jQuery Mobile and two other plugins on first page load, but loading the same page from a real server works normally, so I'm ruling that out as a probable cause. Since it happens regardless of which page I load first, it should not be page-related, so I'm looking for other clues as to why this could happen. Thanks for some thoughts!

    Read the article

  • How to host ASP.NET application externally?

    - by Josh
    I have an ASP.NET application that I can get to locally by going to 192.168.1.102:81/TestApp. I would like to host the application externally by going to domain.com:81/TestApp (I already have my domain pointing to my router, and this works fine; I have Apache running on port 80 on another server). I modified the router settings to forward any request coming in on port 81 to 192.168.1.102. I am still having trouble accessing the ASP.NET site (I get the error message "This link appears to be broken"). Am I missing something? How can I get domain.com:81/TestApp through to my ASP.NET application? Thanks.

    Read the article

  • Drupal 7 on Windows - File Module Problems

    - by TimothyP
    Installed Drupal 7 using the Web Platform Installer on Windows 2008. For some reason the file module, when you upload a file, uses the first few letters of the filename as the unique key to store in the database, which of course causes problems very quickly. I'm wondering, does anybody have a workaround for this?

        An AJAX HTTP request terminated abnormally. Debugging information follows.
        Path: /file/ajax/field_file/und/0/form-EBMatHzV5cZXcWvXJtdADSdyw7Id9-GIpFM_NCJg_a4
        StatusText: n/a
        ResponseText: Error message
        PDOException: SQLSTATE[23000]: [Microsoft][SQL Server Native Client 10.0][SQL Server]
        Cannot insert duplicate key row in object 'dbo.file_managed' with unique index
        'uri_unique'. in drupal_write_record() (line 6776 of ..........\includes\common.inc).
        Error: The website encountered an unexpected error. Please try again later.
        ReadyState: undefined

    (PS: I hope Super User is the right place to ask.)

    Read the article

  • Where can I find a link to download the SP2 of OES2?

    - by Philippe
    Hi, I have a Novell NetWare server with eDirectory and various objects configured. I set up an OES2 SP1 server to emulate DSfW so I can manage eDirectory through AD. I joined the domain with the Administrator login and am logged in as the domain Administrator. So far, no problems. When I open the MMC on Windows Server 2008 and snap in "Active Directory Users and Computers", I can see all the OUs and objects present on the NetWare server. But when I select some OUs I get an error, while for others I don't:

        "Data from XXXXX is not available from Domain Controller OES2.yyyy.local because:
        The server is unwilling to process the request. Try again later, or choose another
        DC by selecting Connect to Domain Controller on the Domain context menu."

    where XXXXX is the OU's name, yyyy.local is the domain name, and OES2 is the server name. If somebody could upload this SP or post a link to download it... Thank you for your help!

    Read the article

  • WebsitePanel 2 totally NOT working on Windows Server 2012 on Azure

    - by Carmine Giangregorio
    I'm having a lot of trouble installing WebsitePanel on an Azure virtual machine running Windows Server 2012. I followed the steps in http://www.websitepanel.net/documentation/deployment-guide/server-configuration/preparing-windows-server-2008-r2-for-websitepanel-installation/ and installed everything I needed. Then I installed the WebsitePanel Standalone Server package with the installer. I opened the endpoint for port 9002 on Windows Azure and pointed my browser at the server (note: in Azure you don't have a static IP address; instead you have a hostname like [hostname].cloudapp.net). Loading myhostname.cloudapp.net:9002 fails, and every browser shows something like "Unable to load page". Notice: if I try to load the WebsitePanel portal directly on the server, I get an HTTP 400 Bad Request error. How come? IIS works perfectly on the server; in fact the default website runs without problems on port 80.
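
    Two things worth checking, both educated guesses rather than confirmed fixes: an Azure endpoint only opens the port at the platform's load balancer, so Windows Firewall inside the VM must allow 9002 as well (the rule name below is arbitrary); and an HTTP 400 from IIS on the server itself often points at a host-header binding mismatch, so the portal site may need a binding for myhostname.cloudapp.net or a blank host header.

        :: allow the portal port through the VM's own firewall (run as admin)
        netsh advfirewall firewall add rule name="WebsitePanel 9002" dir=in action=allow protocol=TCP localport=9002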

    Read the article

  • cruisecontrol.net failing build with no errors

    - by John Hoge
    Hi, I've been using CCNet for some time but just upgraded to .NET 4. I've installed the new framework on my dev box along with VS2010, and on my CC.NET server as well. I've just installed CruiseControl.NET version 1.5.6804.1 and changed my MSBuild tasks to point to the new v4.0.30319 framework directory. I've got two projects on .NET 4 now that just don't build. They build perfectly well in VS and run well on both Cassini and IIS. I just don't get any error messages from CC.NET, only this:

        BUILD FAILED
        Project: GoodBay
        Date of build: 2010-05-11 10:21:59
        Running time: 00:00:02
        Integration Request: Build (ForceBuild) triggered from SVN
        Modifications since last build (0)
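
    A two-second failure with no output usually means MSBuild either never ran or its output never reached the report. A sketch of an msbuild task pointing at the .NET 4 MSBuild with the CCNet XML logger attached, so compiler errors actually show up in the build report (all paths are assumptions to adjust):

        <msbuild>
            <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
            <workingDirectory>C:\builds\GoodBay</workingDirectory>
            <projectFile>GoodBay.sln</projectFile>
            <buildArgs>/p:Configuration=Release /v:minimal</buildArgs>
            <logger>C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
        </msbuild>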

    Read the article

  • Site Action Button Missing

    - by David
    I have a MOSS 2007 site collection which was running smoothly. However, after moving to forms authentication connected to Active Directory, neither my farm administrator account nor the two site collection administrators can get past the site home page: there is no Site Actions button available. When I log into Central Administration with the farm administrator account, the Site Actions button exists there, so I suspect it's something to do with permissions on the site's web application. I have tried to connect directly to the settings URL (http://mysite/_layouts/settings.aspx etc.) but get the usual error:

        Error: Access Denied
        Current User
        You are currently signed in as: xxxx (farm administrator account)
        Sign in as a different user
        Request access

    I get the same error when logging in with the site collection admin accounts. I've also checked to make sure the sites are not locked.

    Read the article

  • Strange traffic on fresh Ubuntu Server install

    - by Fishy
    I've just installed Ubuntu Server on my home box after becoming partially familiar with it at work, and I want to train up as a pen tester. I installed the latest version on a logical partition (the main one contains Win7) and selected none of the extra modules (I think). I installed ngrep and fired it up (along with tcpdump) and immediately saw some strange traffic which I am unable to identify. My PC is sending out UDP packets every couple of seconds to a seemingly random series of IP addresses, all on the same port (47669, though I did also see it use another port for a while). I watched it do this for about 20 minutes while trying to work out why. The only other traffic was the odd ARP request for the router and SSDP UPnP broadcasts from the router. Does anyone know what this is, or have any advice on how best to find out? Thanks. EDIT: Actually, it's not my box generating the traffic. It's receiving the traffic on that port from a series of IP addresses and returning 'port unreachable' messages.

    Read the article

  • Exchange 2007 mailbox reassignment

    - by John Virgolino
    I am trying to move a mailbox from one user account to another within a single AD domain. We are using Exchange 2007. I have followed these steps: disable the account, then, from the "Disconnected Mailbox" container, connect the mailbox to the new account; I get a success message. But when I try to log in to OWA with the new user account, I get this message:

        Outlook Web Access could not connect to Microsoft Exchange. If the problem
        continues, contact technical support for your organization.
        Request Url: https://mail.somedomain.com:443/owa/default.aspx
        User host address: 1.2.3.4
        Exception
        Exception type: Microsoft.Exchange.Data.Storage.ConnectionFailedTransientException
        Exception message: Cannot open mailbox /o=cgsexchangeorganization/ou=exchange
        administrative group (fydibohf23spdlt)/cn=recipients/cn=someuser.

    NOTE: I changed some identifying information for security purposes. I have tried multiple times and always end up in the same place. When I log in to OWA with the old account, I get an error that the mailbox cannot be found, which makes total sense. Does anybody have any ideas? Thanks!
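
    For reference, the same operation from the Exchange Management Shell looks roughly like this (a sketch; all identities and the database name are placeholders). Doing it from EMS sometimes surfaces errors that the management console swallows:

        # disconnect the mailbox from the old account
        Disable-Mailbox -Identity "olduser@somedomain.com"

        # make the disconnected mailbox show up without waiting for maintenance
        Clean-MailboxDatabase "Mailbox Database"

        # attach the disconnected mailbox to the new account
        Connect-Mailbox -Identity "Some User" -Database "Mailbox Database" -User "newuser"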

    Read the article

  • ErrorDocument 404 not found in non-existent subdomain

    - by Question Overflow
    I am trying to get the Apache server to issue a custom 404 error for invalid subdomains. The following is the relevant part of the httpd configuration:

        Alias /err/ "/var/www/error/"
        ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var

        <VirtualHost *:80>
            # the default virtual host
            ServerName site_not_found
            Redirect 404 /
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias ??.example.com
        </VirtualHost>

    What I get instead is this:

        Not Found
        The requested URL / was not found on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an
        ErrorDocument to handle the request.

    I don't understand why a URL to non-existent-subdomain.example.com produces a 404 without the custom error page, as shown above, while a URL to eg.example.com/non-existent-file produces the full custom 404. Can someone advise on this? Thanks.
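
    A plausible explanation, inferred from the secondary error in the output: Redirect 404 / matches every path in the catch-all vhost, including the internal subrequest for /err/HTTP_NOT_FOUND.html.var itself, so the ErrorDocument also returns 404 and Apache falls back to the bare message. A sketch that exempts the error directory, assuming PCRE negative lookaheads (available in Apache 2.x regex directives):

        <VirtualHost *:80>
            # default catch-all for unknown subdomains
            ServerName site_not_found
            # 404 everything except the error documents themselves
            RedirectMatch 404 ^/(?!err/)
        </VirtualHost>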

    Read the article

  • Does nginx auth_basic work over HTTPS?

    - by monde_
    I've been trying to set up a password-protected directory in an SSL website as follows (/etc/nginx/sites-available/default):

        server {
            listen 443:
            ssl on;
            ssl_certificate /usr/certs/server.crt;
            ssl_certificate_key /usr/certs/server.key;
            server_name server1.example.com;
            root /var/www/example.com/htdocs/;
            index index.html;
            location /secure/ {
                auth_basic "Restricted";
                auth_basic_user_file /var/www/example.com/.htpasswd;
            }
        }

    The problem is that when I try to access the URL https://server1.example.com/secure/, I get a "404: Not Found" error page. My error.log shows the following:

        2011/11/26 03:09:06 [error] 10913#0: *1 no user/password was provided for basic
        authentication, client: 192.168.0.24, server: server1.example.com, request:
        "GET /secure/ HTTP/1.1", host: "server1.example.com"

    However, I was able to set up password-protected directories for a normal HTTP virtual host without any problems. Is it a problem with the config or something else?
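
    To answer the title question: yes, auth_basic works the same over HTTPS. A sketch of the block with two details cleaned up that could plausibly matter here, assuming the stray colon after "listen 443" is real and not a paste artifact:

        server {
            listen 443 ssl;               # one directive, no trailing colon
            server_name server1.example.com;

            ssl_certificate     /usr/certs/server.crt;
            ssl_certificate_key /usr/certs/server.key;

            root  /var/www/example.com/htdocs/;
            index index.html;

            location /secure/ {
                auth_basic           "Restricted";
                auth_basic_user_file /var/www/example.com/.htpasswd;
                # a 404 here usually means /secure/index.html is missing under
                # root, not an auth failure -- auth failures return 401
            }
        }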

    Read the article

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6 via FastCGI. On PHP pages with lots of images (60+), some of the images fail to load. It is not always the same images: on each page refresh, an image that worked previously may not load, while an image that did not now does. Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors. All of the images are located on the server in question, and the images directory has the correct permissions. I believe this problem is the result of a limit on requests. All of my attempts at researching it point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too). I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images. I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS one. I don't see anything that looks like a request limit in php.ini; what am I missing? Any help would be appreciated, thank you.
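
    One candidate that fits intermittent 403s under bursts of parallel requests (an assumption to verify, not a confirmed diagnosis): the Dynamic IP Restrictions module, if installed, throttles per-client concurrency and request rate and answers with 403 by default. Its settings live in applicationHost.config; the values below are purely illustrative:

        <system.webServer>
            <security>
                <dynamicIpSecurity>
                    <denyByConcurrentRequests enabled="true" maxConcurrentRequests="20" />
                    <denyByRequestRate enabled="true" maxRequests="60"
                                       requestIntervalInMilliseconds="1000" />
                </dynamicIpSecurity>
            </security>
        </system.webServer>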

    Read the article

  • Real-time Image Resize, Cropping and Caching Server Product

    - by Elijah
    I'm investigating what products are out there that will allow you to request images through an HTTP API in arbitrary sizes. The server would sit behind a CDN but would still need to handle a fair bit of traffic and possibly be load-balanced. I've been tasked with writing such a service, but I wanted to do some due diligence to see what commercial or open-source solutions already exist. Google has not been particularly helpful, possibly because I have been searching for the wrong term. Third-party sites and services are out of the question because of corporate policies.

    Read the article

  • Designing communications for extensibility

    - by Thomas S.
    I am working on the design stages of an application that will (a) collect data from various sources (in my case, scientific data from serial ports), keeping track of the age of the data, (b) generate real-time statistics (e.g. running averages), and (c) display, record, and otherwise handle the data (and statistics). I anticipate that I will be adding both data producers and consumers over time, and would like to design this application abstractly so that I can trivially add functionality with a small amount of interface code. What I'm stumbling on is deciding what communication infrastructure to use for the interfaces. In particular, how should I make the processed data and statistics available to multiple consumers? Some things I've considered: writing to several named pipes (a variable number), with each consumer reading from one of them; using FUSE to make a userspace filesystem where a read() returns the latest line of data even if another process has already read it; making a TCP server, and having consumers connect and request data individually; or simply writing the consumers as part of the same program that aggregates the data. So I would like to hear your advice on how to interface these functions in a way that keeps them separate and leaves room for extensions.
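
    To make the TCP option concrete, here is a minimal sketch in Python (all names are illustrative) of a fan-out server: the aggregation loop calls publish() once per processed sample, and any number of consumers connect and read line-oriented data:

        import socket
        import threading

        clients = []
        lock = threading.Lock()

        def publish(line):
            """Send one line of processed data (bytes) to every connected consumer."""
            with lock:
                for c in clients[:]:
                    try:
                        c.sendall(line)
                    except OSError:
                        clients.remove(c)  # consumer went away; drop it

        def serve(port=9000):
            """Accept consumer connections and register them for fan-out."""
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", port))
            srv.listen(5)
            while True:
                conn, _ = srv.accept()
                with lock:
                    clients.append(conn)

        threading.Thread(target=serve, daemon=True).start()
        # the aggregation loop would call publish(b"1.23 4.56\n") for each sample

    The same interface extends naturally: a new consumer is just another connection, and a new producer is just another caller of publish().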

    Read the article

  • How to avoid "DO YOU HAZ TEH CODEZ" situations?

    - by volothamp
    I have a strange situation at work: a colleague of mine often asks me and other co-workers for working code. I would like to help him, but this constant stream of requests for trivial snippets interrupts my thoughts and sometimes makes it hard to concentrate. Plus, I have the impression (...) that these requests stem from a lack of competence more than from laziness. In fact, he often asks things while pretending to know the answer: when I solve the problem he usually says things like "Sure" or "Yes, that's what I thought", giving me the impression that my answer wasn't worth much. How can I resolve this embarrassing situation? Should I expose his lack of knowledge more explicitly in front of other colleagues (by saying things like "do it yourself if you can, please"), or keep giving him what he wants? I think he should aggregate all his questions into one, so that I can give him a portion of my time and he can then work on his own. There is no hierarchy in the team; we both have a similar seniority of five years, more or less. For the same reason I believe I cannot report this to management, since trivial questions are often ignored. I discussed it with two other team members and they agree with me; in fact he often asks the same things, cycling through colleagues.

    Read the article

  • Lots of Internet browsing issues, all browsers

    - by dario_ramos
    Before the upgrade, everything was working fine. Now, however, I can connect to the Internet, but a lot of things fail, and the weirdest part is that it happens in Firefox, Chromium and Opera alike. Some of the things that fail: I can't log in to Stack Overflow; after entering user/pass it loads for a long time in Firefox and throws Error 408 (browser request timed out) in Chromium and Opera. I can't log in to Hotmail; similar symptoms. I can log in to Facebook, but when I try to write a comment, or just post something on my wall, it stays loading for a long time and then fails. The first two issues seem related to secure pages, and the third one is another issue altogether, I believe. However, they all happen in all browsers, which is really weird. Speaking of weird: I connect using a Huawei SmartAX MT 810 USB modem, which cost me blood and tears to get working under Ubuntu. I ordered an ethernet modem/router from my ISP and I'm still waiting, but this issue intrigues me anyway. Has anyone experienced this kind of problem? I Googled around but couldn't find a similar case.

    Read the article
