Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 444/683 | < Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >

  • yum-updatesd problem

    - by smusumeche
    I am trying to set up yum-updatesd to send an email notification when there are updates available for the packages on my system. However, I am not receiving any notifications even though there are updates available (as seen in yum check-update). I have verified that the yum-updatesd service is running. This is my /etc/yum/yum-updatesd.conf:

      [main]
      # how often to check for new updates (in seconds)
      run_interval = 3600
      # how often to allow checking on request (in seconds)
      updaterefresh = 600
      # how to send notifications (valid: dbus, email, syslog)
      emit_via = email
      email_to = [email protected]
      email_from = [email protected]
      # should we listen via dbus to give out update information/check for
      # new updates
      dbus_listener = yes
      # automatically install updates
      do_update = no
      # automatically download updates
      do_download = no
      # automatically download deps of updates
      do_download_deps = no
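
    One thing worth ruling out first: yum-updatesd only hands the notification to the local MTA, so if local mail delivery is broken nothing will ever arrive. As a blunt fallback, a cron job can produce the same kind of notification; a rough sketch, assuming mailx and a working MTA are present (the script path and address are only examples):

      #!/bin/sh
      # /etc/cron.daily/yum-update-notify (example path)
      # "yum check-update" exits with status 100 when updates are available.
      tmp=$(mktemp)
      /usr/bin/yum check-update -q > "$tmp" 2>&1
      if [ $? -eq 100 ]; then
          mail -s "yum updates available on $(hostname)" [email protected] < "$tmp"
      fi
      rm -f "$tmp"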

    Read the article

  • Updating, etc., automatically

    - by Steve D
    Is there a way to set up Ubuntu 12.04 (or earlier versions) so that all recommended updates are done automatically, say once a week? When I say automatically, I mean no password entry or user intervention required. This sounds like a stupid request, so let me tell you why I'm asking. My grandfather knows nothing about computers; he uses his solely to read Yahoo! mail. I want to get rid of his clunky, spyware-ridden Windows XP and install Ubuntu. I want to set it up so that when he turns the computer on, after a couple of minutes, voila!, Yahoo! mail, already signed in, ready to go. The problem is I don't want to have to go over there every week or so and make sure everything is up to date, he hasn't accidentally installed any spyware, etc. So can this be done? Is this the best way to set things up for my grandfather? Are there other things I should be worried about when it comes to keeping things hassle-free for him? Please don't post anything like "why not teach him how to... blah blah blah". My grandfather is 80 years old and has made it clear email is the only thing he will ever use a computer for! Thanks!
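
    For what it's worth, the usual way to get this on Ubuntu without any user interaction is the unattended-upgrades package. A minimal sketch (option names as used by recent Ubuntu releases; worth double-checking against 12.04):

      // /etc/apt/apt.conf.d/20auto-upgrades
      APT::Periodic::Update-Package-Lists "1";
      APT::Periodic::Unattended-Upgrade "1";

    Running "sudo dpkg-reconfigure -plow unattended-upgrades" writes this file for you, and /etc/apt/apt.conf.d/50unattended-upgrades controls which origins are applied (by default only security updates).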

    Read the article

  • Highly robust and scalable search server needed for managing and analyzing files

    - by ChrisBenyamin
    Hi everybody, I am looking for a professional search server system with functionality like, e.g., Solr (http://lucene.apache.org/solr/) provides. It should run in a centralized location that many hosts would request data from. Furthermore, the system should be extensible so that statistical procedures can be implemented on top of it (e.g. a kind of heatmap, or other common diagrams, of one or more files, each identified by a GUID, that are spread across different hosts). This software doesn't have to be open source. Thanks. chris

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project on a newly registered generic TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in the SERPs, which tries to translate the page into the language of a small Eastern European country from the language that the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, pointing to sub-pages that are not active anymore (either sending out 404's or 301's to the main page) and that had been written in that other language. So the domain had been registered before, and by the looks of it, it picked up a lot of possibly spammy links in that language. I can't even ask the sites where those links supposedly are to remove them, as the links are not physically active anymore; they only show up in Google Webmaster Tools and/or Google's internal data. Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, and Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I don't have any experience with them. Somehow I have to convince Google about the right language of the page, and also not to count/apply/whatever all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)
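
    If you do end up trying the disavow tool, the file it takes is just a plain-text list, one URL or domain: entry per line; a minimal sketch (domain names made up):

      # links inherited from the previous owner of the domain
      domain:spammy-links.example
      http://another-linking-site.example/old-page.html

    Bear in mind that disavow only tells Google to ignore those links; by itself it probably won't change the language Google has detected for the page.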

    Read the article

  • Delivery Status Notification (Relay) in Exchange Server 2007 with original email attachment

    - by Nick Kavadias
    I have recently set up Exchange Server 2007. The server is smarthosting outgoing messages. Users have "request delivery receipt" on by default for their "auditing" purposes in Outlook. They would like the original email attached to the delivery notification, as was the case in Exchange Server 2003. I need this same functionality in 2007. The question has been asked here, here and here, but I cannot find a valid solution. Here's some information about the functionality in Exchange 2003. The question is: can I replicate this functionality in 2007? Here is what a 2007 delivery message looks like (screenshot in the original post). I know it's possible to customize DSNs. Can I make a custom DSN for this type of message and have the original included as an attachment? Anyone got any other ideas?

    Read the article

  • Apache Balancing by source IP

    - by Daniel
    I am using Apache's proxy balancer to balance one sub-domain (e.g. subdomain.domain.com) to an application which is located on 2 servers. Here is an extract from my Apache configuration file:

      <Proxy *>
          Order deny,allow
          Allow from all
      </Proxy>
      <Proxy balancer://cluster1>
          BalancerMember http://server1:28081 route=w1
          BalancerMember http://server2:28082 route=w2
      </Proxy>
      ProxyPass /path balancer://cluster1/path
      ProxyPassReverse /path balancer://cluster1/path

    My question is: is it possible to decide, based on the source IP address, which BalancerMember should be used for a request - e.g. send requests from 1.2.3.4 to member 1?
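
    As far as I know there is no BalancerMember option that pins members to client IPs, but a sketch of one workaround (untested, and assuming mod_rewrite and mod_proxy are loaded) is to short-circuit the balancer for specific source addresses with a RewriteCond on REMOTE_ADDR:

      RewriteEngine On
      # Proxy requests from 1.2.3.4 straight to member 1 ([P] = proxy)
      RewriteCond %{REMOTE_ADDR} ^1\.2\.3\.4$
      RewriteRule ^/path/(.*)$ http://server1:28081/path/$1 [P,L]

      # Everything else keeps going through the balancer
      ProxyPass /path balancer://cluster1/path
      ProxyPassReverse /path balancer://cluster1/path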

    Read the article

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client is finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it to work in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command in Ubuntu, and in both cases, I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means that I have to allocate a big block of passive ports for the FTP server to use so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused. However, a long-running connection could use up a lot of ports.
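
    As an aside, if part of the concern is how many ports have to be opened on the firewall, pyftpdlib lets you pin the passive range on the handler class, so you at least control which block gets used. A minimal sketch (module layout as in recent pyftpdlib releases; older versions expose the same classes from pyftpdlib.ftpserver):

      from pyftpdlib.authorizers import DummyAuthorizer
      from pyftpdlib.handlers import FTPHandler
      from pyftpdlib.servers import FTPServer

      authorizer = DummyAuthorizer()
      authorizer.add_user("user", "password", "/srv/ftp", perm="elradfmw")

      handler = FTPHandler
      handler.authorizer = authorizer
      # Only hand out PASV data ports from this range, so the firewall
      # rule (and the number of ports to reserve) stays small.
      handler.passive_ports = range(60000, 60100)

      server = FTPServer(("0.0.0.0", 21), handler)
      server.serve_forever()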

    Read the article

  • getaddrinfo(3) failed

    - by user101289
    I'm trying to connect to a webservice using a PHP wrapper (which is using curl under the covers). On my local Linux machine running PHP 5.3 it works perfectly. However, when I move to a remote server (also running PHP 5.3 on Linux), the call to the webservice URL returns:

      getaddrinfo(3) failed for http://server.host.com:8080/login

    I get a similar error from a ping on the remote host:

      ping: unknown host http://server.host.com:8080/login

    But when I issue a curl request from the command line, it returns the expected URL. Can anyone shed any light on this issue? Thanks!

    Read the article

  • PHP script unable to send email on OpenBSD Apache

    - by MattC
    I have a webserver running OpenBSD 4.7 and PHP 5.2.12 out of the ports tree. There is a small contact page that is supposed to send an email to a specific address. When I fill in the form using a web browser, it sends the AJAX request to the PHP page, which claims it worked successfully, but there is no email. The maillog is empty as well. I created a small PHP script that replicates this functionality, and when I run it by hand using the "php -f" command, it sends an email without a problem. I think this has to do with being chrooted, but I can't seem to get it to work. Furthermore, I can't seem to get PHP to log. I told it to log to /var/www/logs/php_errors.log and restarted, but can't get it to send anything to the file. Does anyone have any tips for debugging these sorts of things in OpenBSD?

    Read the article

  • Inventory Consignment Flow

    - by ipohfly
    Not sure whether this is the right place to ask this question, but here goes. Currently I have a requirement to add support for consignment transactions in our inventory module. I have a very limited understanding of what consignment means in inventory, i.e. the Customer gets stock/products from the Seller without actually buying them; the product just resides in the Customer's inventory and is still owned by the Seller. Only when the Customer actually buys the stock is ownership of the stock transferred. The issue is I can't imagine how the data will be presented to both the Customer and the Seller. What I know is that I would need to deduct the stock from the Seller's inventory when the Customer raises a request to get the stock through consignment, but what about the "ownership" of the stock/products? Does that mean I would need to create another column in my table to state, for each inventory record, who owns it? Is there anywhere I can get information on how I should work out an inventory module like this? Thanks.

    Read the article

  • Creating an in-house single source software development team

    - by alphadogg
    Our company wants to create a single department for all software development efforts (rather than having software development managed by each business unit). Business units would then "outsource" their software needs to this department. What would you set up as concerns/expectations that must be cleared before doing this? For example:

      - need agreement between units on how much actual time (FTE) is allocated to each unit
      - need agreement on scheduling of staff
      - need agreement on the request procedure if extra time is required by one party
      - etc.

    Have you been in a situation like this as a manager of one unit destined to use this? If so, what problems did you experience? What would you have implemented, or did implement? Same if you were the manager of the shared team. Please assume, for discussion, that the people concerned know that you can't swap devs in and out on a whim. I don't want to know the disadvantages of this approach; I know them. I want to anticipate issues and know how to mitigate the fallout.

    Read the article

  • College wifi works easily on Linux, but not on Windows

    - by user52849
    In Linux: after connecting to the college wifi, going to the network login page and logging in, the internet works perfectly as it should. In Windows: after connecting to the college wifi, going to the network login page and logging in, Windows shows "Internet access" and the wireless icon turns white. But even after that, regardless of the browser being used, attempting to access any page just shows "Sending request". It does work after a lot of tries, but only in intervals. Yet when running Ubuntu 11.10 in VirtualBox, it works properly, just like when booting into Ubuntu, even though it isn't working on Windows itself. The college wifi service is really crappy and has been unable to solve this problem. I'm pretty sure there should be a solution for this, but what? What is it that Ubuntu is doing right and Windows isn't? Windows is set to "Automatically detect settings" and no proxy server is used.

    Read the article

  • Sharepoint asks for NTLM credentials for every unique URL. How do I stop it?

    - by CamronBute
    I'm tasked with troubleshooting a problem we're having with a SP2010 site. The app is external, and there are several clients that must connect. Some clients are receiving a crazy number of credential prompts when trying to log on. It appears to ask for every unique URL (e.g. every different picture, link, etc.) and it won't stop. Other clients are having no problems. I cannot seem to replicate the issue, either. I'm attempting to replicate it by restricting all settings (including cookies) in my own browser, but to no avail. I put the HTTP request under a microscope, and it's asking for NTLM credentials. The client is using IE8, and the browser is running in Protected Mode, but the browser settings cannot be determined... I'm guessing this is a webserver thing, simply because it appears to be an authentication thing. What might the problem be?

    Read the article

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6/FastCGI.

    On PHP pages with lots of images (60+), some of the images fail to load.
    It is not always the same images - on each page refresh an image that worked previously may not load, while an image that did not now does.
    Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors.
    All of the images are located on the server in question, and the images directory has the correct permissions.
    I believe this problem is the result of a limit on requests.
    All of my attempts at researching this problem point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too).
    I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403's on pages with lots of images.
    I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit.

    I don't see anything that looks like a request limit in the php.ini; what am I missing? Any help would be appreciated, thank you.

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

      User-Agent: *
      Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it? Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL, or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages. Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results. Ps. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them also.
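
    For what it's worth, the behaviour Google documents is that robots.txt only blocks crawling, not indexing: a disallowed URL can still be indexed if external links point at it, with the link text used as the title. A noindex signal has to be delivered on the response itself, which means the URL must be crawlable for it to be seen. A sketch using mod_headers, assuming Apache and that the redirector lives under /email:

      # Serve a noindex header for the redirector URLs (requires mod_headers).
      # Note: for Googlebot to see this header, crawling of /email must not
      # be blocked in robots.txt, so the Disallow rule has to be relaxed.
      <Location "/email">
          Header set X-Robots-Tag "noindex"
      </Location>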

    Read the article

  • SSL issues with puppet agent at openSUSE

    - by Roman Grazhdan
    I have a master running on my VPS, and it has a simple hello-world manifest which works fine with any Ubuntu machine I have. It connects, exchanges keys and creates the test file all right, so I'm sure it's not a server issue. The agent, which is running on an openSUSE virtual machine, says:

      err: Could not request certificate: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed. This is often because the time is out of sync on the server or client

    I believe it's probably a broken or missing lib, since the package is not built very accurately - it wouldn't start out of the box because of a wrong path to the lockfile, for example. So how do I figure out what exactly is wrong here? The time is all right, I've checked it. I probably could do without SSL if that's possible, since those SUSE machines are just for training, but that's a last resort.

    Read the article

  • Extracting useful information from free text

    - by insta
    We filter and analyse seats for events. Apparently writing a domain query language for the floor people isn't an option. I'm using C# 4.0 & .NET 4.0, and have relatively free rein to use whatever open-source tools are available. </background-info> If a request comes in for "FLOOR B", the sales people want it to show up if they've entered "FLOOR A-FLOOR F" in a filter. The only problem I have is that there's absolutely no structure to the parsed parameters. I get the string already concatenated (it actually uses a tilde instead of a dash). Examples I've seen so far, with matches after each:

      101WC-199WC (needs to match 150WC)
      AAA-ZZZ (needs to match AAA, BBB, ABC but not BB)
      LOGE15-LOGE20 (needs to match LOGE15 but not LOGE150)

    At first I wanted to try just stripping off the numeric part of the lower and upper bounds, and then incrementing through that. The problem I have is that only some entries have numbers; sometimes the numbers AND letters increment, and sometimes it's all letters that increment. Since I can't impose any kind of grammar to use (I really wanted [..] expansion syntax), I'm stuck with these entries. Are there any suggestions for how to approach this parsing problem?
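
    One way to sketch the matching logic (shown in Python purely to illustrate the idea; it translates directly to C#): split each endpoint and each candidate into alternating text/number tokens, require the same token shape and the same length for the text parts, then compare token-wise.

      import re

      def tokens(s):
          # "LOGE15" -> ["LOGE", 15], "101WC" -> [101, "WC"], "FLOOR A" -> ["FLOOR A"]
          return [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", s)]

      def in_range(low, high, candidate):
          lo, hi, c = tokens(low), tokens(high), tokens(candidate)
          if not (len(lo) == len(hi) == len(c)):
              return False
          for l, h, x in zip(lo, hi, c):
              # Token types must line up (text vs number)...
              if type(l) is not type(x) or type(h) is not type(x):
                  return False
              # ...and text tokens must have equal length, so "BB" is not in
              # AAA-ZZZ and "LOGE150" is not in LOGE15-LOGE20.
              if isinstance(x, str) and not (len(l) == len(h) == len(x)):
                  return False
          return lo <= c <= hi

      # Examples from the question:
      assert in_range("101WC", "199WC", "150WC")
      assert in_range("AAA", "ZZZ", "ABC") and not in_range("AAA", "ZZZ", "BB")
      assert in_range("LOGE15", "LOGE20", "LOGE15")
      assert not in_range("LOGE15", "LOGE20", "LOGE150")
      assert in_range("FLOOR A", "FLOOR F", "FLOOR B")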

    Read the article

  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections, one of which is HTTP. I assume that all broken links under HTTP were somehow found by the crawler and that these are not the links from the sitemap. If they were found by scanning all sitemap pages for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? And what is the meaning of Linked From? I thought that if the section is named Sitemap, all URLs should be taken from the sitemap, so why is there a Linked From at all? The second question: what is the best way to treat search on the site? How come the search result pages are getting indexed? Because all search result pages are getting indexed, I have too many pages in Linked From. What's the right practice? Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice? Question four: how should I treat the Google Analytics code (with parameters PageView, PageLoadTime) in the case where a user requests a non-existent page - should I render the Google code or not? Right now I use the Google Analytics code on the common template page, such that every page, including non-existent pages with an error message, contains the Google Analytics code; it seems like this has an influence on WMT.
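
    On the second question, the common practice is to keep internal search result pages out of the index entirely, either with a robots.txt rule or with a robots "noindex" meta tag on the result template. A sketch, assuming the search results live under /search (adjust the path to whatever your site actually uses):

      User-agent: *
      Disallow: /search

    A meta robots "noindex" on the search result template achieves much the same thing, and also removes pages that are already indexed over time.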

    Read the article

  • cruisecontrol.net failing build with no errors

    - by John Hoge
    Hi, I've been using CCNet for some time now but just upgraded to .NET 4. I've installed the new framework on my dev box along with VS2010, and on my CC.NET server as well. I've just installed CruiseControl.NET version 1.5.6804.1, and changed my MSBuild tasks to point to the new v4.0.30319 framework directory. I've got two projects on .NET 4 now that just don't build. They build perfectly well in VS and run well on both Cassini and IIS. I just don't get any error messages from CC.NET, just this:

      BUILD FAILED
      Project: GoodBay
      Date of build: 2010-05-11 10:21:59
      Running time: 00:00:02
      Integration Request: Build (ForceBuild) triggered from SVN
      Modifications since last build (0)

    Read the article

  • Storing images in file system and returning URLs or virtually resizing and returning byte arrays?

    - by ismaelf
    I need to create a REST web service to manage user-submitted images and display them all on a website. There are multiple websites that are going to use this service to manage and display images. The requirements are to have 5 pre-defined image sizes available. The 2 options I see are the following:

      1. The web service will create the 5 images, store them in the file system and store the URLs in the database when the user submits the image. When the image is requested, the web service will return an array of URLs. I see this option as being a little hard on the hard drive. The estimates are 10,000 users per site and, let's say, 100 sites. The heavy processing will be done when the user submits the image, and each image is simply pulled from the file system afterwards.

      2. The web service will store just the image that the user submits in the file system, and its URL in the database. When the user requests images, the web service will get the info from the DB, load the image into memory, create its 5 instances and return an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory. The heavy processing will be done when the images get requested.

    A plus I see for option 2 is that it will give me the option to rewrite the URLs of the images and make them site-dependent (prettier), rather than having one image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?
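
    To make option 1 concrete, here is a rough sketch of the upload-time resizing step (Pillow is used purely as an illustration; the size names, dimensions and helper name are made up):

      import os
      from PIL import Image  # Pillow

      # The five pre-defined sizes required by the service (example values).
      SIZES = {"thumb": (100, 100), "small": (240, 240), "medium": (480, 480),
               "large": (800, 800), "full": (1600, 1600)}

      def store_variants(upload_path, dest_dir, image_id):
          """Create the five variants once, at upload time, and return their paths."""
          paths = {}
          for name, box in SIZES.items():
              im = Image.open(upload_path)
              im.thumbnail(box)  # resize in place, preserving aspect ratio
              out = os.path.join(dest_dir, f"{image_id}_{name}.jpg")
              im.convert("RGB").save(out, "JPEG", quality=85)
              paths[name] = out  # in practice, map this to a public URL
          return paths

    The dictionary this returns is essentially the URL array that option 1 hands back on request; the trade-off, as you say, is disk space in exchange for CPU.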

    Read the article

  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. The other company has almost the same stats, a similar strategy and mobile apps. We have just decided that we need to merge the data from both companies and start a new app from scratch, since the RoR app is too old and heavily patched, and the app from the other company was built with a custom PHP framework without any documentation. The only good news is that both databases are in MySQL and have a similar structure.

    The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves 2 different domains, and has a strong API that can support the legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and NodeJS with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think that I need to create a service that can handle a lot of IO requests, since the app is heavily used to create orders for restaurants at certain times of the day.

    The questions, with all this in mind:

      - What type of architecture do you recommend I follow?
      - What gems or Node packages do you think will work best?
      - How do I build a new Rails app and keep using the same database structure?
      - Should I use NodeJS to build an API, or just build a new service with Ruby?

    I know that I'm asking too much from you guys, but please help me by answering any topic that you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!

    Read the article

  • How to host ASP.NET application externally?

    - by Josh
    I have an ASP.NET application that I can get to locally by going to 192.168.1.102:81/TestApp. I would like to host the application externally by going to domain.com:81/TestApp (I already have my domain pointing to my router and this works fine - I have apache running on port 80 on another server). I modified the router settings to point any request coming through port 81 to 192.168.1.102. I am still having trouble accessing the ASP.NET site (I get the error message that "This link appears to be broken"). Am I missing something? How can I redirect domain.com:81/TestApp to my ASP.NET application? Thanks.

    Read the article

  • Caching domain user credentials on a local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who use VPN (Checkpoint) to connect to our domain. One of the users in the USA has a laptop which he has never logged on to before (it does cache user login details). Does anyone know how to cache the user's login information on this laptop? I have tried netdom trust to add this user to the laptop, but I was not able to do this. At the moment the user is logging in with a local administrator account and then using the VPN to log on to our domain, but when it comes to accessing files on the domain the user gets "access denied". When the user tries to log in he gets "There are currently no logon servers available to service the logon request". Does anyone know how to add the user?

    Read the article

  • Lost connectivity after configuring multiple network adapters on separate networks

    - by Dave Long
    I am trying to set up an Ubuntu hosting server, currently just for development, and the server has two NICs, each sitting on a different network. eth0 is on 192.168.200.* and eth1 is on 192.168.101.*, and each one has a static IP. eth0 is the public-facing NIC and eth1 is strictly for internal access to the server. I initially only set up eth0 and added the eth1 card when I needed it. eth0 was working fine until I added eth1; now I can't get any connectivity on eth0 unless I pull eth1 out of the box. The configuration for each interface is as follows:

      auto eth0
      iface eth0 inet static
          address 192.168.200.94
          netmask 255.255.255.0
          network 192.168.200.0
          broadcast 192.168.200.255
          gateway 192.168.200.253

      auto eth1
      iface eth1 inet static
          address 192.168.101.64
          netmask 255.255.255.0
          network 192.168.101.0
          broadcast 192.168.101.255
          gateway 192.168.101.254

    Again, eth0 worked fine until I added eth1. I have seen this happen with Windows servers if you have a default gateway set up for both NICs, but I am not sure if this works the same way on Ubuntu. My resolv.conf file looks like so:

      nameserver 192.168.101.59
      nameserver 192.168.101.58
      domain domain.local
      search domain.local

    Per request, here is the routing table:

      192.168.101.0   *                255.255.255.0   U    0    0    0   eth1
      192.168.200.0   *                255.255.255.0   U    0    0    0   eth0
      default         192.168.101.254  0.0.0.0         UG   100  0    0   eth1
      default         192.168.200.253  0.0.0.0         UG   100  0    0   eth0
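
    The symptom is consistent with having two default gateways: whichever route wins, replies to traffic that arrived on the other NIC leave through the wrong interface. The usual fix is to keep a single default gateway (on the public-facing eth0) and, if replies really must leave via the interface they came in on, add a source-based routing table for eth1. A sketch for the eth1 stanza (the table number is arbitrary; assumes the iproute2 tools are installed):

      auto eth1
      iface eth1 inet static
          address 192.168.101.64
          netmask 255.255.255.0
          # no "gateway" line here - only eth0 keeps a default gateway
          post-up ip route add 192.168.101.0/24 dev eth1 src 192.168.101.64 table 101
          post-up ip route add default via 192.168.101.254 dev eth1 table 101
          post-up ip rule add from 192.168.101.64/32 table 101
          post-down ip rule del from 192.168.101.64/32 table 101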

    Read the article

  • How should I group these variables?

    - by stariz77
    I have a shape that will be defined by:

      char s_type;
      char color;
      double height;
      double width;

    These variables are scanned in from a request string sent to my server and passed into my printing function, which then prints out the shape. Currently they are just local variables sitting in my main(); however, I was wondering if there would be any advantage in creating a struct containing these variables and then passing the struct to my printing function? Or how else might I improve my program's structure/style? Would passing a struct by reference have any kind of performance benefit if there were many requests and therefore many printing-function calls?

      void printer(char st, char cr, double ht, double wd);

      int main() {
          // Other main functionality.
          char s_type;
          char color;
          double height;
          double width;
          sscanf(serv_req, "GET /%c/%c/%lf/%lf", &s_type, &color, &height, &width);
          printer(s_type, color, height, width);
          // Other main functionality.
          return 0;
      }

    It seemed "neater" if I had a struct or something that didn't leave me with declarations in the middle of everything else going on in main. I'm interested in structure/style as well as performance. EDIT: didn't mean to put the printer declaration inside main.

    Read the article
