Search Results

Search found 2693 results on 108 pages for 'keeping up'.

Page 92/108 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • Doing a global count of an object type (like Users), best practice?

    - by user246114
    Hi, I know keeping global counters is frowned upon in App Engine, but I am interested in gathering some stats, like once every 24 hours. For example, I'd like to count the number of User objects in the system once a day. How do we do this? Do we simply keep a set of admin-tool functions that run something like SELECT FROM com.me.project.server.User; and check the size of the returned List? That is kind of a bummer, because the datastore would have to deserialize every single User instance to build the returned list, right? I could possibly optimize this by asking for only the keys to be returned, so the whole User object doesn't have to be deserialized. Then again, a global counter for the number of users probably wouldn't create too much contention, because there probably won't be hundreds of signups a minute for the service I'm creating. How should we go about this? Getting the total number of users once a day seems like a pretty typical operation. Thank you
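    For illustration, a rough sketch of the keys-only idea on the Python runtime (the question uses the Java/JDO query style; the model definition and the older google.appengine.ext.db API below are assumptions made only for the example). A daily cron or admin handler could call something like this:

        from google.appengine.ext import db

        class User(db.Model):
            pass  # real fields omitted

        def count_users():
            # keys_only=True returns bare keys, so no User entity has to be
            # deserialized just to be counted.
            q = db.Query(User, keys_only=True)
            # older SDKs cap count() at 1000 unless a higher limit is passed
            return q.count(1000000)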

    Read the article

  • Modifying an ObservableCollection using move()?

    - by user1202434
    I have a question about modifying the individual items in an ObservableCollection that is bound to a ListBox in the UI. The user can multi-select items in the UI and then drop them at a particular index to re-order them. So, if I have items {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} and the user selects items 2, 5, 7 (in that order) and drops them at index 3, the collection should become {0, 1, 3, 2, 5, 7, 4, 6, 8, 9}. The way I have it working now, inside the OnDrop() method of my control, is something like this:
        foreach (Item item in draggedItems)
        {
            int oldIndex = collection.IndexOf(item.DataContext as MyItemType);
            int newIndex = toDropIndex;
            if (newIndex == collection.Count)
            {
                newIndex--;
            }
            if (oldIndex != newIndex)
            {
                collection.Move(oldIndex, newIndex);
            }
        }
    The problem is that if I drop the items before the index where I started dragging the first item, the order becomes reversed, so the collection ends up as {0, 1, 3, 7, 5, 2, 4, 6, 8, 9}. It works fine if I drop after index 3, but if I drop before index 3 the order is reversed. I could do a simple remove and then insert all the items at the index I want, but Move has the advantage of keeping the selection in the UI (Remove basically de-selects the items in the list), so I need to keep using Move. What is wrong with my method above, and how do I fix it? Thanks!
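    For what it's worth, a minimal sketch of the index bookkeeping in Python (the question is C#/WPF; move() below only mirrors what ObservableCollection.Move does, i.e. remove at the old index and insert at the new one). The likely culprit is that every dragged item is inserted at the same toDropIndex, so each later item lands in front of the one before it; advancing the target index after each move keeps the dragged block in order:

        def move(items, old, new):
            # remove at old, insert at new -- same net effect as ObservableCollection.Move
            items.insert(new, items.pop(old))

        def drop(items, dragged, drop_index):
            insert_at = drop_index
            for value in dragged:
                old = items.index(value)
                new = min(insert_at, len(items) - 1)
                if old != new:
                    move(items, old, new)
                insert_at = new + 1   # next dragged item goes right after this one

        items = list(range(10))
        drop(items, [2, 5, 7], 3)
        print(items)   # [0, 1, 3, 2, 5, 7, 4, 6, 8, 9]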

    Read the article

  • Saving information in the IO System

    - by djTeller
    Hi kernel gurus, I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios: 1) allow one write access and many read accesses to the /proc file; 2) the module should keep a buffer with the contents of the last successful write, and each write should be matched by a read from every reader. Consider scenario 2: a writer wrote something and there are two readers (A and B). A reads the contents of the buffer and then tries to read again; in this case it should go onto a wait queue and wait for the next message, and it should not get the same buffer again. I need to keep a map of all the PIDs that have already read the current buffer, and if they try to read again while the buffer has not changed, they should be blocked until there is a new buffer. I'm trying to figure out whether there is a way I can save that information without a map. I heard there are some spare fields inside the I/O system that I can use to flag a process as having already read the current buffer. Can someone give me a tip on where I should look for that field? How can I save info on the current process without keeping a "map" of PIDs and buffers? Thanks!

    Read the article

  • Elegant methods for caching search results from a RESTful service?

    - by Paul
    I have a RESTful web service which I access from the browser using JavaScript. As an example, say that this web service returns a list of all the Message resources assigned to me when I send a GET request to /messages/me. For performance reasons, I'd like to cache this response so that I don't have to re-fetch it every time I visit my Manage Messages web page. The cached response would expire after 5 minutes. If a Message resource is created "behind my back", say by the system admin, it's possible that I won't know about it for up to 5 minutes, until the cached search response expires and is re-fetched. This is acceptable, because it creates no confusion for me. However if I create a new Message resource which I know should be part of the search response, it becomes confusing when it doesn't appear on my Manage Messages page immediately. In general, when I knowingly create/delete/update a resource that invalidates a cached search response, I need that cached response to be expired/flushed immediately. The core problem which I can't figure out: I see no simple way of connecting the task of creating/deleting/updating a resource with the task of expiring the appropriate cached responses. In this example it seems simple, I could manually expire the cached search response whenever I create/delete/update a(ny) Message resource. But in a more complex system, keeping track of which search responses to expire under what circumstances will get clumsy quickly. If someone could suggest a simple solution or some clarifying thoughts, I'd appreciate it.
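    For illustration, one common way to keep that bookkeeping manageable is to tag every cached response with the resource collections it was built from, and to flush by tag after any create, update, or delete of such a resource; the "what invalidates what" knowledge then lives in one place. The sketch below is Python purely to show the idea (the question concerns browser-side JavaScript, and the class and method names are made up for the example):

        import time
        from collections import defaultdict

        class TaggedCache:
            def __init__(self, ttl=300):           # 300 s = the 5-minute expiry
                self.ttl = ttl
                self.entries = {}                   # key -> (expires_at, value)
                self.keys_by_tag = defaultdict(set)

            def put(self, key, value, tags):
                self.entries[key] = (time.time() + self.ttl, value)
                for tag in tags:
                    self.keys_by_tag[tag].add(key)

            def get(self, key):
                expires_at, value = self.entries.get(key, (0, None))
                return value if time.time() < expires_at else None

            def invalidate(self, tag):
                # flush every cached response that depends on this resource type
                for key in self.keys_by_tag.pop(tag, set()):
                    self.entries.pop(key, None)

        cache = TaggedCache()
        cache.put("GET /messages/me", ["msg 1", "msg 2"], tags=["messages"])
        # ... later, right after a POST/PUT/DELETE on a Message succeeds:
        cache.invalidate("messages")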

    Read the article

  • Why does my tracking service freeze when the phone moves?

    - by user2878181
    I have developed a service with a timer task that runs every 5 minutes to keep a tracking record of the device; every five minutes it adds a record to the database. My service works fine when the phone is not moving, i.e. it produces a record every 5 minutes as it should. But I have noticed that when the phone is on the move it only updates the points after 10 or 20 minutes, i.e. whenever the user stops along the way. Does the service freeze while the device is moving? If so, how does WhatsApp Messenger manage it? My onStart() method is below. Please help!
        @Override
        public void onStart(Intent intent, int startId) {
            Toast.makeText(this, "My Service Started", Toast.LENGTH_LONG).show();
            Log.d(TAG, "onStart");
            mLocationClient.connect();

            // tracking thread
            final Handler handler_service = new Handler();
            timer_service = new Timer();
            TimerTask thread_service = new TimerTask() {
                @Override
                public void run() {
                    handler_service.post(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                // some function of tracking
                            } catch (Exception e) {
                                // ignored
                            }
                        }
                    });
                }
            };
            timer_service.schedule(thread_service, 1000, service_timing);

            // sync thread
            final Handler handler_sync = new Handler();
            timer_sync = new Timer();
            TimerTask thread_sync = new TimerTask() {
                @Override
                public void run() {
                    handler_sync.post(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                // connecting to the central server for updates
                                Connect();
                            } catch (Exception e) {
                                // TODO Auto-generated catch block
                            }
                        }
                    });
                }
            };
            timer_sync.schedule(thread_sync, 2000, sync_timing);
        }

    Read the article

  • grails services :: multiple projects

    - by naveen
    PROBLEM: I have multiple Grails projects (let's say appA, appB and appC), services to be precise, and I want to run them in a single grails-app, probably as a single WAR deployment. How can I do this? REQUIREMENTS: I want this to be a single app since I am deploying it to the cloud and I don't have enough memory to hold all these service instances individually. The reason for multiple Grails projects is scalability, so that if later on I want to run 10 instances of appA, 3 instances of appB, and 1 instance of appC, I should be able to do that. EDIT: Can I use something like 0MQ? Will that be helpful in keeping the services separated from each other, and how would I package each service? Reading the 0MQ docs, it seems it can work both in-process and across external processes. Will async Grails HTTP requests work with 0MQ in-process/external calls? I haven't used 0MQ, but from the initial docs it seems workable; I need some experienced opinions on this scenario. Are there any other alternatives, or other MQ alternatives?
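    For illustration, a tiny 0MQ request/reply sketch follows, written in Python with pyzmq (a Grails app would use a Java binding such as JeroMQ; the socket types and endpoint strings here are assumptions for the example, not a recommendation). The point is that an in-process endpoint and a TCP endpoint differ only in the address string, so the services can stay logically separate whether they share a JVM or not:

        import zmq

        ctx = zmq.Context()

        # "appB" side: reply socket
        rep = ctx.socket(zmq.REP)
        rep.bind("tcp://127.0.0.1:5555")   # or "inproc://appB" when co-located in one process

        # "appA" side: request socket
        req = ctx.socket(zmq.REQ)
        req.connect("tcp://127.0.0.1:5555")

        req.send_json({"op": "ping"})
        print(rep.recv_json())             # {'op': 'ping'}
        rep.send_json({"ok": True})
        print(req.recv_json())             # {'ok': True}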

    Read the article

  • Ordering a set of lines so that they follow one from the other

    - by george
    Line#   Lat.             Lon.
        1a      1573313.042320   6180142.720910
        ..      ..
        1z      1569171.442602   6184932.867930
        3a      1569171.764930   6184934.045650
        ..      ..
        3z      1570412.815667   6190358.086690
        5a      1570605.667770   6190253.392920
        ..      ..
        5z      1570373.562963   6190464.146120
        4a      1573503.842910   6189595.286870
        ..      ..
        4z      1570690.065390   6190218.190575
    Each pair of rows above (a..z) gives the first and last coordinate pair of a number of points which together define a line. The lines are not listed in sequence, because I can't tell what the correct sequence is just by looking at the coordinates (unless I look at the lines on a map). Hence my question: how can I find out programmatically (in Python) what the correct sequence is, so I can join the lines into one long line, keeping in mind the following problem: a 'z' point (the last point in a line) may well actually be the first point if that line is described as proceeding in the opposite direction to the other lines, e.g. one line may go from left to right, another from right to left (or top to bottom and vice versa). Thank you in advance...
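    For illustration, a minimal greedy sketch in Python of one way to order and join the segments (it assumes the first segment listed sits at one end of the overall line, and that matching endpoints are merely close rather than identical, as with the 1z/3a pair above): repeatedly pick the remaining segment whose nearest endpoint is closest to the current end of the chain, reversing it when its last point is the matching one.

        import math

        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        def chain(segments):
            # segments: list of lists of (x, y) points; returns one ordered list of points
            segs = [list(s) for s in segments]
            path = segs.pop(0)
            while segs:
                end = path[-1]
                # pick the segment whose first or last point is nearest to the chain's end
                best = min(segs, key=lambda s: min(dist(end, s[0]), dist(end, s[-1])))
                segs.remove(best)
                if dist(end, best[-1]) < dist(end, best[0]):
                    best.reverse()          # its 'z' end is the matching one
                path.extend(best)
            return path

        # toy example: three short segments, one of them stored back to front
        a = [(0, 0), (1, 0)]
        b = [(3, 0), (2, 0)]        # reversed on purpose
        c = [(3, 0), (4, 0)]
        print(chain([a, b, c]))     # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 0), (4, 0)]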

    Read the article

  • DB Strategy for inserting into a high read table (SQL Server)

    - by Tom
    Looking for strategies for a very large table with data maintained for reporting and historical purposes, a very small subset of that data is used in daily operations. Background: We have Visitor and Visits tables which are continuously updated by our consumer facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back end site allows management of the visitor's (leads) from the front end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our visitor and visit tables is maintained only for a much smaller subset of user activity (basically reporting type functionality). This is NOT an indexing problem, we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. ps: We do not currently have the budget or expertise for a data warehouse. The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query is against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture, I'm not asking for help on that. The other involves creating redundant data, (for instance a Visitor_Archive and a Visitor_Small table) where the larger visitor and visit tables exist for inserts and history/reporting, the smaller visitor1 table would exist for managing leads, sending lead an email, need leads phone number, need my list of leads, etc.. The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and the Visitor_Small tables in sync... Replication? Can I use replication to replicate only data with a certain column value (FooID = x) Any other strategies?

    Read the article

  • CSS challenge: Two background images, centered column with fixed width, min-height 100%

    - by laurent
    In a nutshell, I need a CSS solution for the following requirements:
        - Layout: one centered column with fixed width and a minimum height of 100%
        - Two vertically repeated background images behind the centered column, one aligned to the left, one aligned to the right
        - Cross-browser compatibility
    A little more detail: today a new requirement for my current web site project came up, a background image with gradients on the left and right side. The challenge is now to specify two different background images while keeping the rest of the layout spec. Unfortunately the (simple) layout somehow doesn't go together with the two backgrounds. My layout is basically one centered column with fixed width:
        #main_container { margin: 0 auto; min-height: 100%; width: 800px; }
    Furthermore it's necessary to stretch the column to a minimum height of 100%, since quite a few pages have only a little content. The following CSS styles take care of that:
        html { height: 100%; }
        body { margin: 0; height: 100%; padding: 0; }
    So far so good, until the two-background-image issue arrived... I tried the following solutions:
        - two absolutely positioned divs behind the main container
        - one image defined on the body, one on the html element
        - one image defined on the body, the other one on a large div behind the main container
    With each of them, the dynamic height solution was ruined: either the main container didn't stretch to 100% when the content was too short, or the background stayed at 100% when the content was actually longer.

    Read the article

  • Errors when switching to specific static IP

    - by michaelc
    I had a Fedora box running using my static IP 69.169.136.6, etc., all configured according to what the ISP required. Recently the hard drive failed (and I should have been keeping better backups); while it is being recovered I would like to put up a web page on my Arch Linux PC explaining the problem. I presently do not have sufficient access to change the DNS record assigned to the domain. When I change my IP address to 69.169.136.6 while the system is running, ifconfig reports the new IP address, but http://whatismyip.com/ does not. When I change it and reboot, I can't ping; the message I receive is "connect: Network is unreachable" (when given one of google.com's IP addresses; hostnames give me "ping: unknown host xxx"). Until I have access to the DNS system, what can I do to make this work? Edit: with a new IP address, same problem; the IP is now 69.169.136.29. Some command output might be useful:
        #ping 69.169.136.1
        PING 69.169.136.1 (69.169.136.1) 56(84) bytes of data.
        64 bytes from 69.169.136.1: icmp_seq=1 ttl=64 time=0.377 ms
        #ping 69.169.190.211
        connect: Network is unreachable
        #ping 208.72.160.67
        connect: Network is unreachable
        #ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:E0:4D:97:23:9B
                  inet addr:69.169.136.29  Bcast:69.169.137.255  Mask:255.255.254.0
                  inet6 addr: fe80::2e0:4dff:fe97:239b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:132091 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9635179 (9.1 Mb)  TX bytes:1322 (1.2 Kb)
                  Interrupt:29 Base address:0x6000
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:48 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2480 (2.4 Kb)  TX bytes:2480 (2.4 Kb)
        #ip route
        69.169.136.0/23 dev eth0  proto kernel  scope link  src 69.169.136.29
        #cat /etc/resolv.conf
        # Generated by dhcpcd
        #nameserver 208.67.222.222
        #nameserver 208.67.220.220
        nameserver 69.169.190.211
        nameserver 208.72.160.67
        # /etc/resolv.conf.tail can replace this line
    Update: I have new static IP addresses, verified to work in Windows. Relevant portions of /etc/rc.conf below:
        #Static IP example
        #eth0="eth0 69.169.136.6 netmask 255.255.254.0 broadcast 69.169.136.1"
        #eth0="eth0 69.169.136.29 netmask 255.255.254.0 broadcast 69.169.137.255"
        eth0="eth0 69.169.136.32 netmask 255.255.254.0 broadcast 69.169.137.255"
        #eth0="dhcp"
        INTERFACES=(eth0)
        # Routes to start at boot-up (in this order)
        # Declare each route then list in ROUTES
        #   - prefix an entry in ROUTES with a ! to disable it
        #gateway="default gw 192.168.0.1"
        gateway="default gw 69.169.136.1"
        #gateway="69.169.136.1"
        ROUTES=(!gateway)
        #ROUTES=()

    Read the article

  • Handling bounced email when using a postfix smarthost

    - by Mark Rose
    I'm running a high availability cluster, and so far, most things work great. I have two external machines that act as outgoing mail hosts (smarthosts). The internal hosts are configured to relay all email through these two external facing hosts. My smarthosts' main.cf looks like this: myhostname = lb1.example.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases mydestination = lb1.example.com, localhost relayhost = mynetworks = 127.0.0.0/8 10.1.248.0/24 My internal hosts' main.cf looks like this: mynetworks = 127.0.0.0/8 myhostname = web1.example.com mydestination = $myhostname, localhost.$mydomain, localhost relayhost = [10.1.248.3] smtp_fallback_relay = [10.1.248.2] lb1's internal IP is 10.1.248.2, and lb2's internal IP is 10.1.248.3. On the external hosts, email for root and www-data is forwarded to [email protected] with /etc/aliases. One advantage to using the smarthost setup is that spam filters and the like can connect back to the sending sending server. All email is sent fine, and headers look like this: Received: from lb2.example.com ([198.51.100.3]) by mx.google.com with ESMTP id y17si1571259icb.76.2011.01.13.18.20.32; Thu, 13 Jan 2011 18:20:32 -0800 (PST) Received-SPF: neutral (google.com: 198.51.100.3 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=198.51.100.3; Received: from db1.example.com (unknown [10.1.248.20]) by lb2.example.com (Postfix) with ESMTP id D364823C0BE for <[email protected]>; Thu, 13 Jan 2011 21:20:31 -0500 (EST) Received: by db1.example.com (Postfix) id C9FA7760D6A; Thu, 13 Jan 2011 21:20:31 -0500 (EST) Delivered-To: www-data@localhost Received: by db1.example.com (Postfix, from userid 0) id C1632760D6C; Thu, 13 Jan 2011 21:20:31 -0500 (EST) The problem is bounced/reject email. The external machine tries to forward the email back to the internal machine, e.g. www-data on web1 sending an email that bounces (such as a user signing up with a bad email address). An additional complication is using Google mail for the main example.com domain. In lieu of specifying every internal host in the external hosts' mydestination, is there a better way of setting things up, keeping in mind I can't adjust touch the mx for example.com?

    Read the article

  • Excel 2007 VBA macros don't work in Parallels

    - by MindModel
    I've got a complex Excel spreadsheet I need to use at work. My colleagues use the spreadsheet on Windows PC's, with no special configuration required. I want to run it on a MacBook Pro running Snow Leopard. The spreadsheet contains VBA macros which connect to external Oracle db's over the Internet. If I understand correctly, Excel on the Mac doesn't run VBA macros, so I have to use Parallels. I installed Parallels on the Mac and it's running correctly, as far as I can tell. I installed Excel 2007 under Parallels. I can open the Excel spreadsheet in Parallels and click buttons in the spreadsheet to run macros, but the macros fail with compiler errors. I don't have the password to the source code for the VBA macros, and if possible, I don't want to dig in to the code at that level. I know that there are quite a few things that could go wrong, and examining the VBA code might help, but I'm hoping to solve the problem without going down that road. The spreadsheet runs without any special configuration on Windows, so I'm wondering if anyone out there knows of any limitations of Excel VBA macros under Parallels, or anything else I could do to get this spreadsheet working. It's the only thing that's keeping me from using this MacBook Pro at work. Here is the error message: Compile error in hidden module: clsXXXXx0020Toolx0020Ser. This error commonly occurs when code is incompatible with the version, platform, or architecture of this application. Click Help for more info. Compile error in hidden module: A protected module contains a compilation error. Because the error is in a protected module it cannot be displayed. This error commonly occurs when code is incompatible with the version or architecture of this application (for example, code in a document targets 32-bit Microsoft Office applications but it is attempting to run on 64-bit Office). This error has the following cause and solution: Cause of the error: The error is raised when a compilation error exists in the VBA code inside a protected (hidden) module. The specific compilation error is not exposed because the module is protected. Possible solutions: If you have access to the VBA code in the document or project, unprotect the module, and then run the code again to view the specific error. If you do not have access to the VBA code in the document, then contact the document author to have the code in the hidden module updated.

    Read the article

  • Accidentally Removed Permissions for the XP SP3 Registry key HKEY_CLASSES_ROOT! Workaround Please!

    - by Praveen Kumar
    This is Praveen and I am using Microsoft Windows XP SP3 Build 2600. I had problems with using Microsoft Office 2010. It was keeping on saying, "Please wait while windows configures Microsoft Office Professional Plus 2010". After seeing this link: http://social.answers.microsoft.com/Forums/en-US/outlookcontact/thread/6e9c3f2e-010a-4b74-b433-0c41548ee468?prof=required I thought of giving the Registry Key: HKEY_CLASSES_ROOT, full access permissions for Everyone. I went to the registry, right-clicked on HKEY_CLASSES_ROOT and clicked on Permissions. I added Everyone and gave Full Control to the ACL and clicked on Apply. Also I checked "Replace permission entries on all child objects with entries shown here that apply to child objects." After a long time, it said with an error, cannot replace for few entries. Now, the key, HKEY_CLASSES_ROOT has no access to any user. My system is not starting up. Some of my friends asked me to try the Last Known Good Configuration. Even that did not work out. When I tried to open in the Safe Mode, I got only one driver to load and even that didn't load well. Another friend suggested me to put the Setup disk and reinstall the OS. I tried that and after completing the "Installing Device Drivers" part, when it started "Installing Network", the system is getting restarted. Now, the setup is also half way through and even if I open in Safe Mode to try System Restore, it popped up a message saying, "Setup cannot run under Safe Mode. Setup will restart now." and the system is restarting. All my official files and my software, which I developed for Registry Security, resides in my system now. I am unable to access the system and I want it to be working to submit the project, as the deadline is this week. I had no better solutions from elsewhere. Can anyone please help me out with this issue. If it is possible to open the registry editor's stored file from another system and restore the access permissions, I hope it would solve the problem. Please do help me. Thanking You, Praveen.

    Read the article

  • How to transition to Comcast with static IP address

    - by steveha
    I have my own email server in my house, on a static IP address. I have had business DSL for over a decade, but I also now have Comcast business Internet. I want to transition from the DSL to the Comcast, and I have some questions. I have a domain name, my own mail server, and a firewall (a PC with two network interfaces, running Devil-Linux). I need to make sure I understand how to set up the Comcast cable box, and how to set up my firewall. First, do I need to change any settings in the cable box? Currently I have only used the cable box by plugging in a laptop, with the laptop doing DHCP. I think I can leave the box alone but I would like to make sure. Second, I'm not sure I understand the instructions Comcast gave me for setting up the firewall. My DSL provider gave me the following information: static IP address, net mask, gateway, and two DNS servers. Comcast gave me: static IP address, routable static IP address, net mask, and two DNS servers, and told me to put the "static IP address" as the "gateway" on the firewall. Is this just Comcast-speak here? Does "routable static IP address" mean the same thing as "static IP address" in my DSL setup, the end-point address that I should publish in the DNS MX records for my email server? Or should I publish the "static IP address", and Comcast will then route all its traffic over the cable box? My plan is: first, I'm going to configure another firewall, so I have one firewall for the DSL and one for the Comcast (rather than madly editing settings to switch back and forth). Then I will publish the new Comcast static IP address as a backup email server address in the DNS MX records, wait a while to let it propagate, and then switch my home over from the DSL to the Comcast. Then I'll change DNS to make that the primary mail address and the DSL the secondary, let that go a while and make sure it seems reliable. Then I'll remove the DSL from the DNS MX records completely, and finally shut down the DSL service. (I thought about keeping the DSL as a backup, but the reason I'm leaving DSL is that it has become unreliable; and I have heard that Comcast business Internet is reliable.) Final question, any advice for me? Anything you think might be useful, helpful, or educational. Thanks.

    Read the article

  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these:
        [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children
        [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children
        [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children
        [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children
    I changed my php-fpm settings to these (pm.max_children was at 100, I hit a "max_children reached" warning and upped it to 150):
        pm.max_children = 150
        pm.start_servers = 75
        pm.min_spare_servers = 20
        pm.max_spare_servers = 150
    resulting in:
        [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it
    I've just launched a new website that is getting a considerable amount of traffic. This traffic is legitimate, and users are getting 504 gateway timeouts when the limit is reached. I have limited connections to my server with iptables, I'm running fail2ban, and I'm keeping track of the nginx access logs. The traffic is all legitimate; I'm just running out of room for users. I'm currently running on a dual-core box with 64-bit Ubuntu. Output of free:
                     total       used       free     shared    buffers     cached
        Mem:       6114284    5726984     387300          0     141612    4985384
        -/+ buffers/cache:     599988    5514296
        Swap:       524284       5804     518480
    My php.ini has max_input_time = 60. My nginx config is:
        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 19000;
            # multi_accept on;
        }
        worker_rlimit_nofile 20000; # each connection needs a filehandle (or 2 if you are proxying)
        client_max_body_size 30M;
        client_body_timeout 10;
        client_header_timeout 10;
        keepalive_timeout 5 5;
        send_timeout 10;
        location ~ \.php$ {
            try_files $uri /er/error.php;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_max_temp_file_size 0;
            fastcgi_intercept_errors on;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    What can I do to stop running out of connections? Why does this keep happening? I'm watching my traffic in Google Analytics real-time, and when the user count gets above about 120 my php-fpm.log fills up with these warnings.
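    As a back-of-the-envelope check (illustrative only; the per-worker size below is an assumption, so measure your own php-fpm processes with ps or top), a common way to bound pm.max_children is to divide the memory you can spare for PHP by the average resident size of one worker:

        # rough sizing sketch; the numbers below are assumptions, not measurements
        total_ram_mb = 6000          # box has roughly 6 GB
        reserved_mb  = 1500          # leave room for nginx, the OS, caches, etc.
        per_child_mb = 60            # typical php-fpm worker RSS; measure yours

        max_children = (total_ram_mb - reserved_mb) // per_child_mb
        print(max_children)          # 75 -> a safer ceiling than 150 on this box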

    Read the article

  • setting up a basic mod_proxy virtual host

    - by SevenProxies
    I'm trying to set up a basic virtual host to proxy all requests to test.local to a WEBrick server I have running on 127.0.0.1:8080 while keeping all requests to localhost going to my static files in /var/www. I'm running Ubuntu 10.04. I have libapache2-mod-proxy-html installed and I have the module enabled with a2enmod proxy. I also have my virtual host enabled. However, whenever I go to test.local I always get a cryptic 500 server error and all my logs are telling me is: [Thu Mar 03 01:43:10 2011] [warn] proxy: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule. Here's my virtual host: <VirtualHost test.local:80> LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so ServerAdmin webmaster@localhost ServerName test.local ProxyPreserveHost On # prevents this folder from being proxied ProxyPass /static ! DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> <Proxy *> Order allow,deny Allow from all </Proxy> ProxyPass / http://localhost:8080/ ProxyPassReverse / http://localhost:8080/ ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined and here's my settings for mod_proxy: <IfModule mod_proxy.c> #turning ProxyRequests on and allowing proxying from all may allow #spammers to use your proxy to send email. ProxyRequests Off <Proxy *> # default settings #AddDefaultCharset off #Order deny,allow #Deny from all ##Allow from .example.com AddDefaultCharset off Order allow,deny Allow from all </Proxy> # Enable/disable the handling of HTTP/1.1 "Via:" headers. # ("Full" adds the server version; "Block" removes all outgoing Via: headers) # Set to one of: Off | On | Full | Block ProxyVia On </IfModule> Does anybody know what I'm doing wrong? Thanks

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system is somewhat low once it is setup, but I am willing to accept some tradeoffs. Existing equipment: Router - Netgear WNDR3700 (gigabit) Router - DLink Gamerlounge DGL-4300 (gigabit) Switch - 16 port Trendnet green switch (gigabit) Switch - 5 port Trendnet green (gigabit) Computer - i7-950 office computer (gigabit ethernet) Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet) Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet) NAS - Intel SS4200-E (gigabit) External hard drive - 2TB WD Green drive (esata) All kinds of miscellaneous network connected TV, Bluray, Verizon network extender, HDhomerun TV tuners, etc. Requirements: -Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB. (Including offsite backup) -Central location for all user's files, while also keeping them secure from each other. -Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually) -Possibility to host files to friends and family easily Nice to have: -Backup of terabytes of movie backups Intriguing possibilities: -Capability to have users' Windows desktops and files look the same from all network computers I am not sure if the new Windows Home Server 2011 would fit into this well, if I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility that I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off site via carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files. I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capable, and robust backup of irreplaceable files.

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?

    Read the article

  • How to deny web access to some files?

    - by Strae
    I need to do a somewhat strange operation. First, I run Debian with apache2 (which runs as user www-data). I have simple text files with a .txt or .ini or whatever extension; it doesn't matter. These files are located in subfolders with a structure like this:
        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt
    So the file name is always the same, ditto for the hierarchy; only the name of the middle folder changes: /folder-name-static/folder-name-dynamic/file-name-static.txt. What I need is (I think) relatively simple: programs on the server (Python or PHP, for example) must be able to read the file, but if I try to retrieve the file contents through a browser (typing the URL www.example.com/folder1/car/foobar.txt, or via cURL, etc.) I must get a forbidden error, or whatever, but not the file. It would also be nice if those files were 'hidden', or at least not downloadable, over FTP (at least with the FTP root and user credentials I use). How can I do this? I found this online, to be put in the .htaccess file:
        <Files File.txt>
            Order allow,deny
            Deny from all
        </Files>
    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), not in subfolders. Moreover, the second-level folders (www.example.com/folder1/fruit/foobar.txt) will be created dynamically, and I would like to avoid having to change the .htaccess file every time. Is it possible to create a rule, something like that, which applies to all files with the given name at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where only that one part changes? EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too, images and other material used by my users, so I am simply trying not to end up with a duplicated folder structure like:
        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]
        // and then, for the 'secret' files:
        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives on my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS in installed to the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quick and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to try to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable and as long as I can guarantee a 5GB space on the Raptor, I can always just temporarily move the disc image there. So I am figuring that having games installed like Boarderlands and Mass Effect, as well as having large files such as linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to designate the install directory of any program installation to the alternative install directory, which I can't seem to get to ever work right with Steam. With that in mind, is there a way that is not too drastic to let me just change some folders and system settings once and everything works fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda but that would defeat the purpose of the Raptor except for running disc images off of. I am also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported on Windows. An example a friend mentioned was something about how if you have a symlink in Windows on a small hard drive to a large hard drive and the contents the symlink points to is larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without the issues or is there a better solution I am not being told about, not mentioning, or not thinking of?

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance highly fluctuates. Sometimes fast, sometimes several seconds delay. The smokeping graphs are al over the place. Recently, I also added an nginx proxy for the static content, in the hopes it would cure the highly fluctuating performance. But, even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. When clicking around on websites while running htop, it can be seen that sometimes requests are almost instant, whereas sometimes it causes Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured the mpm_worker for Apache like this: StartServers 1 MinSpareThreads 50 MaxSpareThreads 50 ThreadLimit 64 ThreadsPerChild 50 MaxClients 50 ServerLimit 1 MaxRequestsPerChild 0 MaxMemFree 2048 1 server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet, it behaves as such. This tells me that once a sub interpreter is created, it should remain in memory, yet it seems sites have to reload all the time. I also have a nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config: <VirtualHost *:81> ServerAdmin [email protected] ServerName example.com DocumentRoot /srv/http/site/bla Alias /static/ /srv/http/site/static Alias /media/ /srv/http/site/media WSGIScriptAlias / /srv/http/site/passenger_wsgi.py <Directory /> AllowOverride None </Directory> <Directory /srv/http/site> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> Ubuntu 10.04 Apache 2.2.14 mod_wsgi 2.8 nginx 0.7.65 Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So, that's good, except it doesn't bring me closer to an answer... Edit: I just found an error that might also be related to this: File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?

    Read the article

  • Preinstalled Windows 8 and Linux UEFI dual boot on a laptop

    - by itchy355
    I am trying to set up Windows 8 and Arch Linux on a new Sony Vaio E14 with preinstalled windows 8. So far: installed W8 to my new SSD (switched for the original HDD) using Recovery Media shrunk the W8 partition, deleted recovery partition, disabled swap confirmed W8 booting just fine On to Arch: disabled Secure Boot in bios confirmed W8 booting just fine Booted Arch off the CD and installed everything to 4th and 5th partition set up rEFInd for EFIstub kernel bootloader After that it got worse. I was unable to boot anything else than Windows 8 (although I was glad that they at least kept working just fine). Tried: creating EFI\refind\ and putting the .efi there (as per Arch manual overwriting EFI\boot\bootx64.efi overwriting EFI\Microsoft\Boot\bootmgr.efi overwriting EFI\Microsoft\Boot\bootmgfw.efi --- YAY rEFInd shown up! So far, so good. I've kept the whole W8 Boot\ directory in EFI\windows8 and set up a boot menuentry for it; and it booted just fine. But, upon restart, everything was wrong -- 'Operating system not found' instead of any bootloader (refind or w8). Booted back into Arch using the live CD to find out that the EFI partition had erroneous FAT table. fsck.vfat fixed it, and I've found that EFI\Microsoft\Boot was back to it's original state (all refind files deleted and replaced with W8 bootloaders). I've overwritten them again and got back to rEFInd showing up correctly and Arch being perfectly bootable. After that I've tried only renaming EFI\Microsoft\Boot\bootmgfw.efi to bootmgfw.001.efi (then copying refind's .efi to bootmgfw.efi and keeping EVERY OTHER file as it was), but with exactly the same result. Tried marking the GPT EFI partition as read-only, same result. Now I'm kinda out of luck. Arch boots fine, so does W8 but it destroys the EFI partition in the process. Thanks for any ideas, Googling brought me this far and I can't find any better. PS -- windows 8 MAYBE destroys the partition upon shutdown -- when I order a shutdown in W8, it takes unusually long (about half a minute instead of ~5 seconds). So in theory I could solve this by hard-resetting the laptop instead of a normal shutdown, but that's just not nice.

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure layed out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... lets say my directory structure on my dev box is generally layed out like this: ~/projects/ bigproj .git app lib model (- ~/lib/model/src/) orm (- ~/lib/orm/src/) neatproj .git app lib view (- ~/lib/view/src/) oldproj .git app lib orm (- ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link to a subdirectory of the directory containing the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also will symlinks will work on windows? i know there's something similar but you need a 3rd party tool to manage them AFAIK, i doubt these would translate well.

    Read the article

  • Safer RAID5 rebuilds using partially failed disks?

    - by arcticmac
    There have been lots of articles posted recently about how RAID5 is dangerous because of long resilver times, and in particular because of increasing chances of encountering a URE during the resilver. Obviously this is a significant concern. However, it seems that in many cases of interest (as long as you're keeping some kind of eye on your disks), when it comes time to rebuild the array, the disk that I'm replacing is still mostly readable. If you try to explain this predicament to the average layperson, they are typically very confused as to why you have two almost completely functional disks but can't produce one working array. It seems to me that there ought to be some way to take advantage of this to make rebuilds safer, as long as I'm willing to have the RAID5 be read-only for a couple of days while it rebuilds. Conceptually, what I have in mind looks something like this: When a disk fails, immediately take the RAID5 offline or mount it read-only Attach a new disk (either in a spare bay, or externally via eSATA) and begin rebuilding it to replace the failed one. If known, perhaps start with the stripes in which the failure occurred, to minimize the chances of losing those if another disk fails. In the event that a second disk experiences a URE or other failure during the rebuild, try to source that data from the disk that is being replaced. Presumably if this happens, more rebuilding would be necessary. When complete, shut down the server, swap the replacement drive into the original bay if desired, and bring the array back up. Obviously such a process would not be appropriate for applications where uptime is critical or data loss cannot be tolerated, but it seems to me that this could help considerably to improve the reliability of RAID5. I assume that there's not a good way to implement a recovery like this at present, given that I haven't seen any indication of tools that are designed to do this, and that it seems like it would be rather obtuse to work out manually. Are there also technical issues with it that I haven't thought of (I'm still fairly new to RAID stuff)? Any thoughts on how hard something like this would be to implement (e.g. in linux md raid)?

    Read the article
