Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 122/157 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyload through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

        wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt \
             --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is only:

        # HTTP cookie file.
        # Generated by Wget on 2012-06-23 22:31:33.
        # Edit at your own risk.

    When I run the same command in debug mode, I get the following output, which includes a Set-Cookie line in the response headers:

        DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
        --22:31:11-- http://localhost:8000/api/login
        Resolving localhost... 127.0.0.1
        Caching localhost => 127.0.0.1
        Connecting to localhost|127.0.0.1|:8000... connected.
        Created socket 3.
        Releasing 0x000504d0 (new refcount 1).
        ---request begin---
        POST /api/login HTTP/1.0
        User-Agent: Wget/1.10.2 (Red Hat modified)
        Accept: */*
        Host: localhost:8000
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 32
        ---request end---
        [POST data: username=USERNAME&password=PASSWORD]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Content-Length: 34
        Content-Type: application/json
        Cache-Control: no-cache, must-revalidate
        Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
        Connection: Keep-Alive
        Date: Sat, 23 Jun 2012 21:31:11 GMT
        Server: CherryPy/3.1.2 WSGI Server
        ---response end---
        200 OK
        hs->local_file is: login (not existing)
        Registered socket 3 for persistent reuse.
        TEXTHTML is on.
        Length: 34 [application/json]
        Saving to: `login'
        100%[=======================================>] 34 --.-K/s in 0s
        22:31:11 (1.28 MB/s) - `login' saved [34/34]
        Removing file due to --delete-after in main(): Removing login.
        Saving cookies to my_cookies.txt.
        Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
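
    One way to narrow down whether wget itself is at fault is to repeat the same POST with curl and see whether its cookie jar picks up the beaker.session.id cookie (a cross-check, not a fix; -c writes the cookie jar, -d sends the form data):

        curl -s -c my_cookies.txt \
             -d "username=USERNAME&password=PASSWORD" \
             http://localhost:8000/api/login

    If curl saves the cookie, the old wget build (1.10.2) is the likely culprit, and upgrading is the cheapest next step; requesting via 127.0.0.1 instead of localhost is another quick experiment, in case cookie handling for dotless hostnames is the problem.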

    Read the article

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-)

    I consider two main points:

    - GlusterFS processes
    - I/O for data access (on local disks, or remote disks)

    I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that:

    - No grid job will impact the kernel and the GlusterFS instances
    - Researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs)

    But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
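
    A hedged sketch of the usual mechanics (the PID, IRQ number and core lists are illustrative): taskset pins processes to cores, and /proc/irq/<n>/smp_affinity steers hardware interrupts, so the GlusterFS daemons and the storage IRQs can be kept on a reserved set of cores away from the grid jobs:

        # pin a GlusterFS daemon to cores 0-3 (say, the first physical CPU);
        # 12345 is a hypothetical PID from pidof glusterfsd
        taskset -pc 0-3 12345

        # steer IRQ 24 (e.g. the disk controller; check /proc/interrupts)
        # to core 0 -- smp_affinity takes a hex CPU bitmask
        echo 1 > /proc/irq/24/smp_affinity

    Booting with isolcpus=4-15 (or a cgroups/cpuset configuration) would then keep the scheduler from placing grid jobs on the reserved cores in the first place.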

    Read the article

  • Offline copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth to repeatedly fetch the same files. For example, I may listen to the same song 30 times in a month. So, I would like to cache the files I've used. I know Windows has an "Always available offline" feature, but I think it doesn't suit my needs. I don't want to make the whole share "available offline", as the remote Windows file share is huge (in terabytes). Making individual files "available offline" is tedious, as the files are scattered across many different directories. It would be much more convenient if I could simply cache the ones I've used. Also:

    - The files on the share seldom change.
    - Many of the files are rarely used.
    - Some of the files are frequently used.
    - I don't have a list of the most frequently used files.

    So I think the best way is to have a caching proxy for the Windows share. What do you think? I have a Linux box sitting around. Perhaps I should try to set up Samba 4?
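
    One hedged way to use that Linux box without Samba 4: mount the remote share over CIFS with the kernel's FS-Cache support and re-export it locally, so blocks are cached on first read (the fsc mount option needs the cachefilesd daemon installed and running; server, path and user names are illustrative):

        # /etc/fstab on the Linux box
        //winserver/media  /mnt/media  cifs  username=me,fsc  0  0

    A plain Samba share of /mnt/media then gives Windows clients the cached view, and only cold reads cross the slow link.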

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2py tags mixed together. To do this, I would like the web server to pass the file through Web2py first, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of Django), but it loses the benefits of any server optimizations from mod_php or FastCGI, like caching and multi-threaded operation: a new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2py (Python) and PHP tags in the same file? Note I am not looking for methods of serving PHP-only and Web2py-only files from the same server/domain. I prefer solutions for Apache2 or Cherokee, but I'm open to using other web servers.

    Background info: I prefer to develop in Web2py, but we have a pre-existing system written in PHP. I would like to augment the PHP system with some of Web2py's features, like its Auth authentication/user management and the T() internationalization object. It would also make it much easier to port the PHP project to Web2py if it could be done piecemeal. Since the PHP project consists of many files, it would greatly help if they did not need modification.
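
    For reference, the two-pass idea itself is small; a minimal Web2py controller sketch (view path and variables are hypothetical), which still pays the per-request fork cost the question wants to avoid -- replacing the subprocess call with a persistent php-fpm pool spoken to over FastCGI is what would remove that cost:

        import subprocess

        def mixed_page():
            # first pass: Web2py expands its own {{ }} template tags
            intermediate = response.render('mixed/page.html', dict(T=T, auth=auth))
            # second pass: the php CLI reads the result on stdin and
            # interprets the remaining <?php ?> tags
            php = subprocess.Popen(['php'], stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE)
            out, _ = php.communicate(intermediate)
            return out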

    Read the article

  • "Delivered-To" Header in Exchange

    - by Kaii
    In some SMTP server implementations (e.g. Postfix) you can enable Delivered-To and X-Original-To headers that will be added to your email (or [X-]Envelope-To). This is very helpful with distribution lists to determine which e-mail address the mail has been redirected to. So, when the mail has been sent to [email protected], you can see in the Delivered-To or Envelope-To header that it has been redirected (distributed) to [email protected], which is one of many e-mail addresses that are linked to a single mailbox.

    How do I find which address was used to deliver a mail to a specific mailbox on Microsoft Exchange 2010? Looking at the plain message (with all headers) I cannot find any information that the mail arrived via the address [email protected] I think I need the Delivered-To header (or a similar one) to be set by Microsoft Exchange when a mail is delivered via a distribution list. Is there any way to enable such a header in Exchange 2010? I need it so that our ticket system (OTRS) correctly recognizes where the ticket belongs. Adding all the e-mail addresses of all distribution lists to the system configuration is not the right solution. And if there is a solution for Exchange 2010, is it possibly also applicable to Exchange 2007?
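
    Exchange transport rules can stamp a custom header, though only with a fixed value, so this ends up as one rule per distribution list -- close to the per-address configuration the question hopes to avoid, but at least kept on the Exchange side rather than in OTRS. A hedged Management Shell sketch (the group address is hypothetical, and matching on the list may need a different predicate such as -AnyOfToHeader, depending on when the rule fires relative to list expansion):

        New-TransportRule -Name "Tag info list" `
          -SentTo "info@example.com" `
          -SetHeaderName "X-Original-To" `
          -SetHeaderValue "info@example.com"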

    Read the article

  • How do you add an A record for a root domain?

    - by nbv4
    This seems really simple, but I can't figure it out. I'm using xname.org since it's free and I own a bunch of domains spread over a few different registrars. The setup I want is very simple: one A record that points the bare domain name to my server, plus a wildcard CNAME record pointing all subdomains to the same server. So if the user goes to domain.com it will point them to 285.24.435.75; if they go to www.domain.com, blah.domain.com, or any other subdomain, they'll also get sent to 285.24.435.75. All the examples I read on the internet about setting up A records have the A record set on a subdomain such as www. WWW is deprecated, so I want nothing to do with it. Currently my xname.org zone looks like this:

        $TTL 86400 ; Default TTL
        domain.com. IN SOA ns0.xname.org. nbvfour.gmail.com. (
            2010052503 ; serial
            10800      ; Refresh period
            3600       ; Retry interval
            604800     ; Expire time
            10800      ; Negative caching TTL
        )
        $ORIGIN domain.com.
          IN NS ns2.xname.org.
          IN NS ns0.xname.org.
          IN NS ns1.xname.org.
        @ IN A 65.49.73.148
        * IN CNAME domain.com

    The '@' symbol is something the GoDaddy domain interface uses to mean "this root domain", but that may have been specific to that interface and have no meaning here. Before, I had a 'www' entry in the A records and it worked, in the sense that I could ping 'www.domain.com' and get a response, but pinging the root domain 'domain.com' returned "no host found".
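
    Two observations on the zone as shown (a hedged reading, since xname may normalize some of this): '@' is standard zone-file shorthand for the current $ORIGIN, not a GoDaddy invention, so '@ IN A 65.49.73.148' is exactly the root-domain A record being asked for; and the wildcard's CNAME target is missing its trailing dot, so 'domain.com' is treated as relative to $ORIGIN and expands to domain.com.domain.com. The corrected lines:

        @ IN A     65.49.73.148
        * IN CNAME domain.com.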

    Read the article

  • Offline cache copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth to repeatedly fetch the same files. For example, I may listen to the same song 30 times in a month. So, I would like to cache the files I've used. I know Windows has an "Always available offline" feature, but I don't think it suits my needs. I don't want to make the whole share "available offline", as the remote Windows file share is huge (in terabytes). Making individual files "available offline" is tedious, as the files are scattered across many different directories. It would be much more convenient if the system could simply cache those I've used. I could also manually make a local copy each time I use a file... but this is even more troublesome than making each file "available offline". Also:

    - The files on the share seldom change.
    - Many of the files are rarely used.
    - Some of the files are frequently used.
    - I don't have a list of the most frequently used files.

    It would be best if I could tell Windows to cache the last accessed 10GB, but apparently it doesn't have this feature. So I think the best way is to have an SMB/CIFS caching proxy. What do you think? I have a Linux box sitting around. Perhaps I should try to set up Samba 4?

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO, but it was suggested that I ask it here on SF, so here goes: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization

    When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind that the operating system uses much of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, if I were running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes, and I then installed a virtualization platform and 5 more CentOS virtual machines on the same system, exactly the same as the host operating system -- doesn't this mean duplication of those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20?

    Further notes, an example of what I am thinking: I have a master-slave configuration for a long-running, CPU- and memory-intensive process, which can be distributed to 4 machines. Let's say that when the process runs on these 4 machines, each with, say, a 1 GHz CPU and 1 GB of RAM, I get 400 results per hour from the cluster (assuming 100 results from one machine). Now I get a bigger machine (say 4 GHz and 4 GB of RAM) and run 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB of RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

    Read the article

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines: a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server, which, using Apache's proxy module, relays the requests and responses between them and the web server on the Windows machine. The idea (designed and deployed about 9 years ago) was that it was more secure to have outside people connect to the BSD server than to the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services and applications removed.

    These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server still necessary for security in this setup? From my research online I don't see anyone else running a setup like this (I don't see anyone questioning it, at least). When people do have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet** to a Windows 2008 R2 server that's running the web application?

    ** there's a good hardware firewall between us and the internet, with only minimal ports open

    Thank you.

    Read the article

  • Redis as a substitute for Memcache

    - by Boban P.
    We have a distributed web app and, for now, as session handler we use two separate instances of memcache in redundancy, so everything written to one memcache is also written to the other. Memcache is fairly easy to install, use, and maintain, but we have one problem: if one memcache fails, everything is fine -- PHP communicates with the other instance, which has all the data (although half of the connections have a delay because they try to use the failed one, wait a little, and then contact the other memcache). But when the failed instance comes back to life, it starts up empty. If an established session requests data from that instance, the session fails, the user is logged out, and that happens to half of the users. So, we are thinking about switching to Redis for session handling, and maybe keeping memcache for cache only. My questions are:

    - If we set up Redis instances as master-slave, and the master fails, can Sentinel promote the slave to be the new master? And when the old master comes back to life, will it stay a slave or not?
    - Does Redis call malloc at startup to allocate part of memory, like memcache or varnish, or does it call malloc for every key inserted? And what are the pros and cons of that?
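
    On the first question: with Sentinel, the intended behavior is that the returning old master is reconfigured as a slave of the promoted one, not restored as master, which avoids exactly the empty-instance problem described above. A minimal sentinel.conf sketch (name, address and quorum are illustrative):

        # monitor one master; a quorum of 2 sentinels must agree it is down
        sentinel monitor mymaster 192.168.0.10 6379 2
        sentinel down-after-milliseconds mymaster 5000
        sentinel failover-timeout mymaster 60000

    On the second: Redis allocates per key as data arrives (via its allocator, typically jemalloc) rather than grabbing one large arena up front the way memcached's slab allocator does, so memory grows with the dataset, at the cost of potential fragmentation.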

    Read the article

  • Cache-control for permanent 301 redirects nginx

    - by gansbrest
    I was wondering if there is a way to control the lifetime of redirects in Nginx. We would like our 301 redirects to be cached in the CDN for a specific amount of time, say 20 minutes, and the CDN is controlled by the standard caching headers. By default there is no Cache-Control or Expires directive on an Nginx redirect, which could cause the redirect to be cached for a really long time. With a specific redirect lifetime, the system would have a chance to correct itself, given that even "permanent" redirects change from time to time. The other thing is that those redirects are included from the server block, which according to the Nginx documentation is evaluated before locations. I tried adding

        add_header Cache-Control "max-age=1200, public";

    to the bottom of the redirects file, but the problem is that Cache-Control gets added twice: the first comes, say, from the backend script, and the other is added by the add_header directive. In Apache there is the environment-variable trick to control headers for rewrites:

        RewriteRule /taxonomy/term/(\d+)/feed /taxonomy/term/$1 [R=301,E=expire:1]
        Header always set Cache-Control "store, max-age=1200" env=expire

    But I'm not sure how to accomplish this in Nginx.
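
    A hedged sketch of one way to scope the header: give the redirect its own location block, so add_header applies only to the 301 response and never stacks on top of headers from the backend (the path pattern mirrors the Apache rule above; add_header does apply to 301 responses):

        location ~ ^/taxonomy/term/(\d+)/feed$ {
            add_header Cache-Control "max-age=1200, public";
            return 301 /taxonomy/term/$1;
        }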

    Read the article

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a squid proxy/caching server, and from there into my local wireless router. On the WAN side of the proxy server I have eth0 with address 208.78.∗∗∗.∗∗∗; on the LAN side I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the squid server itself is also routed through the proxy/cache, and this is on purpose:

        # iptables forwarding
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

        # iptables for squid transparent proxy
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried doing:

        iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
        iptables -A INPUT -i eth0 -j REJECT

    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
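
    A hedged reading of why that blocks everything, plus one fix: the blanket REJECT on eth0 also kills the reply packets for connections the proxy itself opened, and the ACCEPT above it never matches because LAN traffic arrives on eth1, not eth0. Accepting established traffic first, and rejecting only NEW connections on the WAN side, keeps outbound browsing working:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i eth1 -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth0 -m state --state NEW -j REJECT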

    Read the article

  • Which version control should I use for my configuration files?

    - by rakete
    I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, and other Linux $HOME stuff) in version control so I can easily sync them between my notebook and workplace, see my past changes, and revert to them should the need arise. So far it seems quite a few people use git for this, and I think I too want to use a distributed VCS (if only to get more used to them), but I can't say I am very experienced with all things DVCS. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches; I think the possibility of having different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry-picking of patches, which is also quite a convenient feature when managing configuration files (at least I assume it is). So, what would you recommend, and what is the reasoning behind your recommendation? What other VCSs with nice features that I haven't mentioned would make a good choice for storing configuration files, and why?
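
    For the git route, one common pattern (a sketch, not the only layout) is a bare repository in $HOME whose work tree is $HOME itself, wrapped in an alias so ordinary git commands manage the dotfiles without turning the whole home directory into a checkout:

        git init --bare $HOME/.dotfiles
        alias config='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
        config config status.showUntrackedFiles no
        config add ~/.Xdefaults ~/.emacs.d/init.el
        config commit -m "track initial dotfiles"

    Per-machine branches (notebook vs. workplace) then fall out naturally, which matches the branching preference above.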

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack: Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP is set up so that only 5 child processes are running. The aim is for each child to deal with any request in less than half a second and then move on to the next request. One problem I've found is that if a big chunk of HTML is being sent out and the user has a slow connection, that PHP worker stays occupied until the download finishes. Because of my current setup, I have a pretty unforgiving timeout inside PHP, where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection it can mean the user gets cut off with a 504 Gateway Timeout. I was wondering if there is some sort of buffering solution that I could implement within or just behind Nginx that would send the request through and then... well... buffer the content into its own cache and feed it to the client as and when they can download it, the aim being to free up the underlying PHP worker. What I'm asking for doesn't have to be PHP-specific; anything that deals with FastCGI, or even any Nginx upstream, might have a similar issue.
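
    Nginx's FastCGI response buffering is essentially this mechanism, and it is on by default: nginx reads the upstream response as fast as PHP can produce it, freeing the worker, then trickles the buffered bytes to the slow client. A hedged sketch of tuning it so even large pages fit (socket path and sizes are illustrative):

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;   # hypothetical socket
            fastcgi_buffers 64 8k;                     # per-request buffer pool
            fastcgi_max_temp_file_size 64m;            # spill big pages to disk
        }

    With buffering working, the 20-second PHP timeout then only covers generation time, not the client's download.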

    Read the article

  • Designing a software-based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using its sister site, Stack Overflow. To begin with, I would like to design a load balancer for the organization I am working for. As I am very new to the whole idea of load balancing and networks, I am finding it very difficult to start this project. I did a lot of research on existing load balancers and found some (HAProxy, Nginx) that could solve my problems, but the point is, I am still in a dilemma about whether they can meet the following requirements of mine:

    - The client and server in my architecture are distributed.
    - The load balancer should take care of the firewall.
    - The LB server should balance the load among all servers present in the WWW cloud.
    - The LB server should have some sort of configuration file, with the help of which it is possible to configure the servers.
    - Heartbeat: with the help of which it would be possible to check if any server is down; if a server is down, requests should be passed to some other server.
    - Various load-balancing algorithms for the incoming requests.
    - Easy error handling.
    - It should be fairly possible to prioritize the incoming requests.

    Is there any already-available load balancer solution on the market that could satisfy these requirements? If not, is there any base code available with the help of which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load balancer expert is very much appreciated. Thanks a ton in advance. Cheers and regards, Kishore
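
    For orientation, a hedged HAProxy sketch showing how several of the listed requirements map onto a stock configuration (addresses and the priority rule are illustrative): the check keyword is the heartbeat, balance selects the algorithm, the config file itself is the configuration interface, and ACLs give a crude form of request prioritization by steering matched requests to a dedicated backend:

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend www
            bind *:80
            acl is_priority path_beg /api
            use_backend fast if is_priority
            default_backend web

        backend web
            balance roundrobin
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

        backend fast
            balance leastconn
            server web3 10.0.0.13:80 check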

    Read the article

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS, and it is not about using any SharePoint Services features! The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the files and close them... all in code. In previous versions of Excel, the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single-threaded and not designed for use on a server. Most of the articles about this were written before Office 2010. But now, Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions that "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is: is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high-performance, distributed-computing aspect of HPC Services for Excel... just the ability to run Excel on a server. Can that now be done? Thanks, Glen

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already get the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet, and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or pound? I like the second option just for the sake of having an instance I can log into and check, as well as for taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would the second option allow me to have only 1 public IP address and route SSH connections through port numbers to the respective instances? Thanks in advance!
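
    On the last point, yes in principle: a single public-facing instance can fan out SSH by port with DNAT. A hedged sketch (private addresses and ports are illustrative; net.ipv4.ip_forward must be enabled and the VPC route tables and security groups must send return traffic back through this instance):

        iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to 10.0.1.11:22
        iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to 10.0.1.12:22
        iptables -t nat -A POSTROUTING -j MASQUERADE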

    Read the article

  • Swapping out a hardware firewall: does the MAC address get cached?

    - by Dan
    We need to replace a hardware firewall (a Cisco PIX) and have a spare that we will use (temporarily). The firewall sits in front of a couple of web servers colocated at a data centre. The replacement will be configured with identical settings (external/internal IP addresses, configured ports, etc.). When we swap the firewalls over, will this work immediately, or will the old PIX's MAC address be cached and the new firewall not be seen until the cache is cleared? (And what is it that caches the address? Is it just the switch/router that our PIX is connected to?) The reason for asking: a few years ago I had a Smoothwall firewall in front of a lone server (the external IP of the Smoothwall was also the external IP of the web server). When I replaced the Smoothwall with a PIX, the IP address of the web server stayed the same, but it now had to be reached via the new firewall on a different IP. It took about 2-4 hours before the rest of the world could see that web server again. I'm hoping for less downtime this time!
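
    The cache in question is the ARP table of the upstream router (at layer 2 the switch's MAC table also relearns, but that happens almost instantly). A hedged trick to speed things up, where the new device or a Linux host on the same segment can send one: a gratuitous ARP announcing the external IP from the new MAC, which most routers accept as an update (arping is from iputils; the interface and IP are illustrative):

        arping -U -I eth0 208.78.1.1   # announce "this IP is now at my MAC"

    Failing that, the ARP entry ages out on its own -- often minutes, though some defaults (e.g. 4 hours on Cisco gear) would match the multi-hour outage described above.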

    Read the article

  • Which upgrade path for a disk-I/O-bound Postgres server?

    - by user41679
    Hi all, we currently have a Sun x4270 with 2 quad-core Xeon Nehalem 2.93 GHz CPUs (16 threads), 72 GB of RAM, and 16 x 10k SAS disks, split between the OS RAID 1, a RAID 10 partition for the write-ahead logs, and a RAID 10 partition for the database tables and indexes, all xfs. I'm currently evaluating which upgrade path to go down. We'll be sharding the DB at some point soon, but for now I need to focus specifically on hardware upgrades. The machine is not CPU- or memory-bound at all at the moment; IOWait alone is becoming an issue. The machine sees mostly write access, as we have a heavy caching layer; we're seeing about 300 write IOPS on average on both of the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern -- something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out, obviously, but there's also some reduction we can do, though additional space would be good. My current thoughts are:

    - An iSCSI SAN, possibly with a 10Gbit network, that has solid-state acceleration.
    - A FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?).
    - A DAS shelf (something like this: http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card, or will any PCI Express card work?

    Anything else? What would you do, from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine in a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!
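
    Whatever the hardware choice, it may be worth pinning down first whether the bottleneck is IOPS, throughput, or WAL fsync latency; a hedged starting point (iostat is from the sysstat package; extended stats at 10-second intervals):

        iostat -x 10
        # watch %util, await and w/s on the WAL and table/index devices --
        # a saturated WAL device despite modest w/s usually points at fsync
        # latency, which flash (FusionIO / X25-E) addresses most directly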

    Read the article

  • How to go about scaling a web application?

    - by phoenix24
    This is for someone who's been primarily a web-application developer and doesn't know much about scaling/scalability techniques. I'll start by stating that my application is written in Python, using Django -- a fairly standard setup. I currently use Apache 2.2 for my web server and MySQL for my database server, both running on the same VPS. Up until now it was basically a prototype, with merely 15-30 concurrent users at any given time, so I had no issues; but now, since we'll be adding more users, we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows:

    1. Now: one VPS running Apache + MySQL.
    2. Add another VPS to run only MySQL, so I'll have one web server and one DB server.
    3. Add memcached to the web server for caching data and taking some load off MySQL.
    4. Add another web server for serving all the static content.
    5. Add a VPS for load balancing (nginx/varnish), behind which would be my two web servers and then the DB server.

    Does that sound like a workable strategy? Please guide me here.
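
    Step 3 is mostly configuration on the Django side; a hedged sketch of the settings.py wiring (Django 1.3+ CACHES syntax; the address is illustrative), after which per-view or low-level caching stops repeat queries from reaching MySQL:

        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': '127.0.0.1:11211',   # memcached on the web server
            }
        }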

    Read the article

  • Apache: can't renew SSL certificate

    - by Caballero
    I have a GoDaddy SSL certificate for one website on my dedicated server running CentOS 5.3 / Apache 2.2.3. I recently renewed the certificate on GoDaddy; however, it now shows as expired on my website. I've since re-keyed the certificate and re-uploaded the domain.key, domain.crt and bundle.crt files (example file names) to the server and restarted Apache, but the certificate still shows as expired. I'm running out of clues. I've tried replacing the content of the .crt files with gibberish and restarting Apache -- it still shows the certificate as expired, even though the file shouldn't be picked up at all. I eventually rebooted the dedicated server; still no luck. I'm using the free SSL check tool http://www.digicert.com/help/, which clearly shows all green checks except one: certificate is expired. Does anyone have any idea what might be causing this? Could there be some kind of caching going on here?

    UPDATE: after running

        openssl x509 -in domain.crt -noout -enddate

    I get this output:

        notAfter=Jun 2 08:16:51 2013 GMT

    So I assume this means I have the right certificate on the server, and yet the old expired one shows on the web...
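
    The gibberish test is the big clue: if mangling domain.crt survives an Apache restart, the running vhost is almost certainly loading a different file. A hedged way to see which certificate the live server is actually serving, to compare against the file on disk (hostname illustrative):

        echo | openssl s_client -connect www.example.com:443 2>/dev/null \
            | openssl x509 -noout -enddate -subject

    Grepping the Apache configs for every SSLCertificateFile directive (httpd.conf plus anything under conf.d/) usually turns up the stale path.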

    Read the article

  • getent passwd fails, getent group works?

    - by slugman
    I've almost got my AD integration working completely on my OpenSUSE 12.1 server. I have an OpenSUSE 11.4 system successfully integrated into our AD environment (meaning we use LDAP to authenticate against the AD directory via Kerberos, so we can log in to our *nix systems as AD users, using the name service caching daemon to cache our passwords and groups). Also important to note: these systems are on our LAN, and SSL authentication is disabled. I am almost all the way there: nss_ldap is finally authenticating with the LDAP server (as /var/log/messages shows), but now I have another problem. getent passwd and getent shadow fail (they show local accounts only), but getent group works -- it shows all my AD groups! I copied over the relevant configuration files from my working OpenSUSE 11.4 box:

    - /etc/krb5.conf
    - /etc/nsswitch.conf
    - /etc/nscd.conf
    - /etc/samba/smb.conf
    - /etc/sssd/sssd.conf
    - /etc/pam.d/common-session-pc
    - /etc/pam.d/common-account-pc
    - /etc/pam.d/common-auth-pc
    - /etc/pam.d/common-password-pc

    I didn't modify anything between the two. I really don't think I need to modify anything, because getent passwd, getent shadow, and getent group all work fine on the OpenSUSE 11.4 box. Attempting to restart the nscd service unfortunately didn't do much, and neither did running /usr/sbin/nscd -i passwd. Do any of you admin gurus have any suggestions? Honestly, I'm happy I made it this far. I'm almost there, guys!
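
    When one database resolves and another doesn't, the per-database source lists in /etc/nsswitch.conf are the first suspects; a hedged sketch of what to compare on both boxes (the exact sources vary -- 'ldap' on a pure nss_ldap setup, 'sss' if sssd is actually in use, which the copied sssd.conf hints at):

        passwd: compat ldap
        group:  compat ldap
        shadow: compat ldap

    If the lines already match, nscd's separate per-database caches can mask fixes; clearing all three (nscd -i passwd; nscd -i group; nscd -i shadow) or stopping nscd entirely while testing removes that variable.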

    Read the article

  • How to find out what is causing a slow down of the application on this server?

    - by Jan P.
    This is not the typical Server Fault question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.

    Situation: We have a web application that uses Zend Framework, so it runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching. The application has a very unusual usage and load pattern. It is a mobile web application where, every full hour, a cronjob looks through the database for users that have some information waiting or an action to do, and sends this information to an (external) notification server that pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.

    Problem: In the last few weeks, usage of the application has really started to grow. In the last few days we have encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests; it just gets slower and slower and often takes 20 minutes to recover -- until the same thing starts again at the full hour. We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in: can you help me figure out what's wrong, and maybe how to fix it?

    Additional information: The server is a 16-core Intel Xeon (8 cores with hyperthreading, I think) with 12GB RAM, running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11. If any configuration information would help analyze the problem, just comment and I will add it.

    Graphs: info, phpinfo(), APC status, memcache status; collectd: processes, CPU, Apache, load, MySQL, vmem, disk; New Relic: application performance, server overview, processes, network, disks. (Sorry the graphs are GIFs and not all from the same time period, but I think the most important info is in there.)
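
    Whatever the specific bottleneck, the hourly load shape suggests a thundering herd: every user is notified at the same instant and arrives at once. A hedged mitigation is to spread the notification run across the hour by slicing users into buckets (the --slice argument and script path are hypothetical -- the cronjob script would need to learn it; % is escaped because cron treats bare % specially):

        SHELL=/bin/bash
        # every 5 minutes, notify one twelfth of the users
        */5 * * * * /usr/local/bin/send_notifications --slice $((10#$(date +\%M) / 5))

    That converts one spike into twelve small ones, which the graphs would confirm or refute quickly.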

    Read the article

  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software. This content is downloaded by users over HTTP. Now free space is coming to an end, and we are exploring different ways of extending our storage capacity. We want to do it cheaply, simply and reliably (protected from disk/server faults). Currently, we see two ways:

    - Add a couple of cheap servers with 4 disks (RAID 1?) and run some distributed filesystem on top, like GlusterFS. Pros: hopefully we will see all our disks as a single flat filesystem, just dump content into it and be done. Cons: could be tricky to configure and to handle faults.
    - Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or a video) is placed on two randomly selected servers. Pros: we don't have to deal with RAID, as content is duplicated; a single server failure does not take down any part of the content; doubled distribution capacity (as any single file can be downloaded from either of the two servers hosting it). Cons: requires some scripting for distributing content and adding/removing servers.

    Are we missing any other ways? Which of the aforementioned options seems to be the best?
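
    For the second option, the front door can stay simple; a hedged nginx sketch for the two-server case (hostnames illustrative), where a request tries one content server and falls through to the mirror if the file is missing or the box is down:

        upstream media {
            server media1.example.com;
            server media2.example.com;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://media;
                proxy_next_upstream error timeout http_404;
            }
        }

    With more than two servers, the placement script would also need to maintain the file-to-server mapping, at which point the redirect logic usually moves into the application.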

    Read the article

  • Free, simple, configurable SOCKS5 server

    - by Pooria Azimi
    I've been looking (for the past 6-7 hours) for a fast, free and configurable SOCKS5 server, and I haven't found anything that matches my needs. They are either too complicated, too bare-bones, or simply buggy as hell. This is (all) I need:

    - It must run on Linux (and also OS X, preferably).
    - It must listen on localhost:8888.
    - When my app (say wget, or curl --socks5=localhost:8888) requests http://www.google.com/search?q=asd (or any other URL -- both http and https), I want it to fetch the page not from Google's servers, but from http://localhost:4444/cached?uri=http://www.google.com/search%3Fq%3Dasd.

    Nothing more! I don't need caching or anything else. I just want a SOCKS5 server, running locally, which redirects all queries to my own (local) server. It could be written in C, C++, Python, PHP, Perl, Node.js or any other language. I don't care, as long as it supports my (very limited) needs, or I can easily change the source to make it so. Thanks a lot
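
    Since the URL rewrite happens at the HTTP layer, a SOCKS server alone can't build the /cached?uri=... form, but it can tunnel every connection to the local server, which then sees the original request line and Host header and can do the mapping itself. A hedged Python 3 sketch of such a shim -- no auth, no IPv6, minimal error handling, and HTTPS would additionally require the backend to speak TLS for the requested host:

        import socket, struct, threading

        BACKEND = ('127.0.0.1', 4444)   # the local cache server
        LISTEN  = ('127.0.0.1', 8888)

        def pipe(src, dst):
            try:
                while True:
                    data = src.recv(8192)
                    if not data:
                        break
                    dst.sendall(data)
            except OSError:
                pass
            dst.close()

        def handle(client):
            client.recv(262)                 # greeting; accept "no auth"
            client.sendall(b'\x05\x00')
            ver, cmd, _, atyp = struct.unpack('!BBBB', client.recv(4))
            if atyp == 1:
                client.recv(4)               # discard requested IPv4 address
            elif atyp == 3:
                client.recv(client.recv(1)[0])   # discard requested hostname
            client.recv(2)                   # discard requested port
            client.sendall(b'\x05\x00\x00\x01' + b'\x00' * 6)  # "succeeded"
            backend = socket.create_connection(BACKEND)
            threading.Thread(target=pipe, args=(backend, client)).start()
            pipe(client, backend)

        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN)
        srv.listen(16)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,)).start()

    Everything the client asked for is discarded on purpose; the backend at localhost:4444 decides what to serve based on the tunneled HTTP request.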

    Read the article

< Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >