Search Results

Search found 7777 results on 312 pages for 'resources'.


  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all. To serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically, it works like fusefs: every single file is kept in Mongo GridFS. In theory, I do:

        mount mongodbfs /mountPoint mongodb://localhost

    then when I say:

        touch /mountPoint/test.txt

    this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file, so we can throw hundreds of servers at it and no useradd will be necessary. I'm not planning to include all the features of a FS, just the ones we need. My question is: how do I start my quest in finding resources, books, links, people, and developers who'd help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.
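    For a first proof of concept, the storage half can be prototyped without touching the kernel at all. Below is a minimal sketch of keeping a file plus its uid/gid/perms in GridFS, assuming a local mongod and the pymongo driver; the database name and helper function are invented for illustration.

        # Sketch: store and fetch a file with ownership metadata in GridFS.
        import gridfs
        from pymongo import MongoClient

        db = MongoClient("mongodb://localhost")["fsdb"]   # hypothetical DB name
        fs = gridfs.GridFS(db)

        def store_file(path, data, uid=0, gid=0, mode=0o644):
            # GridFS keeps arbitrary metadata next to the chunks, which is
            # where uid/gid/perms would live instead of a local /etc/passwd.
            return fs.put(data, filename=path,
                          metadata={"uid": uid, "gid": gid, "mode": mode})

        store_file("/mountPoint/test.txt", b"hello")
        f = fs.get_last_version("/mountPoint/test.txt")
        print(f.read(), f.metadata)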


  • Windows 7: L10N mechanics

    - by John Sonderson
    I have a localized version of Windows 7. I can't figure out where Windows gets the names for files and directories on the system. For instance, consider the following (default) files:

        > cd C:\Users\Public\Pictures\Sample Pictures
        > dir
        Chrysanthemum.jpg
        Desert.jpg
        ...

    When I view these files in the default file explorer, I see these names:

        Crisantemo.jpg
        Deserto.jpg
        ...

    This seems to imply that each file can somehow be assigned a localized name somewhere. However, I cannot figure out how. I would appreciate it if someone could shed some light on this issue. Thanks.

    UPDATE/EDIT: The desktop.ini file in the folder containing Chrysanthemum.jpg contains the following entries. The .dll files used to translate the various resources are unfortunately not human-readable, and I have no clue as to how they could be generated for other, user-created files to be translated, but they serve the purpose and solve the mystery which led to this post. Thanks.

        [LocalizedFileNames]
        Chrysanthemum.jpg=@%systemroot%\system32\SampleRes.dll,-101
        Desert.jpg=@%systemroot%\system32\SampleRes.dll,-102
        Hydrangeas.jpg=@%systemroot%\system32\SampleRes.dll,-103
        Jellyfish.jpg=@%systemroot%\system32\SampleRes.dll,-104
        Koala.jpg=@%systemroot%\system32\SampleRes.dll,-105
        Tulips.jpg=@%systemroot%\system32\SampleRes.dll,-106
        Lighthouse.jpg=@%systemroot%\system32\SampleRes.dll,-107
        Penguins.jpg=@%systemroot%\system32\SampleRes.dll,-108
        [.ShellClassInfo]
        LocalizedResourceName=@%SystemRoot%\system32\shell32.dll,-21805


  • Under what conditions will sendmail try to immediately resend a message instead of waiting for the standard requeue interval?

    - by Mike B
    CentOS 5.8 | Sendmail 8.14.4

    I used to think that if sendmail experienced a temporary (400-class) error during delivery, it would place the message in a deferred queue (e.g. /var/spool/mqueue) and retry an hour later. For the most part, that appears to be the case. But every now and then, I'll notice log entries like this (emails/domains renamed to protect the innocent :-) ):

        Dec 5 01:43:03 foobox-out sendmail[11078]: qBE3l7js123022: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=124588, relay=exbox.foo.com. [10.10.10.10], dsn=4.0.0, stat=Deferred: 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
        Dec 5 01:53:34 foobox-out sendmail[12763]: qBE3l7js123022: to=<[email protected]>, delay=00:10:31, xdelay=00:00:00, mailer=relay, pri=214588, relay=exbox.foo.com., dsn=4.0.0, stat=Deferred: 452 4.3.1 Insufficient system resources
        Dec 5 02:53:35 foobox-out sendmail[23255]: qBE3l7js123022: to=<[email protected]>, delay=01:10:32, xdelay=00:00:01, mailer=relay, pri=304588, relay=exbox.foo.com. [10.10.10.10], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    Why did sendmail try again just 10 minutes after the first attempt, and then wait another hour before trying again? If this is expected behavior, what scenarios will cause this faster requeue interval to occur?
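    On CentOS, the baseline requeue interval normally comes from the -q flag the sendmail queue runner was started with, which is worth confirming before digging deeper. A quick check (the paths are the stock CentOS ones; treat this as a starting point, not a diagnosis):

        # The -q flag sets how often the queue runner wakes up (e.g. -q1h):
        ps ax | grep '[s]endmail'
        # On CentOS the interval is usually set in /etc/sysconfig/sendmail:
        grep QUEUE /etc/sysconfig/sendmail    # e.g. QUEUE=1h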


  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510, which I have set up to act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc. - and I'm now working on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN. What I would like to do is standardize on an authentication method for all VPN, then switch to the Cisco's IPsec thick-client VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps, and we already use DotNetOpenAuth to authenticate users for a couple of internal services. I'd like to be able to do the same thing for thin- and thick-client VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers, for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much, much better than standalone or even LDAP auth. What's really practical here?


  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and over the past few days we've noticed random slowdowns. When I'm logged into the server over remote desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal (normal being 1.5 GB free in the case of memory). This is what I see logged into the server, and then users start calling saying things are slow, timing out, and failing - anything to do with our server. There are no events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time - different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it). I would've thought it was just the server, but a family member's home PC (entirely separate) running Windows XP SP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?


  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. Writes will burst for a moment near expected line speed, and then sit idle for several seconds, with very little traffic measured (in the low kB/sec). I/O wait peaks on write. When reading from the NFS mount, the system operates at expected speeds (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS), and it persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue. That share is not in regular use and thus not consuming resources.

    NFS server:

        # cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)

        # cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        # cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=

    NFS client:

        # cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2

        # cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        # cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
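    To put numbers on the burst-then-stall pattern, a synchronous write test on both the NFS mount and the underlying JFS volume makes for a fair comparison. A rough sketch (the paths match the mounts quoted above; conv=fdatasync forces the data to actually hit the disk before dd reports a speed):

        # On the client, against the NFS mount:
        dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=512 conv=fdatasync
        # On the server, against the JFS filesystem directly:
        dd if=/dev/zero of=/data2/ddtest bs=1M count=512 conv=fdatasync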


  • Router vs switch in a LAN [closed]

    - by servernewbie
    If I have a LAN and connect it with a switch, I understand it uses a CAM table to forward frames at layer 2 (by saving MAC-to-port relations). So far, all good. However, when using a router for a LAN (ONLY for a LAN, not to connect it to "the outside" WAN/internet/etc.), I get a bit confused as to how it internally processes packets. I would first split this into two router scenarios:

    Router with built-in switch: In this scenario, I would expect it to act exactly like a switch with a CAM table internally. This would probably benefit speed a bit (guessing here?) compared to the next option.

    Router without built-in switch: Here is where I get confused. If hostA wants to send a packet to hostB, it will ARP to find hostB's MAC address and send it there. Now, if we had a switch (the above scenario), this would be easy. But how does it work in a router WITHOUT a switch? If I had to guess, hostA would send an Ethernet frame with hostB's MAC address onto the wire. The router would fetch the frame (even though the router has another MAC address, it would still pick up this frame even though it carries hostB's MAC address). It would strip the Ethernet frame header, check the IP, and then check its own internal ARP table again for the MAC address. Now, this would seem like a waste of resources compared to a router with a built-in switch. But maybe it does not work like that at all. Does it also contain a CAM table? If that is true, what would the difference between these two routers really be?


  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems; systems that my own server will inevitably interact with at some point. Hence, my questions:

        - What things should be kept in mind and double-checked when setting up an email server?
        - What resources are available for checking whether my email server is set up correctly?

    I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these."

    Some things I've discovered myself:

        - Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup of the peer IP address when receiving. Matching a reverse lookup with a follow-up forward lookup is probably employed to weed out open relays run through malware on home networks.
        - Make sure the user in the From address exists. The From address is easily spoofed. A receiving mail server may try to contact the mail server in the From domain and see if the From user actually exists.
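    As an illustration of the forward/reverse DNS check described above, the pair of lookups a receiving server typically performs can be reproduced by hand (the IP and hostname here are placeholders):

        # Reverse lookup of the connecting IP...
        dig -x 198.51.100.25 +short        # should yield e.g. mail.example.org.
        # ...followed by a forward lookup of that name:
        dig +short mail.example.org        # should return 198.51.100.25 again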


  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests and throttle bandwidth per IP/client on a single Apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (which just happened the other night). I'd like to limit the outgoing transfer speed overall for this site, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests, so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules to just the media files). We're running Ubuntu 9.04 on all the servers and have two Apache/PHP servers being load-balanced via round robin by a Squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps; I just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
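    For what it's worth, newer Apache (2.4) ships mod_ratelimit, which covers the per-connection throttling half of this and can be scoped to file types. This is only a sketch of that approach; the Ubuntu 9.04 boxes described here ship Apache 2.2, where a third-party module such as mod_bw would be needed instead:

        # Apache 2.4 sketch: throttle only large media files, per connection.
        <IfModule mod_ratelimit.c>
            <FilesMatch "\.(mp4|avi|mov|zip)$">
                SetOutputFilter RATE_LIMIT
                SetEnv rate-limit 512   # KiB/s per connection
            </FilesMatch>
        </IfModule>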


  • Sending email with exim and external sender address

    - by Tronic
    Hi. I have the following problem: I want to send emails from a Rails webapp. I set up an Exim server, and according to the logs the sending works, but the emails never actually arrive. I had the same problem with another ISP. The sender address is hosted on another mail server, at another ISP. I think the problem is that sending fails because the sender address isn't hosted on the same server. Do you have any advice on this? The Exim logs tell me the following:

        2011-01-01 14:38:06 1PZ1eo-0000Ga-38 <= <> R=1PZ1eo-0000GY-1p U=Debian-exim P=local S=1778
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.131] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
        2011-01-01 14:38:08 1PZ1eo-0000Ga-38 Completed

    [email protected] is the external sender address! Thank you!
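    One common reason mail with an externally hosted sender address is silently dropped is that the sending box isn't listed in the sender domain's SPF record. Whether that applies here depends on the receiving side, but as an illustration, a TXT record authorising an extra outgoing server looks like this (hostname and IP are placeholders):

        example.com.  IN  TXT  "v=spf1 mx a:webapp.example.com ip4:203.0.113.25 ~all"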


  • A web app provider has asked for specific browser config

    - by Matthew
    They have asked us to turn off caching in our browsers. I was aghast that they would ask such a thing. I said to them:

        To avoid caching, it is best practice to use:

            <meta http-equiv="pragma" content="no-cache" />
            <meta http-equiv="cache-control" content="no-cache" />

        This should work across all browsers.

    Their reply was:

        We need to refresh javascript at runtime, this will not help us – any more ideas?

    I replied:

        Unsure what you mean by "refresh javascript at runtime". If you are using Ajax, browser caching can affect the XMLHttpRequest open method. Adding these meta tags to the source has fixed this for me in the past. Browser caching only caches resources; it should have no effect on site scripting. These meta tags will bypass browser caching.

    This is a reasonable request, isn't it?
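    For completeness, the same no-cache intent can also be expressed as real HTTP response headers on the server, which (unlike meta tags) apply to every fetched resource, including scripts requested via XMLHttpRequest. A sketch for Apache with mod_headers enabled; the /webapp path is a hypothetical scope, not part of the original exchange:

        <IfModule mod_headers.c>
            <Location /webapp>
                Header set Cache-Control "no-cache, no-store, must-revalidate"
                Header set Pragma "no-cache"
                Header set Expires "0"
            </Location>
        </IfModule>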


  • How can I install git on RHEL 6?

    - by JR.Xyza
    I'm trying to install Git on a RHEL 6 development server. I have experience with Ubuntu, but this is my first time working with RHEL (I'm a developer trying to fill in for a recently departed Linux sysadmin). I've set up two additional repos (EPEL and IUS) for other packages needed for a Magento install. Output of yum repolist:

        [root@box]# yum repolist
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        repo id   repo name                                        status
        epel      Extra Packages for Enterprise Linux 6 - x86_64   7,841
        ius       IUS for RHEL 6Server - x86_64                    135

    Most of what I've read indicates a simple 'yum install git' should work with EPEL enabled, but I get the dreaded:

        [root@box]# yum install git
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        Setting up Install Process
        No package git available.
        Error: Nothing to do

    Same goes for git-daemon, etc. I've tracked down a number of git RPMs, such as this one at RepoForge, but they require a train of dependencies that seems to never end. I've also toyed with compiling it manually, but the rabbit hole to get make working seems to go even deeper. I'm convinced there's a simple oversight somewhere keeping me from being able to install from the EPEL repo, but I'm a rookie at all this. Thanks in advance for help/pointers/additional resources.
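    A few quick checks that usually narrow this kind of thing down; nothing here is a guaranteed fix, just the places a stale cache or a disabled repo tends to hide:

        # Is the package visible in EPEL at all?
        yum --enablerepo=epel list 'git*'
        # Rule out a stale metadata cache:
        yum clean all && yum makecache
        # Confirm the repo is actually enabled:
        grep -A3 '^\[epel\]' /etc/yum.repos.d/epel.repo    # look for enabled=1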


  • clocksource tsc unstable

    - by amorfis
    OK, now I have a real server fault ;) Some time after booting (about one minute), my server hangs. All I can do is a hard reset. After the restart, I find this in /var/log/kern.log:

        Jul 29 22:38:57 leonidas kernel: [   90.729598] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   90.731252] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:57 leonidas kernel: [   91.201461] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   91.201482] longhaul: Disabling ACPI C3 support.
        Jul 29 22:38:57 leonidas kernel: [   91.204230] longhaul: Disabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.416133] longhaul: Failed to set requested frequency!
        Jul 29 22:38:58 leonidas kernel: [   91.416152] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.960048] Clocksource tsc unstable (delta = -105611479 ns)

    I found some resources on the net which said to change the clocksource or disable ACPI. I tried disabling ACPI, but it didn't help (though I noticed it took longer before hanging). I can't change the clock to hpet, because my system doesn't have one. Output of cat /sys/devices/system/clocksource/clocksource0/available_clocksource:

        acpi_pm jiffies tsc

    My system is Ubuntu Server on VIA Epia hardware.
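    Since acpi_pm shows up in the available list, switching to it is at least testable. A sketch of both the runtime and the boot-time way (the runtime change is lost on reboot, and whether it actually helps on this Epia board is an open question):

        # Switch at runtime (as root):
        echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
        # Or make it permanent via a kernel boot parameter in GRUB:
        #   clocksource=acpi_pm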


  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I install a private Trac instance, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This gives me a combined wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new, clean machine, I have a much easier time bringing it up. And yet I am always amazed when I see others just install this, delete that, run this, set up this config, ... without seeming to use any way of actually noting what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
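    At the low-tech end of the spectrum, the shell itself can keep the captain's log. A bash sketch that appends every command with a timestamp to a per-host file (purely illustrative; the log path is arbitrary):

        # In ~/.bashrc: log each command with a timestamp as you go.
        export PROMPT_COMMAND='echo "$(date "+%F %T") $(history 1)" >> ~/admin-log.$(hostname)'
        # Alternatively, record an entire session verbatim:
        script -a ~/session-$(date +%F).log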


  • Change A Password

    - by Thomas
    I have a non-domain machine that I regularly use with our company's domain resources over VPN. I switched to Windows 8 (a fresh install), and the "Change a password" option went away from the Ctrl-Alt-Del screen. I can't seem to find anything on Google about this subject, or a way to access that password-change dialog. I tried running the .reg file from http://www.sevenforums.com/tutorials/63014-ctrl-alt-del-screen-add-remove-change-password.html with no luck. I also tried to set "Remove Change Password" to Disabled via gpedit.msc. I could change the password from my domain laptop, but I like to do it on this machine because it updates all my saved copies of those credentials. My local account is tied to my Hotmail account, if that matters.

    Updates: It is an administrator account. I apologize for stating this was an upgrade; it was a fresh install to a different drive. 64-bit Pro install.

    Bounty's almost up: if someone can just confirm that "Change a password..." should or should not be present on a non-domain, Live-tied Win8 install, I'll be satisfied that I can or cannot expect to fix it.


  • F5 BIG_IP persistence iRules applied but not affecting selected member

    - by zoli
    I have a virtual server with two iRules (see below) assigned to it as resources. From the server log it looks like the rules are running and they select the correct member from the pool after persisting the session (as far as I can tell from my log messages), but the requests are ultimately directed somewhere else. Here's what the rules look like:

        when HTTP_RESPONSE {
            set sessionId [HTTP::header X-SessionId]
            if {$sessionId ne ""} {
                persist add uie $sessionId 3600
                log local0.debug "Session persisted: <$sessionId> to <[persist lookup uie $sessionId]>"
            }
        }

        when HTTP_REQUEST {
            set sessionId [findstr [HTTP::path] "/session/" 9 /]
            if {$sessionId ne ""} {
                persist uie $sessionId
                set persistValue [persist lookup uie $sessionId]
                log local0.debug "Found persistence key <$sessionId> : <$persistValue>"
            }
        }

    According to the log messages from the rules, the proper balancer members are selected. Note: the two rules cannot conflict; they are looking for different things in the path, and those two things never appear in the same path.

    Notes about the server:

        - The default load-balancing method is RR.
        - There is no persistence profile assigned to the virtual server.

    I'm wondering whether this should be adequate to enable persistence, or alternatively, do I have to combine the two rules and create a persistence profile with them for the virtual server? Or is there something else that I have missed?


  • What is best configuration settings for Wordpress and MySQL on Win2008 + IIS7 stack?

    - by holiveira
    I currently have four blogs that use WordPress, running at a shared-hosting company. These blogs get a considerable number of visits, and I'm constantly receiving warnings from the hosting company saying that I'm consuming too much server CPU. Considering that I have a dedicated server at another company with plenty of idle resources (a quad-core 2.5GHz Xeon and 8GB of RAM, running Win2008), I'm planning to move the blogs to this server in order to have some more freedom. I currently use this server to host some web applications using ASP.NET and SQL Express. I've installed a blog to test and it worked fine, but some issues appeared and raised some questions in my mind:

        - How do I properly set the permissions on the folders used by WordPress plugins? I mean, what permissions should I set for the IIS user in certain folders so that the plugins work correctly?
        - What's the best caching plugin to use, considering this is a Windows server? At the previous hosting company I used WP Super Cache, but that was a Linux stack. Or should I ignore the caching plugins and use the dynamic caching feature of IIS7?
        - How can I optimize the MySQL server running on this machine (especially the settings regarding memory and caching)?
        - How can I protect the admin folders against hacker attacks?

    I know some people will advise me not to run WordPress on a Windows stack, but that's my only choice. I wouldn't even know where to start managing a LAMP stack, and I don't have the time to do so nor the money to rent another server.
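    On the MySQL question above, the handful of settings that usually matter first on a box with 8GB of RAM are the InnoDB buffer pool and the caches. The values below are only a starting-point sketch for a my.ini, not tuned numbers; they assume the blogs' tables are InnoDB and that MySQL shares the machine with IIS:

        [mysqld]
        # Biggest single lever for InnoDB: cache data and indexes in RAM.
        innodb_buffer_pool_size = 1G
        # WordPress opens many short-lived connections; keep headroom modest.
        max_connections         = 150
        table_open_cache        = 512
        # Query cache helps read-heavy blogs (on MySQL 5.x era servers).
        query_cache_type        = 1
        query_cache_size        = 64M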


  • cant remove index.php from url in codeigniter

    - by Ashiq
    I am new to the CodeIgniter framework. I want to remove index.php from the URL and have tried many times, but it's not working. Here is my .htaccess file:

        RewriteEngine on
        RewriteBase /test/
        RewriteCond $1 !^(index\.php|resources|robots\.txt)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ test/index.php/$1 [L,QSA]

    I also changed $config['index_page'] = ''; but when running it I get an error message:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator at [email protected] to inform them of the time this error occurred, and the actions you performed just before this error.
        More information about this error may be available in the server error log.

    Here is my Apache error log:

        [Sat Jan 05 16:59:53.265625 2013] [core:error] [pid 3976:tid 1152] [client ] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    Please help me solve this. Thanks.
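    The redirect-loop error suggests the rewrite target is re-matching the rule. One commonly posted variant worth trying, assuming the app really does live under /test/: with RewriteBase already set to /test/, the target should not repeat the test/ prefix. A sketch, untested here:

        RewriteEngine on
        RewriteBase /test/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Relative to RewriteBase, so this resolves to /test/index.php/...
        RewriteRule ^(.*)$ index.php/$1 [L,QSA]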


  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for, so guidance on that may be just as valuable as an answer.

    I'm looking for a way to visually compare the activity of two applications (in this case a webserver with PHP communicating with the system, MySQL, network devices, etc.) such that I can compare the performance at a glance. I know there are tools to generate data dumps from benchmarks for Apache, and some available for PHP tracing, which you can dump and analyse, but what I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long did it take, how much memory did it consume, how can that be represented visually in a call stack) and present it graphically, as if it were a topology or layered visual with different elements of the system calls occupying different layers. A typical visual might consist of (e.g. using swim diagrams as just one analogy):

        Network (details here relevant to network diagnostics)
            |  ^ back out
            v  |
        Linux (details here related to firewall/routing diagnostics)
            |  ^ back to network
            v  |
        Apache (details here related to the web request)
            |  ^ response back to apache
            v  |
        PHP (etc.)
            PHP ----> other accesses to php files/resources ----
            |  ^
            v  |
        MySQL (total time)
            Each call listed + time + tables hit/records returned

    My aim would be to be able to 'inspect' a request or range of requests over a period of time, to see what constituted the activity at that point in time, and to trace it from beginning to end as a diagnostic tool. Is there any existing work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons, and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity-vs-time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks!

    FROM COMMENTS:

    > XHProf in conjunction with other programs such as the Percona Toolkit
    > (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run apache
    > with httpd -X & (single-threaded debug mode, in the background), then
    > attach with strace -> kcachegrind


  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to more than one server, and we need to share sessions between the servers. We have been looking into different solutions; one is memcached, using Memcached as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine and let all webservers access all the other servers' memcached instances, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet; that's a later project. We need this running, and we need it running now, and we will load-balance with DNS for a starter.)

    But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever reason, I don't want the users to just lose their sessions and have to start from the beginning. That's why we need some kind of replication, which Memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which has multi-master replication of memcached, which is great and is what I want. But does it work with 2 machines? Say 3, 5, 10, for future scaling? I also looked into Redis (http://redis.io/), which also seems great, but is a bit more "shaky" with the PHP session-handler support, and has no multi-master replication.

    The thing is that I'd like to use memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
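    For reference, pointing PHP's session handler at a pool of memcached nodes is just a php.ini change. This sketch assumes the php-memcached extension (not the older php-memcache, which uses a different save_path syntax); the hostnames are placeholders:

        ; php.ini: store sessions in memcached
        session.save_handler = memcached
        session.save_path    = "web1:11211,web2:11211"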


  • Why is my cron daemon is being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        211        812          0          0          0
        -/+ buffers/cache:        211        812
        Swap:            0          0          0

    I also tried removing and reinstalling the cron package using apt-get.

    Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
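    If the audit subsystem is available inside the container (not a given on OpenVZ), it can name the process that sends the SIGKILL. A sketch of that approach, using the auditd userspace tools:

        # Log every kill() syscall, tagged for later searching:
        auditctl -a always,exit -F arch=b64 -S kill -k cron-killer
        # After cron dies, see which process fired the signal:
        ausearch -k cron-killer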


  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic interests. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at certain times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already provided). The hardware is also limited to a usual 4-core computer (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet). With all these limitations, I have to choose the software (web server, database, etc.) appropriate to achieve this goal. I decided to create the site in PHP. Please help me by answering the following questions based on your expertise (my prime candidate after googling is given in parentheses):

        1. Which web server is fastest, most stable, and most secure when implemented and optimized for PHP, and why? (nginx)
        2. Which PHP accelerator is fastest, most stable, and most compatible with the selected web server, and why? (APC with Zend Optimizer+)
        3. Which database is fastest, most stable, and most secure when implemented and optimized for the selected web server and PHP accelerator? (MySQL)
        4. Are there any errors in my assumptions so far? If so, please enlighten me.
        5. Is there anything else I need to know in order to achieve this goal? If so, please enlighten me.

    I understand that performance also depends on the implementation of the source code, so I assume the site will be built as efficiently as possible (e.g. using AJAX).


  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD Phenom X3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS. The client machines take an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file and having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document. I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon, and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally. I have also considered that the problem is hard-drive-related; however, diagnostic programs reveal no drive problems. Western Digital DLG, SpinRite, and SMARTUDM show no abnormal characteristics; the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated.

    Requested information:

        Output of 'free': hxxp://pastebin.com/mfsJS8HS (stupid spam filter)

        The command 'hdparm -d /dev/sda1' reports:
            HDIO_GET_DMA failed: Inappropriate ioctl for device
        (the BIOS is set to AHCI - I probably should have mentioned that)
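    When the load average spikes without CPU use, it's usually processes stuck in uninterruptible I/O wait; watching per-disk utilisation during an episode can confirm that. A sketch using the sysstat tools (the package name on Ubuntu is sysstat):

        # Extended device stats every 5 seconds; watch %util and await on sda/sdb:
        iostat -x 5
        # Count processes blocked in uninterruptible sleep (state D):
        ps -eo state,pid,cmd | grep '^D'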


  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ computer programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from Google. Here's the situation: we have a public library with 20 PCs on a LAN, and also public wifi access. Previously we were running all of this on one ADSL connection, and people complained about low speeds. We hired a networking company to set up a Cisco dual-WAN router for us and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the wifi users split the rest. The results were far worse than what we expected, and all we got from the company was excuses; they've since washed their hands of us. During busy periods, net performance on the LAN PCs is so poor that attaching files to Gmail etc. often times out and fails: far from the "guaranteed amount of bandwidth each" that we hoped for! Sometimes it feels like performance is worse than before, when we had one ADSL link and an unconfigured router. Anyway, surely this is a problem encountered a million times over across the world? (Sharing internet across many users effectively.) I admit to not even knowing the right jargon to google for (load balancing?). I'd appreciate any links to resources/guides that might help me get a better understanding of the problem/solutions, and perhaps some stories of your own experience in solving similar problems. This will help us evaluate and negotiate with network consultants in the future. If it's relevant, our router config contains a section "policy-map" with "bandwidth percent" for each class of user (LAN, wifi) and "fair queue".
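    For orientation when reading that config, the IOS building blocks mentioned (policy-map, bandwidth percent, fair-queue) fit together roughly like this; the class names, ACL name, interface, and percentage below are invented for illustration, not taken from the library's actual config:

        ! Sketch only: classify LAN PC traffic, guarantee it a share under
        ! congestion, and let everything else (wifi) fair-queue the remainder.
        class-map match-any LAN-PCS
         match access-group name LAN-ACL
        policy-map WAN-OUT
         class LAN-PCS
          bandwidth percent 60
         class class-default
          fair-queue
        interface ATM0
         service-policy output WAN-OUT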


  • Can't run utilities/.exe's that use the network from a [DFS] windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources. This works just fine under Windows Server 2003. For example:

        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1

    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can this be enabled via GPO or in a fashion that can be scripted during automated builds?

    Edit: The commands/tools do work just fine when run from local drives.

    Edit 2: wget example:

        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15--  http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'

    wget can neither use DNS to resolve the IP, nor can it use HTTP if given an IP directly.

    Edit 3: The problem seems to be tied to DFS/DFS shares. The tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.

