Search Results

Search found 11015 results on 441 pages for 'explain plan'.

Page 340/441 | < Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >

  • High memory utilization with firebird + windows server 2008 r2

    - by chesterman
    I have a Windows Server 2008 R2 (64-bit) machine with 4GB of physical RAM running a 64-bit installation of Firebird 2.1.4.18393_0. After a while, Task Manager shows that all memory is in use, yet the memory of all processes added together doesn't come to even half of it. Unfortunately, the machine then starts swapping. Using RAMMap, I can see that my entire database file is mapped into memory. This only occurs on Windows Server 2008 R2 and Windows 7 64-bit; whether I use the 32-bit or 64-bit Firebird installation makes no difference. How can I prevent this? Why does it only occur on W2K8R2 and W7? Thanks in advance. UPDATE: Apparently this is caused by the file system cache consuming all the memory. Microsoft's documentation explains that this WAS an issue in Windows XP, 2003, Vista and 2008, but that it was solved in 7 and 2008 R2, and adds that the issue is more common on 64-bit hosts (http://support.microsoft.com/kb/976618). There are tools (DynCache, SetCache and the Get/SetSystemFileCacheSize calls in the Windows API) that let me put an upper limit on the memory used by the file system cache, but the documentation argues that I should not do this on W2K8R2 because it will severely impact overall system performance. I tried anyway: performance remained just as bad and the page file was still being used, even though there is now more than 1GB of free memory.
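
    For what it's worth, the SetSystemFileCacheSize API mentioned above can be exercised from a few lines of Python via ctypes. This is only an illustrative sketch, not a recommendation: it assumes the process runs elevated with SeIncreaseQuotaPrivilege available, and the 1MB floor and 1GB cap are arbitrary example values.

        # Hedged sketch: cap the Windows file system cache through the Win32 API.
        # The limits below are arbitrary examples, not tuned recommendations.
        import ctypes

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        FILE_CACHE_MAX_HARD_ENABLE = 0x1          # enforce the maximum as a hard limit

        ok = kernel32.SetSystemFileCacheSize(
            ctypes.c_size_t(1 * 1024 * 1024),     # minimum file cache size (1MB example)
            ctypes.c_size_t(1024 * 1024 * 1024),  # maximum file cache size (1GB example)
            FILE_CACHE_MAX_HARD_ENABLE)
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())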

    Read the article

  • OpenVPN - Cannot browse ipv4 websites

    - by user1494428
    I have set up an OpenVPN tunnel on my VPS (OpenVZ - Ubuntu 12.04). The problem is that I can only browse websites which support IPv6, like Google. http://whatismyv6.com/ reports that I have an IPv6 address, so I guess this is the problem. Server configuration:

        dev tun
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/server.crt
        key /etc/openvpn/easy-rsa/keys/server.key
        dh /etc/openvpn/easy-rsa/keys/dh1024.pem
        push "route 10.8.0.0 255.255.255.0"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        push "redirect-gateway def1"
        comp-lzo
        persist-tun
        persist-key
        status openvpn-status.log
        log /var/log/openvpn.log
        verb 3

    Client configuration:

        client
        remote xx.xx.xx.xx 1194
        dev tun
        comp-lzo
        ca ca.crt
        cert client1.crt
        key client1.key
        redirect-gateway def1
        verb 3

    I have configured NAT with this command:

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to xx.xx.xx.xx

    Can someone explain how I can make this work (forcing IPv4?)? I had the same problem with another VPS, and I also tried from another client (all Windows 7).

    Read the article

  • Sharepoint Central Administration stuck / high CPU usage

    - by johnnyb10
    I'm using WSS 3 and I recently added a new web application to my SharePoint server. After adding it, I wasn't able to open the Central Administration site. I also noticed a w3wp.exe error (Event ID 1000) in the Event Viewer. The situation now is that the w3wp.exe process is hovering around 50% CPU usage continuously. I installed a program called IIS Peek, and it shows continuous GET requests on the Central Administration site; this happens even if I stop the Central Administration site in IIS. The IP address identified in the GET requests is my workstation's, which is what I used to attempt to access Central Administration after I created the new web application. Can someone explain what's going on and how I might fix it? It seems as if my computer tried to access Central Administration and then hung, but the page requests that were happening at the time are somehow continuing over and over again. So my two problems are the inability to access Central Administration and the CPU usage of w3wp.exe, which I'm assuming are two symptoms of the same problem. I'd like to know if there's anything I can do besides restarting IIS, because we have clients accessing other sites on this server. Thanks.

    Read the article

  • How can I remedy the always-on-top window problem?

    - by GateKiller
    Sorry for the vague title, but this one is hard to explain, so bear with me please. I'm using Windows Vista at work for web development, and sometimes when I click or Alt-Tab to a background window, the window gets focus but is not brought to the front. In order to bring the window to the front, I have to click on the application's border (when the resize cursor appears), and the window will then jump to the front. I've had this problem for about a year now and it happens at least a dozen times a day, but it doesn't do this all the time - it seems random. I hope I have explained the issue fully (and you've understood it) and would appreciate any constructive answers or comments to solve this problem. Example: if I Alt-Tab from Google Chrome to Notepad and this problem randomly occurs, Google Chrome will remain in front of Notepad; however, I will be able to type text into Notepad while its window is behind Google Chrome. Clicking on Notepad's content area will not bring it to the front, but clicking its window border will. Video example: http://vimeo.com/19388998 - in this video, I clicked from Google Chrome to UltraEdit and Chrome stayed in front, but as you can see, I can still type in UltraEdit. I'm starting to believe this could be a bug in Google Chrome, so I'll continue to watch whether it happens between other applications.

    Read the article

  • Storing Windows Updates for reuse

    - by Saiyine
    At work we update lots of computers using Windows Update - Windows XP and 7 all day long, rarely some Vista. We do it through a corporate proxy, as connecting the machines to a domain to build a WSUS server is out of the question, so we download about two gigabytes of the very same updates every day. I've tried WSUS Offline. It's pretty complete, but when it finishes it's common to still be missing hundreds of megabytes of updates, because its intention is not to fully update a system but to install the critical updates, as the developers explain in the forums. Now I'm trying Autoupdater. It's far worse, with poor support for non-English Windows XP, but at least it gives the option to install non-critical updates on Windows 7. It still misses hundreds of megabytes of updates after fully updating the system. And finally, neither of them installs the driver-related updates for Windows 7, so at most they save us a couple of hundred megabytes and a reboot (with the associated login to the computer and to the proxy) out of three or four. So, is it possible to somehow extract the installed updates from a Windows 7 system so we don't have to download the same updates again and again, at least for machines with the same hardware? Or even better, build a generic package with all the updates?

    Read the article

  • Why is it a bad idea to use a customer's email as the from address?

    - by Crab Bucket
    I've got an application that emails users once they have filled in a form. It uses [email protected] as the from address. The customer wants it to use the email from the form as the from address, which could be anything. I have been told that this is a bad idea due to spoofing/blacklisting and spam. I feel really vague about the exact reason why this is a bad idea, particularly as I've got to try to talk the client out of it. Can someone explain to me why this is a bad idea? Interestingly, the client has used a Gmail account as the from address as a demo, which not only works fine but has enabled the application to start sending emails (it wouldn't do it before with an email which was [email protected]). Erm - what is going on? I'm told one thing and the opposite works. Sorry - I know this is basic, but I couldn't find anything in a Google search, largely, I think, because I'm having trouble even framing the question. EDIT: Thank you everyone - great answers. Interestingly, the server sending the email and the mailbox it is going to are both behind the same firewall, so the client says they are unconcerned about spam. Oh well.

    Read the article

  • Batch Script to Trim lines in text to first 30 or 50 characters only

    - by SuperUserMan
    I am new to scripts, and I find it really difficult to understand the "for" command (especially with the tokens and delimiters etc.). That said, I think the for command can be used to do what I need. If it can't and there is an easier way, ignore my ignorance :( Say I have multiple lines in a text file abc.txt, with each line starting and ending with " (quotes). E.g. a file of 3 lines:

        "hey what is going on @mike220. I am working on your car. Its engine is in very bad condition"
        "Because if you knew, you'd get shredded and do it with certainty"
        "@honey220 Do you know someone who has busted their ass on a diet only for results to come to a screeching halt after a few weeks"

    How can I trim each line, within the quotes, to a fixed length, say 30 or 50 or 100 characters (including spaces)? I want to enter the number of characters in the batch file and have it trim accordingly, producing a file def.txt with the trimmed lines still within quotes. Say I enter 50; the results of the above example should be:

        "hey what is going on @mike220. I am working on you"
        "Because if you knew, you'd get shredded and do it"
        "@honey220 Do you know someone who has busted their"

    Thanks. P.S. If you use the for command, kindly explain it. EDIT: Though the answer provided worked, there is an issue with non-English text: I am getting garbled text in the output file for non-English text in the input file. Any help, @barlop? Here is the non-English text (1 line): "???? ?? ???? ?? ???? ???? ??? ?????? ???"
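
    If a tool other than batch happens to be acceptable, the same trimming can be sketched in a few lines of Python, which also sidesteps the code-page issues that tend to garble non-English text in cmd. The file names and the UTF-8 assumption below are illustrative only, not taken from the question.

        # Hedged sketch: trim each quoted line of abc.txt to a given length.
        # Assumes the file is UTF-8; adjust the encoding if it is not.
        import sys

        limit = int(sys.argv[1]) if len(sys.argv) > 1 else 50   # characters to keep
        with open("abc.txt", encoding="utf-8") as src, \
             open("def.txt", "w", encoding="utf-8") as dst:
            for line in src:
                text = line.strip().strip('"')          # drop the surrounding quotes
                dst.write('"' + text[:limit] + '"\n')   # re-quote the trimmed text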

    Read the article

  • Strange IP address showing up with OS X ssh

    - by user50799
    I was futzing around with DTrace on Mac OS X and found the following script that prints out information about connections being established:

        $ cat script.d
        syscall::connect:entry
        {
            printf("execname: %s\n", execname);
            printf("pid: %d\n", pid);
            printf("sockfd: %d\n", arg0);
            socks = (struct sockaddr*)copyin(arg1, arg2);
            hport = (uint_t)socks->sa_data[0];
            lport = (uint_t)socks->sa_data[1];
            hport <<= 8;
            port = hport + lport;
            printf("Port number: %d\n", port);
            printf("IP address: %d.%d.%d.%d\n", socks->sa_data[2], socks->sa_data[3], socks->sa_data[4], socks->sa_data[5]);
            printf("======\n");
        }

    I run it in one window:

        $ sudo dtrace -s ./script.d

    Then I ssh to another machine from another window. I get this output in my dtrace window:

        CPU     ID          FUNCTION:NAME
          0  18696      connect:entry
        execname: ssh
        pid: 5446
        sockfd: 3
        Port number: 22
        IP address: 192.168.0.207
        ======
          0  18696      connect:entry
        execname: ssh
        pid: 5446
        sockfd: 5
        Port number: 12148
        IP address: 109.112.47.108
        ======
        ^C

    The first IP address I can explain (192.168.0.207): that's the machine I'm connecting to. But what's with the 109.112.47.108 machine? It doesn't show up in tcpdump nor in netstat -an. Is there something wrong with my dtrace code or with my understanding of how the connect system call works?
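
    For reference, the decoding the D script performs on sa_data can be reproduced in Python on example bytes (the values below are made up for illustration): the first two bytes of sa_data hold the port in network byte order, and the next four hold the IPv4 address.

        # Illustrative only: decode port and IPv4 address from raw sockaddr sa_data bytes.
        import struct

        sa_data = bytes([0x00, 0x16, 192, 168, 0, 207])   # example: port 22, 192.168.0.207
        (port,) = struct.unpack("!H", sa_data[:2])        # big-endian 16-bit port
        ip = ".".join(str(b) for b in sa_data[2:6])       # dotted-quad address
        print(port, ip)                                   # -> 22 192.168.0.207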

    Read the article

  • How does Subnetting Work?

    - by Kyle Brandt
    How does subnetting work, and how do you do it by hand or in your head? Can someone explain it both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself. What is classless routing, and why is class-based routing obsolete? If I have a network, how do I figure out how to split it up? If I am given a netmask, how do I know what the network range is for it? Sometimes there is a slash followed by a number; what is that number? Sometimes there is a subnet mask but also a wildcard mask; they seem like the same thing, but they are different? Someone mentioned something about knowing binary for this? What is NAT (Network Address Translation)? I'm not looking for links to other sites (unless maybe you have one post with a bunch of good ones). I already know how to subnet; I just thought it would be nice if Server Fault had a generic subnetting answer.
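
    As a quick illustration of the pieces the question mentions (the slash prefix, netmask, wildcard mask and network range), Python's standard ipaddress module can compute them for a sample block; the 192.168.10.0/26 network below is just an arbitrary example.

        # Illustrative only: derive netmask, wildcard mask and range for an example block.
        import ipaddress

        net = ipaddress.ip_network("192.168.10.0/26")            # "/26" is the prefix length
        print(net.netmask)                                        # 255.255.255.192
        print(net.hostmask)                                       # 0.0.0.63 (the wildcard mask)
        print(net.network_address, "-", net.broadcast_address)    # 192.168.10.0 - 192.168.10.63
        print(net.num_addresses)                                  # 64 addresses in the block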

    Read the article

  • Thunderbird 11.0.1 and Lightning 1.3: How do I propose a different time for a meeting?

    - by seaao
    This all happens on my x64 Linux workstation, btw. tl;dr: My colleague invited some people and me to a meeting. The meeting was scheduled a week too late. I - wrongfully - accepted. How do I propose a new time? To explain in a bit more detail: I received a meeting request in my mailbox. Thunderbird is nice enough to let me accept or reject it, and after I click the button to accept, it is added directly to my calendar. But when I double-click on the meeting to edit it, I get a functionally scaled-down version of the meeting: the only settings I can alter are whether I want reminders, and whether I am going at all. Trying to drag it to another day doesn't work either: my calendar behaves as if it were read-only (which it isn't, btw). There are several questions (without answers...) to be found on Stack Overflow and in the Thunderbird knowledge base about using Lightning, but I get the idea that I'm one of the few who won't comply with the team even before the meeting has started. My googling revealed no bugs or feature requests in the direction I'm thinking of. A link to an explanation of how to achieve this, or another perspective on how to reach the desired goal (a meeting that suits my colleagues and me), would be most welcome!

    Read the article

  • Active Directory: how to be SURE users can change their own passwords?

    - by Latro
    I'm working on a project where a tool we have has to authenticate against AD, connecting via LDAPS, and perform password changes if required or requested. IN THEORY, the tool does that, and we have seen it work in other projects. IN PRACTICE, against this particular directory, it fails. It's been driving me crazy. The particulars of the situation: Windows 2003 AD. We defined a "technical user" for the LDAP connection with rights to change users' passwords. When a password change is required - in this case, because pwdLastSet is 0 - the tool uses the technical account to bind to the controller and change the user's password. If a password change is not required but the user requests it, then the bind is done with the user's own account. That last case is the one that doesn't work: with the technical user the password change is possible, but with the user itself it isn't. We get an error like this:

        LDAP access failed: javax.naming.directory.InvalidAttributeValueException:
        [LDAP: error code 19 - 0000052D: AtrErr: DSID-03190F00, #1:
        0: 0000052D: DSID-03190F00, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 9005a (unicodePwd)

    I have no idea what DSID-03190F00 means, because it doesn't seem to be anywhere on Google :-/ I've been looking at several MS documentation pages and frankly, I'm not understanding one bit of it. There is some "control access right" called User-Change-Password that may, or may not, control which objects have the right to change their own password, which may, or may not, have to do with ACEs and ACLs... There is GPO. There is maybe the password policy, but it is only set to require passwords of 6 chars or more... Can anybody explain to me, in easy-to-check steps, what I should tell the AD admin guy (who is as lost as I am) to do to ensure that users in the AD directory (objectClass top, person, organizationalPerson and user) are able to change their own passwords by themselves? Thanks in advance.

    Read the article

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open. I use the extension Session Buddy to restore all my windows and tabs once the memory gets too high. So my 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome down, restore the same windows and tabs, and it will only use about 25% of memory, then over several hours it climbs back up to the 90% zone. It seems that when I close tabs, the memory isn't freed up, which is why usage keeps growing as new tabs are opened and closed - it just adds up. This sounds like a huge bug in Chrome. Just as an example, I just rebooted my system; I only have one window with 4 tabs open, and Task Manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it created 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix? UPDATE: I just read that each plugin and extension creates a chrome.exe process; I then counted 24 extensions, so that helps explain a portion of the large number of processes. Still not sure about memory not being freed up, though!

    Read the article

  • Pinging computer name in LAN results in public IP?

    - by Bob
    Hi, I recently introduced a new machine to my LAN. The computer name for this machine is 'server'. Historically I've been able to access machines on my home network (from a web browser or RDP) using the machine name, and it resolves to a local IP address just fine. However, I can't seem to do this anymore. When I ping the computer name, I get the following:

        C:\Users\Robert>ping server

        Pinging server.router [67.215.65.132] with 32 bytes of data:
        Reply from 67.215.65.132: bytes=32 time=24ms TTL=54
        Reply from 67.215.65.132: bytes=32 time=23ms TTL=54
        Reply from 67.215.65.132: bytes=32 time=24ms TTL=54
        Reply from 67.215.65.132: bytes=32 time=24ms TTL=54

    I notice also that it appends the 'router' suffix to the host name for some reason; 'router' is the name of my router, obviously. I'm also using OpenDNS as my DNS provider (configured through my router, so it gets passed down through DHCP). Why is this not working for me? Can someone explain how the DNS resolution should take place? For LAN resolution, it shouldn't go straight to OpenDNS. I thought that each Windows machine keeps its own sort of "mini DNS server" that knows about all machines on the local network and tries to resolve using that first. Please let me know what I can do to get this working!

    Read the article

  • How to turn off Windows Azure's "This copy of Windows is not genuine" message?

    - by Sid
    Is there any setting or configuration item to stop Windows Azure from printing that error on the screen, or from detecting it at all? I've put a screenshot below that shows the message when you RDP into the web role. My web role runs on Windows Azure Guest OS 1.17 (a variant of Windows Server 2008 SP2). Background: I was explaining our architecture to some outside engineers (NDA'd and all) and had to demystify the web role, as they were unfamiliar with Azure. I RDP'd into the VMs running the web role, when one of their engineers gasped, "Are you guys running pirated copies of Windows in the cloud?" I also noticed that within the RDP screen, the Azure machines had "This copy of Windows is not genuine" in the bottom left corner. Now obviously, Microsoft is running their own OS in their own datacenter with no influence from me, so there's no 'piracy' here despite that obvious warning. However, they seemed so distracted by this ("how can it be? really? hmmm?") that we wasted more time talking about it than about the actual matter at hand. Like I said, they have little exposure to Azure but add value elsewhere. I want to get rid of this so I don't have to explain it in the future. PS Microsoft: if you're going to modify Windows Server <XYZ> into Windows Azure <A.B>, you should also modify the code that verifies product integrity.

    Read the article

  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network share connection issue. The first connection works, and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer connect to the shared folder, nor browse to it, until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this:

        cmdkey /delete:Domain:target=HOSTNAME

    My work PC is on a domain, and I am not the IT administrator, but I'm curious whether there is anything I can do to investigate this issue - any settings in the registry or group policy I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with an error saying it is "not accessible". I do not even get any error until several minutes go by; the first thing I see is a window, frozen and empty, and then the error appears. This has happened when connecting to a share on a DROBO device, and on a share which is not on the domain but is on a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regard to connecting to non-domain shares when an Active Directory domain controller exists and a particular workstation is joined to a domain? The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use; using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders. Update: I'm hoping there are some registry locations I could check that might be misconfigured in a way that would explain why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".

    Read the article

  • Use autocomplete in dropdown cells with Excel 2007?

    - by Martin
    I want to make a survey with Excel, and I have therefore defined the answer cells as dropdown cells which only accept answers from a certain list, e.g.: the two lists List1 and List2 (yellow cells) are the possible answers for the questions in Blocks 1.x and 2.x respectively (blue). There might be a Block 4 with more questions, which again use List1 for their possible answers. My problem is: I'd like to be able to use the autocomplete feature to fill in the blue cells with the dropdown menu, so that the user only types 5 and it automatically expands to "5: extremely important" or "5: extremely difficult". According to my research on the web, this should be possible if I add the list of possible answers directly above the cells where autocomplete should work (I did this with the green helper cells, which could be hidden). But I have to enter at least 4 characters ("5: e") to get the autocompleted suggestion. Is there a way to make autocomplete replace a plain "5" with the corresponding valid term? As the survey file will be distributed to a lot of people "outside", I cannot use VBA magic because it may be blocked on their computers and might not work. EDIT: It seems to have to do with the numbers I use: if I started my list items with A, B, C instead of 1, 2, 3, it would work perfectly. Excel seems to ignore pure numbers when they are entered and does not try to autocomplete them... is there a workaround? (I hope it is clear what I want; it seems a little difficult to explain.)

    Read the article

  • Sending emails from an external subnet in VMware ESXi

    - by user80658
    This might be a bit hard for me to explain - it is a pretty individual situation. I have a dedicated server at Hetzner (www.hetzner.de). The public IP is 88.[...].12. I have ESXi running on this server. I can access the ESXi console via the public IP, but none of the virtual machines. That's why I bought a public subnet with 8 (6 usable) IPs (46.[...]) and an additional public IP (88.[...].26). This additional public IP belongs to the first virtual machine - a firewall appliance - which is connected to the WAN. It needs to be done this way, since that is the official way at Hetzner. My 46.[...] subnet is behind the firewall. I have a Virtualmin server with a Dovecot IMAP/POP3 server. When sending an email, most providers (Gmail) will accept it, but a lot will put it into spam (AOL). My theory is: the MX record of my domain points, of course, to the IP of the virtual machine (46.[...]), but the raw email shows that it was sent from the IP of the firewall (88.[...].26), which doesn't look trustworthy. A solution would be for the firewall to handle mail itself, but it simply can't. How can I prevent this problem? Thanks.

    Read the article

  • WAMP running extremely slow on WIndows 7

    - by JavaCake
    After two days of fighting to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly; the performance is the same whether they are accessed locally (127.0.0.1) or from another computer on the intranet. First, to explain the system:

        WAMP version: Apache 2.2.22 - MySQL 5.5.24 - PHP 5.4.3
        XDebug 2.1.2, XDC 1.5, phpMyAdmin 3.4.10.1, SQLBuddy 1.3.3, webGrind 1.0
        DocumentRoot: located on a network drive
        MySQL: InnoDB
        Pages: PHP, MySQL, AJAX etc.

    The changes I have made so far in order to get better performance:

        Changed C:\windows\system32\drivers\etc\hosts:
            127.0.0.1 localhost
            127.0.0.1 127.0.0.1
        Modified my.ini: innodb_flush_log_at_trx_commit = 2
        Modified httpd.conf: EnableMMAP on, EnableSendfile on
        Modified php.ini: realpath_cache_size = 4m

    The way I measure performance is the overall load time of the page. I also run the site locally on my Mac OS X machine (MAMP), where the front page typically loads in 0.06 seconds, but on the Windows 7 machine it takes 6-10 seconds. I have verified the load times with the developer tools in Chrome as well. Furthermore, the result is identical with XAMPP.

    Read the article

  • Why am I missing 4GB of RAM on Windows Server 2008 R2 64bit?

    - by Nick G
    I noticed today that a server was very low on memory. It physically has 8GB installed and runs Windows 2008 R2 Standard 64-bit. It also hosts 2 virtual machines using Hyper-V. The server is a Dell PowerEdge R510. However, the host OS reports in Task Manager that it only has 4GB of RAM, despite actually having 8GB and being a 64-bit OS. Computer properties shows "Installed memory: 8.00GB (3.99GB usable)". Why would "usable" be half the real RAM installed under a 64-bit OS? Additionally, nearly all of the 4GB of visible RAM on the host OS is being used by something, without anything showing up in Task Manager (presumably Hyper-V, as it has allocated 3.6GB to the virtual machines it's hosting). However, that doesn't explain where the other 4GB has gone, which Windows can't even see. Where is my missing 4GB of RAM? Update: Dell OpenManage says this:

        Total Installed Capacity                        8192 MB
        Total Installed Capacity Available to the OS    4096 MB

    So it looks like Nathan's suggestion of memory mirroring might be correct. I'll have to reboot to check this (I think?). Update 2: OK, so I rebooted and got a message saying "the amount of system memory has changed" (despite not having touched the hardware in a year). Once Windows had booted, all 8GB was visible again. It looks like I probably have a hardware RAM issue (I'll perhaps try reseating the modules whenever I can next chuck everyone off the server). Thanks for your answers and comments. I was hoping it was going to be the mirrored-RAM option, but it seems not - that's not even mentioned in the BIOS.

    Read the article

  • Standard deviation in Excel

    - by user270692
    So I have 3 data sets, which all came from the first data set, built in Excel. All consist of random numbers from =RANDBETWEEN(0,100) in cells A2:A102, which is 100 integers. For data set B I had to enter =SUM(A2+5), press Enter and drag it down to fill cells B2:B102, and for C I had to use multiplication: I entered =PRODUCT(A2:A102*5). So everything was derived from data set A. I then applied the formulas needed for the sample standard deviation and the mean (average). For data sets A and B the standard deviations were the same, but the mean was larger in data set B, of course, because I added 5 to each cell in set A. My question is: why wouldn't the standard deviation be the same for data set C also, if I'm using the values from data set A? And how do I calculate the sample standard deviation by hand, so I can explain why the standard deviation doesn't change in one case but the mean does? I don't know what numbers to include in the formula for the sample standard deviation.
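
    A small sketch of the arithmetic may help (the numbers are arbitrary examples, not the question's actual data): the sample standard deviation is the square root of the sum of squared differences from the mean, divided by n-1. Adding 5 to every value shifts the mean by 5 but cancels out inside each (value - mean) difference, so the spread is unchanged; multiplying every value by 5 also multiplies every difference - and hence the standard deviation - by 5.

        # Illustrative only: how shifting vs. scaling a data set affects the sample std dev.
        from statistics import mean, stdev   # stdev = sample standard deviation (n-1 divisor)

        a = [10, 20, 30, 40, 50]             # stand-in for data set A
        b = [x + 5 for x in a]               # data set B: every value shifted by +5
        c = [x * 5 for x in a]               # data set C: every value multiplied by 5

        print(mean(a), stdev(a))             # 30   15.81...
        print(mean(b), stdev(b))             # 35   15.81...  (same spread, larger mean)
        print(mean(c), stdev(c))             # 150  79.05...  (spread five times larger)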

    Read the article

  • Why is my cron daemon is being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        211        812          0          0          0
        -/+ buffers/cache:         211        812
        Swap:            0          0          0

    I also tried removing and reinstalling the cron package using apt-get. Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image; there is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines. I have another Cloud-B with the same configuration (using the Ubuntu Enterprise Cloud). Both of them work fine individually, on the same LAN. Now, if I want to add an NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what exactly happens when we ask for an instance: does the interaction happen directly between the client and the NC, or does it go through the CLC and CC? What I mean is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? I am sorry if I'm making a mistake anywhere... Thanks in advance :) Regards, www.TechProceed.com

    Read the article

  • How do you set up CRON?

    - by user1723760
    I have never used CRON before, but I want to use it to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php" and contains this code:

        <?php
        include('connect.php');
        $createDate = mktime(0,0,0,10,25,date("Y"));
        $selectedDate = date('d-m-Y', ($createDate));
        $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate,'%Y-%m-%d' ) <= ?";
        $update = $mysqli->prepare($sql);
        $active = 0; // bind_param needs a variable passed by reference, not a literal
        $update->bind_param("is", $active, $selectedDate);
        $update->execute();
        ?>

    What I want is that when the above date is reached (25th Oct), the script performs the UPDATE statement above. But my question is: how do I use CRON to do this? The server I am using is the university's server, known as helios. Does CRON need to be set up on helios (do I have to call the admin for this), or is it something else which uses CRON? I have never used CRON before, so can you explain how it can be set up for the example above on the server I am using? Thanks. UPDATE: I think the name of the OS is actually helios, but I am not sure; I have a Wikipedia page on this: here. I will read the crontab Wikipedia page and see what I can find, but my question is: is CRON already set up? Do I just go right ahead and use it, or do I need to set it up first?
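
    For illustration only (whether per-user crontabs are enabled on helios is something the admin would have to confirm, and the script path and PHP binary location below are made-up examples): a cron job is normally added by running crontab -e and inserting a line with five time fields followed by the command. An entry that runs the script at midnight on 25 October would look something like this:

        # minute  hour  day-of-month  month  day-of-week   command
        0 0 25 10 * /usr/bin/php /home/youruser/inactivesession.php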

    Read the article

  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a whitelist of executables for a certain group of users? I need them to be unable to run, for example, make, gcc and executables on removable disks. How can this be done? Edit: let me explain better. I'm dealing with a high-school IT system: young geeks who (during lessons) want to play, surf the net, and damage those computers however they can. The major step towards this goal was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry on USB drives, a downloaded program or a script they write on the spot. It's possible to remove the 'x' permission from any file on the PC, but of course that would be impractical; furthermore, they would still be able to run anything they download. I thought the best solution would be to make a whitelist of safe programs and deny everything else, but I don't really know how to do it. Any idea is helpful.

    Read the article

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running Ubuntu Server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible through the Apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's ip-address/~username/phpfile.php), it does not run it as it should. Instead, it offers to save the file or asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system:

        PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Nov 18 2010 21:19:09

    If anyone knows what might be causing this, or potential solutions, it would make me very happy :) EDIT: It turns out this behaviour only affects files in the ~/public_html/ directory; all PHP files in /var/www/ work fine. Prizes go to whoever can explain why :D (and by prizes I just mean a well done - no actual prizes, I'm afraid).
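
    One thing that may be worth checking - this is a guess based on the stock Ubuntu php5 package rather than a confirmed diagnosis - is /etc/apache2/mods-available/php5.conf, which ships with a stanza that deliberately turns the PHP engine off inside user directories. Commenting out that block (and reloading Apache) re-enables PHP under ~/public_html while leaving /var/www untouched, which would match the symptom described:

        <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
                php_admin_value engine Off
            </Directory>
        </IfModule>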

    Read the article
