Search Results

Search found 21436 results on 858 pages for 'draw order'.


  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier) running Ubuntu Server 14.04 LTS on EBS

    - by CMPSoares
    Hi, I'm working on my bachelor thesis, and for it I need to host a Node.js web application on AWS. To avoid costs I'm using a t1.micro instance with 30 GB of disk space (as far as I know, the maximum available in the free tier), which is barely used. Memory is another story: the instance is using all of it. I tried creating a virtual swap area, as suggested in "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 && \
        sudo chmod 600 /var/swapfile && \
        sudo mkswap /var/swapfile && \
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab && \
        sudo swapon -a

    But somehow this swap area isn't being used. Is something missing in this approach, or is there another way to reduce memory consumption on this type of AWS instance? Bottom line: the memory pressure causes server freezes and crashes, and that's what I want to stop, either by using the swap, by reducing memory usage, or both.
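
    A quick way to check whether the swap file from the question actually came online, and to see what is really consuming the memory, is sketched below; the /var/swapfile path and 2048 MB size are taken from the question, everything else is just a read-only diagnostic sketch:

        # Confirm the swap file is registered and active
        sudo swapon -s            # should list /var/swapfile
        free -m                   # the Swap: row should show roughly 2048 total

        # If nothing is listed, enable it by hand and watch for error messages
        sudo swapon /var/swapfile

        # See which processes are actually eating the memory
        ps aux --sort=-rss | head -n 10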

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked, without success:

    - Since we run our webserver behind a load balancer, I've pointed the Pingdom test at both the load balancer's public DNS and the webserver's public DNS, to find out whether the AWS load balancer is the problem. Both tests return the same result.
    - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

        touch -t 201209020000 start
        touch -t 201209020015 end
        find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
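
    Since Munin's 5-minute interval is too coarse for a 2-minute outage, one option is a throwaway script, started from cron shortly before midnight on Saturday (e.g. 55 23 * * 6), that snapshots system state every 30 seconds through the failure window. This is only a sketch; the log path is an illustrative choice:

        #!/bin/bash
        # Capture load, memory, socket and Apache status every 30 s for 20 minutes
        LOG=/var/log/midnight-snapshot.log
        for i in $(seq 1 40); do
            {
                date -u
                uptime
                free -m
                ss -s 2>/dev/null || netstat -s | head -n 20
                apache2ctl status 2>&1 | head -n 20   # needs mod_status; harmless if it fails
            } >> "$LOG"
            sleep 30
        done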

    Read the article

  • Network outage caused by SMC8013WG Cable Modem/Router ?

    - by mkocubinski
    At work, we have a basic Class C network. The gateway/router is an SMC8013WG (stock Comcast commercial cable modem), and there's a simple unmanaged switch (HP ProCurve 1400-24G). The SMC8013WG is our default gateway as well as our DHCP server. Periodically, I'd say almost every other day, the entire network will just stop responding: I won't be able to ping/see the gateway, any computers on our local network, or anything on the internet. The only way to fix this is to unplug the Comcast cable modem, wait, and plug it back in. This unfailingly fixes the problem. But it doesn't make much sense to me: shouldn't the network still be fine locally, since everything is plugged into the switch anyway? Why would resetting the router fix this? Can anyone suggest anything to check in order to narrow this problem down? Just to be clear, here is the basic topology:

        { Internet } -- (12.345.67.89) Comcast Cable Modem (192.168.1.1) -- Switch -- 192.168.1.2-254

    P.S. Our IT guy is in about 3 hours a day every other week or so, so we're kind of on our own most of the time.

    Read the article

  • Directories Throwing 404 Errors - Virtual Host Configuration and mod_rewrite

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working, and the URL I use for my site is devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver

            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>

            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/ as intended, but going to a directory (that has an index.php), such as devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear. Can anyone help me with my rewrite rules?
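
    One way to handle the directory case, sketched below as a simplified, untested variant of the rules in the question, is to short-circuit requests that map to real directories before the trailing-slash rewrite runs, so mod_dir's DirectoryIndex can serve index.php. It assumes every directory does have an index.php:

        RewriteEngine On

        # 1. External redirect: strip .php and add a trailing slash,
        #    but only for requests that are not existing directories.
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
        RewriteRule (.*)\.php$ /$1/ [R=302,L]

        # 2. If the request maps to a real directory, stop rewriting here
        #    and let DirectoryIndex pick up index.php.
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        # 3. Otherwise map the trailing-slash URL back onto the .php file.
        RewriteRule (.*)/$ /$1.php [L]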

    Read the article

  • Variable host IP address in iptables rule

    - by DrakeES
    I am running CentOS 6.4 with OpenVZ on my laptop. In order to provide Internet access for the VEs I have to apply the following rule on the laptop:

        iptables -t nat -A POSTROUTING -j SNAT --to-source <LAPTOP_IP>

    It works fine. However, I work in different places - office, home, partner's office, etc. - and the IP of my laptop is different in each of them, so I have to alter the rule above every time I change location. I have created a workaround which basically determines the IP and applies the rule:

        #!/bin/bash
        IP=$(ifconfig | awk -F':' '/inet addr/&&!/127.0.0.1/{split($2,_," ");print _[1]}')
        iptables -t nat -A POSTROUTING -j SNAT --to-source $IP

    The workaround works; I just still have to execute it manually. Perhaps I could make it a hook that executes whenever my laptop obtains an IP address from DHCP - how can I do that? Also, I am wondering whether there is a more elegant way of getting this done in iptables in the first place. Maybe there is a syntax that lets me specify "the current hardware IP address" in the rule?
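
    Two directions worth sketching here. If a fixed source address isn't strictly required, MASQUERADE picks the outgoing interface's current address automatically; otherwise the SNAT rule can be refreshed from a dhclient hook. The interface name eth0 and the hook path are assumptions - check which file your distribution's dhclient-script actually sources:

        # Option 1: let netfilter use whatever address the outgoing
        # interface currently has, instead of hard-coding it
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # Option 2: re-apply the SNAT rule on every DHCP lease.
        # Placed in a dhclient exit hook (path varies by distro, e.g.
        # /etc/dhclient-exit-hooks); $reason and $new_ip_address are
        # standard dhclient-script variables.
        if [ "$reason" = "BOUND" ] || [ "$reason" = "RENEW" ]; then
            iptables -t nat -F POSTROUTING
            iptables -t nat -A POSTROUTING -j SNAT --to-source "$new_ip_address"
        fi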

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all of the client's subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options:

    - Changing the order of network services and putting the VPN on top, but this works the same as "Send all traffic over VPN connection".
    - Using the "VPN on Demand" option from the network advanced options, but this feature is quite rubbish, to be honest. It seems to work only in Safari (?!), and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file - but considering there could be hundreds of servers (and therefore hostnames and IPs), that's a ridiculous job, and if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours. However, this works very simply on Windows: if a hostname is not available through the default network interface, it seems to try the VPN connection, and that works brilliantly. So, how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (e.g. directing certain domains through a different DNS)? PS. I'm using the latest version, 10.6.6. PS2. I am using the VPN to access an intranet, version control servers (svn://), Samba shares, and for SSH access to servers.
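
    A rough sketch of the usual macOS approach: route only the client's address range over the VPN interface (the names don't need to be remembered - DNS resolves them, routing only needs the range), and point lookups for the client's domain at their internal DNS via an /etc/resolver file. The subnet 10.10.0.0/16, the nameserver 10.10.0.2, the interface ppp0 and example.com are all placeholders:

        # Route only the client's address range over the VPN interface
        # (substitute the real subnet and the interface the VPN creates,
        #  e.g. ppp0 or utun0)
        sudo route -n add -net 10.10.0.0/16 -interface ppp0

        # Send DNS lookups for the client's domain to their internal DNS
        sudo mkdir -p /etc/resolver
        printf 'nameserver 10.10.0.2\n' | sudo tee /etc/resolver/example.com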

    Read the article

  • Plesk command working in manual script, not in cronjob

    - by dsaunier
    Hi. In order to install a hosting plan, I use Plesk's command-line tools over SSH as specified in their official guide. When typed directly into SSH (PuTTY), the command works perfectly. The line is as follows, with the values obviously hard-coded when run from the CLI:

        /usr/local/psa/bin/domain --create '.$url.' -owner mynamehere -ip '.IP_SERVER_PLESK.' -status enabled -hosting true -hst_type phys -login '.$ftp_user.' -passwd '.$ftp_pw.' -www false -php true -php_safe_mode false -hard_quota 100M

    I then put that request in a PHP script that does other things after the hosting is installed. Now for the weird part: when calling that script from the CLI it also works fine - I run ./myscript.php and it installs the hosting, then sends emails, etc. However, after I create a cronjob to have that same script called regularly, the Plesk command fails. The cronjob is set up in Plesk as:

        */15 * * * * /usr/bin/php /home/scripts/myscript.php

    It works fine for everything BUT the Plesk hosting install, which returns "Unable to read Control Panel configuration file" and therefore does not install the domain hosting. Still, this is the same script that I call manually! On that server, are the PHP binary used by cron and the one used in the CLI different? What am I missing? Help greatly appreciated. Regards.
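
    The usual suspect for "works in the shell, fails from cron" is the stripped-down cron environment (different PATH, HOME, or invoking user). A sketch of how to compare the two environments and give the job a login-like shell; the temp file names are illustrative:

        # 1) Temporarily add this line to the crontab to capture cron's environment:
        #      * * * * * env > /tmp/cron-env.txt 2>&1
        # 2) Then, from the SSH session, compare it with the interactive environment:
        env > /tmp/cli-env.txt
        diff /tmp/cli-env.txt /tmp/cron-env.txt

        # If PATH/HOME or the invoking user differ, wrapping the job in a
        # login shell is often enough to fix "can't read configuration" errors:
        #      */15 * * * * /bin/bash -lc '/usr/bin/php /home/scripts/myscript.php'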

    Read the article

  • EXCEL 2007 macro

    - by Binay
    I have a macro which connects to a database, fetches data for me, and makes it comma-separated. But the problem is that the comma is also getting appended to the last row, which I don't want. I'm struggling here - could you please help out? Here is the relevant part of the code:

        If cn.State = adStateOpen Then
            Rec_set.Open "SELECT concat(trim(Columns_0.ColumnName), ' ','(', 'varchar(2000)' ,')') columnname FROM DBC.Columns Columns_0 WHERE (Columns_0.TableName= " & Chr(39) & Tablename & Chr(39) & "and Columns_0.Databasename=" & Chr(39) & db & Chr(39) & ")ORDER BY Columns_0.Columnid;", cn 'Issue SQL statement
            If Not Rec_set.EOF And Not Rec_set.EOF Then
                Do Until Rec_set.EOF
                    For i = 0 To Rec_set.Fields.Count - 1
                        strString = strString & Rec_set(i) & ","
                    Next
                    strFile.WriteLine (strString)
                    strString = ""
                    Rec_set.MoveNext
                Loop

    Here is the result I am getting:

        EMPNO (varchar(2000)),
        ENAME (varchar(2000)),
        JOB (varchar(2000)),
        MGR (varchar(2000)),
        HIREDATE (varchar(2000)),
        SAL (varchar(2000)),
        COMM (varchar(2000)),
        DEPTNO (varchar(2000)),

    I don't want the last comma.

    Read the article

  • Linux SFTP and many local user accounts, limits with mount --bind?

    - by user123428
    I am in the process of building a solution to handle many developers (possibly hundreds) working on their files via SFTP, each one jailed in their home directory. For our particular needs, we have a Samba mount point that contains all of the users' home directories. I have started developing the following solution and hit some walls:

    - I have configured an Ubuntu Lucid server as the SFTP server.
    - In order to jail each user in their home directory (without allowing them to browse a directory up and see all the other users' folders) I am using mount --bind rather than a symbolic link (also, some FTP clients don't really work with symlinks).
    - The user accounts are local Unix accounts on the SFTP server (not using a directory service or anything). Each has an empty home folder created on the local machine, and I then use mount --bind to bind that empty folder to the user's actual home directory on the Samba share.

    With this solution I am hitting a couple of problems. In the case of a server reboot, all the bind mounts are lost because they are not written in fstab, and I have read somewhere that the maximum number of entries in fstab is 400 (which does not really help us). I have thought about writing something that stores the mounts in a text file as a backup and, on server reboot, runs a script that re-mounts them. I am just really unsure about this whole process and was wondering if anyone has any insight on a possibly better solution for SFTP (not FTP)?
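
    A minimal sketch of the re-mount-at-boot idea, assuming a plain text file of "username share-path" pairs at /etc/sftp-binds.list and invocation from something like /etc/rc.local (both names are illustrative, not a recommendation over fstab):

        #!/bin/bash
        # Re-create all bind mounts listed in /etc/sftp-binds.list.
        # Each line:  <username> <path-on-samba-share>
        BINDLIST=/etc/sftp-binds.list

        while read -r user share; do
            [ -z "$user" ] && continue                  # skip blank lines
            echo "$user" | grep -q '^#' && continue     # skip comments

            home="/home/$user"
            mkdir -p "$home"
            # only mount if it is not already a mount point
            mountpoint -q "$home" || mount --bind "$share" "$home"
        done < "$BINDLIST"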

    Read the article

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder named /edit in each site. Given that I now have 2 websites working on that simple system:

        websiteA/index.php (etc)
        websiteA/edit/
        websiteB/index.php (etc)
        websiteB/edit/

    what is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their own view of /edit while the code exists in only one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to becoming confused by some of the terminology. Each website has its own config file which contains path settings and so on. What in your opinion is the best way to handle this?

    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    is it possible to create an alias inside that directive for the folder websitea.com/edit?
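
    Assuming mod_alias is loaded and the shared scripts live at /var/www/shared/edit (an illustrative path), an Alias inside each vhost is one way to point every site's /edit at the same code; the per-site config file the scripts already read would then tell the shared code which site it is serving:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com

            # every site's /edit maps to the single shared copy of the scripts
            Alias /edit /var/www/shared/edit
            <Directory /var/www/shared/edit>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>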

    Read the article

  • fglrx-legacy-driver not seeing Radeon HD 4650 AGP

    - by Rocket Hazmat
    I am running Debian Squeeze on an old Dell Dimension 8300 box with an AGP Radeon HD 4650 card. I use this machine to mine bitcoins, and today I noticed that the machine had rebooted - my precious uptime! Anyway, my miner wouldn't start, so I figured I might as well update my graphics driver; maybe that would fix the issue. I went to amd.com and downloaded the newest driver (12.6 legacy), but after installing it, aticonfig gave an error:

        aticonfig: No supported adapters detected

    I uninstalled the driver and figured I'd try to install it from apt. AMD has dropped support for the HD 4000 series in fglrx, forcing me to use fglrx-legacy-driver (currently only in experimental). In order to install this, I had to update libc6 (and some other important packages, like gcc) to their wheezy versions. I finally got fglrx-legacy-driver installed, but I still got:

        aticonfig: No supported adapters detected

    Why isn't the driver finding my video card? I have a hunch it has something to do with the fact that it's an AGP card. Here is the output of lspci -v (why does it say "Kernel driver in use: fglrx_pci"?):

        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
            Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0028
            Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 16
            Memory at e0000000 (32-bit, prefetchable) [size=256M]
            I/O ports at de00 [size=256]
            Memory at fe9f0000 (32-bit, non-prefetchable) [size=64K]
            Expansion ROM at fea00000 [disabled] [size=128K]
            Capabilities: [50] Power Management version 3
            Capabilities: [58] AGP version 3.0
            Kernel driver in use: fglrx_pci

    Read the article

  • Nginx and 1000 WordPress Installs - Optimization

    - by GTE
    Hey, I'm trying to create a rather unusual (IMO) configuration consisting of nginx, php-fastcgi, MySQL, and 1000 separate WordPress installs (with WP Super Cache), each corresponding to a separate subdomain. Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget), which retrieves data from an API and posts it to the respective blog. This is all being run on a virtual server with 1024 MB of RAM, 4 shared processors, etc. The server is not doing well, especially while the cron jobs are being executed: nginx constantly throws 504 errors and the site lags significantly.

    1) Am I crazy for having 1000 individual WP installs? Should I be using WP-MU, and will this help significantly? (I have certain plugin restrictions that make me prefer separate installs, but I could switch if need be.)
    2) Instead of having 1000 unique cron jobs, should I be calling, say, a bash script that then processes the 1000 HTTP requests I need? Could these be spread out rather than all firing at once?
    3) Any other suggestions for optimization? Should I be proxying to Apache instead of just using nginx, etc.?

    Any kind of advice would be appreciated. Thanks in advance.
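
    A minimal sketch of the single-script idea from point 2), assuming the blog hostnames are listed one per line in /root/blogs.txt and the plugin is triggered by fetching /wp-cron-endpoint.php (both names are placeholders for whatever the real plugin URL is). It limits how many fetches run at once instead of launching 1000 cron jobs simultaneously:

        #!/bin/bash
        # Trigger the hourly plugin fetch for each blog, a few at a time.
        BLOGLIST=/root/blogs.txt
        CONCURRENCY=5

        while read -r blog; do
            wget -q -O /dev/null "http://${blog}/wp-cron-endpoint.php" &
            # keep at most $CONCURRENCY fetches in flight
            while [ "$(jobs -rp | wc -l)" -ge "$CONCURRENCY" ]; do
                sleep 1
            done
        done < "$BLOGLIST"
        wait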

    Read the article

  • How do you get linux to honor setuid directories?

    - by Takigama
    Some time ago, in a conversation on IRC, a user in a channel I was in suggested that someone setuid a directory in order for files in it to inherit its owner's user ID, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories." After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain: when I say "Linux doesn't support setuid directories," what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory, but Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone once suggested to me that it might be possible to emulate the behaviour with SELinux, and, playing around with rules, it is possible to force a UID on a file, but not via a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work with Linux," with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (I've tried various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?
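
    For comparison, Linux does honour the setgid bit on directories (new files inherit the directory's group, though not its owner), which is easy to confirm; /tmp/demo and the group name "staff" below are just illustrative choices:

        # Setgid on a directory: group is inherited, owner is not
        mkdir /tmp/demo
        sudo chgrp staff /tmp/demo
        sudo chmod g+s /tmp/demo       # setgid bit: honoured
        sudo chmod u+s /tmp/demo       # setuid bit: accepted by chmod but ignored

        touch /tmp/demo/newfile
        ls -l /tmp/demo/newfile        # group is 'staff'; owner is still the creating user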

    Read the article

  • Auto-focus xdvi after running viewdvi in Emacs with AUCTeX.

    - by D Connors
    I've been using Emacs with AUCTeX to edit my LaTeX documents for a few days now, but there's something that's really bugging me. As it should, C-c C-c RET compiles the file, and if I repeat the command it views the output in xdvi. I also have the minor mode TeX-source-specials-mode enabled, so instead of opening a new xdvi window it just reloads the window that's already open, brings it to the front, and jumps to wherever point was in Emacs (forward search). Now here's the problem: even though the xdvi window is brought to the front, it's not focused. Instead, the Emacs window keeps focus (and that's where any keyboard input goes). I keep forgetting this, which leads me to accidentally edit the source file while trying to navigate in xdvi. Not to mention that I'm forced to alt-tab in order to focus xdvi, and alt-tab twice if I just want to get back to Emacs. Is there a way around this problem? I just want xdvi to be focused whenever I run the view command from Emacs.

    Read the article

  • How much Ram should I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille cPanel3 account that has 768 MB of guaranteed RAM and 2048 MB of burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI. On the server itself I have a Joomla community site with a forum that has at most about 20 people on it at any point, and in the evening no one at all. It's a pretty small site but has a number of modules running on it; it gets about 6000 visits a month. Also on the server are a WordPress site that gets about 80-150 visits a day, 2 other WordPress sites that aren't developed yet and get no traffic at all, and 2 static HTML websites that only get about 500 hits a month. All in all, no huge sites. The issue is that I get "out of memory" errors fairly frequently, which kill my server, and I need to reboot it to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket they just tell me to upgrade the RAM. Now, I'm still pretty new to all this, so I'm not a good judge of how much RAM my sites really need. I don't know whether my sites really do need this much, or whether VPSVille has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much RAM should I need with my current setup? Thanks!
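
    Before paying for more RAM it's worth seeing where the memory actually goes; a few read-only commands are sketched below. The MaxClients remark in the comment is only an illustrative tuning knob, not a prescription for this particular server:

        # Overall usage; the beancounters file exists on OpenVZ-style VPSes
        free -m
        cat /proc/user_beancounters 2>/dev/null | grep -E 'privvmpages|failcnt'

        # Biggest memory consumers (typically Apache/PHP and MySQL)
        ps aux --sort=-rss | head -n 15

        # Rough average per-process Apache footprint - if it is large,
        # lowering MaxClients in httpd.conf is usually the first tuning step
        ps -C httpd -o rss= | awk '{sum+=$1; n++} END {if (n) print sum/n/1024 " MB average"}'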

    Read the article

  • Shrinking the Windows and recovery partitions on the Samsung New Series 9

    - by bobbaluba
    I just bought a Samsung NP900X3C, and as I was about to install Linux, I noticed that the Windows and recovery partitions occupy a major portion of the disk. The disk is a 128 GB SSD. I want to keep the Windows partition in order to play some games once in a while, but the Windows partition is already 45 GB full (with no installed programs) and the recovery partition is 20 GB. That leaves under 60 GB for Linux, which is not optimal, since that is what I'm going to be using most of the time, and there would be no room for games on the Windows partition. There are also two small partitions whose purpose I don't know: one of 100 MB at the start of the disk that I'm guessing is some kind of boot partition, and one of 5 GB that is described as an "OS/2 hidden C: drive". What I'm wondering is: can I delete the recovery partition? What about the mystical 5 GB partition? Here is what fdisk reports:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 128.0 GB, 128035676160 bytes
        255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x83953ffc

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   198273023    99033088    7  HPFS/NTFS/exFAT
        /dev/sda3       198273024   207276031     4501504   84  OS/2 hidden C: drive
        /dev/sda4       207276032   250068991    21396480   27  Hidden NTFS WinRE

    Read the article

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS that I have recently set up and am migrating to from Media Temple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but I appreciated its ability to give multiple people access to different domains on a server. I have most everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS with Apache 2 as my web server. I have domains successfully located in /var/www/vhosts/domainname.com, but I have to modify files as root in order to add or change files for the domains. I would like to set up access with the following criteria:

    - Each domain can have a user assigned to it (and the same user can manage multiple domains - I could even create symlinks in their home folder to the domains they control).
    - Certain users will have shell access and may be chrooted to the domain directory they control.
    - FTP needs to be set up so that it can correctly access the domains, and content editors for each domain can upload/download without permissions issues.

    I am relatively new to Linux sysadmin work and have searched for a good guide to help solve these issues, but haven't been able to find one yet. Thanks in advance for your help.
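
    One common pattern, sketched under assumptions rather than as a full recipe, is a per-domain group that owns the docroot, with the setgid bit so new files keep that group; example.com and alice are placeholders, and the sshd_config fragment assumes OpenSSH 4.9 or newer (older than what Ubuntu 8.04 ships, so it may require a backport):

        # Per-domain group owning the docroot
        sudo groupadd example-com
        sudo usermod -a -G example-com alice
        sudo chgrp -R example-com /var/www/vhosts/example.com
        sudo chmod -R g+rwX /var/www/vhosts/example.com
        sudo find /var/www/vhosts/example.com -type d -exec chmod g+s {} \;

        # /etc/ssh/sshd_config fragment - jail SFTP-only users to their domain.
        # Note: the ChrootDirectory target itself must be root-owned and not
        # group-writable, so the writable content goes in a subdirectory.
        #   Match Group example-com
        #       ChrootDirectory /var/www/vhosts/example.com
        #       ForceCommand internal-sftp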

    Read the article

  • I have just created a subnet for a local network, connecting to a standalone server on another network; now I cannot connect to the internet

    - by Seth
    I am just learning some new aspects of servers and networking. We have a network of 5 subnets that all interconnect with each other. In order to get two computers onto the subnet we were setting up, I changed their IPs from the subnet the standalone server is on (where they used to be set up) to the local subnet we are remotely hooking up, and likewise changed the gateway to match the new subnet. The only problem is that since doing this, I am unable to establish a connection to the internet. I can ping the server and the corresponding gateway and DNS server, but cannot get out to the internet. We do have a dumb (non-programmable) switch connected that receives both the internet and private-network links and distributes them (or should do so) to about 5 other computers. Bottom line: I cannot currently connect to the internet and am wondering what could be causing this. It is likely something very obvious, and pardon me for being vaguer than I probably should be, but I could use some help resolving this. Thanks for any help!

    Read the article

  • Iptables and system-config-firewall

    - by nivde92
    I had a set of netfilter rules set up with iptables, but someone else told me to use system-config-firewall to add a rule for sharing files with Windows (Samba). This rewrote the iptables rules file and I lost my own custom rules. I have a backup copy, but am having trouble restoring them. Edit: the server is CentOS. I already tried to restore the rules with

        iptables-restore < /root/working.iptables.rules

    but for some reason the rules don't change.

    What are you trying to do? Trying to restore the iptables rules that I have in a backup file.
    What have you tried in order to make it happen? I've tried to modify the iptables file with vim, since the iptables-restore command was no help.
    What results did you expect? To get the old rules back.
    What actually happened? Nothing. When I run the command or edit the file by hand, the file doesn't change at all. Maybe something else is overwriting it.
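
    On CentOS the rules loaded in the kernel and the file /etc/sysconfig/iptables are separate things, which may be why edits appear to vanish. A sketch of restoring the backup into the running kernel and then persisting it, assuming the backup really is in iptables-save format:

        # Load the backed-up rules into the running kernel
        iptables-restore < /root/working.iptables.rules
        iptables -L -n -v          # verify they are actually active now

        # Persist them so the iptables init script reloads them on boot,
        # overwriting whatever system-config-firewall wrote
        service iptables save      # writes /etc/sysconfig/iptables
        chkconfig iptables on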

    Read the article

  • Did Windows 7 Startup Repair trash My Documents?

    - by Metaphile
    Earlier today, I rebooted my computer. Partway through the boot process, it shut down suddenly. When I tried again, I was prompted to run Startup Repair, and I did. Afterwards, my computer booted normally and everything seemed to be in order. Then I noticed that my My Documents folder contained a mix of old and new files. On closer inspection, it appears that Windows has reverted my system to a previous state. Two things puzzle me:

    1) According to Microsoft, "System Restore does not affect personal files, such as e-mail, documents, or photos [...]", yet many of my personal files have been affected.
    2) Why were some things reverted, but not others? I had recently reorganized a bunch of files in My Documents. The reverted directory structure seems to be a hybrid of old and new, with a lot of new stuff missing. It's hard to say for sure, but it looks like the stuff that's missing would have been in conflict (two folders with the same name, for example), and Windows favored the old stuff.

    Is this normal behavior for Startup Repair/System Restore - to modify personal files, I mean? Is there a pattern to the mess it's made of My Documents?

    Read the article

  • Computer Turns on Briefly then right back off again.

    - by goddamnyouryan
    So yesterday I came home from work and went to turn my computer on. It turned on for about 5 seconds, then promptly turned right back off again, before I ever saw anything on the screen. I tried again - same result. After several attempts, I've found that the length of time it stays on varies: after trying multiple times in a row, it only stays on for about 3 seconds, but if I let it rest for a bit it will sometimes stay on for up to a minute (though it never boots; the screen stays black the whole time). I'm not sure what is causing this. I built this computer a little more than 2 years ago and this is the first issue I have ever had with it. I did all the usual checks:

    - It's not the power switch.
    - The capacitors on the motherboard all seem to be in working order.
    - The PSU seems to be fine, as it lights up, the fan spins, and it will sometimes stay on for about a minute.

    My hope is that the thermal paste on the CPU has degraded and just needs to be re-applied. Does that seem like a reasonable assumption? I'm going to tear the thing apart and do a minimum system build when I get home, but any heads-up as to what I should be looking for would be much appreciated. Any thoughts?

    Read the article

  • Help with a backup scheme for Backup Exec 12.5

    - by Jemartin
    I'm in the process of implementing a new backup scheme, and I would say that I'm kind of new to this, so here's my question. I'm currently using Backup Exec 12.5 on Windows Server 2008 w/Hyper-V and an IBM ADIC Scalar 24. I currently back up our mail server, SQL DB, board server (Red Hat Linux), FTP, etc. to a near-line store that is local on our SAN; the dailies go there, as well as fulls. I would like to start a weekly full to tape on Saturdays; it takes about 2-3 days to complete the entire full to tape because we also back up from our co-lo. I have read up on the father/son rotation, but here's my issue with it: I don't use tapes every day, only for the weekly full to tape. So if there are 4 weeks in a month, would I rotate in this order: June WK1 = 7 tapes, June WK2 = 7 tapes, June WK3 = 7 tapes, June WK4 = 7 tapes, with WK4 being the last set for the month of June and used as the month tapes? Then for the month of July, would WK1 reuse June's WK1 tapes, WK2 reuse June's WK2 tapes, and WK4 reuse June's WK4 tapes for the month - or would I use a set of new tapes for the last week in July? All tapes are being taken off site as well.

    Read the article

  • Intermittent 403 errors when using allow to limit access to url with both explicit IP and SetEnvIf

    - by rbieber
    We are running Apache 2.2.22 in a Solaris 10 environment. We have a specific URL that we want to limit access to by IP. We recently implemented a CDN and now have the added complexity that the IPs the requests appear to come from are actually the CDN servers and not the ultimate end user. In case we need to back the CDN out, we want to handle both the case where the CDN is forwarding the request and the case where the ultimate client is sending the request directly. The CDN sends the end user's IP address in an HTTP header (for this scenario that header is called "User-IP"). Here is the configuration that we have put in place:

        SetEnvIf User-IP (\d+\.\d+\.\d+\.\d+) REAL_USER_IP=$1
        SetEnvIf REAL_USER_IP "(10\.1\.2\.3|192\.168\..+)" access_allowed=1

        <Location /uri/>
            Order deny,allow
            Allow from 10.1.2.3 192.168.
            Allow from env=access_allowed
            Deny from all
        </Location>

    This seems to work fine for a time; however, at some point the web server starts serving 403 errors to the end user, so for some reason it is restricting access. The odd thing is that bouncing the web server seems to resolve the issue, but only for a time, and then the behavior comes back. It might also be worth noting that this URL is delegated to a JBoss server via mod_jk. The denial of access is, however, confirmed to be at the Apache layer, and the issue only seems to happen after the server has been running for some time.
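
    One low-risk way to narrow this down is to log, for each request, the raw header and the two derived variables, so that when the 403s start you can see whether the SetEnvIf match stopped firing or the header stopped arriving. This is only a diagnostic sketch and the log path is an illustrative choice:

        # Log the raw header and the derived environment variables per request
        LogFormat "%h %t \"%r\" %>s user-ip=%{User-IP}i real=%{REAL_USER_IP}e allowed=%{access_allowed}e" ipdebug
        CustomLog /var/apache/logs/ip-debug.log ipdebug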

    Read the article

  • Is it possible (and if so, how) to dump two concatenated disks onto a new disk using dd?

    - by pedromarce
    Hi. I have a LaCie enclosure with 2 500 GB disks set up as a single 1 TB drive; the only partition, spanning the whole drive, is HFS+ journaled. The controller in the enclosure has died, so the drive refuses to mount anymore. I was able to remove the two disks from the enclosure, connect them over USB, and use a program called R-Studio (a RAID recovery program) to check that the enclosure's controller had the disks concatenated (not striped), and by configuring that option in R-Studio I could get all the information back. But rather than buying an R-Studio license for a single use, I would prefer to buy a new 1 TB disk and write all the information from those two disks onto it. I can use Mac or Linux machines to do it, and I think it should be possible to use the dd command on Linux to concatenate those two drives onto the new one in the right order, get it working again on the new disk, and then reformat the old ones - but I am not sure. So, is it possible in this scenario to write both disks onto a new one using dd? Any hints on how the command would look? Thanks.
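
    A sketch of the concatenation, assuming the old disks show up as /dev/sdb and /dev/sdc (in that order) and the new disk as /dev/sdd; device names will differ, getting the order or the target wrong is destructive, and the target must be at least as large as the two sources combined, so double-check everything with fdisk -l first:

        # Simplest form: stream both members, in order, onto the new disk
        cat /dev/sdb /dev/sdc | dd of=/dev/sdd bs=4M

        # Alternatively, with two dd runs, the second must start writing
        # exactly where the first disk ended (seek is in 512-byte sectors
        # because obs=512, and blockdev --getsz reports 512-byte sectors):
        dd if=/dev/sdb of=/dev/sdd bs=4M
        dd if=/dev/sdc of=/dev/sdd ibs=4M obs=512 seek=$(blockdev --getsz /dev/sdb)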

    Read the article

  • Excel 2013: VLookup for cells that share common characters within cell but are both surrounded by other non-matching text

    - by Kylie Z
    I am pulling information from 2 different databases. The databases use different naming protocols for the exact same item/specified placement, but the names always have certain components in common. The length of these names can vary within each database (see the pic below), so I don't think counting characters would help. I need a formula (probably some VLOOKUP/MATCH/INDEX combination) to pair up the names from the 2nd database with the names from the 1st database and then place them in the adjacent column (B2) on Sheet1. Until now I've had to match, copy, and paste the pairs manually from one sheet to the other, and it takes FOREVER. Any help would be much appreciated!!!

    For example:

        Database1 name in Sheet1, A2:  728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13
        Database2 name in Sheet2, A13: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS728X90_728X90_DFA
        Common factors: "ROSMSNAUTOSMASSACHUSETTS" and "728X90"

    Therefore A2 and A13 need to be paired.

    In some cases, Database1 and Database2 will have a common name component but the sizing will be different. They need to have BOTH components in common in order to be paired, so I would NOT want the example below to pair up:

        Database1 name in Sheet1, A2:  728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13
        Database2 name in Sheet2, A12: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS300X250_300X250_DFA
        Common factor: only "ROSMSNAUTOSMASSACHUSETTS" matches; "728x90" is not equal to "300X250", so the sizing is different and they should not be paired.

    Read the article
