Search Results

Search found 20044 results on 802 pages for 'software monkey'.

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "Top" is not helpful for this since its calculations for RES and VIRT are inconsistent. Instead I am matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up "bytes" for all of the segments belonging to the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running the check as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is this only shared memory (and not "un-shared" memory)? If this approach is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
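
    A minimal sketch of the per-userid totals, assuming the usual /proc/sysvipc/shm column layout where size is the 4th field and uid the 8th (worth verifying against the header line on SLES 10):

        # sum segment sizes per owning uid; NR > 1 skips the header row
        awk 'NR > 1 { total[$8] += $4 }
             END { for (u in total) printf "uid %s: %.1f MB\n", u, total[u]/1048576 }' /proc/sysvipc/shm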

  • Isn't a hidden volume used when encrypting a drive with TrueCrypt detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it to be nothing short of invaluable for securing data. Since TrueCrypt is relatively well-known free, open-source software, I would have thought it would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden volume encryption. There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine): "TrueCrypt first attempts to decrypt the standard volume header using the entered password. If it fails, it loads the area of the volume where a hidden volume header can be stored (i.e. bytes 65536–131071, which contain solely random data when there is no hidden volume within the volume) to RAM and attempts to decrypt it using the entered password. Note that hidden volume headers cannot be identified, as they appear to consist entirely of random data." Whilst the hidden headers supposedly "cannot be identified", is it not possible, on encountering a volume encrypted with TrueCrypt, to determine at which offset the header was successfully decrypted, and from that determine whether you have decrypted the header for a standard volume or a hidden volume? That seems like a fundamental flaw in the header decryption implementation, if I'm reading this right -- or am I reading it wrong?

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN); the main PHP/MySQL database-driven website, which essentially acts like a static site; and a dynamic PHP/MySQL forum. It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files, and now I have a config like this:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_cache one;
            fastcgi_cache_key $scheme$host$request_uri;
            fastcgi_cache_valid 200 302 304 30m;
            fastcgi_cache_valid 301 1h;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - I haven't checked whether the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle:

        if ($request_uri ~* "^/forum/") {
            fastcgi_cache_bypass 1;
        }

        # or possibly this, if I'm able to cache pages for anonymous visitors
        if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
            fastcgi_cache_bypass 1;
        }

    Will that work fine, or is there a better way to achieve this?
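
    For what it's worth, nginx's if does not support && anyway, so the second variant would need a different shape. The usual if-free approach is a map at http level feeding fastcgi_cache_bypass/fastcgi_no_cache; a sketch for the block-the-whole-forum case (the logged-in-only variant can be built the same way with a second map on $http_cookie):

        # at http {} level: flag anything under /forum/
        map $request_uri $skip_cache {
            default     0;
            ~^/forum/   1;
        }

        # then inside the existing "location ~ \.php$" block:
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache     $skip_cache;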

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here? EDIT: I conducted some experiments.

    Write performance on the laptop: the laptop has an xfs filesystem with full disk encryption, using the aes-cbc-essiv:sha256 cipher mode with a 256-bit key. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs, with LVM on top of the RAID. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance: I ran iperf between the two machines. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
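
    Given those numbers, the laptop's 58.8 MB/s encrypted write speed already caps rsync well below wire speed, and rsync-over-ssh adds an ssh encryption stream on top of that, which is often CPU-bound. Two hedged experiments to isolate the bottleneck (cipher availability depends on your OpenSSH build, and the rsyncd module name is hypothetical):

        # cheaper ssh cipher, no ssh-level compression
        rsync -avP -e "ssh -c arcfour -o Compression=no" /src/ user@workstation:/dst/
        # bypass ssh entirely via a configured rsync daemon to measure the raw rsync path
        rsync -avP rsync://workstation/module/ /dst/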

  • Problems with USB devices when using VDR

    - by emmsinator
    Hey guys, I'm using VDR on vSphere 4. It works successfully; I've already backed up several VMs with VDR and I like it very much. But now we have a problem. We have 2 VMs using a USB device server with a stick plugged in, which is definitely needed by these 2 VMs for licensing. Every time I start the backup process, the VMs lose communication with the USB server and its stick after the snapshot is built, while still online. Because of that, the software on these VMs can't work correctly, and I have to restart both machines to solve the problem. This is bad for an automatic backup. Does VDR have a special function for such cases, or is something like this already known? It would be no problem to shut down the servers for building snapshots on Saturday or Sunday. Can VDR initiate a shutdown before starting the backup process? Otherwise I must try to use scripts, but that wouldn't be so nice. Thanks a lot for your help.
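
    As far as I know VDR itself has no pre-backup shutdown hook, but the snapshot quiesce step does run in-guest scripts via VMware Tools, which could stop the license-dependent software cleanly around the snapshot. A heavily hedged sketch - verify that your Tools version honors pre-freeze-script.bat/post-thaw-script.bat in the Windows directory, and the service name here is hypothetical:

        rem C:\Windows\pre-freeze-script.bat - runs just before the snapshot
        net stop "LicenseClientService"

        rem C:\Windows\post-thaw-script.bat - runs after the snapshot is taken
        net start "LicenseClientService"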

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH; for example, I sometimes need to access my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Well, back to the subject: I can access my home computer from my phone (SSH, ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I tried so far:

        export http_proxy=http://user:pass@proxyip:8080
        echo "user:pass" > ~/.corkscrew-auth
        echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config
        ssh 82.23.34.56 -l me -p 80

        Proxy could not open connnection to 82.23.34.56: Forbidden
        ssh_exchange_identification: Connection closed by remote host

    (Same without -p 80.) Without corkscrew:

        ssh: connect to host 82.23.34.56 port 80: Connection timed out
        ssh: connect to host 82.23.34.56 port 22: Connection timed out

    Any other idea?
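
    One avenue not yet tried: restrictive proxies typically allow CONNECT only to port 443, so reconfiguring the home sshd (or the router redirect) to also listen on 443 and pointing corkscrew there often gets through where 22 and 80 are refused. A sketch of ~/.ssh/config reusing the details from the question:

        # assumes sshd at home also listens on 443 (add "Port 443" to sshd_config,
        # or redirect 443 -> 22 on the router)
        Host home
            HostName 82.23.34.56
            Port 443
            User me
            ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth

    After which a plain "ssh home" should either connect or at least fail with a different, more informative error.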

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question, but I haven't found a specific answer to my situation. As I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum, or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server; I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images, and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances on Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
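
    On the nginx + gunicorn point, the moving parts are small enough to sketch; names and paths below are hypothetical, and the exact gunicorn invocation depends on your Django version:

        # start gunicorn with a few workers on a unix socket (project name hypothetical)
        gunicorn --workers 3 --bind unix:/tmp/gunicorn.sock forumproject.wsgi:application

        # nginx side: an upstream pointing at that socket
        upstream forum_app {
            server unix:/tmp/gunicorn.sock;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://forum_app;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    Gunicorn's documented rule of thumb for workers is 2 x cores + 1; a handful of workers for a site this size should fit in 256MB, though only load testing will confirm it.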

  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD Groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD Group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD Group to mass assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this? A) Is it possible to use dynamically-updated AD Groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?) B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint Groups or AD Groups are the way to go. My main concern is dynamic updating. C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for? A big thanks to anyone who can help me out!!
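
    For what it's worth, the AD side of this can at least be bulk-populated from the Title attribute with the built-in ds tools; a sketch from a domain controller or a box with the admin tools installed (the group DN is hypothetical, and note this is a one-shot populate that would need re-running on a schedule, not a truly dynamic group):

        rem Add every user whose Title is "Manager" to the Managers group
        dsquery * domainroot -filter "(&(objectCategory=person)(title=Manager))" -limit 0 | dsmod group "CN=Managers,OU=Groups,DC=example,DC=com" -addmbr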

  • In need of assistance for recovering a lost partition

    - by Tek
    The program that has worked best for me so far is Active@ Partition Recovery. I'm so close, but yet so far from recovering my data. Okay, so here's what happens: in the following screenshot (I blanked out a folder and filename with profanity in case some of you guys are at work :P), it detects the partition I accidentally deleted, with 100% of my data listed. Of course, I didn't write ANY data to that drive after I deleted the partition. But when I click "recover" and it finishes the recovery process, the partition that was just recovered shows up EMPTY in Windows. The program seems to be able to see my lost files, but when I recover the partition, Windows doesn't seem to think the same =( Things I tried: I ran chkdsk /r /f after I recovered the partition; apparently it couldn't find any errors. I tried other software like TestDisk to recover the partition, but they all act similarly to Windows in that they detect the (missing) partition, but when I browse it there are no files. The partition is there, along with all the file and data information. The sector information is also in the screenshot - is there any way I can use this to my advantage in recovering my data? Other information: dual boot Win8 / Ubuntu 12.10 x64, 1TB internal desktop drive, GPT layout, NTFS-formatted drive, 64K allocation size.

  • Is my motherboard failing, or is there some other issue?

    - by ThatGuy
    So, several months ago I put together my own desktop PC and set up a dual boot of Windows and Ubuntu. Recently, without changing any settings or installing anything new, the wifi stopped working in Windows (I use a wifi adapter). It said it was connected, network settings showed that it was working, and running troubleshooting had no results. My internet still works on every other device. I found that removing the adapter from the motherboard and plugging it back in was the only thing that fixed the problem; reinstalling the wifi drivers did not help. I purchased a new wifi adapter, but the problem persists. More recently, I had a much more discouraging development. Sometimes, turning on the computer results in a boot loop: BIOS never starts. Instead, the monitor turns on as if it got a signal, then immediately turns off. This loops on its own indefinitely until I hold down power, hard reset it, and try again. Sometimes it works, sometimes it doesn't. I haven't tested much on the Ubuntu side; it appears that wifi works at least some of the time, but since I've had issues just getting to BIOS, I'm not confident the issue is on the software side. I've also noticed issues with some of the USB ports no longer working, but that seems to come and go. Finally, as of a few minutes ago, I booted to Windows to discover that everything was running very slowly. Slow is a relative word here, but I have a Samsung 840 Pro SSD and I'm used to applications running nigh instantly, and it was a solid 3 minutes before any of my applications would load. Anyway, my question is this: is it likely that my motherboard is failing? Either way, what steps can I take to try and pin down the problem and figure out what to do?

  • All computers on network get stuck waiting for some sites indefinitely

    - by zacaj
    This happens across three computers running Windows 7 and Ubuntu, in Firefox, Opera, and Chrome (all latest versions). I am connected to the internet through a Verizon wireless USB modem. When I try to open some web pages they never finish loading (and usually never even show anything); the status bar at the bottom of the browser displays "Waiting for X". The servers it gets stuck on include: platform.twitter.com, s7.addthis.com, connect.facebook.net, ajax.googleapis.com, 2mdn.net. I've been getting away with just blocking them in AdBlock up until now, but the last two have been causing problems: there are some sites which require googleapis.com to load correctly and some that won't ever load unless it's blocked, and eBay requires access to 2mdn.net to load pictures. On top of this, it's getting really annoying having to update AdBlock across all these computers whenever a new site pops up. I'm hoping there's some easier way to fix this? The different sites causing the freeze indicate to me that it's either a problem on my end (somehow?) or some server-side software that got updated with a new bug?
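
    One low-maintenance alternative to per-browser AdBlock rules is null-routing the hosts you always want blocked in the OS hosts file, which covers every browser on the machine at once. A sketch with the entries from the question that you do want blocked everywhere (leave googleapis.com out, since some sites need it):

        # append to /etc/hosts on Ubuntu, or to
        # C:\Windows\System32\drivers\etc\hosts on Windows 7 (edit as administrator)
        0.0.0.0 platform.twitter.com
        0.0.0.0 s7.addthis.com
        0.0.0.0 connect.facebook.net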

  • Apache APC (Windows): Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more, but I am not sure where I could do something. First, here are the stats after 1 week of running with the current configuration:

        General Cache Information
        APC Version         3.1.9
        PHP Version         5.4.4
        APC Host            XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
        Server Software     Apache
        Shared Memory       1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking)
        Start Time          2014/06/08 05:00:00
        Uptime              6 days, 11 hours and 55 minutes
        File Upload Support 1

        Host Status Diagrams
        Memory Usage        Free: 99.7 MBytes (77.9%)   Used: 28.3 MBytes (22.1%)
        Hits & Misses       Hits: 510818 (99.9%)        Misses: 608 (0.1%)
        Fragmentation       0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments)

        File Cache Information
        Cached Files        693 (35.4 MBytes)
        Hits                5143359
        Misses              1087
        Request Rate        13.24 cache requests/second (hits, misses)
        Hit Rate            13.24 cache requests/second
        Miss Rate           0.00 cache requests/second
        Insert Rate         0.01 cache requests/second
        Cache full count    0

        User Cache Information
        Cached Variables    0 (0.0 Bytes)
        Hits                0
        Misses              0
        Request Rate        0.00 cache requests/second (hits, misses)
        Hit Rate            0.00 cache requests/second
        Miss Rate           0.00 cache requests/second
        Insert Rate         0.00 cache requests/second
        Cache full count    0

        Runtime Settings
        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 2M
        apc.num_files_hint 7000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.serializer default
        apc.shm_segments 1
        apc.shm_size 128M
        apc.shm_strings_buffer 4M
        apc.slam_defense 0
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 7200
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 7200
        apc.write_lock 1
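
    Reading those numbers: roughly 78% of the 128M segment is free, misses are negligible, and the user cache is completely unused. A hedged starting point for php.ini, values being assumptions to test one at a time rather than recommendations:

        ; shrink the segment - only ~28 MB of 128 MB is actually in use
        apc.shm_size = 64M
        ; skip per-request file stat() checks; safe only if deployed files change
        ; exclusively via an explicit cache clear or Apache restart
        apc.stat = 0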

  • Strange boot problems on 6-month-old setup

    - by Balefire
    I've already exhausted my knowledge on this one, so forgive me if this post is a bit long. I built a computer 6 months ago for my wife and it worked fine until last week. Then it randomly shut down and would lock up on the boot screen while trying to boot. I cleared CMOS and it allowed me to do startup recovery, but it "failed to fix the issue", so I reinstalled Windows on the HD (moving the old install to windows_old). It worked, so I started installing drivers again, but when I restarted to finalize the installations it locked up again. This time, I took the hard drive, hooked it up to my computer, backed up all her files, and then formatted the hard drive before reinstalling it (again I had to clear CMOS to let me boot from disk). It installed Windows, I installed drivers, and it worked for a few hours, but then died during startup again. So I got a new HD, cleared CMOS, and installed clean again, with the same result as the time before: it worked for a few hours, installed Windows updates, then crashed on the 3rd or 4th boot. I decided next to reinstall and then go online to see if there were any updates for the BIOS or the motherboard drivers, but now I can't even get it to bring up the boot menu, so now I'm just left wondering: was it the motherboard, or is it the CPU or the RAM? The problem was strangely intermittent, so I thought it had to be a software issue, since a hardware issue would ALWAYS fail to boot, right? But now it seems to be a hardware issue, because it's not bringing up anything. Any suggestions? System: Windows 7 64-bit, Gigabyte 970A-DS3 motherboard, AMD Phenom II X4 955 Deneb 3.2GHz quad-core CPU, GeForce GT 430 (Fermi) 1GB video card, 500W PSU, 2 x G.SKILL Ripjaws X Series 4GB 240-pin DDR3 1600 RAM.

  • Workaround for Dell "Power Supply Not Recognised" issue

    - by Haedrian
    So, I have a Dell Inspiron and the power supply port appears to be damaged. Basically, when I plug it in I get a nice popup telling me that it couldn't detect that it's a Dell power supply, so it won't charge the battery and underclocks the system. It still works for other purposes (that is, giving power). I thought it was the actual power supply cable, so I bought a new one; that worked for a while, provided I inserted it at JUST THE RIGHT angle. But now that's not working anymore, so I assume it's the part which connects to the computer. The battery charging I can live without; the underclocking I can't, so I'd like a way around this issue. Things I've tried: updating the BIOS, replacing the power supply cable, inserting it at different angles, turning it off and on again, swearing at it, and twisting it while inserting it. So, is there a workaround somehow? I'd like to avoid taking out my soldering kit and risking permanently damaging expensive equipment, if that's all right. I'm hoping for a software solution. Added: the exact model is a Dell Inspiron N5010.

  • RAID5 issue after replacing motherboard and upgrading firmware

    - by 8steve8
    OK, so I've had a 4x2TB (Samsung HD204UI with the firmware patch) RAID5 array working normally for about a month. It was on an H57 Gigabyte motherboard using Intel RAID, with Windows 7 x64. Today I got an Intel H67 motherboard, so I upgraded the Intel RAID drivers from 9.6.0.1014 to 10.1.0.1008; I'm not sure if I checked after a reboot, but it caused no problems. I swapped in the new H67 motherboard, and my array status was "failed": 2 of the 4 drives listed themselves as members, while the other two listed themselves as non-members. I tried going back to the old H57 motherboard and downgrading the RAID drivers, but the issue remains. It's not port-dependent; 2 of the drives always come up as non-members regardless of what port or motherboard they are plugged into. This screenshot should show that the serial numbers match, which raises the question of why the software doesn't realize the drive is a member of the array. I'd like to know if anyone has experienced anything similar, and what I should do - can I force the drive to be recognized as a member (without wiping data)?

  • Best way to create and restore a Drive Image with Windows 7? [closed]

    - by jasondavis
    Possible duplicate: Want to create a system image. I am about to build a new PC. I am a Windows 7 user. For years now I have been wanting to install Windows and all my favorite software, music, etc., then make a drive IMAGE, and be able to come back 6 months later or WHENEVER I want to start fresh, completely format my drives, restore the IMAGE, and have all my settings, programs, etc. be just like when I created the original image. I know there are many ways to do this, but I have never done it 100% successfully, and I have about a week to figure out how to do it perfectly for when I build my new PC. I have heard good things about Acronis True Image in the past for doing what I describe; I tried using it, but the newer versions are overly complex and don't even seem to work the way I hoped. I also see that Windows 7 has some sort of drive image creator itself. Does the built-in Windows 7 image creator do what I am describing above - a complete drive image (Windows, all programs and settings) saved to an image file that can easily be restored to ANY hard drive in the future? Please share your experiences, tips, and ideas on how to achieve this the easiest and most reliable way.
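
    For reference, the built-in Windows 7 imaging (Backup and Restore -> "Create a system image") does capture the whole volume, installed programs and settings included, and it can also be driven from an elevated command prompt with wbadmin; the target drive letter below is hypothetical:

        rem image all critical volumes (C: plus the boot/system partitions) to E:
        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet
        rem restoring: boot the Windows 7 install/repair disc ->
        rem Repair your computer -> System Image Recovery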

  • Windows 7 TCP/IP network share guide - looking to resolve a failure to mount a Lacie network drive that works on XP, Linux, and Mac

    - by Rob
    Can anyone recommend a really good, readable Windows 7 TCP/IP network share guide, book, or reference? I want this because I cannot mount my Lacie 2big Ethernet network drive in Windows 7 (32-bit Home), but I can mount it in Windows XP Home 32-bit, Ubuntu Linux 10.04, and Apple Mac OS X. The drive is mounted via the accompanying Lacie Ethernet Agent on XP (which I believe uses the "Bonjour" protocol); on Mac and Linux it works without further software. Another Super User user has the same problem, but no answer: Trouble accessing network drives in Windows 7. I hope my take on the question shows a better willingness to investigate and do some digging - and therefore invites some suggestions to help with this. The drive is detected by Windows 7 (i.e. a speech bubble "network drive found"), but on trying to open an Explorer window, the window remains blank with the Windows busy pointer. I'd prefer not to reinstall Windows 7 to see if that cures the problem; I'd rather understand what is happening or not happening, perhaps even compare differences with Windows XP. Suggestions, please, for such guides or even the original problem itself. Update/Edit: rewrote the question more comprehensively here: http://superuser.com/questions/304209/looking-for-definitive-answer-to-accessing-a-network-share-via-windows-7-home-and
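
    One hedged guess worth testing before any reinstall: Windows 7 defaults to NTLMv2-only authentication, which some older NAS firmware reportedly cannot negotiate, producing exactly this detected-but-hangs behaviour. The setting can be relaxed via Local Security Policy ("Network security: LAN Manager authentication level") or the registry:

        rem Value 1 = send LM & NTLM, use NTLMv2 session security if negotiated
        rem (the Windows 7 default is 3, NTLMv2 only). Back up the key first.
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f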

  • Physical Debian to VMware: vmware-converter, dd image, or otherwise?

    - by Dabu
    We have two Debian Lenny production machines, both running larger commercial websites. Now these machines need to be moved, and in the process they need to be virtualized to VMware ESX. If you believe the information on the internet, there are several ways to accomplish this. The easiest for us would be to use our weekly dd backup, in which the whole disk is imaged; however, I have no experience with this kind of technology or whether it is really possible. The second-best way would be via an application on the source machine that virtualizes it and generates an ESX-compatible VM. However, that software is beta and unsupported, and after installation nothing really works (the /etc/init.d/vmware-converter script doesn't actually do anything; start and stop reply with success messages, yet ps shows that there are no new processes). The worst way, with the most work, would be to install a new machine and set it up manually, copying files and databases as needed. This part is clear in its execution, and my question(s) do not touch it. Is my first way possible? Has anyone done this yet, or better, has a page with instructions? Or is there a help page that explains how to correctly install, run, and use the vmware-converter tool on a Debian installation (it's possible that I did something wrong during installation already)? Thank you.
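
    On the first way: a raw dd image can be converted offline into a VMDK, so the weekly-backup route is at least plausible. A sketch with hypothetical filenames; the usual post-conversion caveats apply (rebuild the initrd for the new SCSI controller, fix /etc/fstab device names):

        # convert the raw dd image of the whole disk into a VMDK
        qemu-img convert -f raw -O vmdk /backup/server1.dd server1.vmdk
        # ESX may not accept that subformat directly; importing on the ESX host
        # with vmkfstools clones it into an ESX-native disk
        vmkfstools -i server1.vmdk /vmfs/volumes/datastore1/server1/server1.vmdk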

  • Windows 7 not booting after failed SRT (SSD caching) install

    - by david
    This is a fairly new computer, only about a month old: i7 2700K, Z68 motherboard, a 1.5TB WD Black HD, and a 128GB Crucial M4 SSD. I followed the instructions for setting up SSD caching: the SATA controller was set to RAID, I installed the Intel software and enabled acceleration, and it said everything went fine. But when I went to reboot, I received the lovely "Reboot and Select proper Boot device" error message. I checked the BIOS, and it was booting from the correct HD (I tried the only other option anyway just in case - the ~50-odd GB of unformatted space left on the SSD). After that I entered the RAID utility (Ctrl-I at boot), removed the acceleration, and deleted the RAID array (because the SSD was being used as a cache, this was non-destructive). Still no boot. So I reinstalled Win7 directly on the SSD, booted, and checked the HDD to make sure it hadn't been wiped. It hadn't; all the files were still there, including all the Windows stuff. I backed up my data to an external drive just in case, but I'd really like to get this install booting again. I trawled the web a bit and have tried entering recovery mode and using bootrec.exe and bootsect.exe to fix it, but to be honest I'm not sure what I'm doing with those. My question is basically: how do I make my hard drive bootable again?
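
    For reference, the usual repair sequence from the Windows 7 disc's command prompt (System Recovery Options) looks like this; drive letters inside the recovery environment often differ, so the sketch assumes the old install shows up as D: there:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd
        rem if rebuildbcd cannot see the install, recreate the boot files directly:
        bcdboot D:\Windows /s C: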

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server with Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night - how often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks.
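
    On the fsck question: the boot-time check is driven by mount-count and interval settings stored in the ext filesystem itself, so it is scheduled with tune2fs rather than cron (a full check cannot safely run on a filesystem mounted read-write). A sketch, assuming the root filesystem is /dev/sda1:

        # show the current schedule
        tune2fs -l /dev/sda1 | grep -i -e 'mount count' -e 'check'
        # check every 50 mounts or 30 days, whichever comes first
        tune2fs -c 50 -i 30d /dev/sda1
        # or request one deliberate check on the next (planned) reboot
        touch /forcefsck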

  • What can an inexperienced admin expect after a server setup completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is his very first time being the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, MTA & MDA software, configured VirtualHosts properly (facts, because he calls himself admin), done user management and various configuration settings with respect to security recommendations, and... everything is fine for now. For now. If you were directing a horror movie about the server admin mentioned above, what would you make up as the boogeyman that shows up and starts to pursue him? Omitting hardware disaster cases, for which one cannot do anything 'from remote', what are the most common causes of significant server (or part-of-server, or server-related) failure when managed by an inexperienced admin? I have in mind something that newbie admins very often miss and that leads to the later intervention of someone with experience. Might that be an uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that? The newbie admin currently only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what's probably going to happen to his server over time.

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP, and I am trying to run the Arduino IDE 0022. I double-click on arduino.exe, it waits about 30 seconds on the startup splash screen, and then it gives me this error:

        Launch 4j: an error occurred while starting the application

    My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting them with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer, so something must still be trying to run after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried: different Arduino IDE versions, updating Java, and opening arduino.exe as Administrator. Nothing has worked. Anyone have any suggestions?

  • RDP suggesting a user name when connecting to a server

    - by Neolisk
    Prior to Event X, RDPing to Server 2003 always left the user name blank with "Log on to" enabled, so you could pick to which domain you would log in - for us it's either local or our domain. Since a recent Event X, a domain + user name is being suggested for every server, and it's not the most recently used user name. If you remove it manually from the RDP dialog, it's still pre-populated for you, and at the next available opportunity it returns to the General/User name option of the RDP dialog. So the user name field comes pre-populated and you cannot change to log in locally (only if you manually erase the domain specifier - everything before \) - the "Log on to" option is disabled by default. We did not make any changes to our domain or client machines, so I suspect some Windows update caused it (this being Event X). Interesting fact: it does not happen consistently on all machines, and some can log in to some servers fine, while other servers keep suggesting a default user name. What could that Event X be, and is there a way to fix it? EDIT: I tried this - How to clear remote desktop connections history - and specifically this part of it:

        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\UsernameHint" /f

    The problem still persists.
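
    Besides UsernameHint, the suggested name can come from the per-server MRU values and from the hidden Default.rdp in Documents; clearing all three is worth a try (a sketch - note this wipes the saved connection history):

        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Default" /va /f
        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Servers" /f
        del /ah "%USERPROFILE%\Documents\Default.rdp"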

  • Should I buy more RAM for my computer or a better CPU?

    - by Reijer The Coder
    I have recently bought a new laptop, an ASUS N56VJ-S4149H. I bought it to use in all possible ways I could want: gaming, downloading and streaming movies, browsing the web, and programming software. So far it has been running great, but one thing annoys me a little bit. I frequently play Mount&Blade: Warband - it's a great game and you should check it out. Everything runs smoothly, but only when I turn down the 'Texture Detail'; otherwise it will lag a lot and I can't really play. Now my question is: is this a problem of not enough RAM? At first I was really sure it was, but after thinking about it for a while I'm not anymore. My computer only has 4 Gigabytes of RAM and there is still a slot open, so will it improve when I buy another 8 Gigabytes? EDIT: It has an Nvidia GeForce GT 635M and an Intel Core i5 3230M CPU at 2.6 GHz. Help and answers are really appreciated. Thanks to all.

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With HTTPS and KeepAlive on, I can get over 3000 requests per second; with HTTPS and KeepAlive off, I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for HTTPS:

        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
        Document Path:          /hello.html
        Document Length:        29 bytes
        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347  74.3    333     716
        Processing:     0   14  24.0      1     166
        Waiting:        0   11  21.6      0     165
        Total:        191  361  80.8    345     716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
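
    A hedged reading of that ab output: with KeepAlive off, every request pays a full TLS handshake, and the negotiated suite (DHE-RSA-AES256-SHA with 4096-bit DH parameters, per the SSL/TLS Protocol line) makes each handshake very expensive - the ~347 ms mean connect time is consistent with the DHE computation dominating. Two mod_ssl settings worth testing (a sketch, not a hardening recommendation):

        # prefer plain RSA key exchange so each fresh connection skips the DHE math
        SSLCipherSuite AES256-SHA:HIGH:!ADH
        SSLHonorCipherOrder on
        # ensure the inter-process session cache is on so resumed sessions
        # get abbreviated handshakes
        SSLSessionCache shmcb:/var/run/ssl_scache(512000)
        SSLSessionCacheTimeout 300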
