Search Results

Search found 10023 results on 401 pages for 'manage processes'.

  • esxi 5 monitoring

    - by user134880
    I'm new to the ESXi/VMware world and have a few questions about monitoring an ESXi host. I'm currently using a trial version of ESXi 5.1. I can see performance charts in the vSphere client. Cool. But I want to export this raw data (raw meaning CSV, TXT, or some other format I can parse later) to another server, so that I can parse the data later and create custom charts. (Please do not advise trying vCenter; I need custom charts etc.) I could run esxtop in batch mode and use this data... But... How do the vSphere client performance charts work? Where does the client get the data for its charts? If I use esxtop in batch mode it will add extra load to the server. Is it possible to use the same source the vSphere client uses for its charts? There is a /var/lib/vmware/hostd/stats/hostAgentStats-20.stats file. It looks like a binary format, and as I understand it that is exactly the data I need. Any ideas how to parse it? Thanks! PS: maybe someone knows where to find info, if there is any, about the processes running on an ESXi host?
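
    If batch-mode esxtop turns out to be the most practical route, a minimal sketch of the collection step might look like the following (the collector hostname and paths are placeholders, and the sample counts are arbitrary):

        # on the ESXi host: -b = batch (CSV to stdout), -d 10 = sample every 10 s, -n 360 = one hour of samples
        esxtop -b -d 10 -n 360 > /tmp/esxtop-batch.csv

        # from the collection server: pull the CSV off the host for parsing and charting
        scp root@esxi-host.example.com:/tmp/esxtop-batch.csv /var/lib/esxi-stats/

    The batch CSV header is extremely wide, so the usual follow-up is to filter it down to the handful of counters you actually chart.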

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for how to generate the x509 certificate and private key, and how that relates to the various security files the guide creates. Update: I have found a couple of links that describe some of the steps. These steps seem to work, however I'm not sure if this is secure and the best way to do it: 1) Create a private key openssl genrsa -out my-private-key.pem 2048 2) Create x.509 cert openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365 Hit enter to accept all of the defaults. Then, from the IAM Dashboard, under Users, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK and it should be accepted. One step that is discussed, but not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
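
    For reference, a sketch of the same steps with a passphrase added at generation time and a decrypted copy kept for the tools (file names are only examples):

        # generate the key encrypted with AES-256; openssl prompts for a passphrase
        openssl genrsa -aes256 -out my-private-key.pem 2048

        # self-signed X.509 certificate valid for one year
        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

        # the command-line tools cannot prompt for a passphrase, so keep a decrypted copy locked down by permissions
        openssl rsa -in my-private-key.pem -out my-private-key-nopass.pem
        chmod 600 my-private-key-nopass.pem

    Skipping the passphrase does not make the certificate itself unsafe; it only means anyone who obtains the key file can use it, so file permissions (and rotating the certificate if the key is ever exposed) are what matter.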

  • Making one of the folders default in Apache

    - by OmerO
    Hello, The file & directory structure of my website is as follows: /Library/WebServer/mysite/joomla .. /Library/WebServer/mysite/wiki .. /Library/WebServer/mysite/forum .. /Library/WebServer/mysite/index.php As you see, there are various applications, each residing in a separate folder. Now, in order to define this structure, I have made this entry in the Apache http-vhosts.config file: ServerName mysite.com DocumentRoot "/Library/WebServer/mysite" And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific functionality: when someone visits mysite, he/she should automatically be directed to /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php). I don't want to achieve that functionality by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm because that causes time delays (because of the redirection, of course). But in this case, the only proper way of achieving it seems to be to set DocumentRoot this way: DocumentRoot "/Library/WebServer/mysite/joomla" But when I set it that way, the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around it, I put directives like: Alias /wiki /Library/WebServer/mysite/wiki .. Alias /forum /library/WebServer/mysite/forum and it actually did work the way I wanted. But... I still cannot use it that way because in this case I just couldn't manage to make the wiki use Short URLs (as described in link text). So, I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :) Can I do it in Apache? Is there any other way you might suggest? Thanks.
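
    One way this is commonly handled without moving DocumentRoot or adding a redirect page is an internal rewrite of just the bare root URL — a sketch only, assuming mod_rewrite is loaded, placed inside the vhost:

        DocumentRoot "/Library/WebServer/mysite"
        RewriteEngine On
        # only the bare "/" is rewritten, internally, so no client-visible redirect happens
        RewriteRule ^/$ /joomla/ [L]

    Because DocumentRoot stays at /Library/WebServer/mysite, the /wiki and /forum paths keep working without Alias directives, and the wiki's own short-URL rewrite rules are unaffected.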

  • Are my iptables secure?

    - by Patricia
    I have this in my rc.local on my new Ubuntu server: iptables -F iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --dport 9418 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --sport 9418 -m state --state ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --dport 5000 -m state --state NEW,ESTABLISHED -j ACCEPT # Heroku iptables -A INPUT -i eth0 -p tcp --sport 5000 -m state --state ESTABLISHED -j ACCEPT # Heroku iptables -A INPUT -p udp -s 74.207.242.5/32 --source-port 53 -d 0/0 --destination-port 1024:65535 -j ACCEPT iptables -A INPUT -p udp -s 74.207.241.5/32 --source-port 53 -d 0/0 --destination-port 1024:65535 -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT iptables -P INPUT DROP iptables -P FORWARD DROP 9418 is Git's port. 5000 is a port used to manage Heroku apps. And 74.207.242.5 and 74.207.241.5 are our DNS servers. Do you think that this is secure? Can you see any holes here? Update: Why is it important to block OUTPUT? This machine will be used only by me.
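
    On the OUTPUT question: with the rules above, the OUTPUT chain still defaults to ACCEPT, so blocking it is only worthwhile if you also add explicit accepts for the traffic the box itself originates. A sketch of the pieces commonly added before flipping the policy (the loopback rules and interface name are assumptions about this setup):

        # allow loopback traffic, which many local services rely on
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # allow outbound DNS queries to the two resolvers
        iptables -A OUTPUT -o eth0 -p udp -d 74.207.242.5 --dport 53 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp -d 74.207.241.5 --dport 53 -j ACCEPT

        # only once every outbound service is explicitly accepted:
        iptables -P OUTPUT DROP

    Locking OUTPUT down mainly limits what a compromised process on the box can phone home to; for a single-admin machine it is a judgment call.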

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server (physical box) running IIS 6.x to a Windows 2008 R2 Standard (VM) IIS 7.5 server. The application is a .NET Framework 2.0 application and is running under a 2.0 app pool. This site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element; it queries the site and can take up to 45 seconds to answer. When it does answer, the page(s) render instantly, but it's that initial request that's killing it. I see no errors or issues in the application, the Windows Event Viewer, or even the IIS logs, so I'm not sure where to start looking next. One change is that the app previously resided behind a PIX firewall and is now behind a larger network environment in a DMZ zone (and I believe NetScaler is also being used to manage the network). I do not have the rights/ability to look at the network itself, but I can contact the data center folks to look deeper into this; I just wanted to make sure it's not my application or IIS causing the slowdown. In summary: the .NET 2.0 application works great in IIS 6.x; moved to an IIS 7.5 server, it is now slow to respond, but when it does respond it renders pages instantly. Edit for solution: it turned out to be SOAP calls that were slowing the site down. In the new datacenter my application cannot make its SOAP calls, so they time out after 40-45 seconds or so. Now trying to find out if I can install a proxy server to redirect this...

  • Why is my apache2, mod_fcgid, php configuration causing 100% cpu usage?

    - by Scott Lundgren
    Page load makes a quick initial connection, then hangs about 10 seconds before the page renders. When the server load goes up I start watching top, and I see that both CPUs get pegged at times to 100% by between 4 and 8 php-cgi processes. My theory is that since I never see RAM usage go above 50%, Apache is able to handle the requests coming in, but is queueing them for PHP to process. What is wrong with my mod_fcgid/PHP configuration? RHEL 5.4 2 Xeon E5420s @ 2.50 GHz 4 GB RAM Apache 2.2.3 Timeout 30 KeepAlive On MaxKeepAliveRequests 0 KeepAliveTimeout 5 <IfModule worker.c> StartServers 2 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> mod_fcgid 2.2.10 LoadModule fcgid_module modules/mod_fcgid.so <IfModule !mod_fastcgi.c> AddHandler fcgid-script fcg fcgi fpl php </IfModule> SocketPath run/mod_fcgid SharememPath run/mod_fcgid/fcgid_shm DefaultInitEnv PHPRC "/etc/" FCGIWrapper /usr/bin/php-cgi .php MaxRequestsPerProcess 1500 MaxProcessCount 20 IPCCommTimeout 240 IdleTimeout 240 APC 3.0.19 extension = apc.so apc.enabled=1 apc.shm_segments=1 apc.optimization=0 apc.shm_size=32 apc.ttl=7200 APC cache is 43% used with a 99% hit rate
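
    Two mismatches worth ruling out in a setup like this are PHP recycling its children earlier than mod_fcgid expects, and an APC segment too small for the codebase (fragmentation forces constant recompilation, which shows up as php-cgi CPU). A sketch of the adjustments that are often tried — the values are illustrative, not tuned for this load:

        # mod_fcgid config: tell PHP itself to serve as many requests per child as mod_fcgid allows,
        # so children are not torn down and respawned mid-stream
        DefaultInitEnv PHP_FCGI_MAX_REQUESTS 1500
        MaxRequestsPerProcess 1500

        # php.ini / APC config: give the opcode cache more headroom than 32 MB
        apc.shm_size=128

    After a change like this, watching the APC fragmentation figure and the number of php-cgi processes in top is usually enough to tell whether it helped.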

  • split virtualization design based on environment or server role?

    - by Dan
    I'm setting up the server environment for a new software development group, which will include 4 test environments. These are web applications, so each environment will have an application server and a database server. I'm planning on buying two physical servers (e.g. 6-core CPU each with 12GB or so of RAM), and I'm thinking virtualization is appropriate here. With that in mind, I've thought of a couple ways that I could organize the virtualization strategy: - Separated by server role: Server 1 has all the application servers, each in their own guest VM. Server 2 has all the databases. OR - Separated by environment: Server 1 has a VM for two of the environments, with the VM containing both the app server and the database server. Server 2 would also contain two test environments, with the same style (app server and database in same VM). The advantages I see with all the app servers on one server and all the databases on another server is that I could probably be more efficient with the database server (one instance running multiple databases). But the other option seems easier to manage (archives/restorations would be contained in a single VM). Any recommendations? TIA.

  • Can many addon domains slow down cPanel or create problems?

    - by Marco Demaio
    I actually resell hosting plans by using one single cPanel account and many addon domains under it. Basically, for each new user I don't create a new cPanel account; I simply create a new ADDON domain and give him the necessary space. I know that this way the final user won't be able to manage his emails and won't be able to access all cPanel features, but that's ok. My only question is: "is there a limit on the number of cPanel addon domains that can be added?". I know my account allows me to add unlimited addon domains, but is there something that might happen that slows down cPanel or could create problems? I mean, are there any suggestions you could give me about the way I'm using cPanel, which might not be the usual way of using it? (For instance: "be aware that AWStats could run very slowly or crash", or "be aware that mails from different addon domains under the same cPanel account could create many problems", etc.) Many thanks!

  • prevent IE8 tabs from opening as a new "process in the taskbar"

    - by Nano8Blazex
    This may have been asked before too... But, anyways. I'm using Windows 7 Ultimate, and IE 8, and have the taskbar in icon view. I'm not sure how to explain this, but I'm amazed at how each tab in IE8 seems to act like a new "process" in the taskbar (as if each tab was a window). Like... each tab acts like a different window in the taskbar although they are actually running in the same window. Now when I use IE 8 it looks (in the taskbar) like there are 15 windows open when in fact the taskbar is simply showing the 15 tabs. More simply put, it's displaying a "stack" for all of the tabs, when I'd rather have the icon act like, for example, Firefox's, so that a stack is only shown for multiple windows. I know that they are meant to be running as separate processes to prevent crashing and such... but is there a way to disable this strange "taskbar" effect? I'd rather have the taskbar show the main window and not the tabs individually.

  • FreeBSD's ng_nat stopping pass the packets periodically

    - by Korjavin Ivan
    I have a FreeBSD router: #uname 9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013 It's a powerful computer with a lot of memory #top -S last pid: 45076; load averages: 1.54, 1.46, 1.29 up 0+21:13:28 19:23:46 84 processes: 2 running, 81 sleeping, 1 waiting CPU: 3.1% user, 0.0% nice, 32.1% system, 5.3% interrupt, 59.5% idle Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free Swap: 8192M Total, 8192M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 11 root 4 155 ki31 0K 64K RUN 3 71.4H 254.83% idle 13 root 4 -16 - 0K 64K sleep 0 101:52 103.03% ng_queue 0 root 14 -92 0 0K 224K - 2 229:44 16.55% kernel 12 root 17 -84 - 0K 272K WAIT 0 213:32 15.67% intr 40228 root 1 22 0 51060K 25084K select 0 20:27 1.66% snmpd 15052 root 1 52 0 104M 22204K select 2 4:36 0.98% mpd5 19 root 1 16 - 0K 16K syncer 1 0:48 0.20% syncer Its tasks are: NAT via ng_nat and a PPPoE server via mpd5. Traffic through - about 300 Mbit/s, about 40 kpps at peak. PPPoE sessions created - 350 max. ng_nat is configured by the script: /usr/sbin/ngctl -f- <<-EOF mkpeer ipfw: nat %s out name ipfw:%s %s connect ipfw: %s: %s in msg %s: setaliasaddr 1.1.%s There are 20 such ng_nat nodes, with about 150 clients. Sometimes, the traffic via nat stops. When this happens vmstat reports a lot of FAIL counts vmstat -z | grep -i netgraph ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP NetGraph items: 72, 10266, 1, 376,39178965, 0, 0 NetGraph data items: 72, 10266, 9, 10257,2327948820,2131611,4033 I tried increasing net.graph.maxdata=10240 and net.graph.maxalloc=10240, but this didn't help. It's a new problem (1-2 weeks). The configuration had been working well for about 5 months and no configuration changes were made leading up to the problems starting. In the last few weeks we have slightly increased traffic (from 270 to 300 Mbit/s) and added a few more PPPoE sessions (300-350). Please help me figure out how to find and solve this problem.
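
    One thing worth checking: net.graph.maxdata and net.graph.maxalloc are generally treated as boot-time tunables on FreeBSD, so setting them with sysctl at runtime may simply not take effect. A sketch of the loader.conf route (the values are only an example; a reboot is required):

        # /boot/loader.conf -- netgraph item pools are sized at boot
        net.graph.maxdata=65536
        net.graph.maxalloc=65536

        # after the reboot, confirm the new LIMIT and keep watching the FAIL column
        vmstat -z | grep -i netgraph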

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently end up running into problems with the archive.pst file that is generated. The two main problems we have are: The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could result in months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files. Without some sort of manual intervention, the archive will grow to enormous sizes. Although Outlook 2007 SP2 handles the large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file. To mitigate these problems personally, I move the archives to a c:\Outlook folder and manually back that up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file for each year (archive2008.pst, etc). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle this?
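
    For point 1, a rough sketch of the kind of scheduled copy that could be pushed out by script or GPO — the paths, share name, and schedule are placeholders, and the job has to run while Outlook is closed because open PST files are locked:

        rem mirror the local archive folder to a per-user share (example paths)
        robocopy "C:\Outlook" "\\fileserver\pstbackup$\%USERNAME%" /MIR /R:2 /W:5 /LOG:"C:\Outlook\backup.log"

        rem register the copy script to run Sunday nights
        schtasks /Create /TN "PST Backup" /TR "C:\Scripts\pst-backup.cmd" /SC WEEKLY /D SUN /ST 22:00

    This only addresses the "never backed up" half; size growth (point 2) is usually handled through the AutoArchive settings themselves or a dedicated archiving product.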

  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING. Under that are groups for each company. Each of the hosting servers exists on the same network and is therefore potentially browseable by other terminal servers. This has raised some security issues and I've found it a little tricky to manage the security. As well, it's possible to see who the other hosted companies are even though other users cannot see their data. What I'd like to do is isolate each client's terminal server(s) into their own VLAN. In addition, I'm thinking that each TS would have its own DC, which could just run on the TS for that company. Overhead for a DC is fairly minimal. This would isolate users on that TS from seeing the other companies completely. Firstly, does this sound like a sensible plan? Second... if it is sensible, how would I go about pulling the accounts from the HOSTING domain to a new domain? Ideally without the need for users to change their passwords?

  • Diagnosing Logon Audit Failure event log entries

    - by Scott Mitchell
    I help a client manage a website that is run on a dedicated web server at a hosting company. Recently, we noticed that over the last two weeks there have been tens of thousands of Audit Failure entries in the Security Event Log with Task Category of Logon - these have been coming in about every two seconds, but interestingly they stopped altogether as of two days ago. In general, the event description looks like the following: An account failed to log on. Subject: Security ID: SYSTEM Account Name: ...The Hosting Account... Account Domain: ...The Domain... Logon ID: 0x3e7 Logon Type: 10 Account For Which Logon Failed: Security ID: NULL SID Account Name: david Account Domain: ...The Domain... Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc0000064 Process Information: Caller Process ID: 0x154c Caller Process Name: C:\Windows\System32\winlogon.exe Network Information: Workstation Name: ...The Domain... Source Network Address: 173.231.24.18 Source Port: 1605 The value in the Account Name field differs. Above you see "david" but there are ones with "john", "console", "sys", and even ones like "support83423" and whatnot. The Logon Type field indicates that the logon attempt was a remote interactive attempt via Terminal Services or Remote Desktop. My presumption is that these are some brute force attacks attempting to guess username/password combinations in order to log into our dedicated server. Are these presumptions correct? Are these types of attacks pretty common? Is there a way to help stop these types of attacks? We need to be able to access the desktop via Remote Desktop so simply turning off that service is not feasible. Thanks
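
    Yes, this pattern (Logon Type 10, sequential made-up usernames, one external source address) is what RDP password guessing usually looks like. If the networks you administer from are known, one common mitigation is to scope the RDP firewall rule to those sources — a sketch using the built-in Windows firewall; the rule name can differ by OS language, and the address range is a placeholder:

        rem disable the broad built-in allowance and add a scoped one
        netsh advfirewall firewall set rule name="Remote Desktop (TCP-In)" new enable=no
        netsh advfirewall firewall add rule name="RDP - admin networks only" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.0/24

    Pairing that with an account lockout policy, a non-standard RDP port, or an RD Gateway/VPN in front of the box blunts the guessing without giving up remote access.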

  • HP Pavillion DV6500 recovery disk failure

    - by Scott W
    I recently attempted to re-install Windows Vista on an HP Pavillion DV6500 using the factory recovery DVDs, but encountered a strange problem. When the recovery disk attempted to reformat the hard disk, it failed at 22%. The error message provided was not very informative, just the error code "0x400110020000 1005". A Google search turned up some people with a similar problem who asserted that HP has been known to ship corrupted recovery DVDs. The recovery disk did manage to reformat the recovery partition before failing though, so recovering from the partition is no longer an option. It would be possible to reinstall from an off-the-shelf retail copy of Vista and then pull the drivers from HP's website, but I don't have access to a copy of Vista, and it would really be outrageous to have to purchase a new OS when I have a perfectly valid license already. I've thought about biting the bullet and upgrading to Windows 7, but my understanding is that without Vista installed I'd be unable to use the upgrade version, and would be forced to purchase the more expensive non-upgrade retail copy (!). Can anyone suggest a possible solution to this Catch-22? I've run out of ideas.

  • Bouncing between a 502 and 503 error

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client that had purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name, I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with technical support for over three hours yesterday with JustHost and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
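
    Two quick checks usually separate a DNS-side problem from a vhost-side problem — substitute the real domain and your server's IP:

        # does the DNS host actually resolve the name to your server?
        dig +short www.example.com A

        # does your Apache answer for that hostname when asked directly, bypassing DNS?
        curl -v -H "Host: www.example.com" http://203.0.113.10/

    If dig returns an IP that isn't yours (for example JustHost still parking or proxying the domain, which would explain 502/503s coming from their end), the fix is on their side; if curl against your own IP fails, the name-based vhost on your server needs attention.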

  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports. I am trying to map ports to PIDs. The problem is that lsof is not telling me what ports belong to which process. Given an Apache listening on port 80, I can see it listening via netstat: user@host% netstat -an|grep LISTEN|grep 80 *.80 *.* 0 0 49152 0 LISTEN But when I try to map port 80 to a PID I get nothing: user@host% lsof -iTCP:80 -t When I try seeing what sockets that specific PID is using I get: user@host% lsof -lnP -p31 -a -i COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME libhttpd. 31 0 15u IPv4 0x6002d970b80 0t0 TCP *:65535 (LISTEN) Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions: lsof v4.77 on Solaris 10 SPARC, lsof v4.72 on Red Hat 4.2, etc. I know that Linux solutions can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and not showing me expected data.
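
    On Solaris 10, lsof often cannot resolve a socket's local address from the kernel structures it reads, which is why it prints placeholders like *:65535. A common workaround is to walk /proc with pfiles instead — a rough sketch (run as root; note that pfiles briefly stops each process it inspects, so be gentle on busy production boxes):

        # find the PIDs that have a socket bound to port 80
        cd /proc
        for pid in [0-9]*; do
            pfiles "$pid" 2>/dev/null | grep "port: 80$" >/dev/null && echo "$pid"
        done

    The "$" anchor keeps port 8080 and friends from matching; adjust the pattern for other ports.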

  • Detecting login credentials abuse

    Greetings. I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website. The problem is that our organization's membership includes both big companies and amateur “clubs” (it's a relatively new industry…). It is clear that those clubs will share the login ID they use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all its members get on the website). I have thought about logging, along with each sign-on, the IP address as well as the OS and the browser used; if the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers. But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated far, and there would then be cause for action, such as changing the password. Of course, it is extremely annoying when your password is unilaterally changed. So, for this problem, I thought about allowing the “clubs” to manage their own list of sub-accounts, so that if abuse is suspected, the user responsible could easily be pinned down, and this “sub-member” alone would face the annoyance of a password change. Question: What potential problems would anyone see with such an approach?
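
    As a rough feasibility check before building the sub-account system, the dispersion idea can be prototyped straight from an access log. This sketch assumes a log where the authenticated username is the third field (as in Apache's common/combined format); the field positions would need adjusting for a different setup:

        # count how many distinct client IPs each username has been seen from
        awk '$3 != "-" { seen[$3 " " $1] } END { for (k in seen) { split(k, a, " "); n[a[1]]++ } for (u in n) print n[u], u }' access.log | sort -rn | head

    Accounts near the top with dozens of distinct IPs (and, if you also log it, many user-agent strings) are the ones worth a closer look.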

  • Windows 7 BSOD when changing power plan

    - by dd5
    I have a strange problem. When I want to change the power plan on my laptop from High performance to Balanced, Windows freezes and I get a BSOD. The power plan settings are all default. Laptop specs: - Intel Core i3 330M/350M - Intel® HM55 Express Chipset - DDR3 1066 MHz SDRAM 8GB - ATI Mobility™ Radeon HD5730 1GB DDR3 VRAM - Intel SSD330 128GB - Windows 7 Home Premium I've searched the internet but couldn't find a similar issue. The BSODs first started when I installed this SSD, stopped when I updated the chipset controller driver, then started again yesterday when I wanted to change the power settings plan. Minidump file here. Any help with this weird issue appreciated, thanks. Edit: - I've run the Memory Diagnostic tool, - Intel SSD diagnostics - and updated the firmware to 3.2.1. None of these steps helped or showed signs of errors - I still got a BSOD when changing power plan settings. After analyzing the dump file via osronline.com, here are the first few lines: CRITICAL_OBJECT_TERMINATION (f4) A process or thread crucial to system operation has unexpectedly exited or been terminated. Several processes and threads are necessary for the operation of the system; when they are terminated (for any reason), the system can no longer function. Arguments: Arg1: 0000000000000003, Process Arg2: fffffa8008661b30, Terminating object Arg3: fffffa8008661e10, Process image file name Arg4: fffff800033de270, Explanatory message (ascii) -- Solution -- Provided by Vinayak: After installing the Intel Rapid Storage Technology from MajorGeeks, I haven't experienced a BSOD since, thank you :)

  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there so I cannot remove it. They are: Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll Whenever I try to delete the files either through explorer or a command line, I get a permission denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have acrobat installed on my Vista machine, and I uninstalled Adobe updater. However, I still can't manage to get rid of these files. How do I nuke them for good? Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there. Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.
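
    Since taking ownership alone didn't help, the usual next step is to also grant full control on the ACLs from an elevated command prompt and then remove the tree; a sketch (the group name "Administrators" assumes an English install). Note that AcroIEHelper/AcroPDF DLLs are Internet Explorer add-ons, so if they are still registered, an open IE or Explorer window may be holding them in use — close those first:

        takeown /F C:\Windows.old /R /D Y
        icacls C:\Windows.old /grant Administrators:F /T
        rd /S /Q C:\Windows.old

    If a file is still locked after that, deleting it from Safe Mode or after a clean boot usually gets past the last holdout.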

  • How do I collect SNMP readings from intermittently-connected sites?

    - by Luke404
    I am collecting SNMP data on-site for a number of systems, currently using Cacti. These systems are spread across a number of sites that aren't always connected to the internet, but I also need to centralize the data on a single system (a datacenter-housed server) and get graphs out of it. If I directly poll the remote systems with a centralized Cacti, I'd lose data whenever a site is not connected to the internet. I should record data on-site (I have a server at each site and I can run whatever I want on it) and then 'sync' everything to the central system. One hack could be running Cacti, or rrdtool directly, on site and then periodically rsyncing the RRD data to the central Cacti system, but that doesn't sound like a 'clean' solution: every RRD would have to be defined in both places and rsync scripts set up with the specific file names. Can you suggest a better solution? Cacti is not a requirement, but I'd like to use something like that on the central system. The on-site systems only need to collect data; I don't need to graph it there or manage user rights to view data and stuff like that, since users will only access the centralized system.

  • Remote Desktop Connection issues

    - by stead1984
    I have a server at a remote site; the sites are connected to each other by a site-to-site VPN connection using Cisco ASA 5510 firewalls. One end is managed by me, the other by the remote location's IT, and between the two of us is another party who manages and routes the connections. Remote Desktop had been working fine with no problems, but recently I noticed it had stopped working for ONE server over the VPN, which it previously had done. All the routes seem fine and I can still ping the remote server and even download files from an FTP site on the remote server.... so the VPN seems fine. Remote Desktop works fine to the remote server within the remote location, but not over the VPN. I don't understand why it's stopped working. I originally thought it was a rule put in place by the other party, but they stress it's not them. The only thing that has changed on the server initiating the RDP connection is that it now runs file services, sharing a folder. The source server (remote location) may or may not have had updates applied. Any ideas?

  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way; if so, let me know. I have a server which is available through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Some of the things I'd like to change are allowing password or public-key authentication on the LAN, but public-key only from the internet. All (real) users could log in from the LAN side, but only certain authorized users would be individually whitelisted to log in through the internet. As far as I can tell this requires having two different SSH daemons running on different ports with different sshd_configs. I am fine with the different ports part; I can easily forward port 22 to any port I want through my firewall. So my question is what is the best way to actually START the second sshd under Ubuntu 10.04 LTS. Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I could manually hack the second sshd into /etc/init/ssh.conf I suppose, but I'm not sure if that will get overwritten by the package. However I do it, it's important to ensure both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
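
    Rather than editing the packaged /etc/init/ssh.conf (which a package upgrade can replace), a separate upstart job with its own name and its own config file is left alone by dpkg. A minimal sketch, mirroring the stanzas of the stock job — the job name, config path, and port are assumptions:

        # /etc/init/ssh-internet.conf -- second sshd instance with the stricter policy
        description "OpenSSH server (internet-facing, key-only)"
        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]
        respawn
        respawn limit 10 5
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config_internet

    The second sshd_config needs at least its own Port and PidFile lines, and "start ssh-internet" brings the instance up. One caveat: an openssh-server upgrade restarts only the stock job, so the second instance keeps running the old binary until you restart it yourself.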

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following: patch management, server event viewing/tracking, AD change management, ticketing and an internal/external KB, remote access - the ability to shadow user sessions or create new ones, imaging, and inventory. Our environment contains Windows servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls. This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with some of this: http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server What I'm ideally looking for is a reasonably cheap solution that integrates the features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper, the better; but we have a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble together something to make it work for us?

  • Own server, multiple website: most secure PHP setup

    - by plua
    Hi there, We have a company server with a variety of websites. They are maintained by different people from within our company. All websites are public. The server access is limited to our company only. This is NOT a shared hosting environment. We are looking into securing the server, currently analyzing the risk related to permissions of files. We feel the highest risk is when files are uploaded and then opened/executed by the public. This should not happen, but an error in a script might allow people to do so (there are image uploaders, file uploaders, etc). Uploader scripts use PHP. So the question is: what is the best way of setting / organizing permissions of files and processes? There seem to be several options to run PHP (and Apache), and setting the permissions. What should we take into consideration? Any tips? We are considering mod_php and FastCGI, but perhaps given our situation other solutions are preferred?
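
    If mod_php ends up being the choice, per-vhost restrictions can at least confine each site to its own tree and neutralise anything dropped into an upload folder — a sketch with invented site names and paths:

        <VirtualHost *:80>
            ServerName siteA.example.com
            DocumentRoot /var/www/siteA/htdocs
            # confine this vhost's PHP to its own tree and a private tmp dir
            php_admin_value open_basedir "/var/www/siteA/htdocs:/var/www/siteA/tmp"
            php_admin_value upload_tmp_dir "/var/www/siteA/tmp"
            # nothing uploaded here should ever run as PHP
            <Directory /var/www/siteA/htdocs/uploads>
                php_admin_flag engine off
            </Directory>
        </VirtualHost>

    With FastCGI (mod_fcgid/suexec) or php-fpm instead, the equivalent is one system user and one wrapper/pool per site, which adds setup work but gives real filesystem separation between the sites — usually the stronger choice when different people maintain different sites.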
