Search Results

Search found 10622 results on 425 pages for 'shared hosting'.


  • What is the specific advantage of a blade server for virtualisation?

    - by ChrisZZ
    We are planning to implement a VDI solution. We have had some discussions about blade vs. rack. As we are only planning to deploy 75-100 clients, we calculated that we would need two servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle that suggests 12 active virtual machines per core. For buying just two servers, a blade does not make financial sense. But a blade has some other advantages: a) the interconnectivity between the blades is very fast; b) I/O virtualisation. Are there other advantages we should consider that would make up for the price, and are these advantages important enough that we should think about investing in a blade?
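
    A rough sizing check of the figures in the question (a sketch only; the 12-VMs-per-core number comes from the cited Oracle paper and real workloads may behave differently):

        # Back-of-the-envelope VDI capacity, assuming 12 active VMs per physical core
        servers=2; sockets_per_server=2; cores_per_socket=8; vms_per_core=12
        total_cores=$(( servers * sockets_per_server * cores_per_socket ))                # 32
        echo "Total active VM capacity: $(( total_cores * vms_per_core ))"                # 384
        echo "Cores needed for 100 VMs: $(( (100 + vms_per_core - 1) / vms_per_core ))"   # 9

    On those figures a single dual 8-core host already covers 100 clients, so the second server is effectively N+1 redundancy, which is part of why a blade chassis is hard to justify at this scale.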

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor txt in lsof output are the executable file itself and shared objects. The lsof man page says that it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a Java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files in a way that makes lsof report them like this. I'm trying to understand some memory utilization, and I suspect these open files are related in some way to the metrics I'm seeing. The system is Solaris 10 x86.
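
    One plausible lead (an assumption, not a confirmed explanation) is that the index files are memory-mapped rather than merely open, since ElasticSearch's Lucene indexes can be mmapped, and mapped files are attributed differently from plain descriptors. A quick way to check on Solaris 10; the pgrep pattern is an assumption:

        # Find the ElasticSearch JVM and inspect its address space
        PID=$(pgrep -f elasticsearch | head -1)

        # Mapped segments: if the index files are mmapped they will appear here with their sizes
        pmap -x "$PID" | less

        # Cross-reference with the per-process open-file listing
        pfiles "$PID" | less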

    Read the article

  • Can I set up a 2nd home wireless router, with router2 connecting to the internet through a desktop which is wirelessly connected to router1?

    - by gil b.
    Hi, I apologize for the crudeness of my MSPaint drawing, but please view my diagram of what I'd like to accomplish: Proposed home network architecture. Currently, all devices are connected to one wireless router. I would like to make my own subnet, with a box in between my subnet and the shared wireless router, so that I can learn about IDS, traffic analysis, etc. I was also given a Cisco PIX firewall to play around with, and it would be an added bonus if I could incorporate that into my network. The reason for this proposed architecture is so that I can monitor all MY traffic without seeing anything of my roommates' traffic. My main question is: is it possible to have my desktop connect to the wireless router (for internet) via its wireless card AND share that connection via its ethernet card, hooked to wireless router 2? The chain would be: cable modem - wireless router - desktop PC (connected wirelessly) - wireless router 2 (getting internet from a wired connection to the desktop PC) - laptops (connected wirelessly). The PIX can be left out for now, but I'm wondering if it could eventually be incorporated? Thanks!
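
    Connection sharing on the middle desktop is the key piece. On Windows this is Internet Connection Sharing; if the box in the middle ran Linux instead, the equivalent is IP forwarding plus NAT, roughly as sketched below (interface names and the downstream 192.168.2.0/24 subnet are assumptions):

        # wlan0 = wireless uplink to router 1, eth0 = wired link to router 2
        sysctl -w net.ipv4.ip_forward=1
        ip addr add 192.168.2.1/24 dev eth0
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    Router 2 would then use 192.168.2.1 as its upstream gateway, or run in access-point mode with DHCP handled by the desktop.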

    Read the article

  • Google Drive desktop client not updating existing files from other users

    - by cqm
    I've looked around and there doesn't really seem to be any troubleshooting information for the Google Drive desktop client; it all assumes you are using Google Docs on the web. Anyway, my team is trying to use Google Drive like Dropbox, where multiple people edit files shared amongst them through the desktop, such as images. Dropbox is really good at noticing when a file's checksum changes and syncing it. Google Drive's desktop client seems not to do this at all. It appears to sync only newly created files; it gives no notification that a modified version exists and never syncs it, even though going online and opening the file shows the modified version. Is there any way to fix this? The answer has nothing to do with proxy or firewall configurations. The team is using computers running OS X and Windows.

    Read the article

  • How to allow an internal server accept remote connections not through RD Gateway

    - by Matt Ahrens
    So, I help administer a collection of servers running various Windows Server environments. We have an RD Gateway server, properly configured, to gatekeep for us. It does not have the other servers listed in its server farm category, though. I just added a refurbished server for a non-profit development environment that is sharing the rack space and port. I would like this server to be accessible via remote connection, but without requiring RD Gateway authentication (I cannot add the users of this development server to our gateway since they do not work for the organization hosting the rack). Is there any way for me to add this dev server as an exception to which servers require RD Gateway clearance, or otherwise let users bypass RD Gateway credentials for this one machine? Thanks, and let me know if I am misinformed about how RD Gateway works or anything; I am still learning.

    Read the article

  • NFS server on Cygwin slow

    - by Weltenwanderer
    The setup: we run an instance of the Cygwin nfsd on a Windows 2008 Server (Xeon, 3.2 GHz). Several Sun Solaris and SunOS machines access the shares. This is the exports file:

        /disk3 (rw,all_squash)
        /disk2 (rw,all_squash)

    Those paths are soft-linked to the relevant cygdrive/d/path/to/dir paths. Some of the folders contain up to 10k files. The problem: ls -la on the mounted folder on the Sun boxes takes 2-3 minutes, and general read performance is really bad. cat filename displays the file in slow bursts, and this hurts performance on tasks that access those shared files heavily. Processor load is not the issue; the NFS server idles most of the time, and the Cygwin tasks never get over 1% load.
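
    Client-side mount options on the Solaris boxes are one of the few knobs available without touching the Cygwin side. A hedged starting point (the server name, mount point and values are assumptions, not a known fix):

        # NFSv3 over TCP, larger transfer sizes, longer attribute caching to take
        # pressure off getattr-heavy operations like ls -la on 10k-file directories
        mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768,actimeo=60 \
            winserver:/disk3 /mnt/disk3

    If the directories are mostly read, larger attribute-cache timeouts help further; the trade-off is that clients see metadata changes later.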

    Read the article

  • Why not install Msvcr71.dll into system32?

    - by hillu
    While looking for an authoritative source for the missing Msvcr71.dll that is needed by a few old applications, I stumbled across the MSDN article Redistribution of the shared C runtime component in Visual C++. The advice given to developers is to drop the DLL into the application's directory instead of system32 since DLLs in this directory are considered before the system paths. What can/will go wrong if I (as an administrator, not a developer) decide to take the lazy path and install Msvcr71.dll (and Msvcp71.dll while I'm at it) into the system32 directory (of 32 bit Windows XP or Windows 7 systems) instead of putting a copy in each application's directory? Is there another good solution to provide the applications with the needed DLLs that doesn't involve copying stuff to the application directories?

    Read the article

  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that paramete

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it on a VM on my PC (also XP SP2), the put command fails with: 504 Command not implemented for that parameter. The FTP script:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is around the world and not under my control. From what I understand, a 504 means the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. When triggered from the VM, the put command actually creates a 0-length file on the target FTP server, but never populates it.
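
    One workaround sketch (an assumption, not a confirmed fix): keep path handling local by changing the local directory first, so only a bare filename is ever passed to put and no relative path text reaches the server. Shown here as a bash-driven session for illustration; the same two-line change (lcd, then put filename) applies to the Windows ftp -s: script above. Placeholders in brackets mirror the original script and are not real values:

        {
          echo 'user [username] [password]'
          echo 'cd [directory on FTP server]'
          echo 'lcd [local folder containing the file]'
          echo 'binary'
          echo 'hash'
          echo 'put [filename]'
          echo 'bye'
        } > ftp_commands.txt
        ftp -n [ftp site] < ftp_commands.txt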

    Read the article

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (one of the well-known PHP frameworks). We found all corrupted files with git and reverted them. So now I need to find the weak point. With high probability we can say that it was not an FTP or SSH password theft. The support specialist of the hosting provider (after log analysis) said it was a security hole in our code. My questions:
    1) What tools should I use to review the access and error logs of Apache? (Our server distro is Debian.)
    2) Can you give tips for spotting suspicious lines in the logs, perhaps tutorials or examples of useful regexps or techniques?
    3) How do I separate "normal user behavior" from suspicious behavior in the logs?
    4) Is there any way of preventing such attacks in Apache?
    Thanks for your help.
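
    As a hedged starting point for questions 1 and 2, plain grep over the Debian default log locations goes a long way (the paths are the Apache defaults; the patterns are generic examples, not tuned rules):

        cd /var/log/apache2

        # POST requests are a common first filter after a code-level compromise
        grep '"POST ' access.log* | less

        # Requests carrying typical injection/inclusion fingerprints (rough heuristics)
        grep -Ei 'base64_decode|eval\(|php://input|\.\./\.\.' access.log*

        # Hits on the files git showed as modified (filename is a placeholder)
        grep 'templates/modified_template.php' access.log*

        # The error log often records the failed probes that preceded the one that worked
        less error.log

    Tools such as logwatch, or simple awk histograms of URL and client-IP frequency, help with question 3; mod_security is the usual Apache-side answer to question 4.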

    Read the article

  • cURL looking for CA in the wrong place

    - by andrewtweber
    On Red Hat Linux, in a PHP script I am setting cURL options as follows:

        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_CAINFO, '/home/andrew/share/cacert.pem');

    Yet I am getting this exception when trying to send data (curl error 77):

        error setting certificate verify locations:
        CAfile: /etc/pki/tls/certs/ca-bundle.crt
        CApath: none

    Why is it looking for the CAfile at /etc/pki/tls/certs/ca-bundle.crt? I don't set that path anywhere, so I don't know where it is coming from. Shouldn't it be looking in the place I specified, /home/andrew/share/cacert.pem? I don't have write permission to /etc/, so simply copying the file there is not an option. Am I missing some other curl option that I should be using? (This is on shared hosting; is it possible that it's disallowing me from setting a different path for the CAfile?)
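
    A few hedged checks from the shell that help narrow down whether the custom bundle is readable by the web server and whether the host's PHP overrides CA settings (the URL is a placeholder):

        # Can the web server user actually read the custom bundle and its parent dirs?
        ls -l /home/andrew/share/cacert.pem
        ls -ld /home/andrew /home/andrew/share

        # Does command-line curl accept the same bundle against the target host?
        curl --cacert /home/andrew/share/cacert.pem -v https://example.com/

        # Does the host's PHP force its own CA path or restrict file access?
        php -i | grep -iE 'curl.cainfo|open_basedir|ssl'

    On shared hosting, open_basedir restrictions are a common reason a readable-looking file still cannot be opened by PHP.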

    Read the article

  • Remote access not working without connected monitor

    - by winSharp93
    I am trying to configure a Windows Server 2008 machine as a home server for my personal use (mainly for storing documents, hosting source control, etc.). The "server" consists of an Intel Atom 2700DC board and an Intel SSD. Configuring remote access to the server, I am confronted with a very strange problem: as long as a monitor is connected to the server, remote access works without any problems. However, when no monitor is connected at boot time, remote access simply won't work (I keep getting errors when trying to connect, saying that the remote server was not found or that remote access is disabled). Windows definitely boots when no monitor is connected, because I get the prompt asking whether to enter safe mode on the next boot after powering the server down by pulling the power cord. When I plug in a monitor after boot, it stays turned off and remote desktop connections still fail. Do you have any ideas about what I could try?

    Read the article

  • Xen HVM guest has severe clock drift

    - by ipartola
    I am seeing a very severe clock drift on my Xen HVM VPS, rented from a hosting provider, so I don't have access to the dom0 system. I continuously run ntpd, but the clock drifts by as much as 30 seconds in 5 minutes and NTP cannot keep up. Has anyone experienced this? Here are some details:

        $ dmesg | grep clock
        [ 0.160000] Measured 347 cycles TSC warp between CPUs, turning off TSC clock.
        [ 0.396000] * this clock source is slow. Consider trying other clock sources
        [ 0.550448] Switching to clocksource acpi_pm
        [ 0.653135] rtc_cmos 00:05: setting system clock to 2011-03-09 02:45:40 UTC (1299638740)
        $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
        acpi_pm
        $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        acpi_pm
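
    Without dom0 access, the options are mostly guest-side mitigation. A hedged list of things to try (the NTP pool host is an example, and the crontab format assumes the system-wide /etc/crontab):

        # Let ntpd step arbitrarily large offsets instead of giving up:
        # add "tinker panic 0" to /etc/ntp.conf, or start it with -g
        echo 'tinker panic 0' >> /etc/ntp.conf
        ntpd -g

        # If the drift outruns ntpd entirely, a blunt periodic step hides the worst of it
        echo '*/5 * * * * root ntpdate -u pool.ntp.org' >> /etc/crontab

        # Check whether another clocksource ever becomes available (acpi_pm is the only
        # one listed now); a different one can only be exposed from the provider's side
        cat /sys/devices/system/clocksource/clocksource0/available_clocksource

    The underlying fix (TSC/platform timer configuration for the HVM guest) generally has to come from the provider's dom0.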

    Read the article

  • How to configure auto-logon in Active Directory

    - by Jonas Stensved
    I need to improve our account management (using Active Directory) for a customer support site with 50+ computers. The default AD way is to give each user their own account, but that adds up to a lot of administration for adding, disabling and enabling user accounts. To avoid this, supervisors have started to use shared "general" accounts like domain\callcenter2, etc., and I don't like the idea of everyone knowing and sharing accounts and passwords. Our ideal solution would be to create a group of computers that require no login by the user, i.e. the users just have to start the computer. Should I configure auto-logon with a single user account like domain\agentAccount? Is there anything else to consider if I use the same account for all users? How do I configure the actual auto-logon with a GPO on the group? Is there a "Microsoft way" without 3rd-party plugins? Or is there a better solution?

    Read the article

  • Win2008 - restrict VPN user permissions

    - by Sebas
    Windows 2008 R2 SP1 Foundation file server with no AD, only workgroup sharing of some folders, and now an RRAS server. The shared folders are open to everyone in the office (XPs and Sevens) without accounts or passwords, but I was thinking about partially limiting access for the new "VPNuser" account. I'm new to Windows Server and its permission settings: I thought about denying the vpnuser access to some folders through NTFS rights. It doesn't work, and now I'm guessing that the vpnuser is not treated as a logged-on user (it doesn't appear as one) but as a "guest", like the rest of the people connecting in the office. I say that because of this: http://social.technet.microsoft.com/Forums/windowsserver/en-US/ff6d3726-ff41-4d3f-9d97-5361af0206dd/vpn-users-on-server-shows-as-guest?forum=winserverNIS Also, when I create a txt file over the VPN connection, the owner field in its properties shows "guest". Am I right? How can I set different rights for the VPNuser from the rest of the "guest" users in the office?

    Read the article

  • Uninstall SQL Server 2005 Express after Demoting the DC

    - by Walter Aman
    A Windows Server 2003 SP2 machine hosting a now-orphaned installation of SQL Server 2005 Workgroup was pressed into service as a DC in a disaster-recovery scenario. It has since been demoted. The server also hosts legacy apps for which we lack reinstallation resources, hence our desire to preserve it as close to intact as possible while removing the orphaned roles. All efforts to remove SQL Server 2005 through Control Panel and ARPWrapper /remove fail with error 29528. Should I abandon this and leave the orphaned SQL installation dormant, or is it reasonable to remove it post-demotion?

    Read the article

  • IIS permissions issue pointing docroot to Samba share

    - by lalalalalalalambda
    I have an IIS project which is stored on a Samba share, mounted as a network drive with the following line:

        X: \\my-samba-server\dev /user:freddie

    Connectivity is fine; I can read and write files on X:. In IIS, I'm trying to set \\my-samba-server\dev\folder\to\my\files as the Physical path, which results in the following 500.19 error:

        Config Error | Cannot read configuration file due to insufficient permissions

    By default it tries to use pass-through authentication. If I try to set it to connect as the specific user freddie, I receive: "The specified user does not exist". What is the correct way to connect to a path which has been set up as described above? (The Samba man pages indicate version 3.6 is on the Debian host.)

    Read the article

  • IT Inventory Tracking

    - by DrStalker
    What is a good tool to keep track of IT inventory? Systems that are installed and running, parts being ordered, that sort of thing. I'd love a central, web-based system (preferably something we can customize), but my searching so far has turned up a lot of dead open-source projects that haven't been updated in years, and poorly made commercial websites that don't do a good job of describing their product. The software doesn't have to be free or open source; a good commercial alternative is fine. It doesn't even need to be a web-based tool, that's just what I thought would be simplest to find and easiest to deploy. The number of assets it will be tracking is in the dozens, so it doesn't have to be a high-end enterprise solution, but it does need to do a better job than an Excel sheet in a shared folder (our current "solution").

    Read the article

  • Memory is free, but still swapping?

    - by japancheese
    Hello, I'm sure this is a pretty basic question, but I'm just trying to get a grasp of what's going on with my Ubuntu (Hardy Heron) server (running a Rails-based site). It seems that I have free memory available, yet the system reports that it is still swapping memory (unless I'm reading this incorrectly?). Here is the "free -m" output:

                     total       used       free     shared    buffers     cached
        Mem:          1024        905        118          0         33        409
        -/+ buffers/cache:         462        561
        Swap:         2047         95       1952

    Could anyone explain some possible reasons why it maintains 95 MB of swap at all times (it is never less)? I'm just looking for some leads on things I could check that would explain to me exactly how memory is utilized in Linux.
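
    A hedged way to tell swap usage apart from active swapping: pages swapped out once (for example during an earlier memory spike) stay counted in swap until they are touched again, even when plenty of RAM is free now.

        # si/so columns show pages moving in/out of swap right now; zeros mean no active swapping
        vmstat 1 5

        # How eagerly the kernel swaps idle pages instead of shrinking cache (default 60)
        cat /proc/sys/vm/swappiness

        # Optional: force everything back out of swap (needs enough free RAM)
        sudo swapoff -a && sudo swapon -a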

    Read the article

  • SSL certs or intermediate for DMZ

    - by rex
    I've been tasked with deploying and managing load balancers covering internal servers and DMZ servers. I have no experience with this, and this is a first for my organization as well. Balancers are up, running, legit. Currently we are using a self-signed cert for Exchange/OWA. I know that we should have a cert signed by a CA, but the balancer has options for SSL cert or intermediate cert, and I'm unclear on the difference, or on which we need. We will be hosting Lync, Exchange and some custom apps in the DMZ. disclaimer: Apologies up front, I'm desktop support. I recently passed my Net+. It seems that has made me the network engineer in this organization.
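
    A hedged primer on the two items the balancer is asking about: the SSL cert is the certificate issued for your own hostname (the OWA/Lync names), while the intermediate cert is the CA's chain certificate linking your cert to a browser-trusted root; public clients generally need to be served both. Some openssl commands for inspecting what you have (file names and the hostname are placeholders):

        # Subject, issuer and validity of the issued certificate
        openssl x509 -in server.crt -noout -subject -issuer -dates

        # Verify the server cert against the intermediate bundle the CA supplied
        openssl verify -CAfile intermediate.pem server.crt

        # After installing on the balancer, confirm clients receive the full chain
        openssl s_client -connect owa.example.com:443 -showcerts </dev/null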

    Read the article

  • PHP include path problem: same code works on Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the include path. I tried setting the include path to different things, but it just doesn't work. The file is in a directory called language in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

        Fatal error: require_once() [function.require]: Failed opening required
        '/home/neo/public_html/migration/include/class/core/storage.inc'
        (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration')
        in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
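
    A few hedged checks on the CentOS box (paths are taken from the error message). Note that the failing path in the error has no language segment at all, so the call that fails may not be one of the includes shown above:

        # include_path as Apache's PHP and the CLI see it; they can differ
        php -r 'echo get_include_path(), PHP_EOL;'
        grep -Rni include_path /etc/php.ini /etc/php.d/ 2>/dev/null

        # Does the file exist with exactly that name and case? Linux is case-sensitive
        ls -l /home/neo/public_html/migration/include/class/core/ | grep -i storage

        # Can the Apache user read it (permissions, and SELinux context on CentOS)?
        ls -lZ /home/neo/public_html/migration/include/class/core/storage.inc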

    Read the article

  • AD account locks out when using Outlook 2007?

    - by Down Town
    Hi, we have a problem with our Windows Server 2008 forest and Exchange. We buy Exchange hosting from another firm, and the Exchange server is in their own Windows Server 2008 forest. So there are two forests, with no trusts between them. Our own forest logon name is [email protected] and we also use the same email address to log on to the Exchange mailbox. Everything works fine if both our AD account and the Exchange mailbox account have the same password, but if the passwords don't match, our AD account gets locked out. I have tried to figure out why Outlook sends failed logon attempts to our AD. If someone can help, please do.

    Read the article

  • Apache does not serve non-locally

    - by yodaj007
    I have a freshly installed instance of Fedora Core 16 inside VirtualBox using bridged networking. On it, as root, I typed:

        yum -y install httpd
        service httpd start
        ifconfig

    Inside the VM, I can open a web browser to 'localhost' and I get the Apache test page; it works. But in Windows (the machine hosting the VM), I point my browser to the IP address returned by ifconfig (192.168.2.122) and the connection times out. I can go to a command prompt and ping the VM. Is there a firewall or something that comes with Fedora by default? Or is there something I need to change in a config file?
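
    The most likely culprit is Fedora's default iptables firewall, which allows ping but not inbound TCP port 80. A hedged sketch of the usual fix:

        # Open HTTP and keep the rule across reboots
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save

        # Confirm Apache is listening on all interfaces rather than only 127.0.0.1
        netstat -tlnp | grep ':80'

    The system-config-firewall tool can make the same change persistently if editing rules by hand is undesirable.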

    Read the article

  • Trouble with resolving hostnames on CentOS using Bind

    - by cabaret
    I'm taking a course on server administration at school, and I have managed to set up virtual hosting in Apache and a DNS server on a virtual machine. However, I have now set up an old PC to run CentOS and I'm trying the same on that box. The problem I ran into is that I can't resolve hostnames from the Linux box. I have set the nameserver in /etc/resolv.conf to the IP of the CentOS machine itself, but when I try, for example, ping google.com I get:

        ping: unknown host google.com

    However, when I ping 66.102.13.105 (which is a Google IP; I figured that out by pinging on my Mac) I get:

        PING 66.102.13.105 (66.102.13.105) 56(84) bytes of data.
        64 bytes from 66.102.13.105: icmp_seq=1 ttl=52 time=15.5 ms

    I'm slightly confused about why this is happening. Could it be because of my router sitting between the Linux machine and the cable modem? It's a D-Link something-or-other. Thanks in advance.
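
    Since raw IP connectivity works, this comes down to the resolver and BIND itself. A few hedged checks (the forwarder address in the comment is just an example):

        # Ask the local BIND directly; if this works, the problem is resolv.conf or firewalling
        dig @127.0.0.1 google.com +short

        # Is named running and listening on port 53?
        service named status
        netstat -ulnp | grep ':53'

        # If the box cannot reach the root servers (common behind home NAT), configure
        # forwarders in /etc/named.conf, e.g. forwarders { 8.8.8.8; }; with recursion yes;
        grep -E 'recursion|forwarders|allow-query' /etc/named.conf

    Also worth confirming that allow-query/allow-recursion in named.conf cover the hosts doing the querying.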

    Read the article

  • What's required to enable communication between two IP ranges located behind one switch?

    - by Eric3
    Within our co-located networking closet, we have control over two ranges of 254 addresses, e.g. 64.123.45.0/24 and 65.234.56.0/24. The problem is that a host with one IP address, or a block of addresses in only one range, can't contact any of the addresses in the other subnet. All of our hosts use our hosting provider's respective gateway, e.g. 64.123.45.1 or 65.234.56.1. A host on the 64.123.45.0/24 range can contact the 65.234.56.1 gateway and vice versa. Everything in our closet is connected to an HP ProCurve 2810 (a layer-2-only switch), which connects through a Juniper NetScreen-25 firewall to the outside world. What can I do to enable communication between the two ranges? Are there some settings I can change, or do I need better networking equipment?
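
    Because both ranges sit on the same layer-2 segment behind the ProCurve, hosts can reach each other directly once they know the other range is local instead of sending it to the provider's gateway. Two hedged options, shown with Linux syntax and the example ranges from the question:

        # Option 1: add a secondary address from the other range to a host
        ip addr add 65.234.56.10/24 dev eth0

        # Option 2: route the other /24 straight out the local interface on each host
        ip route add 65.234.56.0/24 dev eth0

    Whether the traffic should instead hairpin through the NetScreen (for filtering between the ranges) is a policy choice; in that case the firewall, not the L2 switch, needs interfaces or routes in both ranges.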

    Read the article

  • Suggestions on providing HA access to an external (fibre) RAID subsystem

    - by user145198
    We are looking at upgrading our storage capacity with an external RAID subsystem that has two redundant fibre controllers, each with 4 x 8 Gbps fibre ports. I would like access to this storage system to go through HA Linux. Ideally I would connect two fibre ports from each controller into each Linux server, and then export either NFS or iSCSI via a 10 GbE interface. I have seen plenty of references to DRBD; however, all of those references use block storage that is attached solely to each machine, rather than a shared block storage device, so I am unsure whether DRBD could (or should) be used in this case. Ideas?
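
    With a genuinely shared (dual-attached) array, DRBD is usually unnecessary; the common pattern is multipathing to the array plus a Pacemaker/Corosync cluster that mounts the LUN on one node at a time and floats a service IP with it. A heavily simplified sketch using the crm shell (device name, mount point and IP are assumptions):

        # Verify both controllers' paths are visible and grouped by multipathd
        multipath -ll

        # Filesystem mounted on whichever node is active, with a floating service IP
        crm configure primitive p_fs ocf:heartbeat:Filesystem \
            params device="/dev/mapper/mpatha" directory="/export" fstype="xfs" \
            op monitor interval="30s"
        crm configure primitive p_ip ocf:heartbeat:IPaddr2 \
            params ip="10.0.0.50" cidr_netmask="24"
        crm configure group g_storage p_fs p_ip

    An NFS server or iSCSI target daemon would then be added to the same resource group; an active/active setup would instead require a cluster filesystem (GFS2/OCFS2) rather than this active/passive layout.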

    Read the article
