Search Results

Search found 8369 results on 335 pages for 'company'.


  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping, and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons, e.g. proprietary extensions to PHP, Apache and the database layer), but I want to control costs and don't want to go the fully private server route until we have determined the market size. So as far as I know, the only real alternatives are a virtual server and the cloud.

    At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is functionally indistinguishable from what a hosting provider offers. I would like some clarification on exactly what the difference between the two is.

    Back to my original question, my requirements are:

    - IT infrastructure that can scale with growth
    - The ability to control the machine (e.g. to install our internally developed libraries)
    - Backup software that is flexible and comprehensive enough (yet simple to use) to implement a secure backup strategy. On this point I have always wondered where the backed-up data is actually stored, since the physical machines are remote and one can't get at any actual tapes. I would like some advice and recommendations in this area.

    Regarding data size, I am expecting the dataset to grow by a few megabytes a day (say 10 MB a day initially, possibly 50 MB a day in about a year's time). As an aside, I have decided to deploy on a Debian server, since most of my additional libraries were compiled and built on a Debian machine.

    Mindful of all of the above, I would like some advice (with reasons) as to which route to take, and recommendations for backup software from people who have walked a similar path.
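
    For reference, the kind of nightly off-site backup job I have in mind is roughly the Python sketch below; the host names, paths and database name are placeholders, and I haven't settled on the tools yet:

        #!/usr/bin/env python
        # Nightly backup sketch: dump the database, compress it, push it
        # off-site, prune old copies. All names below are placeholders.
        import subprocess
        import time

        stamp = time.strftime("%Y%m%d")
        dump = "/var/backups/db-%s.sql.gz" % stamp

        # Dump and compress in one pipeline (shell=True for the pipe).
        subprocess.check_call(
            "mysqldump --single-transaction mydb | gzip > %s" % dump,
            shell=True)

        # Push the archive to storage we control, over ssh.
        subprocess.check_call(
            ["rsync", "-az", dump, "backup@offsite.example.com:/srv/backups/"])

        # Prune local copies older than a week.
        subprocess.check_call(
            ["find", "/var/backups", "-name", "db-*.sql.gz",
             "-mtime", "+7", "-delete"])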


  • There are currently no logon servers available

    - by Ian Robinson
    I am running a Windows 7 laptop that is joined to my company's domain. When I installed Windows 7, I created an account for myself joined to the domain, and it had been working quite well, even though I'm physically remote most of the time and not actually on the network.

    However, today I created a new local user account (non-admin) for my little brother. While he was using it, he decided he wanted to install a program; because his account is not an admin, he was prompted to enter administrator credentials to allow the program to make changes to the computer. I entered my credentials, and that was the first time I ran into this error message:

        There are currently no logon servers available to service the logon request.

    I tried logging off and logging back in, rebooting, etc., and no matter what, every time I try to authenticate with my "normal" domain account I get that message. I can no longer access my computer as an administrator, and I no longer know how to log in to the machine with any account aside from my little brother's non-admin account. I don't have any other local accounts, and the default local admin account was never enabled.

    I'd appreciate any ideas on how I can recover access to my account. Let me know if I can provide any more information.

    FYI, this is a similar question, but I'm not sure any of the answers help in my case: http://serverfault.com/questions/71632/there-are-currently-no-logon-servers-available-to-service-the-logon-request


  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only on VPS plans (or even only on dedicated servers). Why is that? While Windows is more expensive than Linux (though it depends on many factors, not just the initial and support license costs), it also provides ASP.NET, IIS and, of course, Microsoft SQL Server.

    I know in the past the reason might have been that cPanel was Linux-only, but now it has a Windows version. So why is Linux still predominant on shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl and Ruby all run on Windows too. You even have SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low-usage requirements, as is SQLite (which is great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell.

    EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or cloud offerings, which give you a full operating system to use.


  • Should I link to a blog site or install my own blog engine?

    - by dc
    We're setting up a company blog. Our technology stack is .NET. Should we just use Blogger/WordPress for the blog and redirect to it from our site, or should I install a blog engine directly on our site (e.g. BlogEngine.NET)? Some considerations I'd like feedback on:

    1. SEO: if you host your blog on WordPress/Blogger instead of installing it on your site, will you get better page rankings (assuming the content were exactly the same)?
    2. Scalability: I've read that BlogEngine.NET doesn't scale well on web farms and the like; our website is set up to be stateless.
    3. Security: presumably a hosted blog site has the advantage of receiving regular security updates. How easy is it to keep an installed blog engine patched?
    4. Examples of installed blog engines: BlogEngine.NET seems to be the best but has a couple of limitations. Can anyone suggest another one? (N/A if your advice is to host the blog on Blogger/WordPress.)
    5. Any other comments/issues/concerns we should be aware of?

    Thanks for your feedback!


  • I want my logs sent to me by e-mail with logrotate

    - by lericson
    Not strictly a programming question as such; more of a log-handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that the logs are highlighted with simple HTML. That part is fine; I've managed to write a highlighter for the log types in question.

    So what I do is use logrotate's prerotate hook to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it runs the command once for every log file in the stanza, and to my knowledge there's no way to tell which log file is being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate stanza up to 10 times on different machines, all I can do is get bogged down with log spam every night. And I grew tired of it today, so I'm asking.
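
    That said, while writing this up I noticed the sharedscripts directive in the logrotate man page, which is documented to run the prerotate/postrotate scripts once for the whole stanza rather than once per matching log. I haven't verified that it fixes the duplication on my setup, but it would look like:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            sharedscripts
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }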


  • Windows Server 2008 R2 file share file locking, OS X clients

    - by Keith Loughnane
    I've spent the last two weeks banging my head against this wall, and I think I'm starting to understand the problem.

    I manage a design company. They have 5 Macs (OS X 10.5/10.6/10.7) connected over SMB to a Windows Server 2008 R2 file server; another machine functions as domain controller (which may not matter). All the Macs can connect OK, with no issues finding the server or logging in, and for the most part things work.

    The problem is files locking up. I thought it was a permissions issue at first, but it seems to be file locking. A user opens a file (.indd, .pdf, etc.); the file opens, the software reads it and closes it. That's fine, but a folder above it locks: it can't be moved and it can't be renamed. E.g., given

        /Working/Project01/Imagefiles/image.pdf
        /Finished/

    the user opens image.pdf, closes it, and then wants to move the whole Project01 folder into Finished. The move brings up a username/password dialogue and then does nothing, with no error. Trying to rename gives a dialogue saying the user doesn't have permission. It looked like a local permissions problem, which is why I spent about a week investigating that. Eventually I found that Finder on the Macs seems to be keeping the folders open. I can work around it by killing Finder, remounting the shared drive, or closing the open handle through Server Manager, but that just proves the theory; it isn't a solution.

    Has anyone dealt with this problem?
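
    In the meantime I've considered scripting the workaround instead of clicking through Server Manager each time. A rough Python sketch, run on the server as an administrator; the share path is a placeholder and I haven't battle-tested the parsing:

        # List the handles open over SMB and force-close any that point
        # into the target folder. "net file" is built into Windows;
        # closing a handle can lose unsaved changes, so use with care.
        import subprocess

        TARGET = r"D:\Shares\Working"  # placeholder path to the share

        out = subprocess.check_output(["net", "file"]).decode("mbcs", "ignore")
        for line in out.splitlines():
            parts = line.split()
            # Lines describing open files start with a numeric handle ID.
            if parts and parts[0].isdigit() and TARGET.lower() in line.lower():
                subprocess.call(["net", "file", parts[0], "/close"])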


  • Accessing a shared folder in Windows Server 2008 R2

    - by Triztian
    Hello all. My involvement with computers has grown, and I find myself needing to access a shared folder on a server. I've read some documentation and managed to set the folder up as a share. For this I created a local group and, for now, just one local user that has access to the share. The folder is in the Public user folder, and its permissions should be (and I believe are) read/write.

    The problem is that I can't connect from a remote machine; I don't know how the share is supposed to be accessed. The server has a public IP, and we also use it to host our website; I don't know whether that affects things. The folder will be used as the keeper for the QuickBooks company files, and the server has the QuickBooks Database Server Manager installed. The server also has a domain name, http://www.example.com, that redirects to our website; I'm unsure whether the share could be accessed that way. The share also has a location displayed when I right-click it and choose Properties.

    Here's what I've tried:

    - Setting up a VPN connection (Windows Vista and 7). I got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got "connection failed, error 800". I suppose this is because in the domain field I entered the server's workgroup.
    - Right-click, Add a network location (Windows 7). I went through the wizard until I reached the point of entering the location and tried many things: the name in the share's properties (\\SOMETHING\Share), http://www.example.com, and the IP address.

    I'm quite unfamiliar with this, so I have my guesses:

    - Since the group and user are local, they do not have access to the folder.
    - The firewall on the server is blocking my connection.

    Anyway, any help and guidance is truly appreciated.
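
    For what it's worth, what I expected to work from the remote machine is mapping the share with the local account, something like this (the IP and names are made up):

        net use Q: \\203.0.113.10\Share /user:SERVERNAME\shareuser *

    where the * makes it prompt for the password. But I'm not sure whether local accounts can be used that way over the internet, or whether port 445 is even reachable through the server's firewall.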


  • Windows: disable remote access of local drive, even by domain admin

    - by Matt
    We have a network of Windows 7 PCs that are managed as part of a domain. What we want is for the domain admin to be unable to view a PC's local drive (C:) unless he is physically at that PC. In other words: no Remote Desktop, and no ability to use UNC paths. The domain admin should not be able to put \\user_pc\c$ into Windows Explorer and see all the files on that computer unless he is physically present at the PC itself.

    Edit: to clarify some of the questions/comments that have come up. Yes, I am an admin, but a complete Windows novice. And yes, for the sake of this and my similar questions, it is fair to assume that I am working for someone who is paranoid. I understand the arguments that this is a "social problem versus a technical problem", that "you should be able to trust your admins", etc. But this is the situation in which I find myself. I'm basically new to Windows system administration, but am tasked with creating an environment that is secure by the company owner's definition, and this definition is clearly very different from what most people expect.

    In short, I understand that this is an unusual request, but I'm hoping there is enough expertise in the ServerFault community to point me in the right direction.
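
    The only lever I've found myself so far is the AutoShareWks registry value, which stops Windows from auto-creating the administrative shares (C$, ADMIN$) at boot. I realize a domain admin could simply recreate a share remotely, so this is a speed bump rather than real isolation, but as a sketch:

        # Disable auto-creation of administrative shares on a workstation.
        # Takes effect after the Server service (or the PC) restarts.
        # Note: an admin can still recreate shares remotely.
        import _winreg as winreg  # the module is named winreg on Python 3

        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
            0, winreg.KEY_SET_VALUE)
        winreg.SetValueEx(key, "AutoShareWks", 0, winreg.REG_DWORD, 0)
        winreg.CloseKey(key)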


  • Internet Explorer will not open

    - by KCotreau
    I recently migrated a company to a Microsoft domain environment, logged the users in under their new domain accounts, and then copied each old profile over the new profile. I am not sure whether that's related, since the users did not complain right away, and it may have been a subsequent patch or something; in any case, I have two XP computers that will not open IE8.

    You click on the icon and nothing happens graphically at all, but you can see a process appear in Task Manager. If you click many times, you get multiple instances; often TWO processes appear per click. IE still works in the old profile, so the problem is specific to the profile, and I would like to fix it rather than blow the profile away.

    Here is what I have done, without success:

    - Tried opening IE without add-ons (the shortcut in System Tools)
    - Reinstalled IE8
    - Ran SFC /SCANNOW
    - Found and ran a script that was supposed to repair any related registry entries
    - Exported the whole HKCU\Software\Microsoft\Internet Explorer key and deleted it, hoping it would be recreated on restart. No joy; I restored it.

    Any ideas?


  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of a large farm of Unix servers. So far I have come up with a way to assess the basic configuration, but not the installed updates.

    The problem is that I just can't trust the package management tools on those machines. Some of them have not synced with their repository for a long time (so I can't just run "yum check-update" on Red Hat, for example). Some of the servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems: AIX, Debian, CentOS/Red Hat, etc., so the versions may differ (AIX) and the available tools will differ. And, last but not least, I can't install anything on the target systems.

    So I need a script to retrieve the information and either process it directly or save it to be processed later on a server (which may happen to run a different distribution than the one the information was retrieved from). The best ideas I could come up with are:

    - Retrieve the list of installed packages on each machine (e.g. "dpkg -l" on Debian) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories); see the sketch below. The problem remains the same for AIX and Red Hat, though.
    - Use Nessus scripts to assess vulnerabilities in the installed packages, but I find this a bit dirty.

    Does anyone know a better or more efficient way of doing this?

    P.S.: I have already taken the time to review some answers to similar problems. Unfortunately Chef, Puppet, etc. don't meet the requirements I have to meet.

    Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems, as they are not mine. All I have are scripting languages. Thanks.
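
    To make the first option concrete, the collection step I have in mind is one script that runs whichever package tool exists on the host and dumps the inventory to stdout, to be pulled back over ssh and processed centrally. Roughly like this; the per-platform commands are the usual ones, but I haven't verified the AIX output format everywhere:

        #!/usr/bin/env python
        # Dump the installed-package inventory without installing anything.
        # Output lines are prefixed with the tool name for later parsing.
        import os
        import subprocess

        COMMANDS = [
            ("dpkg", ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"]),
            ("rpm", ["rpm", "-qa", "--qf", "%{NAME} %{VERSION}-%{RELEASE}\n"]),
            ("lslpp", ["lslpp", "-Lc"]),  # AIX: colon-separated fileset list
        ]

        def exists(prog):
            return any(os.access(os.path.join(p, prog), os.X_OK)
                       for p in os.environ["PATH"].split(os.pathsep))

        for tool, cmd in COMMANDS:
            if exists(cmd[0]):
                for line in subprocess.check_output(cmd).splitlines():
                    print("%s:%s" % (tool, line.decode("utf-8", "replace")))
                break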


  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server?

    Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following:

    - Email, via Microsoft Exchange
    - Central storage
    - Acting as a print server
    - User authentication / Active Directory

    There are significant costs associated with running a server like this:

    - Electricity, first for the server itself and then for the air conditioning it requires (this thing pumps out a lot of heat)
    - Noise (of which there is a lot)
    - IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong)

    I've found ways to replace many of these functions with cheaper (better?) alternatives:

    - Google Apps / Gmail is a clear win for us: we have so many spam-related problems it's not even funny, and Outlook is dog slow on our aging computers.
    - You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS RND4210, that would let us store and share all of our documents and access printers over the network.

    The only thing I can't figure out how to do away with is authentication. It seems to me that if we got rid of our server, we'd essentially have a bunch of independent PCs with no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end without the expensive central server?


  • AWS: own e-mail domain and some general questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about.

    As with every company webpage, I want to use an "[email protected]" e-mail address, but how is that done? I looked at godaddy.com (for domain registration); they offer the kind of e-mail address I want, but for 3 dollars per month. Is this possible with AWS? Because with AWS you just get a complex domain name, which is not very user-friendly or professional-looking.

    I also want to host my dynamic webpage in the Amazon cloud, but I'm not sure I'm going about it the right way. I've read many guides, and all I know is that I have to purchase Elastic Compute Cloud (EC2) and Simple Storage Service (S3)... and every guide works with the basic Linux package. Why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, reachable over a normal domain.

    And one last question: when you sign up for an AWS account, it asks for an e-mail account, but I find it a little unprofessional to enter my free-webmailer address there. How is this normally done?

    Thanks in advance! Best regards, john.


  • How to bypass VPN when talking to a VMware guest?

    - by marc esher
    Greetings. Network/VPN n00b question here.

    I'm running VMware Workstation with a guest Windows 2003 Server, which has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server; it needs no internet access and no access to any resources on the network other than the host.

    When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest, i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent: some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. The problem only manifests when the VPN is on; when it's off, I can pound away at the database with no trouble.

    Thanks for any advice!


  • HP Power Manager SMTP setup doesn't have space for username & password

    - by Martha
    Is there some way to configure HP Power Manager not to assume that there's an e-mail server running locally?

    We recently acquired an HP T1500 G3 UPS, which we're trying to control using HP Power Manager 4.2. The main reason we wanted this particular UPS is that it says it's capable of sending notifications (of the "Yo, the power's out, you may want to look into it" type) via e-mail, as opposed to SNMP. It turns out that's not entirely true.

    The server is running Windows Server 2003. It is not running an e-mail server of any sort; we do that via two different providers. Outlook e-mail is provided by Verizon, and our SMTP e-mail service is provided by a small local company. When we use CDO to send auto-generated notification e-mails, we have to provide the SMTP server name, port, username, and password. The HP Power Manager interface only allows us to enter the server name and the username. Thus, not surprisingly, the e-mails never go anywhere.

    Help?
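
    The workaround I'm leaning towards is putting a relay that needs no credentials in front of it, for instance the IIS SMTP service on the same box, smart-hosted to our provider with the username/password configured there, so Power Manager only ever talks to localhost. Before trusting the UPS to that path, I'd test it with a few lines of Python (the addresses are placeholders):

        # Send one test message through the local relay to confirm the path.
        import smtplib
        from email.mime.text import MIMEText

        msg = MIMEText("Test notification from the UPS monitoring host.")
        msg["Subject"] = "UPS mail path test"
        msg["From"] = "ups@example.com"
        msg["To"] = "alerts@example.com"

        s = smtplib.SMTP("localhost", 25)  # the relay, not the provider
        s.sendmail(msg["From"], [msg["To"]], msg.as_string())
        s.quit()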


  • Web server Python update script

    - by ThePyCoder
    So I have made this website on which you can trade stocks with virtual money, based on real stock quotes. The quotes live in a MySQL database and are updated by a Python script that runs every minute or so. This works fine on my local machine with XAMPP, but what about moving the project to a commercial web server? Basically I want my page hosted by a professional company, but do those kinds of servers support Python scripts running in the background? A dedicated server would be too expensive, and the script does some other SQL tasks too, so it can't be replaced by PHP or the like.

    So: are there any good web hosting services out there that give me the possibility of running a script in the background while hosting a website in the foreground? What server specifications should I look for? Thanks in advance!

    PS: I've done some research and found a Python-supporting web host with SSH support. Is that what I need? Or is SSH not allowed to start processes?
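
    To be concrete, the script doesn't really need to run "in the background" at all; it's a one-shot job, and most control panels can schedule it with cron every minute. A simplified sketch of what mine does (the feed URL, table and credentials are invented for illustration):

        #!/usr/bin/env python
        # One-shot quote updater, meant to be run from cron every minute.
        import urllib2
        import MySQLdb  # assumes the host provides the MySQL-python module

        quotes = urllib2.urlopen("http://feed.example.com/quotes.csv").read()

        db = MySQLdb.connect(host="localhost", user="trader",
                             passwd="secret", db="stocks")
        cur = db.cursor()
        for line in quotes.strip().splitlines():
            symbol, price = line.split(",")
            cur.execute("UPDATE quotes SET price = %s WHERE symbol = %s",
                        (price, symbol))
        db.commit()
        db.close()

    The matching crontab entry would be something like: * * * * * /usr/bin/python /home/me/update_quotes.py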


  • E-mail duplication problem

    - by Gavin Osborn
    I have taken out a hosting agreement with a well-respected hosting provider for a couple of internet-facing servers. We have deployed several applications to these servers which send various e-mails back to us for reporting purposes.

    Context:

    - Each server runs Windows Server 2003 R2 with the IIS 6.0 SMTP service installed.
    - Each application is configured to use the local instance of IIS to send e-mails.
    - The external IP address of each server is mapped to a particular domain, e.g. server1.mydomain.com and server2.mydomain.com.
    - The e-mails are sent from a company domain name, not the domain name of the hosted servers (e.g. [email protected]).

    Symptoms: a small number (<1%) of the e-mails sent by these applications appear to be duplicated. These are exact duplicates in terms of both content and message headers.

    The fix: I contacted my hosting provider and they told me this was a common problem and instructed me to:

    1. Change the HELO response of the mail server service to a FQDN (server1.mydomain.com and server2.mydomain.com).
    2. Create a DNS A record that resolves the FQDN of the mail server to the primary IP address of the sending mail server.
    3. Create a PTR record that resolves the primary IP address back to the mail server's FQDN.
    4. In the sending domain's (mycompanydomain.com) DNS zone file, add the appropriate SPF record for the hosted servers, e.g.: v=spf1 a mx include:mydomain -all

    The problem continues: I made all of the changes as prescribed. I was a little hesitant, because these steps seem aimed more at stopping messages from being blocked than at stopping them from being duplicated, but I am certainly no expert in these matters. It has been 5 days since I applied the fix and the problem still persists.

    I am certain these problems are not a bug in the software, because these are 4 different applications installed on 2 different servers, all exhibiting the same strange behaviour, and the behaviour has never been seen in our UAT environment.

    Were my hosts correct to suggest this fix? If not, does anyone know what could be the cause of this problem? Many thanks.


  • Web page layout breaks when moved to live

    - by Moses
    I want to preface my question with the fact that I'm only a front-end web developer, so please excuse my gross lack of knowledge in this area.

    My company has three web servers: one for development (IIS 5), one for staging (IIS 5), and one for live deployment (IIS 6). Staging is an exact mirror of live. When I compare the staging and live pages side by side in Firefox (3.6), they are identical. However, when I compare them in Internet Explorer (8), there are major differences:

    - In staging, the squares for bulleted lists are small; in live, they are big.
    - In staging, table borders are thick; in live, they are thin.
    - In staging, an ASP-generated image is the proper height; in live, it is cropped at the bottom by about 10px.

    In the end, the layout on live broke because of these tiny differences, but why? Does the fact that live is on IIS 6 and staging on IIS 5 account for the small variance in display? And is there any way I can change this server-side? Also, is there any reason why Firefox displays both correctly while IE displays both incorrectly?
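
    One thing I've since wondered about: IE8 picks a document mode per site, and by default it renders Intranet-zone sites (which our staging server may well be) in Compatibility View, so the same markup can be laid out with IE7-era rules on one server and IE8 rules on the other. Forcing a mode should rule this in or out; e.g. in the page head:

        <meta http-equiv="X-UA-Compatible" content="IE=8" />

    (or the equivalent X-UA-Compatible custom HTTP header configured in IIS).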


  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a 512 MB VPS from Linode, where I ran nginx + php5-fpm (the version bundled with PHP 5.3.3) on Debian Lenny (i686). Total memory usage was about 90-100 MB.

    Now I have another VPS (from a different hosting company) and I also run nginx + php5-fpm on Debian Lenny, but x86_64. The system is 64-bit, so memory usage is higher now, about 210-230 MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    This is what the top command tells me:

        top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:    532288k total,  469628k used,   62660k free,   28760k buffers
        Swap:  1048568k total,     408k used, 1048160k free,  210060k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        22806 www-data 20  0  178m  67m  31m S    1 13.1  0:05.02 php5-fpm
         8980 mysql    20  0  241m  55m 7384 S    0 10.6  2:42.42 mysqld
        22807 www-data 20  0  162m  43m  22m S    0  8.3  0:04.84 php5-fpm
        22808 www-data 20  0  160m  41m  23m S    0  8.0  0:04.68 php5-fpm
        25102 www-data 20  0  151m  30m  21m S    0  5.9  0:00.80 php5-fpm
        10849 root     20  0 44100 8352 1808 S    0  1.6  0:03.16 munin-node
        22805 root     20  0  145m 4712 1472 S    0  0.9  0:00.16 php5-fpm
        21859 root     20  0 66168 3248 2540 S    1  0.6  0:00.02 sshd
        21863 root     20  0 66028 3188 2548 S    0  0.6  0:00.06 sshd
         3956 www-data 20  0 31756 3052  928 S    0  0.6  0:06.42 nginx
         3954 www-data 20  0 31712 3036  928 S    0  0.6  0:06.74 nginx
         3951 www-data 20  0 31712 3008  928 S    0  0.6  0:06.42 nginx
         3957 www-data 20  0 31688 2992  928 S    0  0.6  0:06.56 nginx
         3950 www-data 20  0 31676 2980  928 S    0  0.6  0:06.72 nginx
         3955 www-data 20  0 31552 2896  928 S    0  0.5  0:06.56 nginx
         3953 www-data 20  0 31552 2888  928 S    0  0.5  0:06.42 nginx
         3952 www-data 20  0 31544 2880  928 S    0  0.5  0:06.60 nginx

    So, the question is: is there any way to use less memory? By the way, I have 16 cores, and it would be nice to make use of them...
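
    The only knobs I know of are the pool settings above. My current plan is to shrink the pool and recycle children more aggressively, roughly like this (values picked to illustrate the direction, not tested):

        pm = dynamic
        pm.max_children = 3
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        pm.max_requests = 100

    I also realize that RES includes the memory shared between the children (SHR is 20-30 MB of each php5-fpm worker), so the real per-child cost is probably lower than top makes it look.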


  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment, where we would use a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer that layer is abstracted away; I shouldn't know or care about the hardware the OS is hosted on.

    We have a licensed edition of SQL Server 2008, with one processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? The reference guide here says:

        When licensed per processor, with Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server.

    For Enterprise edition there is an added option:

        ...if all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine.

    I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in the virtual environment (which is effectively impossible, because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?


  • httpd.conf, EasyApache, WHM, CentOS

    - by jessie
    First let me explain how I got into this situation. I run a streaming video site. Videos are about 100-250 MB in size, and at any given time there are 500 people on the site, so I guess that would make the files static content. Recently my site started getting really slow, and the only way to fix it temporarily was to restart Apache. There was no change in traffic that could have caused this, and my site is not being attacked.

    My hosting company recommended implementing mpm_mod and suPHP, which they did using EasyApache in WHM. After that everything was working, but a little slowly. I researched around, and my understanding is that the MPM change will do that while being more stable. I was told that installing FastCGI would speed things up just enough; instead, it made everything worse. The site is slow and times out. I used WHM to remove FastCGI, but it's still the same. It now seems that whatever I do, nothing changes; I even rolled back the httpd.conf file, but that didn't work either.

    I'm not sure how to fix this, and my hosting network guy won't be able to touch the problem until Tuesday. I have root access.


  • Why would I need a firewall if my server is well configured?

    - by Aitch
    I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal Ubuntu installs that run bits of LAMP stacks and inbound data collection (rsync). The data is large, but not personal, financial, or anything like that (i.e. not that interesting).

    Clearly, people on here are forever asking about configuring firewalls and such like. I use a bunch of approaches to secure the servers, for example (but not restricted to):

    - ssh on non-standard ports; no password typing, only known ssh keys from known IPs for login, etc.
    - https, and restricted shells (rssh), generally only from known keys/IPs
    - servers that are minimal, up to date and patched regularly
    - things like rkhunter, cfengine, lynis and denyhosts for monitoring

    I have extensive experience of Unix sysadmin work. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install things like firewalls (iptables, etc.).

    Setting aside for a moment the issues of physical security of the VPS: I can't decide whether I am being naive, or whether the incremental protection a firewall might offer is worth the effort of learning and installing it and the additional complexity (packages, config files, possible support, etc.) on the servers. To date (touch wood) I've never had any problems with security, but I am not complacent about it either.
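
    For concreteness, the incremental protection in question is only a few lines of default-deny. In iptables-save format it would look roughly like this, with 2222 standing in for my non-standard ssh port (loaded via iptables-restore):

        *filter
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # ssh on its non-standard port, plus the web ports
        -A INPUT -p tcp --dport 2222 -j ACCEPT
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT
        COMMIT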


  • Server 2008 R2 domain Windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has now made the first move to a bigger network. For now, the network consists of five Windows Server 2008 R2 machines (DC, SQL, web, etc.). Everything we need is in place, but for now we cannot afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (we are aware of it and will fix it when we can).

    Because we want to keep our systems secure and up to date, I make sure that all of them are updated regularly. The problem, of course, is that the number of updates Microsoft rolls out that require a reboot seems to keep growing (maybe I'm wrong and it just feels that way). ;-)

    In our domain, servers depend on each other's services (SQL, web, and so on), so simply rebooting a server at will is NOT a good idea. For now I update all of them without rebooting. Once they are all up to date, I shut them down in the order of their dependencies and then boot them again in the reverse order.

    I understand, of course, that if I DID have redundancy in the system, updating and rebooting would not be such a problem, because another node could take over a server's tasks; that is something we generally need to add when we can. So my question is: if you read the situation above, can you suggest better update strategies or general ideas that could help me do this process in a better or faster way? Thanks for your thoughts!


  • Outbound mail issue during Exchange 2003 migration

    - by user27574
    Dear all, I am having an outbound e-mail issue during an Exchange 2003 migration. Basically, we are migrating Exchange 2003 to new hardware; both servers are Windows Server 2003 based. Everything ran smoothly while setting up and installing Exchange 2003 on the new box, and the public folders are all replicated. My issues are shown below.

    1. After I start moving users' mailboxes to the new Exchange 2003 server, they receive undeliverable and bounced mail from some vendors. When I move a few users back to test, they have no problems at all after moving back to the old Exchange 2003 server.

    2. Another issue: our company has BlackBerry users, but we don't have BES. Under each user's mailbox we have a forward rule set up, so that both the user's inbox and the BlackBerry receive e-mail. For a user who has been moved to the new Exchange 2003 server, mail reaches only the user's inbox; it is not forwarded to the BlackBerry at all, and the SMTP queue stacks up and keeps retrying until the messages expire.

    Since not all e-mail that users send from the new Exchange server has problems, I have not been able to narrow down the issue. Can anyone give me some ideas? Could this be MX-record or reverse-DNS related? Or a firewall NAT rule setting? Thanks.


  • How can I safely close this window and forever avoid similar pop-ups from ZeoBit's MacKeeper malware/spyware?

    - by Michael Prescott
    The attached image shows a window that just popped up, and the only button available is the OK button. I could force-quit Safari, but I've got several sites open right now and don't want to have to find my place again. Besides, I've seen similar hacks in the past, and I'd like to learn how to handle them in a way better than a brute force-quit.

    I've never heard of MacKeeper or ZeoBit, so I opened Firefox and did a few searches while Safari is obviously still stuck, waiting for me to click the sneaky OK button in the dialog window. Anyhow, at least the first few pages of most search results contain lots of blabbering from questionable witnesses about how MacKeeper saved them from some malware or spyware. However, any company that hacks the browser to maliciously install its product is itself the criminal, not the provider of a true security application.

    So, there are three questions here:

    1. How can I close this window?
    2. Can I do something to Safari to avoid these hacks in the future?
    3. (Just curious) Is MacKeeper or ZeoBit somehow gaming the search results so that no information about their application being malware or spyware is listed? (I can't be the only person in the world offended by their tactics, even though it appears I am.)


  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB, data and indices combined, on a dedicated Linux server. The DB version is 5.0.89-community. Configuration is controlled via cPanel; PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured.

    The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until, at some point, the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server always fixes the problem for a day or two, and then the same thing happens.

    There are some other DBs on that server, some of which are hit pretty hard on occasion, but constantly. One of the apps maintains several persistent connections, since it does a couple of updates per minute, though I don't think that's related.

    What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image, and I didn't write or go over every line of the configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
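
    One theory I'd like to rule out is MySQL's own host blocking: after max_connect_errors aborted connections from a given host, the server refuses that host until the host cache is flushed, which would explain why a restart always clears it. If that's the cause, something like this should confirm and work around it (the new limit is an arbitrary large value):

        -- Clear the host cache now (this is what the restart is doing):
        FLUSH HOSTS;
        -- Raise the threshold so transient failures don't lock the client out:
        SET GLOBAL max_connect_errors = 10000;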

