Search Results

Search found 2484 results on 100 pages for 'maintain'.

  • Outlook 2007 Clients attachments not opening correctly

    - by Az
    Hello, and thanks for taking the time to read this. We are having an issue on our Terminal Server clients that has me perplexed. We are running a Server 2003 x64 and Exchange 2007 environment. When we attempt to open an attachment, for example a .jpeg, and the user selects to open the file, it prompts them to select a program to open the file with. The file associations appear to be fine: if I save the document to the desktop and open it, it opens with the correct program automatically. If I select "always open using this program" it will then open automatically, but I don't want to have to do this for every file type we open regularly on each client. Is this some sort of Exchange server security setting that is forcing them to associate the file? Do Outlook or Exchange maintain their own file association database? Thanks for reading!

  • How much can distance and latency affect download speed?

    - by Prix
    Let's consider A (client) and B (server), where A downloads from B. How much can a bad route from A to B affect the download speed? For example, A does a tracert to B and gets a response of 10 hops, where the average latency is around 300 ms with 10% packet loss at the 4th hop, while under normal conditions the average from A to B is 10-30 ms. Could this sort of problem reduce A's download speed drastically, or, as long as both sides and the routes between them have enough bandwidth for A's full speed from B and vice versa, should it maintain the same speed? Besides tracert and ping analysis from A to B, what else is used to identify the problem? If you need extra information, please let me know.
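
    Beyond one-off tracert and ping runs, a tool that combines both over many probes reports per-hop loss and latency statistics, which is what usually pins down a bad hop. A minimal sketch, assuming mtr is installed and using a placeholder host name:

        # Report mode (-r), wide output (-w), 100 probes per hop (-c 100):
        # prints average latency and loss percentage for every hop.
        mtr -rwc 100 server.example.com

        # A long plain ping run estimates sustained end-to-end loss and jitter.
        ping -c 100 server.example.com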

  • maintaining redirects in nginx from an external source

    - by Sascha
    I am in the situation of giving our marketing department the opportunity to maintain their redirects on their own. Until now, they passed the information to the IT department and we maintained it for them in nginx.conf. Some of these guys are quite familiar with redirects in IIS or even in Apache, but giving them direct access to the nginx configuration is not an option. I see that there is no nginx support for .htaccess files, which I could otherwise have given them access to, and I would also prefer not to grant write access to a conf file that nginx includes. I expect that our marketing department would break our nginx setup within hours... Is there a secure possibility without giving them access to the heart of our load balancer?
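
    One pattern that keeps non-admins out of the nginx syntax entirely: let them edit a plain two-column text file, and have a script validate it, compile it into an nginx map, and reload only if the result passes nginx -t. A sketch, where the file paths, the variable name, and the trigger (cron job, deploy hook) are all assumptions:

        #!/bin/sh
        # Compile "source-URI target-URL" pairs into an nginx map file.
        # nginx.conf is assumed to contain, in the http block:
        #     include /etc/nginx/redirect_map.conf;
        # and in the server block:
        #     if ($redirect_target) { return 301 $redirect_target; }
        SRC=/srv/marketing/redirects.txt
        OUT=/etc/nginx/redirect_map.conf

        cp "$OUT" "$OUT.bak" 2>/dev/null || true
        {
            echo 'map $request_uri $redirect_target {'
            echo '    default "";'
            # Skip comments and malformed lines; quote both columns.
            awk '!/^#/ && NF == 2 { printf "    \"%s\" \"%s\";\n", $1, $2 }' "$SRC"
            echo '}'
        } > "$OUT"

        if nginx -t; then
            nginx -s reload
        else
            mv "$OUT.bak" "$OUT"    # roll back the broken map
        fi

    Marketing gets write access only to redirects.txt; a mistake there drops a single redirect but cannot break the load balancer's configuration.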

  • Access permission /opt/ in Ubuntu

    - by user1201239
    I want to access my /opt/ folder. I have found the following commands for granting access permissions, but I am not sure what the purpose of these commands is, or which one is better to use to maintain both security and access. Please explain the purpose, or what the different numbers mean, in the permissions. Here they are: sudo chmod 755 -R /opt/ sudo chmod 755 /opt/ sudo chmod 775 /opt/ sudo chmod 777 /opt/ I didn't know these commands, so what I used to do previously was run "gksudo nautilus", then right-click and change the owner from root to the current user group. But now that I have found these commands, I would like to know: which one should I use, and what do they do?
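
    For reference, each octal digit is the sum of read (4), write (2), and execute (1), applied to the owner, the group, and everyone else, in that order:

        # 7 = 4+2+1 (rwx), 5 = 4+1 (r-x)
        sudo chmod 755 /opt       # owner rwx, group r-x, others r-x
        sudo chmod 775 /opt       # owner rwx, group rwx, others r-x
        sudo chmod 777 /opt       # everyone rwx -- avoid, removes all protection
        sudo chmod -R 755 /opt    # -R applies the mode to everything inside too

        # Often safer than loosening the mode: take ownership instead,
        # which is the command-line equivalent of the Nautilus owner change.
        sudo chown -R "$USER": /opt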

  • I want to version control my entire slice

    - by Tom
    I'm renting a slice (i.e., a VPS) from Slicehost. I've spent a day or two filling up /usr with my favorite packages, /etc with configs and init scripts, and so on. Now I want to: (1) save this whole setup somewhere (e.g., to load onto another machine); (2) see what changes I've made to which files; (3) revert changes, tag revisions, and all that other good version control stuff. Saving a disk image gives me (1), but not (2) and (3). Using Subversion (svn import / svn://someotherhost) might give me all three, but I expect problems if I actually try to check a project out into / and maintain .svn directories in root-owned areas. And to load my setup onto a fresh slice, I'd need to install an svn client on it first. Is there a good way to do what I want to do?
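
    A sketch of one git-based approach (git keeps a single top-level .git instead of scattering .svn directories through root-owned trees, and tools like etckeeper automate exactly this for /etc); the remote URL and the example file path are hypothetical:

        # Version /etc in place (run as root).
        cd /etc
        git init
        git add -A
        git commit -m "baseline after initial slice setup"

        # Later: inspect and revert changes -- goals (2) and (3).
        git status
        git diff
        git checkout -- ssh/sshd_config     # hypothetical example path

        # An off-machine copy covers goal (1).
        git remote add backup ssh://user@otherhost/srv/etc.git
        git push backup master

        # /usr is mostly package-managed; record the package set instead
        # of versioning the binaries (Debian/Ubuntu):
        dpkg --get-selections > /etc/packages.list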

  • Is SCCM overkill for medium-sized organizations?

    - by Le_Quack
    I am an IT technician in a high school with around 1,600 students, 250 staff, and 800+ client computers, mostly running Windows 7. Our team is composed of three members. My boss seems content with a network that (just about) works, not necessarily a productive, well-maintained network that is easy to run and maintain. I'm still fairly early in my IT career, so I'm not up to speed on all the different endpoint management solutions that are available. I'm looking for a better way to manage clients (deploy software, track changes, inventory, etc.). I like the look of SCCM 2012's features, but the case studies seem to be aimed at large multi-site infrastructures rather than a single mid-sized site. Is SCCM suitable for a mid-sized single site, or is it aimed at much larger corporations? How can I determine whether or not an endpoint management solution like SCCM is a good fit for our organization? EDIT: Thanks for all the help. I'll take a look at SCE and SCCM and get some proposals drawn up to take to my boss/deputy head.

  • Do you find using a VPS worthwhile?

    - by Grant Palin
    I am currently on shared hosting, and have been recently looking at the idea of switching to a VPS instead. From what I have gathered, a VPS allows you more control over your server setup. But at the same time you have to set it up yourself, and maintain it. This is the bit I am asking about... Despite the power and flexibility you get from using a VPS, you have to take care of it yourself. Is it worth it? Some context: I am primarily a Windows user, but have been tinkering with various Linux distros off and on for several years. I know enough about Linux to get by, or to be dangerous - take your pick. I've also done some tinkering on my current host, but have no serious sysadmin experience. There's always a first time!

  • How to enable CDR on AsteriskNow 1.5

    - by Michal Niklas
    I have upgraded the PBX to Asterisk 1.6.2.7 and now CDR files are not created. It looks like such logging is disabled:

        Connected to Asterisk 1.6.2.7 currently running on pbx2 (pid = 5824)
        Verbosity is at least 3
        pbx2*CLI> cdr show status

        Call Detail Record (CDR) settings
        ----------------------------------
          Logging:                    Disabled
          Mode:                       Simple

    Asterisk shows that the CDR modules are loaded:

        pbx2*CLI> module show like cd
        Module                 Description                              Use Count
        cdr_manager.so         Asterisk Manager Interface CDR Backend   0
        cdr_csv.so             Comma Separated Values CDR Backend       0
        app_cdr.so             Tell Asterisk to not maintain a CDR for  0
        app_forkcdr.so         Fork The CDR into 2 separate entities    0
        func_cdr.so            Call Detail Record (CDR) dialplan functi 0
        cdr_custom.so          Customizable Comma Separated Values CDR  0
        6 modules loaded

    How do I enable the creation of CDR CSV files?
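
    For what it's worth, "Logging: Disabled" in that output usually means the CDR engine itself is switched off in cdr.conf, rather than anything being wrong with the modules. A sketch of the usual fix, assuming the stock file locations:

        # /etc/asterisk/cdr.conf must enable the engine:
        #     [general]
        #     enable=yes
        # After editing, reload and re-check from the shell:
        asterisk -rx "module reload"
        asterisk -rx "cdr show status"
        # CSV records should then appear in:
        ls /var/log/asterisk/cdr-csv/Master.csv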

  • tar - exclude certain files

    - by Alan
    I wish to tar all files in a directory and its subdirectories that do NOT end in .jpg, .bmp, .gif, or .png. So, given the following folders and files: foo/file.txt foo/file.gif foo/bar/file foo/bar/image.jpg I want to tar only the files file.txt and file; file.gif and image.jpg should be ignored. I would also like to maintain the folder structure. My first thought was to pipe the results of the find command, in conjunction with grep -v ".jpg|.gif|.bmp|.png", to a text file, and then use tar's include argument to feed it that list of files. However, the results of the grepped find command also contain directories (in the example above, "foo" and "foo/bar"), and when a directory is fed to tar, it includes all files in that directory, so I would end up with a tar file containing all of the files, which is not what I want. Is there any way to prevent find from outputting directories? Is there a far easier way to approach this?
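
    For the record, find can restrict itself to regular files (-type f) and do the exclusion itself, so the grep and the temporary file become unnecessary. A sketch using GNU find and tar:

        # -type f keeps directories out of the list; ! \( ... \) drops the
        # image extensions; -print0/--null copes with odd filenames; tar
        # stores the paths as given, so the folder structure is preserved.
        find foo -type f ! \( -name '*.jpg' -o -name '*.bmp' \
                -o -name '*.gif' -o -name '*.png' \) -print0 |
            tar --null -czf archive.tar.gz --files-from=-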

  • How do you persist installed software & configurations on an Amazon EC2 instance?

    - by Richard
    I've gotten a base Debian AMI up and running, and now I need to know the best way to maintain it. I've run the updates (aptitude update/upgrade) and installed/configured my software (Apache, Ruby, etc.), but if I reboot the instance or start a new one, I'll have to do all this work over again. How do you persist these types of things across a reboot? Do you build a new AMI every time you adjust some tiny piece of the system? Or is there some way to feed it a script on startup that configures it in "real time"? I know I could go all the way with a Reductive Labs Puppet style setup, but that's a bit too much for my needs right now (1-2 servers). Any best practices on this? Update: I found a bit of information on using User-Data to run scripts at instance boot time.
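
    The User-Data route from the update looks roughly like this: pass a script at launch, and the instance rebuilds its own state on first boot. A sketch only; it assumes the AMI actually executes user-data at startup (newer images do this via cloud-init, older Debian AMIs may need a boot hook), and the package names and URL are examples:

        #!/bin/bash
        # Example user-data script: re-create the installed software at boot.
        apt-get update && apt-get -y upgrade
        apt-get -y install apache2 ruby
        # Pull configuration from somewhere durable (placeholder URL):
        # git clone https://example.com/ops/config.git /srv/config

    With the era's EC2 API tools, the script would be passed at launch with something like ec2-run-instances ami-xxxxxxxx -f setup.sh.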

  • Update saved password for basic authentication using a script

    - by Kalamane
    I have a website that uses basic authentication as described on this webpage. Each of the computers I manage has the password saved in its browser. There is only one username and password for this. After someone logs in to the site this way, they are presented with their individual username and password prompt as part of the web page. The purpose of the initial username/password is to discourage non-technical employees who aren't supposed to be using the page from even viewing it. So far, when we've had to change this password, I've manually gone to each computer and updated the saved password. I'm writing a startup script to configure other aspects of these systems so that I can maintain them more easily, and I'd like to be able to update the saved password via this script. The operating system running on these machines is Windows XP SP3, and the browsers they're using to access this site are IE8 and IE9. How can I update the saved basic authentication information for a website via a script?

  • Cannot Login To phpMyAdmin

    - by Zach Dziura
    I'm running a simple LAMP server at home, from which I host a personal blog. The server is running Arch Linux with the latest-and-greatest versions of Apache, MySQL, and PHP. In order to easily maintain the databases, I installed phpMyAdmin. However, I cannot log in. If I SSH into the server and run mysql -u <user> -p <password>, no errors show up and I'm immediately placed at the MySQL prompt. No problem. However, when I try to log in with phpMyAdmin using those exact same credentials, nothing happens. No errors, no nothing; I'm just redirected back to the login page. Did I do something wrong? Thanks in advance for any and all answers!

  • In CentOS 4.3, Webmin 1.300 bandwidth monitoring is eating disk space. How do I delete those files?

    - by Silkograph
    I maintain a Linux server used for mail, Squid, and DNS services. Recently I observed that something was eating the server's disk space, and today I finally caught the culprit, which was consuming the disk by storing a large number of files. Webmin 1.300 is installed on this server. We use the Squid proxy and Sarg to monitor Internet access. For the last few years I have always manually cleared the Sarg-generated files under /var/www/html/squid, but I never realized that Webmin also stores some kind of bandwidth log files in its directory structure. I have noticed that under /etc/webmin/bandwidth/hours it has stored many thousands of files since 2007, totaling about 17 GB. We have a 40 GB HDD in this server machine. My question is: how can I delete those (/etc/webmin/bandwidth/hours) files safely?
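
    Those hourly files are accumulated monitoring data rather than configuration, so pruning them should only lose history; a sketch that keeps the last 30 days (it may also be worth disabling or capping the history in Webmin's Bandwidth Monitoring module so they stop accumulating):

        # Preview first: how many files are older than 30 days?
        find /etc/webmin/bandwidth/hours -type f -mtime +30 | wc -l

        # Then delete them; drop -mtime +30 to remove everything.
        # xargs -0 is used instead of -delete for older find versions.
        find /etc/webmin/bandwidth/hours -type f -mtime +30 -print0 | xargs -0 rm -f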

  • Filesystem to quickly get recent modifications

    - by liori
    Hello, I've got a relatively big filesystem (ext4) with lots of small files, and I'd like to back it up. Making full backups often is not feasible for me, so I want a way to make differential/incremental backups (differential preferred). But... this is a laptop, and scanning for changed files takes a lot of time. My questions: (1) Is it possible to get a list of files changed since some date from ext4's journal? I know it wasn't designed with this idea in mind, and it might be too small for bigger timespans, but maybe it is somehow possible? (2) Is it possible to monitor filesystem modifications and maintain a list of changed files reliably? I think I could use inotify, but it might be too slow to monitor the full filesystem and might be unreliable. (By reliable I mean that either I get all modifications since the last backup, with the list not missing anything, or an error message.) The laptop runs Debian unstable.
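
    Short of journal tricks, the usual low-tech baseline is a timestamp file plus find -newer, which walks metadata only and needs no daemon. A sketch; the paths are placeholders, and the stamp file must be created once before the first run:

        STAMP=/var/backups/last-backup.stamp
        NEW=$STAMP.new
        touch "$NEW"    # mark "now" first, so files changed mid-scan are kept
        find / -xdev -type f -newer "$STAMP" > /tmp/changed-files.list
        # ...feed the list to the backup tool; only after it succeeds:
        mv "$NEW" "$STAMP"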

  • isolate web servers on intranet with dfl-800

    - by microchasm
    I administer a small network (10 users). I'm getting ready to deploy an internal webapp that will be hosted and accessed locally only. On the network (192.168.111.0/24) there are about 10 users, a Win2k3 server, Apache (RHEL), MySQL (RHEL), and various miscellaneous peripherals. I'd like to isolate the Apache and SQL boxes into a separate area of the LAN to keep things easier to maintain and grow. I've been reading about VLANs, subnets, etc. I'm not clear, however, on which would be the best solution for our setup. Thanks for any tips and/or advice.

  • Why doesn't my laptop battery charge while the laptop is in use?

    - by larryb82
    Up until a week ago, my laptop had always been able to charge the battery while I was using it. Now, it will not charge unless the computer is sleeping, hibernating, or turned off. The icon in the system tray states that the battery is charging, but it is not animated (it used to be) and of course the power level does not increase. Otherwise, the battery seems to be fine: the battery life is decent (2h+), and while the laptop is in use and plugged in, the battery will maintain a constant charge. Any troubleshooting help would be great (i.e., is this a charger issue, a battery issue, a software issue, etc.).

  • How can I run two already-installed OSes at the same time?

    - by eran
    I have Win7 and Ubuntu installed on my PC, and I can choose which to run at boot time. I would like to be able to run the Ubuntu installation from within Win7. Tools like VMware allow one to create a new installation of a guest OS, which can then be run alongside the host OS. However, I already have Ubuntu fully installed on my hard drive, and I'd like to keep the dual-boot option. Ideally, I'd like to create a new virtual machine on my Win7, but instead of installing a new guest OS, just point it at the existing installation. Is that possible?
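
    This is usually called booting a virtual machine from a raw (physical) disk. As one illustration of the idea, VirtualBox can wrap an existing partition in a vmdk descriptor; a sketch only, since raw-disk VMs need care with bootloaders, and the disk and partition numbers below are assumptions to be checked first:

        # On the Windows host, from an elevated prompt. List the partitions:
        VBoxManage internalcommands listpartitions -rawdisk \\.\PhysicalDrive0

        # Wrap only the Ubuntu partition (say, partition 3) in a descriptor:
        VBoxManage internalcommands createrawvmdk -filename ubuntu-raw.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3

        # Then create a VM that uses ubuntu-raw.vmdk as its hard disk.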

  • how to manage credentials/access to multiple ssh servers

    - by geoaxis
    I would like to write a script which can maintain multiple servers via SSH. I want to control authentication/authorization in such a manner that authentication is done by a gateway, and any other access is routed through this SSH server to internal services without any further authentication/authorization requirements. So, for example, user A can log into server_1; he can then ssh to server_2 without any other authentication and do whatever he is allowed to do on server_2 (like shut down MySQL, upgrade it, and restart it, which could be done via some remote shell script). The problem I am trying to solve is to come up with a deployment script for a Java EE system which involves databases and Tomcat instances; they need to be shut down and re-spawned. The requirement is a deployment script with as little human interaction as possible, for both developers and operations.
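
    One standard way to get this shape is key-based authentication plus a ProxyCommand in ~/.ssh/config, so every inner hop is tunneled through the gateway and a script can address server_2 directly. A sketch; host and user names are placeholders:

        # ~/.ssh/config on the machine running the deployment script
        Host gateway
            HostName gateway.example.com
            User deploy

        Host server_2
            User deploy
            # Tunnel through the gateway. -W needs OpenSSH 5.4+;
            # older versions can use: ProxyCommand ssh gateway nc %h %p
            ProxyCommand ssh gateway -W %h:%p

    A deployment script can then run, e.g., ssh server_2 '/etc/init.d/mysql restart' with no interactive prompts, provided the user's key is authorized on both the gateway and server_2.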

  • Can a consumer wireless router act as both a wireless client and access point?

    - by glibdud
    I'm going to be moving in the future, and integrating my home network into that of my landlord. I wish to maintain an isolated network while using his internet connection, so I'm planning on cascading my router off of his (WAN-to-LAN type configuration). Unfortunately, it looks like it might not be feasible to run a wire between the two. Therefore, I'd like to send my WAN connection over WiFi to his router. At my disposal, I have a WRT54GL (running Shibby's Tomato mod), and I just bought an Asus RT-N66U (I can be flexible with the firmware). My first thought was to set up the WRT as a wireless bridge, then run a wire between that and the N66U's WAN port. I'm reasonably sure I could make that work, but can I eliminate the WRT from the equation altogether? Can the N66U connect to the landlord's WiFi as a client, effectively using that as the WAN port, while simultaneously providing wireless access to my devices on an altogether different WLAN?

  • SFTP: How to keep data out of the DMZ

    - by ChronoFish
    We are investigating solutions to the following problem: we have external (Internet) users who need access to sensitive information. We could offer it to them via SFTP, which would provide a secure transport method. However, we don't want to maintain the data on the server, as it would then reside in the DMZ. Is there an SFTP server that has "copy on access", such that if the box in the DMZ were to be compromised, no actual data would reside on that box? I am envisioning an SFTP proxy or SFTP passthrough. Does such a product currently exist?

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations on cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar machine ready to go: all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week. We do have time from 12 am to 9 am to perform the necessary steps for a clone, and we are talking about under 10 GB of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident that either is capable of completing the task. Looking for some suggestions, and possibly a step-by-step if you could be so kind.
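
    For the rsync half, a nightly file-level clone onto the mounted spare drive might look like the sketch below. The mount point and exclude list are assumptions, the database should be dumped or stopped first so its files are copied in a consistent state, and the spare drive needs a bootloader installed once (e.g., grub-install) before it can actually be popped in and booted:

        # /mnt/spare is the single target drive, partitioned and mounted.
        rsync -aAXH --delete \
            --exclude=/proc --exclude=/sys --exclude=/dev \
            --exclude=/tmp --exclude=/mnt \
            / /mnt/spare/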

  • What is the best IP/subnet setup strategy for a multi-server webhosting setup?

    - by Roy Andre
    Sorry for the mixed-up title, but let me try to explain better: we run a hosting solution which until now has supported shared hosting and VPSes. Easy enough. We are now getting larger clients which require a more complex setup. We have more or less settled on the server setup itself, which will consist of: 1-2 frontend proxy/load-balancing servers, 2+ application servers, 1 database server, and 1 optional memcached server. The issue we are dealing with is agreeing on a flexible and easy-to-maintain IP setup. So far we've considered VLANing the internal servers into their own subnet, we've thought of assigning an official IP to each server, and so on. What would be the best approach here? Any best practices? Using one official IP on the frontend server, and then just setting up an internal subnet for the servers behind it? We could then just NAT in any sources that require direct access to, for instance, the DB server over 3306.
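
    The NAT-in part at the end is one rule per trusted source on a Linux frontend; a sketch with placeholder addresses (net.ipv4.ip_forward=1 is required):

        # Forward MySQL on the official IP to the internal DB server,
        # but only for one trusted external source address.
        iptables -t nat -A PREROUTING -p tcp -s 203.0.113.7 --dport 3306 \
            -j DNAT --to-destination 10.0.0.20:3306
        iptables -A FORWARD -p tcp -d 10.0.0.20 --dport 3306 -j ACCEPT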

  • Automatic driver search & update on Windows?

    - by Ben
    I have a Dell laptop issued by my employer, and I always find it a real pain to search for, download, and maintain its drivers. It baffles me that there does not seem to be a nice way (product, website, ...) to just download the stuff you need without hassle. The same goes for the other Windows-based laptops in my direct environment. Are there any (preferably free) automated solutions available? Or do you have a nice workflow, other than searching the manufacturer's website, to help smooth this process?

  • shut down FTP from IIS 6 after <X> failed login attempts

    - by Justin C
    Is there a setting in IIS 6 to turn off an FTP site after a specified number of failed login attempts? It has already been documented on this site that a Windows server sitting on a static IP address can record tens of thousands of failed login attempts a month. One server I maintain has had tens of thousands of attempts made against the FTP port. I have solid passwords in place, so I am not overly concerned. I rarely have to use FTP, so for the most part I turn it on and off as I need it. Sometimes, though, I forget to turn it off when I am done, only to find the next day that my event log is full of audit failures. I would want to set a high number, in case I just mess up the password: something like, if 50 failed login attempts happen, just turn off the FTP site. Then if I need it later I can just start it again.

  • One vs. many domain user accounts in a server farm

    - by mjustin
    We are in the process of migrating a group of related computers (intranet servers, SQL, application servers of one application) to a new domain. In the past we used one domain user account for every computer (web1, web2, appserver1, appserver2, sql1, sqlbackup, ...) to access central Windows resources like network shares. Every computer also has a local user account with the same name. I am not sure if this is necessary, or if it would be easier to configure and maintain with one domain user account. Are there key advantages/disadvantages of having one single user account vs. dedicated accounts per computer for this group of background servers? If I am not wrong, one advantage besides easier administration of the user accounts could be that moving installed applications and services around between the computers would no longer require a check of the access rights. (Except where IP addresses or ports are used.)
