Search Results

Search found 14745 results on 590 pages for 'setting'.


  • domain user disabling screensaver

    - by RASG
    I have the following situation: for security reasons the screensaver is activated after 10 minutes and immediately locks the screen. There are GPOs preventing the user from changing the screensaver parameters and the background image. In order to bypass the background policy, some users are using bginfo. The problem is that for some reason the screensaver doesn't work anymore. The settings are still the same (10 minutes; locked to the user), and comparing snapshots of the registry before and after executing bginfo doesn't show any significant modification. Any hints?

    EDIT 1: OK, I figured out what's going on, but now I have another question. bginfo refreshes the user settings by reading HKEY_CURRENT_USER\Control Panel\Desktop, which has ScreenSaveActive. If the user sets it to 0, the screensaver is disabled. Why isn't HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Control Panel\Desktop, which sets ScreenSaveActive to 1, being enforced? Or, if it is being enforced, where is bginfo storing the value 0, and how can it bypass the policy?

    EDIT 2: I also discovered that after setting any value in HKEY_CURRENT_USER\Control Panel\Desktop\ScreenSaveActive, it can be deleted and the last value will remain active. For some reason the HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Control Panel\Desktop\ScreenSaveActive value is not being enforced for the user.

    Read the article

  • no administrator password for Windows 7

    - by huskergirl78
    I'm a secretary and my boss set up my new Windows 7 OptiPlex 7010 (Dell) computer for me while I was on vacation (he does not remember setting any "administrator" password). We are a small office so there is no system password set, either. I've used it for 6 months, all the while I couldn't access network drives, etc., without an administrator password. It was annoying, but I could still get my work done. Finally, on a slow day I took it upon myself to "fix" the problem, and in all my infinite wisdom, I managed to change my user account from administrator to standard user, so now I really can't do anything. I can't download or install any programs, move or rename files, etc. I tried the Dell suggested solution, but the BIOS tells me there is no password set, so it has to be a Windows 7 problem. All the solutions I have come across require an administrator password to let me do them. What can I do to find out the admin password so I can use my own darn computer!? Is there a default admin password?

    Read the article

  • Technology mash: is this possible?

    - by Jon Story
    I'm in the process of setting up my own DNS+hosting on a couple of VPSes and my home machines, mostly for academic/learning purposes, but also for convenient access to my files, hosting my personal websites, private git repositories, etc. I've got a main web server with DNS, and a slave DNS server. I've also got a couple of machines at home doing file hosting, video streaming and all that fun stuff. I'm intending to use my VPSes to provide myself with a dynamic DNS system so that I can point mydomain.com at my DNS servers, with home.mydomain.com going into my home network via a Raspberry Pi. HOWEVER.... I've not got access to the network infrastructure at home (rented accommodation with managed internet), so I can't forward the ports on the router to my own machines. As such, I'm wondering if it's possible to route all the traffic via an SSH/HTTP tunnel through one of the VPSes? My plan is to have the Raspberry Pi provide a VPN into my home network. The Raspberry Pi uses SSH to connect to the VPS, and the VPS forwards any traffic to home.mydomain.com via the tunnel to the Raspberry Pi. Is this even possible, and how do I go about it? I don't mind getting my hands dirty with coding and low-level tools, I'm just not sure where to start or what the best way to go about it is.
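
    This is essentially an SSH reverse tunnel. A minimal sketch of the idea, assuming the VPS is reachable as vps.mydomain.com, the Pi's local web service listens on port 80, and using autossh to keep the tunnel alive (the hostnames, ports and the autossh choice are all illustrative; plain ssh -N -R works the same way):

        # On the Raspberry Pi: open a persistent reverse tunnel to the VPS.
        # Remote port 8080 on the VPS is forwarded back to port 80 on the Pi.
        autossh -M 0 -N \
            -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
            -R 8080:localhost:80 tunnel@vps.mydomain.com

        # On the VPS: either proxy home.mydomain.com to localhost:8080 in the
        # web server, or set GatewayPorts clientspecified in sshd_config and
        # bind the remote end publicly with -R 0.0.0.0:8080:localhost:80.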

    Read the article

  • IIS7 can't read web.config on shared Mac filesystem

    - by RobG
    I'm running a VirtualBox virtualized Windows 2008 Server on my Mac, just finished setting it up today. On it, I have SQL Server 2008, IIS and ColdFusion 9. I want to serve websites from my Mac filesystem (for development purposes). So I created a new website in IIS and pointed it at the appropriate path using a UNC path, \\vboxsvr\rob\Sites\testsite, which contains the ColdFusion code and a web.config file. When I attempt to modify the file at all, or view the site in a web browser, I get an error: HTTP 500.19 - Internal Server Error - "The requested page cannot be accessed because the related configuration data for the page is invalid." I did some Googling and found several similar problems, but nothing exactly like what I have. The closest one seemed to indicate permissions. So I recreated the site and set it up to allow the Administrator (in Windows) to access the stuff. That didn't help. I can read/modify the files just fine from within Windows, but IIS itself can't seem to do it. What do I need to do to fix this? Thanks!

    Read the article

  • Printing special / extended characters on web page - Firefox or printer issue?

    - by edmicman
    My dad brought this to my attention and I'm looking into it, but thought I'd post here and see if anyone has any ideas... He's running Win7, using Firefox and printing to a wireless connected Brother HL-2170W printer. He's got a web page (http://cornandsoybeandigest.com/inputs/fertilizer/applying-nitrogen-after-planting-0512/index.html) that has extended/special characters in it - the funny "a" in Fernandez. The characters show correctly in Firefox on the page. He printed it, and the extended "a" printed as a diamond with a question mark. He says it shows that way in the print preview, too. I pulled up the same page in Ubuntu, Firefox, and it displayed in my print preview and physically printed everything correctly. I just checked on my wife's Win7 PC in Firefox and the print preview looked correct on her system, too. We have a Brother 2040 here. Soooo, my question is, is this possibly a problem in the browser somehow, or the printer driver? I'm leaning towards the printer driver now, but I can't say I've run into this before. Is it a setting somewhere? I just installed this printer for him the other day, using the CD that came with it; I could try updating the driver from Brother's website I guess. Is there anything else I should look at or check? Thanks!

    Read the article

  • How can I remove all drivers and other files related to a USB Mass Storage device?

    - by Bob
    I have a flash drive here that does not work under one OS on one computer - let's call it desktop Windows 7. It works fine on another computer - laptop Windows 7. It also works fine on Windows 8 on the same desktop computer. Other flash drives work fine under desktop Windows 7. So it's not a hardware issue, and not a generic USB Mass Storage driver issue. It's something specific to this drive. On desktop Windows 7, I can connect the drive but no volume comes up under Windows Explorer. Ditto for Disk Management. With diskpart, loading hangs until I unplug the drive; if I replug it and try list disk, it hangs again. If I unplug the drive at this point, list disk prints out all attached drives - including the just-removed flash drive. The drive consistently appears under Device Manager, but uninstalling the drivers, restarting and reinstalling the drivers (by inserting the drive) only works for the first insertion. After that it fails again. I get the feeling that the driver files are not actually removed, and are corrupted, meaning every reinstall it's the same corrupted drivers being installed. Is there any way to remove these drivers completely? Or perhaps some other setting Windows 7 retains? Formatting the drive through another computer/OS does not help. I've also tried a complete wipe and rebuild of the MBR and single partition. The allocation unit size makes no difference; neither does an NTFS format. This is a relatively small matter, and I would not like to reinstall the entire OS!

    Read the article

  • Need to restore emails that Outlook deleted from the mail server

    - by Mugen
    I wanted to use POP mail for my Yahoo mail account, so I went to the Yahoo mail settings and enabled the POP feature. I installed Microsoft Outlook 2007 and added the settings for my Yahoo mail account. Then I performed a sync. I successfully received all the emails from my Yahoo account onto my PC. But when I logged into my Yahoo mail online, to my horror all the mails were deleted. It seems someone at Microsoft for some reason decided to keep the default setting unchecked for "Leave a copy of messages on the server". This has been a very old account and I need to have all the mails also present on the mail server so I can access them anytime and for future purposes. Is there any way I can restore these emails on the server again? Any ideas how I can get it back to the way it was? I have been googling this for quite some time now but I'm not able to find it. Any helpful suggestions are welcome. Thanks.

    Read the article

  • Ubuntu and Windows and Separate HDs, oh my!

    - by LuxuryMode
    Need some major help. I'm running a Dell XPS/Dimension 630i. It came with "SATA 2 RAID 0 With Dual 500GB Hard Drives." I have installed a new, third non-RAIDed drive and installed Ubuntu on it. So now I have Windows on the original hard drive and Ubuntu Linux on the new HD. When I get to the boot menu where I can select an OS, if I select Windows I get an error: "No such drive, no such disk." Also, strangely, in order to even get to the bootloader menu in the first place I have had to disable ALL ports under the RAID config. Unless I do this, I just get a never-ending blinking cursor. I have tried every conceivable CMOS config and nothing else works. I tried setting port 3 (the new HD w/ Ubuntu) to first hard disk boot priority. I tried disabling all other ports and enabling the Ubuntu HD port, and vice versa. Here's a pic of the error I get when I try to boot to Windows: http://imgur.com/TJ1mS. Also, please note that I can actually access all files from the RAIDed Windows drive through Ubuntu. (Someone suggested just reinstalling Windows from the installation CD. Agree?)
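
    Because the Windows install lives on a BIOS "fakeraid" RAID 0 set, GRUB typically needs that set assembled on the Linux side before it can find and chainload Windows. A possible diagnostic path from Ubuntu (a sketch, not a guaranteed fix):

        sudo apt-get install dmraid
        sudo dmraid -ay          # activate the BIOS RAID set; it should appear under /dev/mapper
        sudo os-prober           # check whether the Windows install is now detected
        sudo update-grub         # regenerate the GRUB menu with the detected entry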

    Read the article

  • Postfix additional transports - is it working?

    - by threecheeseopera
    I have enabled two additional transports in my Postfix config to deal with recipient domains that demand connection limiting, per the instructions here at Server Fault. However, I have no idea if this is working or not; in fact, I think it is not working, due to the send speeds I am seeing in the logs. How might I determine if my additional transports are working? If they aren't, do you have any tips on figuring out why? And do you have any comments on my particular configuration? (Am I a bucket of fail?) I have enabled the additional transports in master.cf:

        smtp      inet  n  -  -  -  -   smtpd
        careful   unix  -  -  n  -  10  smtp -o smtp_connect_timeout=5 -o smtp_helo_timeout=5
        cautious  unix  -  -  n  -  -   smtp -o smtp_connect_timeout=5 -o smtp_helo_timeout=5

    I have set up the transport mapping file /etc/postfix/transport:

        hotmail.com    cautious:
        yahoo.com      careful:
        gmail.com      cautious:
        earthlink.net  cautious:
        msn.com        cautious:
        live.com       cautious:
        aol.com        careful:

    I have set up the transport mapping and some connection-limiting settings in main.cf:

        transport_maps = hash:/etc/postfix/transport
        careful_initial_destination_concurrency = 5
        careful_destination_concurrency_limit = 10
        cautious_destination_concurrency_limit = 50

    Finally, I have converted the transport file to a db per the Postfix docs:

        postmap /etc/postfix/transport

    And then restarted Postfix. I do see my transport_maps setting when I run postconf, but I do not see any of the transport-specific settings ('careful_xxx_yyy_zzz'). Also the mail logs do not appear to be different in any way from what they were previously. Thanks!!!
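
    A quick way to sanity-check a setup like this (a sketch; yahoo.com is just one of the mapped domains): query the compiled transport map directly, then watch for the new service names in the mail log.

        # Does the transport table actually map the domain to the new transport?
        postmap -q yahoo.com hash:/etc/postfix/transport    # expect: careful:

        # Confirm the setting postfix is really using
        postconf transport_maps

        # Send a test message to an address at one of the mapped domains, then check
        # the log; deliveries handled by the new services are usually tagged with the
        # master.cf service name (careful/cautious) rather than plain smtp.
        grep -E 'careful|cautious' /var/log/mail.log | tail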

    Read the article

  • “NT AUTHORITY\ANONYMOUS LOGON” error in Windows 7 (ASP.NET & Web Service)

    - by Tony_Henrich
    I have an ASP.NET web app which works fine on a Windows XP machine in a domain. I am porting it to a standalone Windows 7 machine. The app uses a web service which makes a call to SQL Server. The web server (IIS 7.5) and SQL Server are on the same standalone machine. I enabled Windows authentication for the website and web service. The web service uses a trusted-connection connection string. The web service credentials use System.Net.CredentialCache.DefaultCredentials. I noticed username, password and domain name are blank after the call! The web service and web site use the 'Classic .NET AppPool' with the NetworkService identity. I am getting an exception "NT AUTHORITY\ANONYMOUS LOGON" in the database call in the web service. I am assuming it's related to the blank credentials. I am expecting the ASPNET user to be the security token to the database. Why is this not happening? Did I miss a setting? (Usually this happens when SQL Server and the web server are on two different machines in a domain - delegation & double hopping - but in my case everything is on a dev box.)

    Read the article

  • Desktop applications are unable to launch my browser in Windows 8

    - by Chevex
    I have a fresh copy of Windows 8 Pro installed from MSDN. I have Google Chrome installed (stable channel) and it is set as my default browser. I even went into Control Panel > Default Programs to ensure that Chrome had all its defaults. When other desktop applications try to launch my browser, they always fail. For example, while trying to install the Android SDK for Windows, the installer accurately detected that I did not have the JDK installed. It provides a friendly button to visit java.oracle.com. When pressing this button, nothing happens at all. You can see that here: http://youtu.be/XXL8GhuWWg0 If it were only that application that was having issues I wouldn't think anything of it, but I have been encountering similar issues all over the place. Probably the most irritating one is when Visual Studio has updates; clicking the update button does nothing: http://www.youtube.com/watch?v=zwd1mn3TId0 You can see in that screencast that Visual Studio is not able to launch the browser no matter what I click. The update button doesn't do anything and neither do the two links in the update's description. Any suggestions? I'm assuming it's a Windows issue since it is happening in multiple applications. UPDATE: Setting IE as the default browser fixes the issue. So it has something to do with it not being able to launch Chrome programmatically. Is it even possible to work around this bug, or do I have to suffer with IE as default for now?

    Read the article

  • How can I configure myhostname to work with Postfix?

    - by John Kelly Ferguson
    I'm going through the process of setting up a Discourse forum on my server (Ubuntu 12.04 x64) and am getting stuck at the point where I have to configure mailers. I'm following Discourse's instructions and am stuck trying to configure postfix for Mandrill. It says to check my fully-qualified domain name by typing hostname -f. When I enter hostname -f, I get localhost. As far as I know, entering hostname -f should return mydomainname.com. When I just enter hostname, I get mydomainname, which is correct because that is what I set my hostname to in /etc/hostname. Looking at some of my other settings, my /etc/hosts file reads:

        127.0.0.1 localhost mydomainname

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    And in my /etc/postfix/main.cf file, I have myhostname set like this:

        myhostname = mydomainname.mydomainname.com

    (Should this be myhostname = mail.mydomainname.com instead?) And mydestination is the following:

        mydestination = mydomainname.com, localhost, localhost.localdomain, localhost

    I'm not that familiar with configuring hostnames. I've been reading Postfix's instructions, but haven't been able to figure it out yet. Any help on how to get this to work would be greatly appreciated. Thanks.
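
    A sketch of the usual fix on Ubuntu (mydomainname.com is the placeholder from the question): hostname -f returns the first name on the matching /etc/hosts line, so give the host an entry whose first name is the fully-qualified one, then point postfix at the same name.

        # /etc/hosts - the FQDN must come before the short name on its line
        127.0.0.1       localhost
        127.0.1.1       mydomainname.com mydomainname

        # verify
        hostname        # -> mydomainname
        hostname -f     # -> mydomainname.com

        # keep postfix consistent with it
        sudo postconf -e 'myhostname = mydomainname.com'
        sudo service postfix restart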

    Read the article

  • ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I'm installing Ubuntu 12.04.2 with a preseed file, but there is one buggy problem with the preseed mirror setting. The symptom is that the installation process gets stuck. So I tracked down the log file and found the real problem: the installation is looking for a file that's not there. This is just one of them; another pops up if I fake this file. This all happens during preseeding, so I believe preseed has something to do with it. I googled "ubuntu preseed mirror" and found this post saying:

        # If you select ftp, the mirror/country string does not need to be set.
        #d-i mirror/protocol string ftp
        d-i mirror/country string manual
        d-i mirror/http/hostname string archive.ubuntu.com
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
        # CC is the ISO-3166-2 code for the selected country. You can preseed this
        # so that it does so without asking.
        #d-i mirror/http/mirror select CC.archive.ubuntu.com
        # Suite to install.
        #d-i mirror/suite string lucid
        # Suite to use for loading installer components (optional).
        #d-i mirror/udeb/suite string lucid
        # Components to use for loading installer components (optional).
        #d-i mirror/udeb/components multiselect main, restricted

    I wonder about the difference between d-i mirror/http/hostname and d-i mirror/http/mirror - I mean, they both specify a mirror, right? In my preseed file there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo, as you might notice in the previous image. Here is my question: does preseed fetch files/resources from the internet if I use a local repo? Why is it looking for a file that's not even there? This has bothered me for quite some time; many thanks in advance to anyone who might give any help.
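
    If the aim is to install entirely from a local repository, a minimal sketch of the relevant mirror lines might look like the following (repo.example.local and the /ubuntu directory are placeholders; the mirror/udeb/* lines matter because they control where the installer fetches its own components, which is often the part that still reaches out to the internet):

        d-i mirror/country string manual
        d-i mirror/http/hostname string repo.example.local
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # make the installer pull its own udebs from the same local mirror
        d-i mirror/udeb/suite string precise
        d-i mirror/udeb/components multiselect main, restricted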

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40-character random strings (alphanumerics only - example below). Today, for the first time, we ran into an issue with this approach. The following URL:

        http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS

    is returning a 404 error. This URL format has been working for 6 months now without an issue, and other URLs following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", there is no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks!

    Update: Adding a slash to the end of the URL allowed it to be handled properly... Would still like to get to the bottom of the issue though.

    Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any URL that ended in, say, "css" or "js", not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss ), was interpreted as a request for a file and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this:

        location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ {

    with this:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {

    And now URLs ending in css, js, png, etc., are properly routed through the front controller. Hopefully that helps someone else out.
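
    For reference, a sketch of what the corrected static-asset block could look like in full (the expires value and log settings are illustrative, not taken from the question), so that only real file extensions preceded by a dot are matched:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            access_log off;        # skip logging for static assets
            expires 30d;           # far-future caching header
            try_files $uri =404;   # serve the file directly, never the front controller
        }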

    Read the article

  • What is the alternative to Apache's global Alias in IIS, instead of adding a Virtual Directory to every single site one by one?

    - by Sk8erPeter
    In Apache, there's a way I can make phpMyAdmin available globally to all VirtualHosts I set up. In Apache, it looks like this:

        <IfModule mod_alias.c>
            Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
        </IfModule>

    This way I reach phpMyAdmin by prepending /phpmyadmin to any of my domain names, and I can see phpMyAdmin's initial page. (So for example it works for all my domains like this: http://example_1.com/phpmyadmin, http://example_2.com/phpmyadmin, and http://example_3.com/phpmyadmin also works.) In IIS, there's an "Add Virtual Directory..." option when right-clicking on a given site. Here I can set up e.g. phpMyAdmin's path to be reached by prepending /phpmyadmin to the given domain (e.g. http://example_1.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? Or do I have to add a virtual directory to every given site one by one? I'm just curious; it's not hard work to do, but I'm interested in whether another method exists. Thanks in advance!

    Read the article

  • 403 Error when accessing vhost directive

    - by Ortix92
    I'm having some trouble with setting up my webserver (CentOS 5.8). It's a brand new server and I'm trying to set a vhost to the following dir: /home/exo/public_html. However, whenever I restart httpd I get the following warning:

        Starting httpd: Warning: DocumentRoot [/home/exo/public_html] does not exist

    Yes, the directory does exist. So whenever I visit the domain exo-l.com it gives me a 403 error. This is my config file (I put this inside my httpd.conf because the files in conf.d were not included for some reason - or at least my newly created vhost conf file - but that has 0 priority for now):

        <VirtualHost *:80>
            DocumentRoot /home/exo/public_html
            ServerName www.exo-l.com
            ServerAlias exo-l.com
            <Directory /home/exo/public_html>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I'm completely clueless because this should work as far as I know. httpd is being run as apache:apache. I tried chowning the public_html directory (also recursively) to exo:apache, apache:apache, and root:root with no success. chmod 777 doesn't do anything either. A tail from the log:

        [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied
        [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied

    I also found something about SELinux and that disabling it might help, but do I really want to do that?
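
    A short diagnostic sketch for this kind of 403 (paths taken from the question; the SELinux context type shown is the stock one for web content), rather than disabling SELinux outright:

        # Every directory in the path needs execute (traverse) permission for apache
        namei -m /home/exo/public_html      # look for a missing 'x' on /home or /home/exo
        sudo chmod o+x /home/exo            # or add apache to exo's group and use g+x

        # If SELinux is enforcing, the content also needs the right context
        getenforce
        ls -Zd /home/exo/public_html
        sudo chcon -R -t httpd_sys_content_t /home/exo/public_html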

    Read the article

  • Office Compatibility Pack and File Permissions

    - by hymie
    MS isn't my thing, so I hope somebody can give me a pointer. We have a Windows domain, with a Server-2003-SP1-Enterprise file server. One of the specific files is a MS Excel 2007 (XLSX) file created by user LK. In the "Security" preferences setting, about a half-dozen users (including me) have access to this file. LK is the owner and has "full control", while the rest of us have "Read" , "Read & Execute", and "Write" permission. LK is also the owner of the directory that this file resides in. I don't know if that's relevant. So far so good. My desktop machine has Windows XP SP3 , and Excel 2003 SP3 , and the "Office Compatibility Pack" which lets me read and write the new XLSX files. However, whenever I write the file, the permissions are changed. The newly-written file only has permissions for LK and me, and both are "Full control" So in short, what am I doing wrong, and how should I set this up to do it right, keeping the permissions on the file that were there when I started?

    Read the article

  • Can't connect to Server Manager from Windows 7

    - by SAdmin317
    I have a Windows 7 Pro 64-bit SP1 desktop that has the RSAT tools installed. I opened Server Manager and can't connect to the server (Server 2008 R2 Core). I followed the guide to enable everything on the server, and added a registry key to enable read-only access to Device Manager as well. On the Windows 7 PC I turned on WinRM, did the quick config, and added the server IP and name as trusted hosts. I still get an error when connecting: "Connecting to the remote server failed with the following error message: The WinRM client cannot process the requests. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting...." I also added the name of the server to the Windows 7 hosts file; pinging the server name resolves to the IP of the server. I also opened up the firewall for "Remote Volume Management". Both machines are in the same workgroup, using the same Administrator account with the same password. Any help appreciated.

    Read the article

  • Is there any proper documentation for mod_evasive?

    - by Question Overflow
    mod_evasive20 is one of the loaded modules on my httpd server. I read good things about how it can stop a DoS attack and wanted to try it out on my localhost. A search for mod_evasive turns up a blog post by the author which briefly describes what it does. Other than that, I can't seem to find a reference or documentation on the Apache modules site. I was wondering whether it is a module recognised by Apache, since there is no mention of it on its website. I have a mod_evasive.conf file sitting in the /etc/httpd/conf.d folder that contains the following lines:

        LoadModule evasive20_module modules/mod_evasive20.so
        <IfModule mod_evasive20.c>
            DOSHashTableSize 3097
            DOSPageCount 2
            DOSSiteCount 50
            DOSPageInterval 1
            DOSSiteInterval 1
            DOSBlockingPeriod 10
        </IfModule>

    My understanding of these settings is that if I were to click refresh or send a form more than two times in a one-second interval, Apache would issue a 403 error and bar me from the site for 10 seconds. But that is not happening on my localhost, and I would like to know the reason. Thanks.
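
    One way to check whether the module is actually firing (a sketch; it assumes the server answers on http://localhost/ and the thresholds above are in effect) is to send a burst of requests for the same page and look for 403s; a manual browser refresh can be too slow, or served from cache, to trip the limits.

        # Ten requests in well under one second; with DOSPageCount 2 and
        # DOSPageInterval 1, most of these should come back as 403 if the module works.
        for i in $(seq 1 10); do
            curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
        done

        # Blocked clients are normally reported in the Apache error log as well
        sudo tail /var/log/httpd/error_log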

    Read the article

  • Linux CUPS: offload Raspberry Pi print processing to a server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing the Pi chokes on acting as a complete CUPS-based print server. It seems a little underpowered for some of the Ghostscript processing and other filtering that occurs - particularly on larger or complex documents the processing time can be 5 or more minutes. My question is: can the processing be largely done elsewhere and the prepared end product of the processing chain be fed to the Pi for output on the connected printer? So in this scenario any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
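
    A sketch of that division of labour (the printer name, the Pi's hostname and the Ghostscript output device are assumptions; the device has to match the physical printer's language, e.g. a PCL device for a PCL laser, otherwise raw mode will print garbage):

        # On the fast machine: render the document into the printer's native language
        gs -q -dBATCH -dNOPAUSE -sDEVICE=pxlmono -sOutputFile=job.pcl document.pdf

        # Push the pre-rendered job to the Pi's CUPS queue in raw mode,
        # bypassing the Pi-side filter chain entirely
        lp -h raspberrypi:631 -d myprinter -o raw job.pcl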

    Read the article

  • Windows 7 monitor instantly resumes from power off

    - by user167328
    It seems that some service or program is preventing my monitor from going into power save mode, but I cannot find a way to dig down to the cause. Does anybody know a way to find out which event may be causing the problem? Background information: if I leave/lock my computer I want to immediately power off the monitor. Stupid Windows has no program for this, so I use a third-party utility. I have used Display.exe from Noel Danjou since 2005 and have had no problem with it on XP, on my notebook with W7 x64 Pro, and at the office with W7 x64 Enterprise. With my new home PC with W7 x64 Pro the monitor goes off but instantly comes back on after a second. So I tested using a power plan with the shortest timeout of 1 minute, but with that the display never goes off at all. (Setting a power plan timeout is not a useful option, because then I cannot read a document or watch a movie without the monitor going off.) I could try trial-and-error methods, but this is cumbersome and not what I regard as a professional solution. So I would prefer a way to analyze the Windows events to find out the reason for the display staying active. Add-on: in the meantime I tried safe mode and also exited all tray programs, but the problem is still there. I also checked the BIOS for special power settings, but still no solution. (P.S. The mobo is an H87-G41 PC Mate with the latest BIOS rev.)

    Read the article

  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch; you pass a number that's in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it. Follow-up question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages on a case-by-case basis. Is there a table that directly equates the two? Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than up, I think scp saturates the link, so even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp:

        scp -l 1000 bigfile.zip titan:

    I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and I'd use that as the target whenever I upload. So instead of:

        scp bigfile.zip titan:

    I'd say:

        scp bigfile.zip titan-upload

    Or even set different caps depending on where I am:

        scp bigfile.zip titan-upload-from-home

    vs.

        scp bigfile.zip titan-upload-from-work

    I'm generally on Mac and Linux.
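
    As far as I know, OpenSSH's ssh_config has no per-host bandwidth-limit option, so one workaround is a tiny wrapper that injects -l based on the alias being used (a sketch; the alias names and the kbit/s caps are just examples, and the titan-upload-* names would still be ordinary Host aliases in ~/.ssh/config pointing at the same HostName):

        #!/bin/sh
        # scp-capped: choose a bandwidth cap (scp -l, in kbit/s) from the destination alias
        case "$*" in
            *titan-upload-from-home*) limit=1000 ;;
            *titan-upload-from-work*) limit=5000 ;;
            *)                        limit=0    ;;
        esac
        if [ "$limit" -gt 0 ]; then
            exec scp -l "$limit" "$@"
        else
            exec scp "$@"
        fi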

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions, and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm setting out to achieve:

    Windows 7 with:

        C:\ Windows 7 (pre-existing installation)
        D:\ Data (already exists and has files already)

    Ubuntu 11 - does not exist yet, but I already have a LiveCD in hand. I plan:

        \ root directory for Ubuntu
        \home on its own partition
        \swap on its own partition with around 8GB

    Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows:

        System Reserved: 100 MB (Primary, Active)
        C: 100 GB - where Windows 7 is installed (Primary)
        D: 365 GB - where my files are located, LOTS of free space (Primary)

    Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here is what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know why. I also hope you can suggest a better one...) I am aware that I can only have up to 4 primary partitions, or 3 primary partitions with 1 extended partition max. Now, does the System Reserved partition count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space which I don't know how to make an extended partition from. It seems like I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid using the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use and stuff. Somehow I feel it's safer that way.)

    Read the article

  • Scripting a permanent CTRL / CAPS swap in Gnome?

    - by Duncan Bayne
    I have a bash script that I use to configure a vanilla Ubuntu (10.10 Maverick Meerkat) installation to be exactly the way I want it. I make extensive use of gconftool-2 to configure the desktop, set up shortcut keys, etc. Now, I'm trying to swap the CTRL and CAPS keys. I have found two ways of doing this: In Gnome, go to System - Preferences - Keyboard - Layout - Options and make the change in there. This works well, but I don't know how to script this; the setting doesn't seem to be stored in the usual place as I can't find it with gconf-editor. Add the line setxkbmap -option "ctrl:swapcaps" to my .bashrc file. That works too, until I suspend the machine & then resume it. At that point the CTRL and CAPS behaviour return to normal, until I cause .bashrc to be run again by opening a new shell. This behaviour has been reported as a bug in RedHat. Could someone please suggest a way of switching those keys that is both permanent, and can be scripted? I'm sure I must be missing something obvious here ...
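
    On GNOME 2 (which 10.10 uses), the option set by that preferences dialog is stored by gnome-settings-daemon under a gconf key, so it can be scripted with gconftool-2. A sketch is below; the key path and value format are written from memory, so treat them as an assumption and confirm by setting the option once in the GUI and then locating it with gconf-editor or gconftool-2 --get.

        # Swap Ctrl and Caps Lock via the same mechanism the Keyboard preferences use,
        # so gnome-settings-daemon re-applies it on login and after suspend/resume.
        # Each list entry is "<group><TAB><xkb option>", hence the printf for the tab.
        gconftool-2 --set /desktop/gnome/peripherals/keyboard/kbd/options \
            --type list --list-type string "[$(printf 'ctrl\tctrl:swapcaps')]"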

    Read the article

  • Dual-head monitor system Kubuntu 10.04

    - by andrii
    I have an Asus V6X00V notebook with a 1400*1050 monitor (name: LVDS) and a Dell monitor at 1920*1080 (VGA-0). I want to have a dual-monitor system. Under MS Windows everything works fine. During the Kubuntu installation the Dell and the main notebook monitors had the right resolutions (1920*1080 & 1400*1050), but after some stage both changed to 1152*864. Now the right resolutions appear only during shutdown and when I am using the console, so the system can clearly use them; the problem is just in the settings. I am using Size & Orientation in System Settings for adjustment. Any option that changes the resolution of either monitor, or changes the position (Absolute, Left Of, Right Of and so on), causes colored line noise on the screens. I have tried xrandr:

        xrandr --output LVDS --mode 1400x1050 --pos 0x0 --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0

    but received the same result. I have found out that, for example, the previous version of RandR (1.2; now I have xrandr 1.3) needed an xorg.conf modification to create a big virtual screen, but Kubuntu 10.04 doesn't have an xorg.conf, and I don't know whether I should create or modify one for the 1.3 version of xrandr or not. Please help me solve this problem.
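
    One thing worth trying (a sketch, not a confirmed fix for this hardware): make sure the framebuffer is large enough to hold both screens side by side, either on the fly with --fb or permanently with a minimal xorg.conf Virtual line (3320x1080 here being 1400+1920 wide by the taller of the two heights):

        # on the fly
        xrandr --fb 3320x1080 \
               --output LVDS  --mode 1400x1050 --pos 0x0 \
               --output VGA-0 --mode 1920x1080 --pos 1400x0

        # or a minimal /etc/X11/xorg.conf fragment that pre-allocates the virtual screen
        Section "Screen"
            Identifier "Default Screen"
            SubSection "Display"
                Virtual 3320 1080
            EndSubSection
        EndSection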

    Read the article
