Search Results

Search found 50874 results on 2035 pages for 'change script'.


  • Cron job failing to back up a Postgres database

    - by user705142
    I'm unsure what's going on here: I've got a backup script which runs fine under root. It produces a 300 kB database dump in the proper directory. When it runs as a cron job with exactly the same command, however, an empty gzip file appears with nothing in it. The cron log shows no error, just that the command has been run. This is the script:

        #! /bin/bash
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")
        su -c "pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" " postgres
        # delete backup files older than 60 days
        OLD=$(find $DIR -type d -mtime +60)
        if [ -n "$OLD" ] ; then
            echo deleting old backup files: $OLD
            echo $OLD | xargs rm -rfv
        fi

    And the cron job:

        01 10 * * * root sh /opt/daily_backup_script.sh

    It produces a database_backup file, just an empty one. Does anyone know what's going on here?
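    The empty-but-valid gzip file is the classic signature of pg_dump failing while gzip still runs: in a pipeline, gzip happily writes an empty archive even when the left-hand command dies, and cron's minimal PATH is the usual reason it dies. The nested double quotes also only work by accident of string concatenation. A hedged sketch of a hardened version, reusing the paths and database name from the question (binary locations are assumptions; check them with 'which pg_dump'):

        #!/bin/bash
        # Sketch only -- paths, database name and retention period come from the question.
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")
        # Single-quote the command passed to su so the redirection survives intact,
        # and use absolute paths because cron runs with a minimal PATH.
        su -c '/usr/bin/pg_dump -U postgres mydatabasename | /bin/gzip -6 > '"$DIR/database_backup.$YMD.gz" postgres
        # Delete old backups: note -type f -- the original -type d only ever
        # matched directories, never the .gz files.
        find "$DIR" -type f -name 'database_backup.*.gz' -mtime +60 -print -delete

    Since the redirection happens inside the su shell, the postgres user also needs write permission on /opt/backup; a permission failure there produces exactly this symptom (empty gzip, no cron error).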

    Read the article

  • Not your typical Permission Denied Question

    - by Todd
    I recently reinstalled Ubuntu 11.10 (64-bit) on my computer (my hard drive took a powder). Before, I could "mv" files around with the command. Now when I try, I get the permission denied message. I also get the message about "man sudo" when I open my terminal, and I am pretty sure I did not get that before. Can I add a user/administrator and change something in my original admin account that I cannot change myself? I am getting really frustrated with this; I do not recall having this problem before. I tried gksudo nautilus and it appears to run, then it just sits there with the cursor blinking and does not move.
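    One thing worth checking is whether your user actually ended up in the admin group after the reinstall; on 11.10 that group is what grants sudo rights. A sketch (the username todd is a placeholder for your own login):

        # List the groups your current user belongs to
        groups
        # Ubuntu 11.10 administrators are members of 'admin'. If it is missing,
        # re-add it from a root shell (e.g. recovery mode):
        usermod -aG admin todd

    For what it's worth, the "man sudo" note when a terminal opens is Ubuntu's standard first-login hint rather than an error in itself.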

    Read the article

  • /var/tmp and file lifetime

    - by clorz
    Hi everyone, I've got a cron job that runs every minute and checks for the existence of a certain file. If there's no such file, the job silently ends. If there is a file, another script is started; that script removes the file when done, and its execution can take up to 20 minutes. My questions are: Are there any flaws in this scheme? Is it OK to store such a file in /var/tmp? Can I be sure that nothing will attempt to remove it?
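    For reference, a minimal sketch of such a checker with one extra safeguard: since the worker can take 20 minutes and cron fires every minute, flock(1) stops a second copy from starting while the first still runs (all file names here are assumptions):

        #!/bin/sh
        FLAG=/var/tmp/job.flag
        # No flag file: end silently, as in the scheme described.
        [ -e "$FLAG" ] || exit 0
        # flock -n exits immediately if the previous run still holds the lock,
        # so overlapping workers are impossible. The worker removes $FLAG itself.
        exec flock -n /var/tmp/job.lock /usr/local/bin/worker.sh

    As for /var/tmp: unlike /tmp it survives reboots, but some distributions run periodic cleaners (tmpwatch, tmpreaper) against it, so nothing guarantees a file there is never removed; a dedicated spool directory is a safer home for a control file.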

    Read the article

  • Releasing patches and updates to web service users

    - by Kalidoss.M
    I have written a web service in Java. It's already live (up and running). During development I used SVN (repository) + Jira for task maintenance + Maven for building the web service. Now I have a small update for the web service: I have created the task in Jira and, after all testing etc., committed the files to SVN against the Jira ID. Say my web service is used by 10 clients, and we did not give them our source code. Is there a standard procedure for releasing patches/updates? Is there any way to render/create the change log at build time (Maven)? How do I manage the change log for all versions or patch updates during the build, automatically?
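    On the change-log question, one low-tech option that needs no extra Maven machinery is generating it from the SVN history at release time, since every commit already carries a Jira ID. A sketch where the repository URL, the revision range and the WS- issue prefix are all placeholders:

        # Collect the history since the previous release revision...
        svn log -r 1500:HEAD https://svn.example.com/repo/webservice/trunk > CHANGELOG.txt
        # ...or keep only the lines that reference a Jira issue
        svn log -r 1500:HEAD https://svn.example.com/repo/webservice/trunk \
            | grep -E 'WS-[0-9]+' >> CHANGELOG.txt

    The same command can be wired into the build (for example via the exec-maven-plugin) so the file is regenerated on every release.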

    Read the article

  • install 64 bit as dual boot with 32 bit

    - by profgeorgefhart
    I downloaded the Ubuntu 12.10 64-bit .iso onto a bootable USB stick and want to put it on my 32-bit 12.10 system as a dual boot, using a totally separate 1 TB disk (which already has data on it). However, when I restart and hit F8 for BIOS set-up, I can change all the settings except the boot search order. Is there some other way to force the startup to read the USB first? Is the fact that I cannot change the search order related to the 32-bit OS having been 'installed in legacy mode', not EFI? How can I rectify this? (This might be related: might I need to update the BIOS for my Intel DX38BT motherboard? I think their last upgrade was in 2009.)

    Read the article

  • Preprocess outgoing email bodies with a linux smtp server/proxy?

    - by jdc0589
    I have an SMTP server running locally, and I need to edit the contents of outgoing email bodies before they actually get sent. I have tried using EmailRelay to proxy my SMTP server, with the --filter option specifying a filter/editing executable, but I am getting some odd behaviour. Currently I specify a shell script as the filter program; all it is supposed to do is append some text to a log file and return 0, so I know it actually got called. The weird thing is that the email gets sent but nothing shows up in my log file like it should (though it does when I run the script manually). If I remove the 'exit 0' statement, the email does not send, as I would expect. Are there any other options/suggestions?
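    For reference, a minimal filter sketch with the two details that most often explain this exact symptom: an absolute log path (the filter's working directory is not your shell's) and write permission for whichever user the emailrelay daemon runs as. If I recall the interface correctly, EmailRelay passes the path of the message content file as the first argument; treat that as an assumption to verify against the docs:

        #!/bin/sh
        # Hypothetical filter script for emailrelay --filter.
        LOG=/var/log/emailrelay-filter.log   # must be writable by the daemon user
        echo "filter called at $(date) with content file: $1" >> "$LOG"
        exit 0   # a zero exit status lets the message continue on its way

    If the log stays empty even with this, running the daemon in the foreground with --no-daemon usually surfaces the filter's stderr.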

    Read the article

  • SQL Server 2008 - Management Studio issue

    - by Phil Streiff
    This is a known, documented issue with SQL Server 2008 Management Studio: certain DDL operations, like ALTERing a column datatype, fail from Management Studio. For example, in Object Explorer, navigate to a table column > right-click on the column > Modify. Then change the column datatype or length and save, and an error message displays (the familiar "Saving changes is not permitted; the changes you have made require the table to be dropped and re-created" dialog). To work around the problem, go to the Query Editor and issue the equivalent DDL statement instead:

        ALTER TABLE dbo.FTPFile ALTER COLUMN CmdLine VARCHAR(100);
        GO

    The column change is successfully applied now. (Alternatively, untick Tools > Options > Designers > "Prevent saving changes that require table re-creation".)

    Read the article

  • Receiving a notification on a Cluster Group Failover

    - by Diego
    Hi all, I have several Windows clusters set up and I need to keep track of failovers: I'd like to receive a notification of some sort whenever a group fails over. I've seen some examples around, but I can't rely on the approach of just sending a message whenever resources are stopped/restarted, as it would generate too many false alarms. In few words, I need to be notified if and only if the group really fails over. Probably the best way is monitoring the System Event Log but, if possible, I'd prefer not to write a script/program from scratch for this. Is there any script/product that already does it? Thanks.

    Read the article

  • Repeat the csv header twice without "Append" (PowerShell 1.0)

    - by Mark
    I have prepared a PowerShell script to export a list of system users in CSV format. The script outputs the user list via Export-Csv with a single header row (the header row at the top). However, my requirement is to repeat the header row twice in the file. That is easy to achieve in PowerShell 3.0 with -Append (e.g. $header | Out-File $filepath -Append), but our server environment is running PowerShell 1.0, so I cannot do that. Is there any workaround? I cannot manually add it myself. Thank you.

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints, and whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks. One such task is that all of our documentation exists in the form of specs. These specs describe one or more stories and are generally a body of work a few devs could knock over in a week. For development, that works really well: every two weeks the devs get handed a spec and it's a nice discrete chunk of work that they can just do.

    From a documentation point of view, this has become a mess. The problem with writing specs focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - one could describe a standard function, one could describe parts of a workflow, one could describe a particular screen... and now we have business rules about our application scattered across 120 documents. Finding the document covering a particular business rule or function is hard because you don't know which document holds that information, and making a change request is equally hard because, once again, we are unsure which spec to change.

    We have maybe a couple of weeks of lull before it's back to spec'ing out functionality for the next phase, and in this time I'd like to revisit our processes. The way we have worked so far, delivering fortnightly specs, works well; but we also need a way to manage our documentation so that the business rules for a given function or workflow are easy to locate and change.

    I have two ideas. One is to compile all of our specs into a series of master specs, broken up by a few broad functional areas: the specs describe the sprint, the master specs describe the system. The problems I can see are that 1) our existing 120 specs are not all neatly divided into broad functional areas - some will require breaking up, merging etc., which will take a lot of time - and 2) we'd be writing specs and updating master specs in each new sprint, which seems like double the work; and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big a mess and manage that mess going forward: go through each spec, assign keywords to it, and when we want to find a function, search by keyword. The problem I can see is that the business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • Backup Solr home

    - by user226188
    I'm new to Solr: I've successfully installed Tomcat, the Solr 4.3.1 webapp and two collections on a CentOS 6.4 machine. Now my server is in production and I need to make backups of Solr, so I would like to know the best way to back it up. For the moment I'm doing: stop Tomcat, tar up my Solr home, start Tomcat - but I've read that this is not a good solution? Moreover, it implies stopping the whole of Tomcat, which hosts other webapps besides Solr. I've also heard that there is a script named "backup" in the Solr home's bin folder, but my bin folder is empty. I don't want to set up another slave server with replication; for me that's not a backup solution, because my backups are supposed to be sent to a Bacula backup server every night. Is there no built-in facility I can script around, like mysqldump for MySQL servers? Thanks for the help!
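    For what it's worth, Solr's replication handler can take online snapshots without stopping Tomcat, which fits a nightly-Bacula model better than tarring the whole home; a hedged sketch, assuming the handler is enabled in solrconfig.xml and the core is named collection1 (both assumptions):

        # Trigger an online snapshot into a directory Bacula already picks up
        curl 'http://localhost:8983/solr/collection1/replication?command=backup&location=/opt/backups/solr'
        # Check progress/completion of the snapshot
        curl 'http://localhost:8983/solr/collection1/replication?command=details'

    The index files are snapshotted consistently; the configuration (schema.xml, solrconfig.xml) still needs to be copied separately.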

    Read the article

  • Running 'dd' command at startup?

    - by Usman Ajmal
    Hi, I have set a script to run at Linux startup. The script contains the following line of code:

        dd if=/dev/sda2 of=/dev/sda5 2> result.txt

    Now, when my Linux desktop appears, result.txt contains: dd: opening '/dev/sda2': Permission denied. If I prefix the dd command with sudo, as in:

        sudo dd if=/dev/sda2 of=/dev/sda5 2> result.txt

    then result.txt contains: sudo: no tty present and no askpass program specified. Is there a way I can get around this problem? What I want is to copy the 2nd partition to the 5th when a user logs in, no matter whether they are root, admin, desktop or an unprivileged user. Thanks a lot, as always.
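    One hedged way around both errors is a sudoers rule that allows exactly this one command without a password, so the login script can call it no matter who logs in; edit only with visudo, and note that the group name and device arguments below are assumptions to adapt:

        # Added via 'sudo visudo'. Lets members of group 'users' run this
        # exact dd invocation (and nothing else) without a password:
        %users ALL=(root) NOPASSWD: /bin/dd if=/dev/sda2 of=/dev/sda5

    The script line then becomes sudo /bin/dd if=/dev/sda2 of=/dev/sda5 2> result.txt. That said, cloning a partition on every login is heavy; if per-login timing isn't essential, a root-owned @reboot cron entry or an init script avoids sudo entirely.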

    Read the article

  • Using socat to exec php cli

    - by RoyHB
    There are multiple client programs that periodically connect to a port on my server and send a single line of text. When a connection to the port is made, I need to start a PHP CLI script that processes the data. There may be many of the remote scripts running/connecting at more or less the same time, so I think it would be best if socat forked a process for each connection to run the script. I've gotten socat to do most of what I need, using the command:

        socat tcp-l:myport,fork exec:mypath/socatTest.php

    I can read the input on php://stdin. All is good. The problem is that the process doesn't seem to fork, so if a second external program sends data while another is doing the same, it gets a connection refused error. Where have I gone wrong?
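    For comparison, a variant that has worked in similar setups; the port and paths are assumptions, reuseaddr just avoids bind errors on quick restarts, and invoking the php binary explicitly removes any dependence on the script's shebang or execute bit:

        # One forked child per TCP connection; the socket becomes the child's
        # stdin/stdout, so the script keeps reading php://stdin as before.
        socat TCP-LISTEN:9000,fork,reuseaddr EXEC:"/usr/bin/php /opt/scripts/socatTest.php"

    If refusals persist, running socat with -d -d gives per-connection diagnostics and will show whether the listener is actually still alive when the second client connects.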

    Read the article

  • Ubuntu 13.10 Installer freezes on partition recognition on Alienware M17x

    - by Mutewinter
    I'm trying to install Ubuntu 13.10 on an Alienware M17x. After bypassing the graphical problems with "nomodeset" (annoying!), I'm facing a rather different problem: I click on "Install Ubuntu", the guided installation reaches the point where the partition, mount point etc. need to be selected and... nothing. The partition selection menu sees only /dev/sda, but the pane that should show how the disk is actually partitioned is blank. I've tried clicking on "Change..." to try to force it to read something, but the installer simply quits. The "change partition table" button and the like are greyed out (well, obviously, since no partition table has been read). What is going on? The Alienware has Windows 7 and a legacy BIOS (so no UEFI here). Anyone have an idea? Thanks for your time and help!

    Read the article

  • Different fan behaviour in my laptop after upgrade, what to do now?

    - by student
    After upgrading from Lubuntu 13.10 to 14.04, the fan of my laptop seems to run much more often than in 13.10. When it runs, it doesn't run continuously but starts and stops every second. fwts fan results in:

        Results generated by fwts: Version V14.03.01 (2014-03-27 02:14:17).
        Some of this work - Copyright (c) 1999 - 2014, Intel Corp. All rights reserved.
        Some of this work - Copyright (c) 2010 - 2014, Canonical.
        This test run on 12/05/14 at 21:40:13 on host Linux einstein
        3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64.
        Command: "fwts fan".
        Running tests: fan.
        fan: Simple fan tests.
        --------------------------------------------------------------------------------
        Test 1 of 2: Test fan status.
        Test how many fans there are in the system. Check for the current status
        of the fan(s).
        PASSED: Test 1, Fan cooling_device0 of type Processor has max cooling
        state 10 and current cooling state 0.
        PASSED: Test 1, Fan cooling_device1 of type Processor has max cooling
        state 10 and current cooling state 0.
        PASSED: Test 1, Fan cooling_device2 of type LCD has max cooling state 15
        and current cooling state 10.
        Test 2 of 2: Load system, check CPU fan status.
        Test how many fans there are in the system. Check for the current status
        of the fan(s).
        Loading CPUs for 20 seconds to try and get fan speeds to change.
        Fan cooling_device0 current state did not change from value 0 while CPUs were busy.
        Fan cooling_device1 current state did not change from value 0 while CPUs were busy.
        ADVICE: Did not detect any change in the CPU related thermal cooling
        device states. It could be that the devices are returning static
        information back to the driver and/or the fan speed is automatically
        being controlled by firmware using System Management Mode in which case
        the kernel interfaces being examined may not work anyway.
        ================================================================================
        3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.
        ================================================================================
        3 passed, 0 failed, 0 warning, 0 aborted, 0 skipped, 0 info only.

        Test Failure Summary
        ================================================================================
        Critical failures: NONE
        High failures: NONE
        Medium failures: NONE
        Low failures: NONE
        Other failures: NONE

        Test           |Pass |Fail |Abort|Warn |Skip |Info |
        ---------------+-----+-----+-----+-----+-----+-----+
        fan            |    3|     |     |     |     |     |
        ---------------+-----+-----+-----+-----+-----+-----+
        Total:         |    3|    0|    0|    0|    0|    0|
        ---------------+-----+-----+-----+-----+-----+-----+

    Here is the output of lsmod:

        Module                  Size  Used by
        i8k                    14421  0
        zram                   18478  2
        dm_crypt               23177  0
        gpio_ich               13476  0
        dell_wmi               12761  0
        sparse_keymap          13948  1 dell_wmi
        snd_hda_codec_hdmi     46207  1
        snd_hda_codec_idt      54645  1
        rfcomm                 69160  0
        arc4                   12608  2
        dell_laptop            18168  0
        bnep                   19624  2
        dcdbas                 14928  1 dell_laptop
        bluetooth             395423  10 bnep,rfcomm
        iwldvm                232285  0
        mac80211              626511  1 iwldvm
        snd_hda_intel          52355  3
        snd_hda_codec         192906  3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel
        snd_hwdep              13602  1 snd_hda_codec
        snd_pcm               102099  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
        snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
        snd_seq_midi           13324  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_rawmidi            30144  1 snd_seq_midi
        coretemp               13435  0
        kvm_intel             143060  0
        kvm                   451511  1 kvm_intel
        snd_seq                61560  2 snd_seq_midi_event,snd_seq_midi
        joydev                 17381  0
        serio_raw              13462  0
        iwlwifi               169932  1 iwldvm
        pcmcia                 62299  0
        snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
        snd_timer              29482  2 snd_pcm,snd_seq
        lpc_ich                21080  0
        cfg80211              484040  3 iwlwifi,mac80211,iwldvm
        yenta_socket           41027  0
        pcmcia_rsrc            18407  1 yenta_socket
        pcmcia_core            23592  3 pcmcia,pcmcia_rsrc,yenta_socket
        binfmt_misc            17468  1
        snd                    69238  17 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_idt,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi
        soundcore              12680  1 snd
        parport_pc             32701  0
        mac_hid                13205  0
        ppdev                  17671  0
        lp                     17759  0
        parport                42348  3 lp,ppdev,parport_pc
        firewire_ohci          40409  0
        psmouse               102222  0
        sdhci_pci              23172  0
        sdhci                  43015  1 sdhci_pci
        firewire_core          68769  1 firewire_ohci
        crc_itu_t              12707  1 firewire_core
        ahci                   25819  2
        libahci                32168  1 ahci
        i915                  783485  2
        wmi                    19177  1 dell_wmi
        i2c_algo_bit           13413  1 i915
        drm_kms_helper         52758  1 i915
        e1000e                254433  0
        drm                   302817  3 i915,drm_kms_helper
        ptp                    18933  1 e1000e
        pps_core               19382  1 ptp
        video                  19476  1 i915

    I tried one answer to the similar question "loud fan on Ubuntu 14.04" and created an /etc/i8kmon.conf like the following:

        # Run as daemon, override with --daemon option
        set config(daemon) 1
        # Automatic fan control, override with --auto option
        set config(auto) 1
        # Status check timeout (seconds), override with --timeout option
        set config(timeout) 2
        # Report status on stdout, override with --verbose option
        set config(verbose) 1
        # Temperature thresholds: {fan_speeds low_ac high_ac low_batt high_batt}
        set config(0) {{0 0} -1 55 -1 55}
        set config(1) {{0 1} 50 60 55 65}
        set config(2) {{1 1} 55 80 60 85}
        set config(3) {{2 2} 70 128 75 128}

    With this setup the fan comes on even when the temperature is below 50 degrees Celsius (I don't see a pattern), and I get the impression the CPU gets hotter on average than without this file. What changed from 13.10 to 14.04 that may be responsible for this? And if this is a bug, against which package should I report it?

    Read the article

  • How do you know which domain owns the hosting?

    - by BubbleStalker
    For example, if I have 1) a host address, 2) a login and 3) a password, and I log in over SSH to a Ruby on Rails host, how can I be sure that this hosting belongs to a specific domain? For example, how can I know whether www.site.com belongs to the specific hosting I have access to? I am asking because I have access to a Ruby on Rails host, and when I modify files there are no changes on the site. I've tried restarting over SSH using the files "script", "serv" and "restart.txt":

        touch tmp/restart.txt
        ./serv restart
        script restart

    None of the above helped, and I don't know what to do. Any ideas?
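    A quick way to check is to compare where the domain's DNS actually points with the addresses of the box you're logged into; a sketch, with www.site.com standing in for the real domain:

        # The IP address the world resolves the domain to
        dig +short www.site.com
        # The addresses of the server you are SSHed into
        hostname -I        # or: ip addr show
        # Belt and braces: drop a marker in the app's public folder and fetch it
        echo ok > public/ping.txt   # then request http://www.site.com/ping.txt

    If the marker file doesn't appear in the browser, you are editing a different machine (or a different copy of the app) than the one serving the domain.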

    Read the article

  • VMware Player 5.0 or VMware Workstation 9.0 after upgrade to Ubuntu 12.10

    The upgrade process
    Upgrading Ubuntu 12.04 to the latest version 12.10 - aka Quantal Quetzal - is straightforward, and you only need to follow the official upgrade instructions. The short version on the console looks like this:

        sudo do-release-upgrade

    This will update the repository entries and start the upgrade process. After some minutes or hours of download and installation, you have to reboot your system once to get the new kernel loaded. At the time of writing, I'm on '3.5.0-17-generic'. And as with any change of kernel version, you have to compile the necessary kernel modules to get VMware Player or Workstation up and running. Usually this happens the first time you start your VMware software, and that's it. Well, again, not so this time.

    Getting the kernel patch
    Luckily, the community over at VMware is very active and you can get a new kernel patch in the online forums here. Download the archive, put it in a folder you have write permissions for, and extract it on the console like so:

        tar -xjvf vmware9_kernel35_patch.tar.bz2

    Then change into the newly created folder:

        cd vmware9_kernel3.5_patch/

    And execute the shell script as root (superuser):

        sudo ./patch-modules_3.5.0.sh

    This stops any running instances of VMware software, patches the source files and runs the compile process for your active environment. It might take some time depending on your machine, and once completed you can start VMware Player or Workstation as before. In case you apply the patch again, the script will simply quit with the following output:

        /usr/lib/vmware/modules/source/.patched found. You have already patched your sources. Exiting

    You might remove the .patched file in case you upgraded/changed your kernel and need to apply the patch again. Disclaimer: The patch is "as-is"; the patcher was originally created by Artem S. Tashkinov and later modified by An_tony. Please refer to the VMware forum in case of questions or problems. There are also patches available for older versions of VMware Player or Workstation.

    Read the article

  • Ubuntu 12.04 - PPTP VPN is the only Internet Access

    - by user212553
    I know this has been covered; I've read dozens of posts but still have questions. I have a work server whose traffic should never leave my house without encryption. The VPN is PPTP. Currently I have a cron job that checks the status of the ppp0 adapter each minute and, if the connection has dropped - which it does fairly often - shuts key components down. It's fairly easy to restart PPTP with "nmcli con up id 'myVPNServer'", but there's no assurance it will reconnect, and I need a better way to stop traffic (other than killing apps) when ppp0 is down. The two options I've seen discussed are the firewall (UFW, Firestarter, iptables) or the route tables. I could easily be swayed toward the firewall option, but I focused on the route tables since no new service needs to be started.

    My questions involve the way the route tables change, and then specifics on rules. When I start the PPTP VPN the route tables change. That suggests that if the VPN drops, the table will change back, defeating my stated intent of preventing external traffic. How can I make "sticky" changes to the route table that persist even if the VPN connection drops? Perhaps the check boxes "Ignore automatically obtained routes" or "Use this connection only for resources on its network" (which are part of the VPN configuration options)? It would seem that if I can force the active VPN route table to stay in effect even when the VPN drops, this will effectively kill any external traffic should the VPN drop, which gives me the latitude to restart the VPN from the command line (assuming the route table rules don't prevent me re-establishing the connection).

    My route table with the VPN active is shown below. Any comments on what 10.10.1.1 is?

        $ ip route list
        default dev ppp0  proto static
        10.10.1.1 dev ppp0  proto kernel  scope link  src 10.10.1.11
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  proto static
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  src 192.168.1.60
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.60  metric 1
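    On the "stop traffic when ppp0 is down" side, a firewall rule set may be simpler than fighting the route table, because it stays in force no matter how the routes change. A minimal kill-switch sketch, where the VPN endpoint address is a placeholder:

        #!/bin/sh
        # Hedged sketch -- the endpoint address and LAN range are assumptions.
        VPN_SERVER_IP=203.0.113.10
        iptables -F OUTPUT
        iptables -A OUTPUT -o lo -j ACCEPT                        # local traffic
        iptables -A OUTPUT -o eth0 -d 192.168.1.0/24 -j ACCEPT    # LAN: router/DNS
        iptables -A OUTPUT -o eth0 -d "$VPN_SERVER_IP" -j ACCEPT  # tunnel set-up
        iptables -A OUTPUT -o ppp0 -j ACCEPT                      # traffic inside the VPN
        iptables -A OUTPUT -j DROP                                # nothing else leaves

    Because the rules key on interfaces rather than routes, nothing leaks when ppp0 vanishes, while nmcli can still rebuild the tunnel through the eth0 exceptions.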

    Read the article

  • rsync assigns deny permission

    - by user773478
    Currently a script is used to copy files using rsync (version 2.6.9, protocol version 29) from Linux/Unix servers to a W2K3 server, using a very basic command such as "rsync -v source_server::share_name/file_name /cygdrive///file_name". The script then makes a copy of the downloaded file for other purposes. This is part of a larger middleware stack that is being moved to new hardware on W2K8R2. The second part - making a copy of the file - does not work with the more recent rsync client version 3.0.7, protocol version 30 (which shows up as cwRsync in Add/Remove Programs), because rsync assigns special permissions to the file, including deny entries. The user (service account) which downloads the file is in the local admin group. The file can be copied elsewhere using rsync, and it can be deleted, but it cannot be opened or copied locally by the same user, as the deny permission supersedes.
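    The deny entries usually come from newer rsync builds mapping POSIX permissions onto NTFS ACLs; one workaround available in the 3.0.x client is to force plain permission bits on the destination (the local path here is an assumption):

        # Override the POSIX-ACL mapping with sane readable/writable bits
        rsync -v --chmod=ugo=rwX source_server::share_name/file_name \
              /cygdrive/c/data/file_name

    Mounting the destination with the noacl flag in Cygwin's /etc/fstab is another commonly cited fix, since it stops Cygwin from writing NTFS ACLs at all.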

    Read the article

  • MadMACs is attempting to run after wifi autoconnect in Windows 7

    - by Dan
    I have been trying to get MadMACs to run on startup on my Windows 7 x64 install. I've used the default registry startup option built into the script, but on startup wlan0 is not randomized; in fact, the UAC popup asking whether I want to allow the program to modify my machine comes up after the WiFi connection is made (and obviously before the script has run). I would really like to get this working, but I'm at a dead end. Googling has not returned anything useful, so any nudges in the right direction would be appreciated!

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have two servers in different datacenters (different countries) and I want to use DNS load-balancing, mainly for high availability of the website hosted on those two servers. It is just an ad-tracking site which records a hit in a local database and returns a few lines of HTML. I want to return two A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record it has already cached). Both servers will also act as DNS servers, for redundancy. Now comes my proposed solution: I will use BIND and have both servers as masters for the zone. On each server a script will periodically test the availability (HTTP) of both servers and remove an IP from DNS in case of failure. Now the questions :)

    1) Is BIND suitable for this solution? I think BIND's performance is good and it is easy to manipulate the zone file via a script; and as I will modify the zone only on failure/maintenance, the modifications (and thus BIND reloads) won't be frequent.
    2) I plan to use a TTL of 5 minutes. The website will get about 1000-3000 req/s, but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too high; I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more?
    3) Is my master-master approach good, or should I make one server the master and the other a slave? Right now each server can monitor both itself and the other. If only the web service fails, both DNS nodes will notice; if the whole server fails, the remaining DNS node will notice, and the failed node will not answer DNS queries anyway.
    4) Is it a big issue when one NS server does not respond to queries? If so, I can add a third DNS server, so at least two of them would accept queries at any time...
    5) Should I rewrite the zone file via a script, or just use dynamic DNS updates (for example via the nsupdate utility)?
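    On question 5, dynamic updates spare you rewriting and reloading the zone file entirely; a hedged sketch of pulling a failed server's A record with nsupdate, where the zone, key file and addresses are all placeholders:

        # Assumes the zone permits TSIG-authenticated dynamic updates
        nsupdate -k /etc/bind/ddns.key <<'EOF'
        server 127.0.0.1
        zone example.com
        update delete www.example.com. A 198.51.100.2
        send
        EOF

    One caveat: once a zone accepts dynamic updates, hand-editing its zone file while named is running is unsafe, so it's best to commit to one mechanism or the other.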

    Read the article

  • Are there any complications or side effects of changing a final field's access/visibility modifier from private to protected?

    - by Software Engeneering Learner
    I have a private final field in one class, and I want to address that field in a subclass. I want to change the field's access/visibility modifier from private to protected, so I don't have to call a getField() method from the subclass and can instead address the field directly (which is clearer and more cohesive). Will there be any side effects or complications if I change private to protected on a final field? UPDATE: From a logical point of view, it's obvious that a descendant should be able to directly access all of its predecessor's fields, right? But there are certain constraints imposed on private final fields by the JVM, like the guarantee of 100% initialization after the construction phase (useful for concurrency), and so on. So I would like to know: by changing from private to protected, won't that or any other constraint be compromised?

    Read the article

  • How can Automator or Applescript rotate 500,000 images in folders and subfolders?

    - by Ludi
    I have a very specific question. I've got 500,000 images that sit in 98 subfolders; some of the subfolders contain 25,000 images. Is there any script/Automator workflow that I can use? I was trying to create a workflow: Ask for Finder Items > Get Folder Contents (ticking 'repeat for each subfolder found') > Rotate Images. It works OK, but only for folders with fewer than 4,096 files; otherwise I get the following error message: "Rotate images failed - 1 error: too many arguments (12019) -- limit is 4096". Is there any way to increase this limit, or a completely different AppleScript approach? I really hope that someone can help me with this one.
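    One way to sidestep the 4,096-argument ceiling entirely is to skip Automator and drive macOS's built-in sips tool from Terminal, one file per invocation; a sketch assuming JPEGs and a 90-degree rotation (adjust both to your case):

        # Rotate every .jpg under the folder tree by 90 degrees, in place.
        # find -exec hands sips one file at a time, so no argument limit applies.
        find /path/to/images -type f -iname '*.jpg' -exec sips -r 90 {} \;

    It will take a while for 500,000 files, but it is restartable and needs no third-party software.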

    Read the article

  • HP 655 notebook (ubuntu 12.04) keeps consuming energy after closing the lid

    - by Bastian van Binsbergen
    I am unable to change the power settings so that the battery stops being drained when I close the lid. There are three options in the power settings menu for when the lid is closed: 1. Do nothing, 2. Pause, 3. Sleep. At first I could not change anything: I saw all the options but could not click on options 2 and 3 (grey text instead of black). I have since managed to make it go into sleep mode, but I still can't pause (suspend) the laptop. Anyone have an idea?

    Read the article

  • Customizing Gnome Shell in 11.10

    - by gentmatt
    Since Unity on my old MBP is not that snappy, I really want to change my GUI to GNOME Shell. However, I've really been struggling with the customization of GNOME Shell. Can you help me with the following? Is it possible to have the dock on the screen all the time? Can I enable 'hot corners' for features like 'scaling' in Compiz? Is there a CCSM-like GUI configuration tool? Can I enable wobbly windows in GNOME Shell? And how do I change the shortcut so the WIN key starts this 'GNOME mission control' (by that I mean the thing you see when going to the top left corner)? Thank you.
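    For what it's worth, most of these tweaks are usually handled with GNOME Tweak Tool and shell extensions rather than a CCSM equivalent; a sketch of getting the tooling onto 11.10 (package name as shipped in that release, to the best of my recollection):

        # GUI for fonts, themes and assorted shell settings
        sudo apt-get install gnome-tweak-tool
        # Extensions (including permanently visible docks) install from
        # https://extensions.gnome.org in a GNOME 3.2 browser session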

    Read the article
