Search Results

Search found 22345 results on 894 pages for 'greasemonkey script'.


  • Why does printf report an error on all ASCII-range Unicode codepoints except three, yet is fine with all others?

    - by fred.bear
    The 'printf' I refer to is the standard-issue program (not the shell built-in): /usr/bin/printf. I was testing printf as a viable method of converting a Unicode codepoint hex literal into its Unicode character representation. It was looking good and seemed flawless (btw, the built-in printf can't do this at all, I think)... I then thought to test it at the lower extreme of the code spectrum, and it failed with an avalanche of errors, all in the ASCII range (= 7 bits). The strangest thing was that 3 values printed normally:

        $ \u0024
        @ \u0040
        ` \u0060

    I'd like to know what is going on here. The ASCII character set is most definitely part of the Unicode codepoint sequence. I am puzzled, and still without a good way to bash-script this particular conversion. Suggestions are welcome. To be entertained by that same avalanche of errors, paste the following code into a terminal:

        # Here is one of the error messages:
        # /usr/bin/printf: invalid universal character name \u0041
        # ...to see them all, run the following script
        (
          for nib1 in {0..9} {A..F}; do
            for nib0 in {0..9} {A..F}; do
              [[ $nib1 < A ]] && nl="\n" || nl=" "
              $(type -P printf) "\u00$nib1$nib0$nl"
            done
          done
          echo
        )
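
    A note on the likely cause, plus a hedged workaround (both are editorial additions, not from the original post): coreutils printf follows the C99 rule for universal character names, which rejects \u escapes below U+00A0 except for U+0024 ($), U+0040 (@), and U+0060 (`), exactly the three survivors above. A minimal sketch of a converter that routes low codepoints around the restriction; the cp variable and the octal fallback are illustrative assumptions:

        cp=0041                                    # hex codepoint to convert (example value)
        if (( 16#$cp < 0xA0 )); then
          # below U+00A0 \u is rejected, but a plain octal escape covers ASCII
          printf "\\$(printf '%03o' "$((16#$cp))")"
        else
          /usr/bin/printf "\u$cp"
        fi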


  • How can I upload a large number of files to Rackspace Cloud Files quickly?

    - by andy kim
    I have about a million image files in a single directory, and I want to upload them all to Rackspace Cloud Files in the fastest and most efficient way. The python-cloudfiles upload script I'm using is very slow, and I want to know about different approaches or Python script code, because uploading one file per connection is very slow. I think uploading one tar file and uncompressing it into a directory would be a better way, but Cloud Files does not support that. Does anyone know any other way?
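
    A hedged sketch of one common approach: parallelize the uploads instead of serializing them. The upload_one.py helper named here is hypothetical, a stand-in for whatever wraps python-cloudfiles to push a single file:

        # run 8 uploader processes at once; each receives one filename
        find /path/to/images -type f -print0 | xargs -0 -n 1 -P 8 python upload_one.py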


  • How do I allow a (local) user to start/stop services with a scheduled task?

    - by Mulmoth
    Hi, on a Windows 2008 R2 server I have two small .cmd scripts to start/stop a certain service. They look like this: net start MyService and net stop MyService. I want to execute these scripts via a scheduled task, and I thought it would be best to create a local user for this job. The user is not a member of the Administrators group. But the scripts fail with exit code 2. When I log on as this local user and try to execute the scripts on the command line, I see a message like (maybe not exactly translated from German to English): Error code 5: Access denied. It doesn't matter whether I start the command line as Administrator or not. How can this local user gain the rights to do the job?


  • VMware Player 5.0 or VMware Workstation 9.0 after upgrade to Ubuntu 12.10

    The upgrade process

    Upgrading Ubuntu 12.04 to the latest version 12.10 - aka Quantal Quetzal - is straightforward, and you only need to follow the official upgrade instructions. The short version on the console looks like this:

        sudo do-release-upgrade

    This will update the repository entries and start the upgrade process. After some minutes or hours of download and installation, you have to reboot your system once to get the new kernel loaded. At the time of writing, I'm on '3.5.0-17-generic'. And as with any change of kernel version, you have to compile the necessary kernel modules to get VMware Player or Workstation up and running. Usually this happens the first time you start your VMware software, and that's it. Well, again, not so this time.

    Getting the kernel patch

    Luckily, the community over at VMware is very active, and you can get a new kernel patch in the online forums. Get the download and put it in a folder you have write permissions for. Then extract the archive on the console like so:

        tar -xjvf vmware9_kernel35_patch.tar.bz2

    Then change into the newly created folder:

        cd vmware9_kernel3.5_patch/

    And execute the available shell script as root (superuser) like so:

        sudo ./patch-modules_3.5.0.sh

    This will stop any running instances of VMware software, patch the source files, and run the compile process for your active environment. This might take some time depending on your machine, and once it completes you can start VMware Player or Workstation as before. In case you apply the patch again, the script will simply quit with the following output:

        /usr/lib/vmware/modules/source/.patched found. You have already patched your sources. Exiting

    You might remove the .patched file in case you upgraded/changed your kernel and need to apply the patch again.

    Disclaimer: The patch is "as-is"; the patcher was originally created by Artem S. Tashkinov and later modified by An_tony. Please refer to the VMware forum in case of questions or problems. There are also patches available for older versions of VMware Player or Workstation.


  • MadMACs is attempting to run after wifi autoconnect in Windows 7

    - by Dan
    I have been trying to get MadMACs to run on startup with my Win 7 x64 install. I've used the default registry startup option that is built into the script, but when I start up, wlan0 is not randomized; in fact, the popup asking whether I want to allow the program to modify my machine comes up after the WiFi connection (and obviously before the script has run). I would really like to get this working, but I'm at a dead end. Googling has not returned anything useful, so any nudges in the right direction would be appreciated!


  • rsnapshot stats

    - by Obscur Moirage
    I'd like to retrieve the following stats from rsnapshot:

        files synced
        files added
        files modified
        files deleted

    Is there a feature to retrieve these in rsnapshot, or is there another product that's able to do it? EDIT: As requested, I'll try to show that I'm not just asking without having done any research. I wasn't able to locate any rsnapshot feature that does this; maybe I'm searching in the wrong direction. So I've built a not-very-pretty script, called each time before rsnapshot runs. This Perl script stores each file's MD5, in order to compare the backup file structures between rsnapshot updates. I'm pretty sure it's worthless to show that code here. I think that keeping an eye on what changes on a server, for example, is a useful feature. So, I'm asking. @pauska: Most of the time I try to find the answer myself, which was not possible here. Thanks
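
    A hedged sketch of one way to get those counts without storing MD5s: rsync's itemized dry run classifies the differences between two snapshot trees. The daily.0/daily.1 paths are assumptions about a standard rsnapshot layout:

        # what changed between the two newest snapshots; -n = dry run, -i = itemize
        rsync -rin --delete daily.0/ daily.1/ | awk '
          /^>f\+\+\+/   { added++;    next }   # files new in daily.0
          /^>f/         { modified++; next }   # files that changed
          /^\*deleting/ { deleted++ }          # files gone from daily.0
          END { printf "added=%d modified=%d deleted=%d\n", added, modified, deleted }
        '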


  • How do I kill all screens that have been up longer than 4 weeks?

    - by Darkmage
    I'm creating a script, executed every night at 03:00, that will kill all screens that have been running longer than 3 weeks. Has anyone done anything similar that can help? If you have a script or a suggestion for a better method, please help by posting :) I was thinking of something like this: first, dump the process list to a text file:

        ps -U username -ef | grep SCREEN > dump.txt

    then loop through all the lines of dump.txt with a regex, putting the PIDs of the processes whose STIME is 3 weeks ago or older into an array, then run a kill loop over the array.
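
    A hedged sketch that skips the text file entirely: ps can report elapsed runtime in seconds via the etimes column (available in reasonably recent procps), so the age test becomes plain arithmetic. The 21-day cutoff matches the body of the question, and this assumes the detached screen servers show up named SCREEN, as in the grep above:

        for pid in $(pgrep -u username SCREEN); do
          age=$(ps -o etimes= -p "$pid")   # seconds since the process started
          if (( age > 21*24*3600 )); then
            kill "$pid"
          fi
        done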


  • Don't run cron job if already running

    - by webnoob
    Hi All, I know this question has been asked already, but I either didn't understand the answer or it didn't apply to me. I have a PHP script that I am calling every minute, using CPanel to set up the cron job. The nature of the script means that it could overrun for just over the minute, so I need to know how to stop the next run from starting if the first one hasn't completed. I have a VPS running CentOS 5.5 and have access to WHM and CPanel. I have never used Linux before (I only got the server yesterday), so I have no idea what I am doing and would appreciate some help if possible. If I need to provide more information, please let me know (I don't know what info you would need at the moment). Thanks.
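
    A hedged sketch of the standard fix: wrap the cron command in flock(1), which refuses to start a second copy while the first still holds the lock. The lock and script paths are placeholders:

        # crontab entry: -n means "don't wait", so while one run is still going,
        # the next minute's invocation simply exits instead of stacking up
        * * * * * flock -n /tmp/myjob.lock php /path/to/script.php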


  • How to avoid Remove-Item PowerShell errors "process cannot access the file"?

    - by Michael Freidgeim
    We are using TfsDeployer and a PowerShell script to remove folders with Remove-Item before deployment of a new version. Sometimes the PS script fails with the error:

        Remove-Item : Cannot remove item Services\bin: The process cannot access the file 'Services\bin' because it is being used by another process.
        Get-ChildItem -Path $Destination -Recurse | Remove-Item <<<< -force -recurse
        + CategoryInfo : WriteError: (C:\Program File..\Services\bin:DirectoryInfo) [Remove-Item], IOException
        FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand

    I've tried to follow the answer from "PowerShell remove force": pipe get-childitem -recurse into remove-item:

        get-childitem * -include *.csv -recurse | remove-item

    but the error still happens periodically. We are using Unlocker to manually kill the locking application (it's usually w3wp), but I would prefer an automated solution. Another (not ideal) option is to suppress the PowerShell errors:

        get-childitem -recurse -force -erroraction silentlycontinue

    Any suggestions are welcome.


  • Add folder name to beginning of filename - getting multiple renames

    - by Flibble Wibble
    I've used dbenham's excellent response to the question of how to add the folder name to the beginning of a filename in a cmd script:

        @echo off
        pushd "Folder"
        for /d %%D in (*) do (
            for %%F in ("%%~D\*") do (
                for %%P in ("%%F\..") do (
                    ren "%%F" "%%~nxP_%%~nxF"
                )
            )
        )
        popd

    What I'm finding is that, seemingly randomly (though it probably isn't), the script will run through several child folders and rename correctly, but then it gets to a folder where it gets stuck in a loop and starts adding the folder name repeatedly to the files inside. I have 90,000 files in 300 folders to rename this weekend. Any chance you can guess the cause? PS: Is there a maximum number of files that is acceptable in each folder?


  • Bringing the xenbr0 interface up on XEN under Ubuntu 8.04

    - by iyl
    I installed Xen on Ubuntu 8.04 using this tutorial: http://www.howtoforge.com/ubuntu-8.04-server-install-xen-from-ubuntu-repositories but after I reboot into the Xen kernel, I don't have a xenbr0 device. I see that the network-bridge script runs and creates a peth0 device, but no xenbr0. I have a very basic IP setup, with a single static IP defined in /etc/network/interfaces. The only unusual thing is that my host (1&1) gave me a netmask of 255.255.255.255, so I had to add the default gateway with this script:

        /sbin/route add -host 10.255.255.1 dev eth0
        /sbin/route add default gw 10.255.255.1

    Everything else is plain vanilla Ubuntu 8.04.
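
    A hedged check worth running (brctl ships with bridge-utils): newer Xen network-bridge scripts name the bridge after the interface, so the bridge that peth0 attaches to may be called eth0 rather than xenbr0:

        brctl show   # list bridges; look for one containing peth0 and vif0.0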


  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later, so essentially I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks, and so on. I know that I can tell the crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all 4 scripts running on the same night, then dutifully waiting for weeks only to all run again together? Thanks.
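
    A hedged sketch of one way to stagger them, since cron has no "every 4th week" field: schedule all four entries every Friday and let a week-of-year test decide which one fires. Script paths are placeholders; this assumes GNU date (%-U strips the zero padding that would otherwise make weeks 08 and 09 parse as bad octal), and the % signs are escaped because cron treats a bare % specially:

        0 23 * * 5 [ $(( $(date +\%-U) \% 4 )) -eq 0 ] && /path/to/week1.sh
        0 23 * * 5 [ $(( $(date +\%-U) \% 4 )) -eq 1 ] && /path/to/week2.sh
        0 23 * * 5 [ $(( $(date +\%-U) \% 4 )) -eq 2 ] && /path/to/week3.sh
        0 23 * * 5 [ $(( $(date +\%-U) \% 4 )) -eq 3 ] && /path/to/week4.sh

    One caveat: the week number resets at New Year, so the rotation can skip or repeat a slot around the year boundary.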


  • How can I "shadow" the filesystem on Linux?

    - by happy_emi
    In a Linux environment I sometimes need to run a script as root that will add/modify several files on my fs. Basically, I'd like to know exactly which files are modified and how, WITHOUT opening the script and trying to guess from the code. I was thinking about using something like unionfs: the main fs would be accessible in read-only mode, and all changes would be written to a file used as a partition and "mounted" in write mode. Are there other ways to achieve the same goal (i.e. other than unionfs)?
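
    A hedged sketch of a lighter-weight alternative: trace the script's file-related syscalls instead of diverting its writes. Unlike the unionfs approach this doesn't undo anything, but it records exactly what was touched:

        # -f follows children; trace=file logs open/creat/rename/unlink/etc.
        strace -f -e trace=file -o files.log ./suspect-script.sh
        grep -E 'open|creat|rename|unlink' files.log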


  • rsync assigns deny permission

    - by user773478
    Currently a script is used to copy files using rsync (version 2.6.9, protocol version 29) from Linux/Unix servers to a W2K3 server, using a very basic command such as:

        rsync -v source_server::share_name/file_name /cygdrive///file_name

    The script then makes a copy of this downloaded file for other purposes. This is part of a larger middleware setup that is being moved to new hardware on W2K8R2. The second part, making a copy of the file, does not work with the more recent rsync client, version 3.0.7, protocol version 30 (it shows up as cwRsync in Add/Remove Programs). The reason is that rsync assigns special permissions to the file, including deny entries. The user (a service account) that downloads the file is in the local admin group. The file can be copied elsewhere using rsync, and it can be deleted, but it cannot be opened or copied locally by the same user, as the deny permission supersedes.
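
    A hedged workaround often suggested for cwRsync 3.x ACL trouble: keep rsync from translating POSIX permissions into restrictive Windows ACLs by forcing simple permissions on the destination (same placeholder paths as the command above; --chmod exists in rsync 3.x):

        rsync -v --chmod=ugo=rwX source_server::share_name/file_name /cygdrive///file_name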


  • How do you know which domain the hosting belongs to?

    - by BubbleStalker
    For example, if I have 1) a host address, 2) a login, and 3) a password, and I log in via SSH to a Ruby on Rails host, how can I be sure that this hosting belongs to a specific domain? For example, how can I know whether www.site.com belongs to the specific hosting account I have access to? I am asking because I have access to a Ruby on Rails host, and when I modify files, nothing changes. I've tried to use the files "script", "serv", and "restart.txt" over SSH:

        touch tmp/restart.txt
        ./serv restart
        script restart

    None of the above helped, and I don't know what to do. Any ideas?
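
    A hedged way to test it directly: drop a uniquely named marker file into what you believe is the site's public directory, then fetch it over HTTP. The ~/public_html path is an assumption; substitute the app's actual public/ directory or docroot:

        echo "marker-$RANDOM" > ~/public_html/ownership-check.txt
        curl http://www.site.com/ownership-check.txt   # matching output = same hosting

    Comparing dig +short www.site.com with the server's own IP address is another quick check.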


  • LDAP loginShell on platforms with different paths

    - by neoice
    I'm using LDAP to deal with users and authentication across my network. I'm now adding FreeBSD hosts and have hit a problem with login shells: on Linux, shells tend to live in /bin/$shellname, so setting my login shell in LDAP to /bin/zsh works perfectly. On FreeBSD, /bin/zsh doesn't exist; I need to use /usr/local/bin/zsh. Is there a solution to this? I imagine I might be able to make some sort of login-shell.sh script that LDAP hands out as the "shell" and then use the script to determine the actual shell for the user, but I'm not a fan of that idea. I'm using Debian and FreeBSD, both with a standard OpenLDAP/PAM/nss setup. Edit: it looks like using /bin/sh and adding an exec $shell to .profile would "work", but that doesn't scale very well.
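
    A hedged sketch of the low-tech fix: give every FreeBSD host the path the LDAP entry already names (run once per host, e.g. from configuration management):

        ln -s /usr/local/bin/zsh /bin/zsh
        grep -qx /bin/zsh /etc/shells || echo /bin/zsh >> /etc/shells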


  • How to batch edit a list of files?

    - by user43144
    I have a list of files from which I need to remove some lines that were added yesterday by a spambot. The section I want to remove looks like this:

        ^M <script>[...] bunch of malware code [...]</script>

    That section seems to have been appended to the files, so I can be relatively sure it's the last lines of each file that contain this part. Now, I know a bit of Linux, but not enough to do this via a command. How would I go about doing this?
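
    A hedged sketch with sed, assuming the injected block really is a single appended line; the *.php glob is an assumption, and the .bak backups let you verify before deleting them. Test on copies first:

        # delete the last line of each file if it contains a <script> tag
        sed -i.bak '${/<script>/d}' *.php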


  • Verify Linux user passwords

    - by zero_r
    Hi there, I have a Linux server with several dozen users. I also have the cleartext password for every user (I know - bad security). I would like to know whether the passwords are correct. Since the users are all FTP users and have the nologin shell, I cannot just write a script to check that login works. How can I do a local check on the passwords? Script output could look like this:

        $ check_userpw < user_pw_list.txt
        user1 ok
        user2 ok
        user3 mismatch!
        user4 ok

    Thanks
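
    A hedged sketch of such a check_userpw, assuming SHA-512 ($6$) hashes in /etc/shadow without a custom rounds= prefix, and an OpenSSL new enough (1.1.1+) to know the -6 option. Run it as root; input is one "user password" pair per line:

        #!/bin/bash
        while read -r user pass; do
          hash=$(getent shadow "$user" | cut -d: -f2)
          salt=$(printf '%s' "$hash" | cut -d'$' -f3)
          if [ "$(openssl passwd -6 -salt "$salt" "$pass")" = "$hash" ]; then
            echo "$user ok"
          else
            echo "$user mismatch!"
          fi
        done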


  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:

        start on runlevel [2345]
        stop on runlevel [!2345]
        expect daemon
        exec /usr/bin/twistd -y /path/to/my/tac/file

    When I start the job with sudo service collector start, it hangs. If I Ctrl-C and run initctl list, I see this:

        collector start/killed, process 616

    I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon', and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?
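
    A hedged variant worth trying while debugging: take daemonization out of the picture, since a wrong expect stanza makes upstart track the wrong pid and hang in exactly this way. twistd's -n (--nodaemon) flag keeps it in the foreground, so no expect stanza is needed at all:

        start on runlevel [2345]
        stop on runlevel [!2345]
        exec /usr/bin/twistd -n -y /path/to/my/tac/file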


  • PHP errors not being displayed

    - by Mike
    I'm using PHP with Apache on Ubuntu 12.10. Errors are not being displayed in the browser for some reason, and I can't figure out why. I have the following in my php.ini file:

        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = On
        display_startup_errors = On
        log_errors = On

    I am also positive that I have edited the correct ini file, having verified it with php_ini_loaded_file(). I can also verify that the values are correctly set by doing the following in my script:

        echo ini_get("display_errors");         // Outputs 1
        echo ini_get("display_startup_errors"); // Outputs 1
        echo ini_get("log_errors");             // Outputs 1
        echo ini_get("error_reporting");        // Outputs -1

    I have tried what seems like every possible combination of these settings (restarting Apache after each change), and it is just not outputting errors. I am also not using ini_set anywhere in the script; it is being set only from the ini file. Any ideas why errors aren't being displayed?
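
    A hedged sanity check (standard PHP CLI usage): the CLI and Apache SAPIs can load different ini files, and deliberately triggering an error shows whether the display settings are honored at all:

        php -i | grep "Loaded Configuration File"            # which ini the CLI reads
        php -r 'trigger_error("visible?", E_USER_WARNING);'  # should print a warning

    If a deliberate error displays but the real ones never do, the failing errors may be parse errors, which are reported at compile time rather than through the running script.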


  • Open application in background without losing current window focus. Fedora 17, Gnome 3

    - by Ishan
    I'm running a script in the background that loads an image with feh, depending on which application is currently in focus. However, whenever the script opens the image, window focus is lost to feh. I was able to circumvent this by using xdotool to switch back to the application that was originally in focus, but this introduces a short, annoying period where the focus switches from feh back to the application. My question is this: is there any way to launch feh in the background such that window focus is NOT lost? System: Fedora 17, Gnome 3, Bash. Thanks a ton!
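
    A hedged partial mitigation (it shortens the flicker rather than preventing the focus steal): let xdotool block until feh's window exists, then hand focus straight back:

        active=$(xdotool getactivewindow)
        feh /path/to/image.png &
        xdotool search --sync --name feh >/dev/null   # wait for feh's window to appear
        xdotool windowactivate "$active"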


  • Apache+FastCGI Timeout Error: "has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds"

    - by Sadjad Fouladi
    I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script, test.fcgi, as below:

        #!/bin/sh
        echo sadjad

    But when I invoke mysite.com/test.fcgi, I see "Internal Server Error" after a short period of time. The error.log file shows this message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    What could the problem be? Is it my .htaccess file?


  • OS X - Automatically Set Execute Permissions for New Files?

    - by i help X u
    I'm using OS X 10.6.4 and am trying to set a folder to automatically enable execute permissions on new script files copied or created in a directory. I have used Sandbox 2 to set every permission for the folder to enabled, with sticky bits and the inherit flag set, but I still have to manually set the execute flag using chmod for every new file. I've done:

        chmod -R a+rwxs ~/scripts

    I've done:

        chmod 7777 ~/scripts

    And the permissions for the folder show as drwsrwsrwt+. But if I add a new script file, it's set to -rw-r--r--+ (the default). I looked at setting "umask 000" in the .profile file, but the default value for new files is 666 with a umask of 022, so that's not relevant, since I would need a default value of 777 for files. I have figured out how to use chmod in an AppleScript triggered by a folder action to automate this, but I'm wondering if there is a simple ACL or chmod setting I'm missing. So, is there a way to automatically set execute permission for new files? (Without using a folder action and AppleScript?)
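
    A hedged sketch using an inheritable ACL, in macOS chmod syntax (treat it as an assumption to test, since ACL execute rights and the Unix mode bits interact in non-obvious ways):

        # new files created inside ~/scripts inherit an execute ACE
        chmod +a "everyone allow read,execute,file_inherit,directory_inherit" ~/scripts

    The umask route can't work regardless: programs create regular files with mode 666, and a umask can only clear bits, never add the missing x.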


  • need a different backup solution

    - by DigitalJedi
    I just built a new media/backup server using Ubuntu 12.04 64-bit. I installed a hard drive to be used only for music, pictures, and videos, and formatted it FAT32 so my one and only Windows PC could map those folders as network shares. My laptop, also running Ubuntu 12.04, is what I use the most, so new media is first downloaded on the laptop. I've already got the music, videos, and pictures folders from my server mounting as shares on my laptop at boot, thanks to some fstab edits and sshfs. Now I want either an app or a script that can back up any new files I add to my local media folders to the mounted folders on my server. I've been Googling all day and found a few apps like rsync, but they seem to have issues with ext4-to-vfat backups. I thought maybe a script would be best, but I'm new to scripting in Linux and don't want to mess anything up. Basically, I am looking for something that will back up only newly added files to the server; I figure I could schedule it once a week. There are some stipulations. For example, my local music folder has over 700 folders, one for each artist/band, with subfolders inside those for albums. I want something smart enough to copy only newly added content, so I'm guessing the modified date would probably be a good condition if I were scripting. I'm rambling. Any suggestions would be GREATLY appreciated. I'm not finding anything to suit my needs, and I'm almost at the point of just learning bash scripting so I can write something, but then it would be a couple of weeks or so before I have a possible solution, and I'd like something in place sooner.
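
    A hedged sketch of the rsync invocation that usually tames ext4-to-FAT32 syncs; FAT can't store ownership, permissions, or exact timestamps, which is what trips rsync up (paths are placeholders):

        rsync -rtv --modify-window=1 ~/Music/ /mnt/server/music/

    --modify-window=1 forgives FAT's 2-second timestamp resolution, so unchanged files aren't re-copied; with -t preserving times, each run transfers only new or changed files.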


  • Squid randomly stops serving requests. How can I resolve this issue?

    - by Vijay
    The Squid (2.7) proxy that I have running on Ubuntu 8.10 stops accepting new requests after being online for a while, for reasons that I can't discover. However, doing a squid -k reload resolves the problem immediately. For now I run this command manually: I monitor the log, and if I don't see any activity for 5 minutes, I reload the config. On my quest for a solution I had several ideas: 1) diagnose the root cause and eliminate it; 2) set up a script to automatically reload the config if there are no new entries in access.log for the past 3 minutes; 3) painstakingly upgrade the server to a newer Ubuntu version, keeping the network offline or working during off-hours to minimize downtime. So I thought I would turn to you for solutions to option 2, as I do not understand Squid well enough for 1, and I'm avoiding 3 as long as I can. So, any ideas?
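
    A hedged sketch of that watchdog, suitable for running from cron every minute (the log path is the Debian/Ubuntu default; adjust it to match squid.conf):

        #!/bin/sh
        log=/var/log/squid/access.log
        # reload if the access log hasn't been written to for 3 minutes
        if [ $(( $(date +%s) - $(stat -c %Y "$log") )) -gt 180 ]; then
            squid -k reload
        fi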

