Search Results

Search found 22521 results on 901 pages for 'script fu'.


  • AppleScript won't open applications on my external monitor

    - by jpadvo
    I'm trying to open a new MacVim window with AppleScript, and have found partial success with this:

        do shell script "cd \"~/code/application\"; ~/bin/mvim > /dev/null 2>&1"

    This works fine, and opens a new MacVim window with its working directory set to ~/code/application. But it always opens on the screen of my laptop, not on the external monitor with the currently active space where I am working. Is there a way to get MacVim to open in the current space? Edit: the same problem occurs when opening a Finder window: tell application "Finder" to make new Finder window


  • Stream tar.gz file from FTP server

    - by linker
    Here is the situation: I have a tar.gz file on an FTP server which can contain an arbitrary number of files. Now what I'm trying to accomplish is have this file streamed and uploaded to HDFS through a Hadoop job. The fact that it's Hadoop is not important; in the end what I need to do is write some shell script that would take this file from FTP with wget and write the output to a stream. The reason why I really need to use streams is that there will be a large number of these files, and each file will be huge. It's fairly easy to do if I have a gzipped file and I'm doing something like this:

        wget -O - "ftp://${user}:${pass}@${host}/$file" | zcat

    But I'm not even sure if this is possible for a tar.gz file, especially since there are multiple files in the archive. I'm a bit confused on what direction to take for this, any help would be greatly appreciated.
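
    For what it's worth, tar can read an archive from stdin, so the stream never has to touch the local disk. A minimal sketch, assuming GNU tar and that the hadoop CLI is on the PATH (the HDFS destination path is made up for illustration):

        # stream the archive from FTP and unpack it on the fly; -f - reads from stdin
        wget -O - "ftp://${user}:${pass}@${host}/$file" | tar -xzf -

        # or keep it a stream end to end: -O writes each member's contents to stdout,
        # and 'hadoop fs -put -' reads stdin into an HDFS file
        wget -O - "ftp://${user}:${pass}@${host}/$file" \
            | tar -xzOf - \
            | hadoop fs -put - /ingest/combined_output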


  • SCCM 2012: How to properly update the content of an application?

    - by Omnomnomnom
    I recently set up a new SCCM 2012 environment at my workplace and now we are creating our applications for distribution. Some applications are set up using a script. During testing something was not right, and the content of the application needed to be changed. But the distribution point keeps serving the old content to the clients. I was wondering what the proper procedure is for updating the DPs when the content of an application changes. I have tried redistributing to the distribution points and deleting old revisions, but to no avail.


  • Oracle EZConnect in Mediawiki

    - by raindog308
    MediaWiki supports Oracle and I'm trying to configure it in the installer. The installer says you can use EZConnect... something like:

        user/pass@//server.example.com/dbname

    or, since the installer has fields elsewhere for user/pass:

        server.example.com/dbname

    The installer includes a link to the EZConnect docs: http://docs.oracle.com/cd/E11882_01/network.112/e10836/naming.htm. All the examples in that doc include a forward slash. But every combination I've tried results in an error like this:

        Invalid database TNS "sever.example.com/service_name". Use only ASCII letters (a-z, A-Z), numbers (0-9), underscores (_) and dots (.).

    I can't find any examples of EZConnect that don't include a forward slash. That error is from MediaWiki, not Oracle. I'm tailing the listener log and there is no connection made - MediaWiki is returning an error without trying to connect. I'm using PHP OCI8 with the Oracle Instant Client. I don't have a tnsnames.ora set up for this client - which is kind of the point of EZConnect. I did write a test PHP script that connects via oci_connect just fine. Has anyone configured MediaWiki to use Oracle with EZConnect? If so, what did you use in the installer?


  • Problems with the backup

    - by marcodv
    I have a script which runs around 4 o'clock in the morning and backs up all the MySQL databases and the config files for 250 Linux VMs. The problem is that it takes ages to complete, and more than 50% of these VMs need more than 8 hours to finish. More or less all the VMs have the same configuration, I mean:

      - same amount of RAM
      - same amount of disk space
      - same number of CPUs
      - Debian 6.0.5

    I am saving these backups on Amazon S3, because it is the cheapest solution that I've found. Now my question is: has anyone some solutions or suggestions about that? On one blog I've read that the ionice and nice combination could probably be a good workaround. Any thoughts?
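
    On the ionice/nice idea: both are a one-line change to the dump invocation. A sketch, assuming the CFQ I/O scheduler (which the ionice idle class needs) and a generic dump command standing in for whatever each VM actually runs:

        # run the dump at the lowest CPU priority (nice 19) and in the idle I/O
        # class (-c3), so it only uses disk bandwidth nobody else wants
        nice -n 19 ionice -c3 mysqldump --all-databases | gzip > /backup/all-dbs.sql.gz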


  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing) the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems:

      - substring searching can only count exact results
      - the 50% rule applies to all tables separately, and thus MySQL may not return important matches or too naively discards common words
      - it is quite expensive in terms of query numbers and execution time (not an issue right now as there's not a lot of data yet in the tables)
      - normalized data is not even searched for (I have different tables for categories, languages and file attachments)

    My planned solution: create a single table having columns similar to id, table_id, row_id, data. Every time a new row is created/modified/deleted in any of the data tables, this central table also gets updated with the data column containing a concatenation of all the fields in a row. I could then create a single index for Sphinx and use it for doing searches instead. Are there any more efficient solutions or best practices on how to approach this? Thanks.


  • How to find malformed / corrupted / DOS / BOM-byte files in Linux

    - by Syquus
    I have several problems maintaining large production servers onto which some developers drop files from Windows environments, sometimes with BOM bytes (we use UTF-8 and have no need for those), causing lots of trouble. Other times I get "no end of line" and "[DOS]" labels when vim-editing files directly on the server. I recently discovered how to search for the BOM byte and how to delete it in a batch script. What about illegal bytes and bad EOLs? Is it safe to use DOS text files in a Linux environment? Any drawbacks if I convert them with the dos2unix command? Regards
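
    For reference, a minimal sketch of both checks in plain bash, assuming GNU grep and sed (the /var/www path is a placeholder):

        # list files whose first three bytes are the UTF-8 BOM (EF BB BF)
        find /var/www -type f -print0 | while IFS= read -r -d '' f; do
            [ "$(head -c 3 "$f")" = $'\xef\xbb\xbf' ] && echo "BOM: $f"
        done

        # list text files containing DOS line endings (carriage returns);
        # -I skips binary files
        grep -rlI $'\r' /var/www

        # strip the BOM and convert CRLF to LF in place
        sed -i '1s/^\xef\xbb\xbf//' somefile && dos2unix somefile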


  • How to allow Mac OS X's native Apache/PHP installation to access WebServer directories?

    - by Martin Bean
    I have a problem bugging me with Mac OS X's native Apache/PHP installation. With my PHP scripts, I have to alter the file permissions on each folder I want to access. For example, in an upload script I would have to set the destination directory to 'read & write' for the group 'everyone'. However, I believe this is not best practice and would like all of my directories to be readily writable by PHP. My scripts are stored in /Library/WebServer/Documents/, which is Mac OS X's default directory for serving web pages locally.
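
    One conventional alternative to world-writable folders is giving the Apache user itself ownership of the directories PHP writes to. A sketch, using the stock Mac OS X Apache user _www and a hypothetical uploads folder:

        # give the Apache user (_www on Mac OS X) ownership of the upload directory,
        # and keep 'everyone' out of the write bits
        sudo chown -R _www:_www /Library/WebServer/Documents/uploads
        sudo chmod -R 755 /Library/WebServer/Documents/uploads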


  • bottle.py on an EC2 micro instance causes a 2-order-of-magnitude slowdown

    - by user61633
    Cross-posted from StackOverflow: I wrote a little toy script to solve this type of game, and put it on my new micro EC2 instance. It works perfectly, but while it takes around 0.5 seconds to run a local version, and under 0.5 seconds to run both the local and the bottle.py version on my home computer, running the bottle.py version on the EC2 instance takes over 2 minutes. Python has the CPU pegged at 99% the entire time. Only 7.4% memory usage, consistently, and no swapping. The only guess I have is initialization time for bottle.py on EC2, but if it were that, why would it be ~200x faster on my own computer with bottle.py?


  • How can I prevent frame breaking in Chrome from Google image searches, etc.?

    - by Nick T
    More often than not, websites with any number of images will use frame breaking scripts to lose Google Image Search's results frame (e.g. this relatively benign case). While I somewhat understand the reasons for doing so (as ineloquently put forth by these people), more often than not, such breakout/redirects dump me to a useless page that doesn't have the image I was looking for, plus it makes going "back" rather irritating as you need to click twice or more (some pages jam you through several redirects it seems) in rapid succession. Other than having reflexes to copy the 'Full-size image' hyperlink quicker than loading the breakout script, is there a way to get my actual result?


  • Ubuntu apt-get install (--download-only) executed from another machine on behalf of mine

    - by Maroloccio
    I have a server on a network segment with no direct or indirect access to the Internet. I want to perform an:

        apt-get install <package_name>

    Is there a way to somehow delegate the process of downloading the required files to another machine, by exporting the server configuration so as to satisfy all dependencies, while running:

        apt-get install --download-only <package_name>

    Can, in effect, apt-get install read a configuration from an exported archive rather than from the local package database? Can the list of packages to be downloaded be retrieved, along with an installation script to perform the installation, instead of the actual packages? (a further level of indirection which would help me schedule this with wget at appropriate times...)
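
    The wget angle is plausible: apt-get can print the download URIs it would fetch without fetching anything. A sketch, assuming the offline server's package lists are reasonably current (<package_name> stays a placeholder):

        # on the offline server: compute the full dependency download list
        apt-get install --print-uris -qq -y <package_name> \
            | awk -F"'" '{print $2}' > uris.txt

        # on a machine with Internet access:
        wget --input-file=uris.txt --directory-prefix=debs/

        # back on the server, after copying debs/ over:
        sudo dpkg -i debs/*.deb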


  • nginx + IIS + GET

    - by Eralde
    I have nginx on PC "A" and IIS with ASP.NET on PC "B". nginx is configured like this:

        ...
        location ~ ((Web|Script)Resource.*)$ {
            proxy_pass "B"/$1;
            proxy_redirect off;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header HTTP_REFERER $http_referer;
            #proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header QUERY_STRING $query_string;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        ...

    But requests to "B"/WebScript?a=b&c=d aren't able to deliver the GET data (a=b&c=d) to the IIS side. Could anyone help with this?
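
    Worth knowing here: when proxy_pass builds its URI from a captured variable like $1, nginx passes that URI as-is and does not append the query string on its own, so it has to be forwarded explicitly. A sketch ($is_args and $args are standard nginx variables; the upstream name is a placeholder):

        location ~ ((Web|Script)Resource.*)$ {
            # $is_args expands to '?' when a query string is present, '' otherwise
            proxy_pass http://backend/$1$is_args$args;
        }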


  • Process that needs a volume starts before the volume mounts

    - by user36126
    The destination for incoming CrashPlan backups on my server (11.04) is /media/SeagateBig (SeagateBig is the volume name of my 2TB USB drive). When the server boots, two things happen: 1) SeagateBig auto-mounts and 2) CrashPlan starts. The problem is that these two things often don't happen in that order. Then CrashPlan starts, looks for /media/SeagateBig, doesn't find it, and instead of waiting for it, CREATES IT. Now it's backing up onto my / filesystem. NOT COOL. Meanwhile, when SeagateBig finally gets around to mounting, it finds that /media/SeagateBig already exists, shrugs, and creates /media/SeagateBig_ as its mount point. What I need is a way for the order to be enforced - where SeagateBig mounts, and then and only then the CrashPlan service is started. Unless I learn that CrashPlan can be told to wait for its destination directory and never to create it... which I am also investigating. But the CrashPlanEngine script is installed by the product, so I am loath to modify it, even though I know I could by having it loop until df greps successfully for "SeagateBig".
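
    One way to get the ordering without touching the vendor script is a small wrapper, run at boot in place of the stock startup entry, that blocks until the volume is really mounted and only then starts the service. A sketch, assuming CrashPlan's init script lives at /etc/init.d/crashplan (check the actual name on your install):

        #!/bin/bash
        # wait until /media/SeagateBig is an actual mount point, not just a directory
        until mountpoint -q /media/SeagateBig; do
            sleep 5
        done
        /etc/init.d/crashplan start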


  • Pass User Data to AWS client

    - by bearrito
    Has anyone successfully passed user data to the AWS CLI? I have tried various incantations of the following but it does not work. The docs say the string must be base64 encoded: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html. The instance logs never indicate that the script is executed and chef is installed.

        aws ec2 run-instances --image-id ami-a73264ce --count 1 --instance-type t1.micro --key-name scrubbed --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed --user-data $(base64 chef_user_data.sh --wrap=0)

    chef_user_data.sh:

        #!/bin/bash
        curl -L https://www.opscode.com/chef/install.sh | sudo bash
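
    One thing to try (this assumes a reasonably recent CLI version): let the CLI read the file itself via file://, which also handles the base64 encoding and sidesteps the shell quoting around $(base64 ...):

        # let the CLI read and encode the user-data script itself
        aws ec2 run-instances --image-id ami-a73264ce --count 1 \
            --instance-type t1.micro --key-name scrubbed \
            --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed \
            --user-data file://chef_user_data.sh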


  • How to fundamentally approach creating a 'financial planner' application?

    - by Anonymous -
    I want to create a financial planning application (for personal use), for which the overall functionality will be this: the user (me...) can create different 'scenarios'. Each scenario is configured with different incomings/outgoings. Scenarios can be 'explored' in a calendar format, with projections taking into account tax, interest (on both debt and savings) and so on and so forth. My problem lies in how to fundamentally approach the project. I've considered:

      - When creating incomings/outgoings, having a script apply them to each day in a 'days' table of a database, acting as a method of caching. This means that if I wanted to look at January 14th, 2074 there aren't thousands of cycles of calculations to run through, and the result can just be pulled.
      - Doing each calculation dynamically - but again, I'm finding it hard to visualize how I would handle different tax allowances (I'm based in the UK by the way), pay rises and 'changes' to my incomings/outgoings.

    I've sat on this for a couple of days and am struggling to come up with an elegant approach to my problem. There may well be software out there that does what I'm looking to do (in fact I'm sure there is), but I would like to develop this myself for learning purposes, to be able to add it to my personal life 'toolset' and to allow me to expand on it in the future. Many thanks to all who have any input on my dilemma.


  • How to make NFS mounts available while offline?

    - by lpanebr
    Problem: I work on a notebook, and while at work I have access to many NFS-mounted drives. When I get home they are obviously not available.

    Windows 7 solution: my business partner uses Windows 7 and maps the folders via Samba. Windows 7 has a very nice feature that lets him make these folders available offline. So when he connects to the work network the changes get synchronized!

    Question: is there a way to mimic that in Ubuntu?

    What I have now (server-to-local sync): I have added rsync entries to my crontab to copy server folders => local folders every five minutes. When at work I use the NFS-mapped folders, and while outside work I use the local copies. When I get to work I manually run a script that syncs local folders => server folders.

    Problems with my setup:

      - slow startup when not at work (I guess due to fstab trying to map the server folders)
      - no conflict checking/managing
      - I have to remember to sync manually, and be careful because of the different file locations
      - recent files do not carry over between work and home
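
    As a stopgap for the manual step, the push can be wrapped so it is safe to run from cron: it only fires when the NFS share is actually mounted, and it keeps a copy of anything it would overwrite. A sketch with made-up paths:

        #!/bin/bash
        # push local changes only when the NFS share is really there
        if mountpoint -q /mnt/work; then
            # -u skips files that are newer on the server; --backup keeps overwritten
            # versions around with a .conflict suffix instead of silently losing them
            rsync -au --backup --suffix=.conflict ~/work-local/ /mnt/work/
        fi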


  • Creating a command that compresses a file and saves it to a USB drive, but cannot detect the USB drive in Linux

    - by Lance
    First of all, I can't detect the USB drive on Linux using the command line. I checked the /dev directory and still cannot find the USB drive. I used the df command to check for it: I plugged it in and typed df, then unplugged it and typed df again, and nothing changed. We are using telnet to reach the command line of the Linux server from a Windows 7 machine. The second problem I have is how to execute the bash script that I have made. It seems that I can't put my .sh file in /usr/bin/. I would like to make my command executable in all directories like a normal command. Sorry, I'm still a newbie at these things. This is what I get for staying on Windows too much. Sorry for my English. Thank you in advance.
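
    Two sketches for the two halves of the question. Device names like /dev/sdb1 are examples - they depend on what the kernel actually assigns - and note that the drive must be plugged into the Linux machine itself, not the Windows machine you telnet from:

        # see what the kernel called the drive when it was plugged in
        dmesg | tail
        lsblk

        # mount it by hand (device name is an example)
        sudo mkdir -p /mnt/usb
        sudo mount /dev/sdb1 /mnt/usb

        # make a script runnable from any directory: /usr/local/bin is on PATH
        chmod +x myscript.sh
        sudo cp myscript.sh /usr/local/bin/myscript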


  • Ubuntu Studio 10.4 boots to terminal mode only

    - by Don
    I did a clean install from the ISO DVD. It boots only to a command line asking for login. After I log in I enter startx and the desktop appears, and some of the stuff actually works, but there are a lot of problems:

      - there is no way to reboot or shut down except to stop X, log off and enter shutdown
      - the sound system doesn't respond
      - PulseAudio volume control gives "connection failed, connection refused"
      - DVD drives are not available
      - GDebi package installer is grayed out so I can't use it (but Synaptic package manager works OK)
      - Software Center won't start when clicked -- it just stops trying
      - D-Bus can't run because it says /usr/local/var/run/dbus/system_bus_socket file not found (also /var/run/dbus/system_bus_socket file not found)

    There's just a lot of things wrong, and I can't help but think something is missing or was mis-coded in the distro, so that there are typos in some script somewhere. If anyone can tell me where to begin to untangle this mess I'd appreciate it, but I think it all begins with the fact that it can't start the GUI automatically and I need to enter startx.


  • Why is my /dev/random so slow when using dd?

    - by Mikey
    I am trying to semi-securely erase a bunch of hard drives. The following is working at 20-50 MB/s:

        dd if=/dev/zero of=/dev/sda

    But

        dd if=/dev/random of=/dev/sda

    seems not to work. Also, when I type

        dd if=/dev/random of=stdout

    it only gives me a few bytes regardless of what I pass it for bs= and count=. Am I using /dev/random wrong? What other info should I look for to move this troubleshooting forward? Is there some other way to do this with a script or something like

        makeMyLifeEasy | dd if=stdin of=/dev/sda

    Or something like that...
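
    Background that explains the symptom: on Linux, /dev/random blocks whenever the kernel's entropy pool is empty, so it trickles out a few bytes and stalls; /dev/urandom never blocks and is the usual choice for disk wiping. A sketch:

        # non-blocking pseudorandom data, fast enough for wiping disks
        dd if=/dev/urandom of=/dev/sda bs=1M

        # also: dd reads stdin when if= is omitted, so a pipe works like this
        some_generator | dd of=/dev/sda bs=1M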


  • mysqldump and wamp

    - by Adam
    I am running a WAMP server and trying to use mysqldump to back up a MySQL database I have. The following is the PHP code I am using to run mysqldump:

        exec("mysqldump backup -u$user -p$pass > $sql_file");

    When I run the script the page just loads indefinitely and the backup is not created. A blank file is being created, so I know something is happening. Extra info: exec() is not disabled, and PHP is not running in safe mode. Any ideas?? Win XP, WAMP, MySQL 5.0.51b


  • Dynamic authentication realms in Apache

    - by Cogsy
    I have a front end server acting as a gateway proxy for many (a dynamic 'many') building monitors with embedded webservers. They are accessed with a URL like: http://www.example.com/monitor1/ http://www.example.com/monitor2/ ... I'm trying to restrict access to these monitors to only the users that own them. So what I need is a way of specifying rights to users or groups for specific directories. The standard auth mechanisms I see in Apache won't work because I need to specify every location. I'd prefer some dynamic map or script. Any suggestions?


  • Simple server status page hosted externally available for users

    - by Chris
    I am looking for any kind of script - it can be ASP or PHP or any other web language - that gives me the ability to log outages and the current state of the network for our organisation. This would be similar to any major telco's "Network Status" page, but I just want to tell the users out there if the systems are up and running, and have a history of recent outages. This would be for our remote users, so they could go to a webpage (externally hosted, separate from our main site) and see that we are currently having problems with our network. What are other people out there using?


  • How do I configure a swap partition using swapspace

    - by jcalfee314
    I finally have the swapspace project installed and running (via init.d). The purpose is to have a dynamically resizing swap partition. I'm clueless, however, on how to use it. It has good documentation but just does not go into that last step: how do I configure a swap partition using swapspace? The process is probably the same for any third-party program that would provide a swap space implementation to the kernel. I know this was intended to run as a daemon because the project provides an init.d script.
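
    A note on what to expect, based on how the swapspace package works (worth verifying against its docs): it manages swap files that it creates and removes on its own, typically under /var/lib/swapspace, rather than resizing a swap partition, so after starting the daemon there is nothing per-partition to configure. A quick way to watch it do its job:

        # start the daemon via its init script, then watch the swap table;
        # swapspace adds and removes swap files as memory pressure changes
        sudo /etc/init.d/swapspace start
        swapon -s
        free -m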


  • Stopping/Starting windows services

    - by Geek
    I have four Windows services which start up automatically when the machine starts. Thereafter, I want to restart those services every 8 hours, in a particular order: e.g. stop s1, s2, s3, s4 and then restart them in some other order like s4, s3, s2, s1. The condition is that I should wait for each service to stop completely before I stop the next one. I would want to write a .BAT file or some script. Is it possible to define a cron-like schedule for every 8 hours? This is not there in Advanced tasks. Can I do it using the Windows scheduler? Please suggest. Thanks in advance.


  • Apache Conf files: If Hostname=="Web4" Then Use This IP for VirtualHost

    - by jroberts
    I am getting ready to do a "spring cleaning" on the web heads at work. I would really like to put my config files into a git repo and use the same config files for all the web heads. This is a problem for the sites that are on port 443. Is there any way to do an if statement or something like that inside the conf file itself? I am trying to avoid writing a script to generate the conf files. Any ideas are greatly appreciated!! Thank you! Jeff
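
    One pattern that keeps everything but a single line identical across hosts (a sketch, assuming Apache 2.4's Define directive; the file paths and IP are made up): keep a tiny per-host file that defines a variable, and reference that variable from the shared, git-managed config.

        # /etc/apache2/host-ip.conf -- the only file that differs per web head
        Define HOST_IP 10.0.0.4

        # shared config, identical on every machine and kept in git
        Include /etc/apache2/host-ip.conf
        <VirtualHost ${HOST_IP}:443>
            ServerName www.example.com
            # ... SSL and site config ...
        </VirtualHost>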

