Search Results

Search found 23811 results on 953 pages for 'javax script'.

  • readlink: illegal option -- f

    - by Scott
    The script was working fine until recently, but for the past few days I have been getting this message when running the readlink -f "$0" command:

        readlink: illegal option -- f
        usage: readlink [-n] [file ...]

    I ran the following script to debug:

        #!/bin/sh
        DIR=`pwd`
        RLPATH=`which readlink`
        RLOUT=`readlink -f -- "${0}"`
        DIROUT=`dirname -- ${RLOUT}`
        echo "dir: ${DIR}"
        echo "path: ${PATH}"
        echo "path to readlink: ${RLPATH}"
        echo "readlink output: ${RLOUT}"
        echo "dirname output: ${DIROUT}"

    Output:

        # ./debug.sh
        readlink: illegal option -- f
        usage: readlink [-n] [file ...]
        usage: dirname string [...]
        dir: /home/svr
        path: /sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin
        path to readlink: /usr/bin/readlink
        readlink output:
        dirname output:

    What is wrong?
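
    The usage line above looks like a BSD-style readlink, which has no -f option. A minimal sketch of a workaround that avoids readlink -f altogether, assuming the script only needs its own absolute directory (the variable name is illustrative):

        #!/bin/sh
        # Resolve the absolute directory of this script without readlink -f,
        # which the BSD readlink shown in the output above does not support.
        SCRIPT_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd -P)
        echo "dir: ${SCRIPT_DIR}"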

  • Directory Not Found Error

    - by noobguy
    I am trying to verify Tails, and when I get to the command-prompt part of the verification some difficulties seem to have arisen. Below is the session:

        noob@noob-System-Product-Name:~$ cd [/media/noob/UUI]
        bash: cd: [/media/noob/UUI]: No such file or directory
        noob@noob-System-Product-Name:~$ gpg --keyid-format long --import tails-signing.key
        gpg: can't open `tails-signing.key': No such file or directory
        gpg: Total number processed: 0

    The same thing happens when I try it from the download directory:

        noob@noob-System-Product-Name:~$ cd [/home/noob/Downloads]
        bash: cd: [/home/noob/Downloads]: No such file or directory
        noob@noob-System-Product-Name:~$ gpg --keyid-format long --import tails-signing.key
        gpg: can't open `tails-signing.key': No such file or directory
        gpg: Total number processed: 0

    Any suggestions would be greatly appreciated.
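
    The square brackets in the transcript are the likely culprit: they are documentation notation, not shell syntax, so cd looks for a directory literally named [/media/noob/UUI]. A minimal sketch of the intended commands, using the paths from the question:

        # No brackets; the shell treats them as literal characters in the directory name
        cd /media/noob/UUI
        gpg --keyid-format long --import tails-signing.key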

  • What ports tend to be unfiltered by boneheaded firewalls?

    - by Reid
    Hi all, I like to be able to ssh into my server (shocking, I know). The problem comes when I'm traveling, where I face a variety of firewalls in hotels and other institutions, having a variety of configurations, sometimes quite boneheaded. I'd like to set up an sshd listening on a port that has a high probability of getting through this mess. Any suggestions? The sshd currently listens on a nonstandard (but < 1024) port to avoid script kiddies knocking on the door. This port is frequently blocked, as is the other nonstandard port where my IMAP server lives. I have services running on ports 25 and 80 but anything else is fair game. I was thinking 443 perhaps. Much appreciated! Reid
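
    A small sketch of the 443 idea, assuming a Debian-style sshd whose config already contains at least one Port line (multiple Port directives make sshd listen on all of them); paths and the service command may differ on other systems:

        # Add 443 as an additional listening port, then reload sshd
        echo "Port 443" | sudo tee -a /etc/ssh/sshd_config
        sudo service ssh restart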

  • Reverse WiFi to Broadcast connection coming from a USB device

    - by Daniel Clem
    I am using the app called ClockworkMod Tether. It connects using a script on the computer (command line: gksu ./run.sh). All my programs connect to the internet perfectly: Minitube, Midori, Transmission torrents. But the network manager does not show any connection, wired or wireless, so this may cause issues. What I want to do is take this connection and be able to share it some way, any way, over wireless. This Acer Timeline "Aspire 5810TZ" does have an Ethernet port, so wiring out to a router might be an option, but I would prefer to simply reverse the wireless card to broadcast out to about 2 or 3 devices. Is this possible? Yes, I have taken a look at the other questions already posted here, but the answers are a year old or older and not clear at all. I am moderately comfortable (4.5 out of 10) on the command line, but I pretty much need line-by-line directions for which commands are needed and in what order, etc.

    Edit: I have already tried the "left-click Network Manager, Create New Wireless Network" method. The network is created fine, and I am able to connect to it with a tablet, but I am unable to get an outside connection through it. I am using the "Shared to other computers" option (because DHCP doesn't seem to work) and WEP passphrase security. I get an IP address on the connected device, but as I said, it won't bring up any outside webpages or the like. So perhaps this is the wrong approach?
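
    If the tether shows up as an ordinary network interface, the usual way to share it over the hotspot is IP forwarding plus NAT. A rough sketch under the assumption that the tether appears as usb0 and the wireless card is wlan0 (check ifconfig while the tether script is running; the names may well differ):

        # Share the tethered interface (assumed usb0) out over the wireless card (assumed wlan0)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE
        sudo iptables -A FORWARD -i wlan0 -o usb0 -j ACCEPT
        sudo iptables -A FORWARD -i usb0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT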

  • Install full version of Unity on Chrome OS using crouton

    - by Sam Kong
    I have an Acer Chromebook. Using crouton, I installed Ubuntu (Unity) on it. I am pretty familiar with Ubuntu 12.04, but the installed chroot is a very minimal package set. Some fonts are missing, and although I manually installed the Korean language pack, the browser still can't display Korean characters. Is there a way to install the whole package set via crouton, like when you install Ubuntu 12.04 from the CD? Or is there a script that installs the missing packages on top of the bare Ubuntu? Thanks. Sam
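
    A hedged sketch of what usually pulls in the rest of a stock 12.04 desktop from inside the crouton chroot; the package names are the standard Ubuntu ones, and the exact selection (especially the Korean font package) may vary:

        # Inside the chroot: the full desktop task plus Korean language and font packages
        sudo apt-get update
        sudo apt-get install ubuntu-desktop language-pack-ko fonts-nanum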

  • What do I need to know to design a language and write an interpreter for it?

    - by alFReD NSH
    I know this question has been asked before, and there are even thousands of books and articles about it. But the problem is that there are too many, and I don't know which are good enough. I have to design a language and write an interpreter for it. The base language is JavaScript (using Node.js), but it's OK if the compiler is written in another language that I can use from Node. I did some research on compiler-compilers in JS; there are Jison (a Bison implementation in JS), Waxeye and PEG.js. I decided to give Jison a try because of its popularity and because it is used by CoffeeScript, so it should be able to cover my language too. Its grammar definition syntax is similar to Bison's, but when I tried to read the Bison manual it seemed very hard to understand, and I think that's because I'm missing a lot of the background; for example, I don't know what formal language theory is. I am experienced in JavaScript (I'm more talented in JS than most average programmers) and also know basic C and C++ (not much experience, but I can write working code for basic things). I haven't had any formal education, so I may not be familiar with some software engineering and computer science principles, though every day I try to read a lot of articles and improve. So I'm asking if you know any good book or article that can help me. Please also explain why the resource you're suggesting is good.

    Update: The language I'm trying to create is not really complicated. All it has is expressions (with or without units), comparisons and logical operators. There are no functions, loops, etc. The goal is to create a language that non-programmers can easily learn, and to use it for customized validations and calculations.

  • Apache 2.4 and PHP 5.4 getting connection reset errors in the browser

    - by zuallauz
    Over the weekend I upgraded my development web server to Apache 2.4 and PHP 5.4. My web application, which was previously working great on Apache 2.2 and PHP 5.3, now gets messages saying "the connection was reset" in Firefox (see screenshot). I am connecting to the Linux machine over the local LAN. I'm assuming it has something to do with the new version of Apache or PHP, or the new LAMP stack which I downloaded from BitNami. It seems to happen every 5-10 requests, and it is more likely to be triggered when I send a POST request from a page. Is it timing out the script or something? These are just basic dynamic pages I'm loading, and they worked perfectly on Apache 2.2 and PHP 5.3. Here are my httpd.conf and php.ini if they offer any clues. Any ideas? Any help much appreciated.

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have differing opinions on. Currently we have a git repository in which we keep all of our cronjobs. There are about 20 crons, and they are not really related except that they are all small Python scripts and essential for some activity. We are using a fabric.py file to deploy and a requirements.txt file to manage the requirements for all of the scripts. Our question is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? Keeping them in one repository makes it easier to deploy them onto one server, and we can use just one cron file for all the scripts. However, this feels wrong, as the 20 cronjobs are not logically related. Additionally, with one requirements.txt file for all the scripts, it's hard to figure out the dependencies of a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large, so that solution seems like overkill. A related question is: do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of. In short, our main question is: do we keep them all closely bundled in one repository, or do we separate them out into their own repositories, each with its own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue?
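
    One common middle ground for the crontab part of the question: keep whatever repository layout you choose, but give each job its own file under /etc/cron.d, so installing one never overwrites another. A sketch, with an illustrative file name, schedule, user and path:

        # /etc/cron.d/fetch-reports  -- one file per job, shipped alongside its script;
        # cron.d entries need a user field, unlike a personal crontab
        */15 * * * *  deploy  /opt/cronjobs/fetch_reports/run.py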

  • How to update Adobe software unattended?

    - by jubel
    I would like to use unattended-upgrades to update the Adobe Reader, Flash Player and everything else from the Canonical partner repository. Therefore, I added the following to /etc/apt/apt.conf.d/50unattended-upgrades:

        Unattended-Upgrade::Allowed-Origins {
                "${distro_id} ${distro_codename}-security";
                "${distro_id} ${distro_codename}-updates";
                "Canonical ${distro_codename}";
        //      "${distro_id} ${distro_codename}-proposed";
        //      "${distro_id} ${distro_codename}-backports";
        };

    sudo unattended-upgrade --dry-run -d says:

        Initial blacklisted packages:
        Starting unattended upgrades script
        Allowed origins are: ['o=Ubuntu,a=oneiric-security', 'o=Ubuntu,a=oneiric-updates', 'o=Canonical,a=oneiric']
        Checking: acroread-common (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: adobe-flash-properties-gtk (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: adobe-flashplugin (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: adobereader-deu (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: handbrake-cli (["<Origin component:'main' archive:'oneiric' origin:'LP-PPA-stebbins-handbrake-snapshots' label:'HandBrake Snapshots' site:'ppa.launchpad.net' isTrusted:True>"])
        Checking: handbrake-gtk (["<Origin component:'main' archive:'oneiric' origin:'LP-PPA-stebbins-handbrake-snapshots' label:'HandBrake Snapshots' site:'ppa.launchpad.net' isTrusted:True>"])
        Checking: sopcast-player (["<Origin component:'main' archive:'oneiric' origin:'LP-PPA-ferramroberto-sopcast' label:'LffL Sopcast' site:'ppa.launchpad.net' isTrusted:True>"])
        pkgs that look like they should be upgraded:
        Fetched 0 B in 0s (0 B/s)
        blacklist: []
        InstCount=0 DelCount=0 BrokenCout=0
        No packages found that can be upgraded unattended

    And it won't update them. How can I update the third-party software automatically?
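
    The debug output hints at why the partner packages are skipped: their Origin and Suite fields are empty, so the pattern 'o=Canonical,a=oneiric' never matches anything. If the installed unattended-upgrades is new enough to support Origins-Pattern (older releases may not be; this is an assumption to verify against your version's documentation), matching on the site is one possible workaround:

        // /etc/apt/apt.conf.d/51partner-unattended -- only if this version supports Origins-Pattern
        Unattended-Upgrade::Origins-Pattern {
                "site=archive.canonical.com";
        };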

  • PDF with 200 DPI JPEGs looks blurry in viewer

    - by queueoverflow
    I often scan images, usually as 200 DPI JPEG files. If I look at them in Gwenview (a Linux image viewer) the text is readable, and at 100% zoom it looks perfectly sharp. When I convert them into a PDF using convert from the ImageMagick package and open that in Okular or Evince (Linux document viewers), the text looks blurry. I wrote a little script that creates an HTML page with all the images and opens it in the browser; Firefox scales the images a little more sharply than the PDF viewers do (screenshots: Firefox showing the JPEGs vs. Evince showing the PDF). Is there any way to get the PDF viewers to scale the images sharply?
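
    One thing worth trying, assuming the ImageMagick convert route stays in the workflow: tell convert the scans' true resolution so the PDF pages get the right physical size, which keeps viewers from resampling the images at typical zoom levels. File names below are placeholders:

        # Embed the JPEGs at their real 200 DPI so each PDF page matches the scanned paper size
        convert -units PixelsPerInch -density 200 scan-*.jpg scans.pdf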

  • webm html5 videos lose connection with apache server

    - by Jizbo Jonez
    WebM HTML5 videos that are played through a domain on my server sometimes lose the connection. A video that is playing will start to buffer and then stop partway through, with the message "Video playback aborted due to a network error." displayed in the HTML5 video player. I am delivering the WebM videos via a PHP script on a LAMP server. There don't seem to be any errors in the server logs. Are there any php.ini or httpd.conf settings that I need to change? I recently set KeepAlive to On in httpd.conf; could that be causing this?

  • How does Google Web Starter Kit serve adaptive images for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile, and I wanted to know what causes this. As far as I know this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. In the debug information when debugging on a device, all images become 231 B no matter how large they actually are (when debugging on the desktop the sizes vary). I tried Google Web Starter Kit (https://github.com/google/web-starter-kit) recently. Its tooling uses Ruby, Node.js, Sass and Gulp to help you 'build' a website. Before the build you get automatic reload because the Gulp script watches all files for you; when you build, it runs various tools to minify HTML and CSS and compress images. According to https://developers.google.com/web/fundamentals/tools/build/build_site, gulp-imagemin is used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve automatically resized images on mobile? And why is the size 231 B? Is this related to my screen size?

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild a huge old website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses:

    Ajax (client side with prototype.js)
    ASP (VBScript server side)
    SQL Server 2005
    IIS 7 as the web server

    The website uses hundreds of stored procedures, and the requests are made by Ajax calls to a single ASP page that contains a huge select case. A short example:

    JavaScript + Prototype:

        var data = { action: 'NEWS',
                     callback: 'doNews',
                     param1: $('text_example').value,
                     ......: ..........};
        AjaxGet(data); // perform a call using another function + prototype

    Server-side ASP:

        <%
        ......
        select case request("Action")
          case "NEWS"
            With cmmDB
              .ActiveConnection = Conn
              .CommandText = "sp_NEWS_TO_CALL_for_example"
              .CommandType = adCmdStoredProc
              Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput,6)
              Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
              ' ........ ' can be more parameters
              .Parameters.Append par0DB
              .Parameters.Append par1DB
              par0DB.Value = request("Param1")
              par1DB.Value = request(".....")
              set rs=cmmDB.execute
              RecodsetToJSON rs, jsa ' create JSON response using a sub
            End With
        ....
        %>

    So as you can see, I have one ASP page with a lot of cases, and this page answers all the Ajax requests on the site. My questions are: Instead of having many cases, is it possible to write dynamic VB code that parses the Ajax request and dynamically builds the call to the desired stored procedure (including the parameters passed from JS)? What is the best approach to handling situations like this while taking advantage of .NET plus Prototype or jQuery? How do the big sites handle situations like this; do they create one page per request? Thanks in advance for suggestions, directions and tips.

  • How to OCR PDF files with Old German Gothic (Fraktur) text?

    - by Jason
    I have been successfully using Adobe Acrobat X to OCR many scanned documents that I use for my research. However, I have begun studying old German documents which use the Fraktur script, also known as Gothic. SuperUser won't let me post an image of it, but you can find examples of what it looks like in the Wikipedia article (linked above). I have read about special programs which OCR such text, such as ABBYY FineReader für Fraktur, but first, it runs only on Windows (and I use a Mac), and second, I'd like to find a Fraktur plugin for Acrobat to fit into my existing workflow. Are there any Fraktur OCR plugins for Acrobat? More generally, where should one look for Acrobat OCR plugins?

  • linux/shell: change a file's modify timestamp relatively?

    - by index
    My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI, and it also timestamps those files. Unfortunately, during a trip to another timezone several cameras were used, one of which did not have the correct time zone set, leaving a metadata mess. Now I would like to correct this. Proposed algorithm:

    1. read the file's modify date
    2. add a delta, i.e. hhmmss (preferred: change the timezone)
    3. write the new timestamp

    Unless someone knows a tool or a combination of tools that does the trick directly, maybe one could simplify the calculation using epoch time (seconds since the epoch) and whip up a shell script. Any help appreciated!
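
    A minimal sketch of the epoch-time approach described in the list above, assuming GNU date and touch are available; the +2 hour delta and the file globs are just examples:

        #!/bin/sh
        # Shift the mtime of the affected files by a fixed delta (here +2 hours)
        delta=$((2 * 3600))
        for f in IMG_*.JPG MVI_*.AVI; do
            old=$(date -r "$f" +%s)                 # current mtime as seconds since the epoch
            touch -m -d "@$((old + delta))" "$f"    # write the shifted timestamp back
        done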

  • Just bought a Logitech G35 headset, want to enable/disable the headset in one action

    - by Roy Rico
    Hello, I just bought the Logitech G35 USB headset and want to enable/disable the headset in one action. I have Windows 7 64-bit, and currently, to switch from the headset to my speakers (or vice versa) without unplugging the headset, I have to go through these steps:

    1. right-click the sound icon
    2. select "Sounds"
    3. select the "Playback" tab
    4. right-click the headset and hit enable
    5. right-click the speakers and hit disable

    The same steps apply when switching back. I have to disable the devices because not all of the programs that are outputting sound will switch immediately if I just set either device as the default. I am looking for this to be a single action (either a batch script or a Windows shortcut to some action with parameters?). Thanks!

  • Apache configuration to accept all data

    - by ServerDown
    Hi, I have Apache running on port 7979 to talk to a device that sends data to the web server; PHP scripts will later process the data and send a reply XML. The problem is that the device sends requests like this:

        POST HTTP/1.1
        Content-Type:text/xml
        Content-Length:369

    followed by the XML. When Apache sees this it returns a 400 error. Since the device cannot be changed, is there any way to accept the full data sent from the device and write it to some log? Currently Apache simply keeps sending 400 errors back. If there were a way to log the entire XML, or to create some custom handler for the 400 error, then the XML could be read by a PHP script. Looking forward to solutions.
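
    Apache rejects the request before any handler runs, presumably because the request line has no URI, so a PHP handler never sees the body. One hedged way to at least capture the payload is to point the device at a dumb listener instead of Apache; the port and log path below are placeholders:

        # Log the raw bytes the device sends, with no HTTP parsing at all (socat must be installed)
        socat -u TCP-LISTEN:7979,reuseaddr,fork OPEN:/var/log/device-raw.log,creat,append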

  • Get the "source network address" in Event ID 529 audit entries on Windows XP

    - by Make it useful Keep it simple
    On Windows Server 2003, when an Event 529 (logon failure) occurs with a logon type of 10 (remote logon), the source network IP address is recorded in the event log. On a Windows XP machine, this (and some other details) is omitted. If a bot is trying to brute-force RDP (some of my XP machines are, and need to be, exposed on a public IP address), I cannot see the originating IP address, so I don't know what to block (with a script I run every few minutes). The DC does not log this detail either when the logon attempt is to the client XP machine and the DC is only asked to authenticate the credentials. Any help getting this detail into the log would be appreciated.

  • create a .deb Package from scripts or binaries

    - by tdeutsch
    I was looking for a simple way to create .deb packages for things which have no source code to compile (configs, shell scripts, proprietary software). This was quite a problem, because most packaging tutorials assume you have a source tarball you want to compile. Then I found this short tutorial (German). Afterwards, I wrote a small script to create a simple repository, like this:

        rm /export/my-repository/repository/*
        cd /home/tdeutsch/deb-pkg
        for i in $(ls | grep my); do dpkg -b ./$i /export/my-repository/repository/$i.deb; done
        cd /export/avanon-repository/repository
        gpg --armor --export "My Package Signing Key" > PublicKey
        apt-ftparchive packages ./ | gzip > Packages.gz
        apt-ftparchive packages ./ > Packages
        apt-ftparchive release ./ > /tmp/Release.tmp; mv /tmp/Release.tmp Release
        gpg --output Release.gpg -ba Release

    I added the key to the apt keyring and included the source like this:

        deb http://my.default.com/my-repository/ ./

    It looks like the repo itself is working well (I ran into some problems; to fix them I needed to add the Packages file twice and use the temp-file workaround for the Release file). I also put some downloaded .deb files into the repo, and they seem to work without problems. But my self-created packages don't... When I do sudo apt-get update, they cause errors like this:

        E: Problem parsing dependency Depends
        E: Error occurred while processing my-printerconf (NewVersion2)
        E: Problem with MergeList /var/lib/apt/lists/my.default.com_my-repository_._Packages
        E: The package lists or status file could not be parsed or opened.

    Does anyone have an idea what I did wrong?
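
    The "Problem parsing dependency Depends" error usually points at the DEBIAN/control file of one of the self-built packages (an empty or malformed Depends line is enough), and the version string shown, NewVersion2, may also be a problem since Debian versions are expected to start with a digit. A minimal control file sketch for comparison; all values are placeholders, and the Depends field is simply omitted when there are no dependencies:

        Package: my-printerconf
        Version: 1.0-1
        Architecture: all
        Maintainer: Your Name <you@example.com>
        Description: Printer configuration files
         Longer description of the package goes here.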

  • Simple monitoring of a Raspberry Pi powered screen - Part 2

    - by Chris Houston
    If you have read my previous blog post, Raspberry Pi entrance sign backed by Umbraco - Part 1, which describes how we used a Raspberry Pi to drive an entrance sign for QV Offices, you will have seen that I mentioned a follow-up post about monitoring the sign. As the sign is mounted in the entrance of the building on the ground floor and the reception is on the 1st floor, if there was a fault of any kind showing on the screen, the first person to see it was inevitably one of QV Offices' clients as they walked into the building. Although the QV Offices team were able to check the Umbraco website address that the sign uses, this did not always mean that everything was working as expected. We noticed a couple of times that the sign had WiFi issues (it is now hard-wired), and this caused the Chromium browser to render a 404 error when it tried to refresh the screen.

    The simple monitoring solution: We added the following line to our refresh script, so that after the sign had been refreshed a screenshot of the Raspberry Pi would be taken:

        import -display :0 -window root ~/screenshot.jpg

    Finally, we wrote a small crontab task that runs on a QV Offices Mac, grabs this screenshot and saves it on the desktop. Admittedly we have used a package that is not mega secure, but in reality this is an internal system that only runs an office sign, so we are not too concerned about it being hacked.

        */5 * * * * /usr/local/bin/sshpass -p 'password' /usr/bin/scp [email protected]:screenshot.jpg Desktop/QVScreenShot.jpg

    As the file icon updates when the image changes, this gives a quick visual indication of the status of the sign. If for some reason the icon does not look correct, the QV Offices administrator can just click on the file to see the exact image currently displayed on the sign. Sometimes a quick and easy solution is better than a more complex and expensive one.

  • My PowerShell functions do not appear to be registered

    - by Frank
    Hi there, I have a .ps1 script in which I define two functions like this:

        function Invoke-Sql([string]$query)
        {
            Invoke-Sqlcmd -ServerInstance $Server -Database $DB -User $User -Password $Password -Query $query
        }

        function Get-Queued
        {
            Invoke-Sql "Select * From Comment where AwaitsModeration = 1"
        }

    I then call the .ps1 file by typing it in (it's in a folder on the path, and autocompletion works). However, I cannot start using the functions. I am confused, because when I copy/paste the functions into the console, all is fine and they work. I also have a function defined in my profile, and it works. Where is my thinking wrong? Why doesn't what I'm trying to do work?

  • Is SYN flooding still a threat?

    - by Rob
    Well, recently I've been reading about different denial-of-service methods. One method that stuck out was SYN flooding. I'm a member of some not-so-nice forums, and someone was selling a Python script that would DoS a server using SYN packets with a spoofed IP address. However, if you send a SYN packet to a server with a spoofed IP address, the target server returns the SYN/ACK packet to the host that was spoofed. In that case, wouldn't the spoofed host return an RST packet, thus negating the 75-second wait and ultimately causing the attempt to DoS the server to fail?

  • How to convert Windows filenames (from a checksums.md5) to *nix notation so I can use it on my shell with md5sum?

    - by Somebody still uses you MS-DOS
    I have some checksums.md5 verification files from an NTFS external drive, but they use Windows notation: \ instead of /, spaces between file names (not escaped), and reserved shell characters (like (, &, and ', to name a few). The checksums.md5 has a bunch of checksums and filenames:

        ;Created by program
        ;2010
        f12f75c1f2d1a658dc32ca6ef9ef3ffc *My Windows & Files (2010)\[bak]\testing.wmv
        53445e1a0821b790872e60bd7a166887 *My Windows Files' 2 (2012)\[bak]\testing.wmv
        53445e1a0821b790872e60bd7a166887 *My Windows Files ˜nicóde (2012)\[bak]\testing.wmv
        ;Finished

    I want to use this checksums.md5 to verify the files that I've copied to my machine, but I'm on Linux, so I need to convert the names inside checksums.md5 from Windows to Linux notation to use the md5sum utility from the shell. The first line in my example would become:

        f12f75c1f2d1a658dc32ca6ef9ef3ffc My\ Windows\ \&\ Files\ \(2010\)/\[bak\]/testing.wmv

    Is there some application for this (converting a file listing from Windows cmd notation to Linux shell notation), or will I need to create a bash script using sed that just "replaces" what is "wrong" with the filenames?
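
    A rough sketch of the sed route, assuming the checksum file is plain ASCII/UTF-8: md5sum -c reads the file names literally from the list, so no shell escaping is needed; only the ';' comment lines, possible CR line endings and the backslash path separators have to be translated:

        # Drop comment lines, strip CR, turn '\' into '/', then verify straight from stdin
        sed -e '/^;/d' -e 's/\r$//' -e 's|\\|/|g' checksums.md5 | md5sum -c -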

  • compile error in Ubuntu 10

    - by yozloy
    Hey guys, I have a VPS which runs SolusVM. I'm now trying to install Ruby 1.9.2 on it. I followed this guide; after I ran these commands:

        apt-get update
        apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev

    I got the error below:

        root@makserver:/usr/local/src/ruby-1.9.2-p0# apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc6
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc6
        1 upgraded, 0 newly installed, 0 to remove and 80 not upgraded.
        Need to get 0B/4252kB of archives.
        After this operation, 4096B disk space will be freed.
        Do you want to continue [Y/n]? y
        debconf: apt-extracttemplates failed: Bad file descriptor
        (Reading database ... 21594 files and directories currently installed.)
        Preparing to replace libc6 2.11.1-0ubuntu7.2 (using .../libc6_2.11.1-0ubuntu7.8_amd64.deb) ...
        open2: fork failed: Cannot allocate memory at /usr/share/perl5/Debconf/ConfModule.pm line 59
        dpkg: error processing /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 12
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Can anybody tell me how I can correct this? Thanks.
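
    The "fork failed: Cannot allocate memory" line suggests the VPS simply ran out of memory during the install rather than anything being wrong with the packages. If the virtualization allows swap (OpenVZ containers, common with SolusVM, often do not), a temporary swap file is one hedged way to get dpkg through; the 512 MB size is arbitrary:

        # Add temporary swap, then retry the failed install
        dd if=/dev/zero of=/swapfile bs=1M count=512
        mkswap /swapfile
        swapon /swapfile
        apt-get -f install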

  • Use cURL with multiple POSTs

    - by Austin
    I'm trying to use cURL to download the contents of webpages that require going through forms to get to. In a browser it looks something like this:

    1. log in using a POST
    2. pick which page to go to using another POST
    3. pick another page... using a POST
    4. and so on, until I get to the page I want, then download all the text files linked on that page

    I am attempting to do this using a bash script and some loops, with the values that change for each POST. My problem is: how do I do multiple POSTs with cURL? Do cookies have to be involved? FYI the website is http://metagenomics.nmpdr.org/ (MG-RAST).
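
    A minimal sketch of the cookie-jar approach the list above implies: cookies are usually what carries the session from one POST to the next, and curl can both save and replay them. The URLs and form fields below are placeholders, not MG-RAST's real ones:

        # Log in and save the session cookie, then replay it on each later request
        curl -c cookies.txt -d "user=me" -d "pass=secret" "http://example.org/login"
        curl -b cookies.txt -c cookies.txt -d "page=results" "http://example.org/step2"
        curl -b cookies.txt -O "http://example.org/files/data.txt"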
