Search Results

Search found 4755 results on 191 pages for 'scripting dictionary'.

Page 55/191

  • VB6 Parser/Lexer/Scripter

    - by rlb.usa
    I've got a game in VB6 and it works great and all, but I have been toying with the idea of creating a scripting engine. I'm thinking I'd like VB6 to read in flat text script files for me and then lex/parse/execute them. I have good programming experience, and I've built a simple C compiler as well as a LOGO emulator before. My question is: are there any tools I can use, like Lex/Yacc/Bison, to help me? How should I approach this problem with regard to lexing, parsing, and feeding the commands back to VB6 so I can handle them? Is this a BAD IDEA in the sense that there are too many obstacles in the way (for example, building Minesweeper in assembly, though not impossible, is very difficult, and a bad idea)?

    Read the article

  • Win32 script environment for testing http redirects?

    - by Anders Lindahl
    The past few days I've been working on setting up an Apache server on Windows. The server is supposed to host several .htaccess files, each redirecting (or, in some cases, proxying) to different hosts. I want to create tests for these redirections, and the solution I'm currently considering is a CGI script running on the same server, sending GET requests to it and verifying that it gets the correct redirection headers back. A scripting solution (VBScript/JScript) seems worth exploring, but so far I've only managed to rule out Microsoft.XMLHTTP because it follows the redirect "behind the scenes". Are there any libraries or other solutions already present on a reasonably standard Windows Server that can do this kind of low-level HTTP work? If not, any other suggestions for simple environments to set up for verifying redirects?

    Read the article

  • Shell script to count files, then remove oldest files

    - by Nic Hubbard
    I am new to shell scripting, so I need some help here. I have a directory that fills up with backups. If I have more than 10 backup files, I would like to remove the oldest files, so that the 10 newest backup files are the only ones that are left. So far, I know how to count the files, which seems easy enough, but how do I then remove the oldest files, if the count is over 10? if [ls /backups | wc -l > 10] then echo "More than 10" fi
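
    One hedged sketch of the usual approach, assuming the backup file names contain no spaces or newlines and that modification time is a fair proxy for age: list the files newest-first with ls -t, skip the first 10, and delete the rest.

        cd /backups || exit 1
        if [ "$(ls | wc -l)" -gt 10 ]; then
            # ls -t lists newest first; tail -n +11 yields everything past the 10 newest
            ls -t | tail -n +11 | xargs rm -f
        fi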

    Read the article

  • Validating parameters to a bash script

    - by nickf
    I'm a total newbie to bash scripting, but I came up with a basic script to help automate the process of removing a number of folders as they become unneeded.

        #!/bin/bash
        rm -rf ~/myfolder1/$1/anotherfolder
        rm -rf ~/myfolder2/$1/yetanotherfolder
        rm -rf ~/myfolder3/$1/thisisafolder

    This is invoked like so: ./myscript.sh <{id-number}> The problem is that if you forget to type in the id-number (as I did just then), it could potentially delete a lot of things that you really don't want deleted. Is there a way to add some form of validation to the command-line parameters? In my case, it'd be good to check that a) there is one parameter, b) it's numerical, and c) the folder exists, before continuing with the script.
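
    A minimal validation sketch along those lines (the exact folder layout is taken from the question and treated as an assumption):

        #!/bin/bash
        # a) exactly one argument
        if [ $# -ne 1 ]; then
            echo "Usage: $0 <id-number>" >&2
            exit 1
        fi
        # b) purely numeric
        case "$1" in
            *[!0-9]*|'') echo "id-number must be numeric" >&2; exit 1 ;;
        esac
        # c) the folder actually exists before anything is removed
        if [ ! -d ~/myfolder1/"$1" ]; then
            echo "~/myfolder1/$1 does not exist" >&2
            exit 1
        fi
        rm -rf ~/myfolder1/"$1"/anotherfolder
        rm -rf ~/myfolder2/"$1"/yetanotherfolder
        rm -rf ~/myfolder3/"$1"/thisisafolder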

    Read the article

  • Using an ActiveX object from an Outlook hosted webpage - possible?

    - by Nic Wise
    I'm trying to do the following: we have an Outlook plugin, written in .NET (and C++). It does various things, and is manually installed on the end users' machines (usually via AD deployment or similar). We are changing our search to use a webpage-based search, but from within Outlook. That part is OK; however, we want to communicate from the webpage to the surrounding Outlook application. We can call into Outlook by exposing an ActiveX object from our plugin, but we get security warnings, even if it's signed and marked as safe for scripting. Is this even possible? Has anyone done it? Anyone have a better way of doing it? We only need to pass a small amount of data (a message id), and only from the webpage to Outlook. [update]: This is the error: "automation server can't create object". We can get around it a bit by turning things off in IE, but that's not a good way to do it! Thanks

    Read the article

  • What are the reasons to use dos batch programs in Windows?

    - by DVK
    Question

    What would be a good (ideally, technical) reason to ever program some non-trivial task in DOS batch language on a modern Windows system, as opposed to downloading either PowerShell or ActiveState Perl? To be more specific, I make the following two assumptions for the duration of this question: anyone technical enough to be able to write a medium-complexity batch script is technical enough to install either of the scripting interpreters. Neither of those two presents enough of a learning curve for basic batch-replacement tasks that said curve would outweigh the pain of doing any remotely non-trivial task in batch.

    Notes

    "You need a batch program for autoexec.bat" is not a valid reason. Your autoexec.bat may consist of simply calling a non-batch script. If you disagree with either of my 2 assumptions above, that's fine, and I may be wrong. But my question is specifically "assuming those 2 assumptions are correct, what would be the reason to still stick with batch?" If it makes it easier to suspend disbelief (in case you disagree with me), add in a 3rd assumption that the question is limited to people who already possess at least some modicum of PowerShell or Perl experience. To re-iterate - this is not meant to be a subjective question about how easy it is to learn PSh or ASPerl compared to doing advanced batch coding. That is a separate question that is too subjective to be bothered with in this post.

    Background: I used to do some fairly complicated batch programming back in the elder days, and remember batch as one of the worst possible programming languages I had encountered. The idea for this question came after seeing a bunch of batch questions on SO, and trying to grok the answer of one of them out of sheer curiosity and giving up in pain after a minute, exclaiming mentally "why would anyone go through this pain instead of doing that in 1 line of Perl?" :)

    My own plausible answer

    I assume there may be some unlikely DOS-compatible system which has a DOS interpreter but no compatible PowerShell or Perl... I'm not aware of one, but it's not completely impossible.

    Read the article

  • Add/remove xml tags using a bash script.

    - by sixtyfootersdude
    I have an XML file that I want to configure using a bash script. For example, if I had this XML (confidential info removed):

        <a>
          <b>
            <bb>
              <yyy> Bla </yyy>
            </bb>
          </b>
          <c>
            <cc> Something </cc>
          </c>
          <d> bla </d>
        </a>

    I would like to write a bash script that will remove section <b> (or comment it out) but keep the rest of the XML intact. I am pretty new to the whole scripting thing, and I was wondering if anyone could give me a hint as to what I should look into. I was thinking that sed could be used, except sed is a line editor. I think it would be easy to remove the <b> tags; however, I am unsure whether sed would be able to remove all the text between the <b> tags. I will also need to write a script to add back the deleted section.
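
    A hedged sketch of one approach: as long as the opening and closing tags sit on their own lines, sed can delete (or comment out) the whole range between them. A real XML tool such as xmlstarlet is the safer choice if the formatting isn't guaranteed. The file name config.xml is an assumption.

        # delete everything from the line containing <b> to the line containing </b>
        sed '/<b>/,/<\/b>/d' config.xml > config.stripped.xml

        # or wrap the same range in an XML comment instead of deleting it
        sed -e 's/.*<b>.*/<!--&/' -e 's/.*<\/b>.*/&-->/' config.xml > config.commented.xml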

    Read the article

  • CentOS - Convert Each WAV File to MP3/OGG

    - by Benny
    I am trying to build a script (I'm pretty new to linux scripting) and I can't seem to figure out why I'm not able to run this script. If I keep the header (#!/bin/sh) in, I get the following: -bash: /tmp/ConvertAndUpdate.sh: /bin/sh^M: bad interpreter: No such file or directory If I take it out, I get the following: /tmp/ConvertAndUpdate.sh: line 2: syntax error near unexpected token `do' Any ideas? Here is the full script:

        #!/bin/sh
        for file in *.wav; do
          mp3=$(basename "$file" .wav).mp3;
          #echo $mp3
          nice lame -b 16 -m m -q 9 --resample 8 "$file" "$mp3";
          touch --reference "$file" "$mp3";
          chown apache.apache "$mp3";
          chmod 600 "$mp3";
          rm -f "$file";
          mv "$file" /converted;
          sql="UPDATE recordings SET IsReady=1 WHERE Filename='${file%.*}'"
          echo $sql | mysql --user=me --password=pasword Recordings
          #echo $sql
        done
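
    The /bin/sh^M in the first error message suggests the script has Windows (CRLF) line endings, which would also explain the `do' syntax error once the shebang is removed. A hedged sketch of the usual fix, assuming the file was edited or uploaded from Windows:

        # strip the carriage returns in place, then run the script again
        sed -i 's/\r$//' /tmp/ConvertAndUpdate.sh
        # (dos2unix /tmp/ConvertAndUpdate.sh does the same thing where it is installed)
        sh /tmp/ConvertAndUpdate.sh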

    Read the article

  • Processing a log to fix a malformed IP address ?.?.?.x

    - by skymook
    I would like to replace the first character 'x' with the number '7' on every line of a log file using a shell script. Example of the log file:

        216.129.119.x [01/Mar/2010:00:25:20 +0100] "GET /etc/....
        74.131.77.x [01/Mar/2010:00:25:37 +0100] "GET /etc/....
        222.168.17.x [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    My humble beginnings...

        #!/bin/bash
        echo Starting script...
        cd /Users/me/logs/
        gzip -d /Users/me/logs/access.log.gz
        echo Files unzipped...
        echo I'm totally lost here to process the log file and save it back to hd...
        exit 0

    Why is the log file IP malformed like this? My web provider (1and1) has decided not to store IP addresses, so they have replaced the last number with the character 'x'. They told me it was a new requirement by 'law'. I personally think that is BS, but that would take us off topic. I want to process these log files with AWStats, so I need an IP address that is not malformed. I want to replace the x with a 7, like so:

        216.129.119.7 [01/Mar/2010:00:25:20 +0100] "GET /etc/....
        74.131.77.7 [01/Mar/2010:00:25:37 +0100] "GET /etc/....
        222.168.17.7 [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    Not perfect, I know, but at least I can process the files, and I can still gain a lot of useful information like country, number of visitors, etc. The log files are 200MB each, so I thought that a shell script is the way to go because I can run it rapidly on my MacBook Pro locally. Unfortunately, I know very little about shell scripting, and my JavaScript skills are not going to cut it this time. I appreciate your help.
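
    A hedged sketch of the missing middle step, assuming the 'x' is always the fourth octet of the first field; the file names access.log and access.fixed.log are taken from the gzip line above and otherwise assumed:

        # rewrite the anonymised octet and write the result next to the original
        sed 's/^\([0-9]*\.[0-9]*\.[0-9]*\.\)x /\17 /' access.log > access.fixed.log

        # or edit in place (BSD/macOS sed wants an explicit backup suffix, here empty)
        sed -i '' 's/^\([0-9]*\.[0-9]*\.[0-9]*\.\)x /\17 /' access.log

    AWStats can then be pointed at the rewritten file.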

    Read the article

  • I write barely functional scripts that tend to not be reusable and make the baby jesus cry. Please h

    - by maxxpower
    I received a request to add around 100 users to a Linux box. The users are already in LDAP, so I can't just use newusers and point it at a text file. Another admin is taking care of the LDAP piece, so all I have to do is create all the home directories and chown them to the correct user once he adds the users to the box. Creating the directories isn't a problem, but I'd like a more elegant script for chowning them to the correct user. What I have currently basically looks like:

        chown -R testuser1 testgroup1 /home/tetsuser1; chown -R testuser2 testgroup2 /home/testgroup2; chown -R testsuser3 testgroup1 /home/testuser3

    Basically I took the request with the user name and group name, popped it into Excel, added a column of "chown -R" to the front, then added a column of "/", copied and pasted the username column after it, added a column of ";", and dragged it down to the second-to-last row. Popped it into Notepad, ran some quick find-and-replaces, and in less than a minute I have a completed request and a sad empty feeling. I know this was a really ghetto method, and I'm trying to get away from using Excel to avoid learning new scripting techniques, so here's my real question. tl;dr: I made 100 home directories and chowned them to the correct users, but it was ugly. Actual question below. You have a file named idlist that looks like this (only with say 1000 users and real usernames and groups):

        testuser1 testgroup1
        testuser2 testgroup2
        testuser3 testgroup1

    Write a script that creates home directories for all the users and chowns the created directories to the correct user and group. To make the directories I used the following (feel free to flame/correct me on this as well):

        var=`cut -f1 -d" " idlist`
        mkdir $var

    (I used backticks, not apostrophes, around the cut command.)
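
    A minimal sketch of the loop the question is asking for, assuming idlist holds one "username group" pair per line, separated by whitespace:

        #!/bin/bash
        # read each "user group" pair, create the home directory, and hand it over
        while read -r user group; do
            [ -z "$user" ] && continue          # skip blank lines
            mkdir -p "/home/$user"
            chown -R "$user:$group" "/home/$user"
        done < idlist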

    Read the article

  • restrict script inside iframe to run only within pages of same top-level domain?

    - by Justin Grant
    I'd like to enforce a requirement that client script inside a page (which in turn is loaded inside an iframe of another page) will only run when the parent page is on the same top-level domain as the framed page (although it may be on another hostname in that domain). Is this do-able? I assume that the easy solution of looking at top.location.host won't be available due to cross-site scripting limitations, but I'm wondering if other javascript hackery could suffice. Constraints on any potential solution include: I need to be able to run XmlHttpRequest calls inside the child page, and I need to validate that the hostname is in the same domain before I make those calls (this makes a document.domain solution challenging because, AFAIK, setting document.domain disables the ability to make XmlHttpRequest calls). I can control client-side script and HTML on both the parent and the child (and I can create new pages if needed), but I can't make any server-side code changes. I can't simulate the above via server-side calls or proxies, because the child page's hostname uses a forms auth system with hostname-scoped cookies that I can't get access to from the parent page, since it's on a different hostname. I don't have enough control over the child-frame site to be able to put both sites behind the same reverse proxy or load balancer (which would enable me to put both sites on the same hostname). I don't actually need to access any UI inside the iframe -- the iframe is invisible and I'm only using it to run javascript within the security context of a site on a different hostname from the parent page. So at this point I'm stumped. Got any ideas? I want to make sure I'm not overlooking an easy solution before giving up.

    Read the article

  • I am trying to backreference using the sed command

    - by Paul
    I am relatively new to shell scripting and sed. I need to substitute a pattern, globally, but I also need to remember (or save) part of the pattern and use it later in the same substitute command. The saved pattern will be variable, so I need to use a wildcard. For example, the input is

        input message=trt:GetAudioSourcesRequest/>

    and I want to end up with something like

        input message=trt:GetAudioSourcesRequest PAUL/GetAudioSourcesRequest/>

    but the function string "GetAudioSourcesRequest" will change (in length also) throughout the file, so I need a wildcard, e.g.

        sed -i "s/input message=trt:<wild card in here>/>/input message=trt:<print wild card> PAUL/<print wild card>/>

    I have managed to get the following command to nearly do what I want, but it is too rigid. It only stores a 4-syllable pattern, so if I have a function name such as GetProfileRequest, this doesn't work:

        echo "input message=\"trt:GetAudioSourcesRequest\"/>" | sed 's/input message=\"trt:\([A-Z][a-z]*\)\([A-Z][a-z]*\)\([A-Z][a-z]*\)\([A-Z][a-z]*\).*/input message=\"trt:\1\2\3\4\ PAUL\/\1\2\3\4"\/\>/g'

    This outputs

        input message="trt:GetAudioSourcesRequest PAUL/GetAudioSourcesRequest"/>

    which is OK, but when I use GetProfileRequest it doesn't work. I have come across \W and [^[:alnum:]] or [[:alnum:]], but I don't know how to use them. Thanks in advance.
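
    A hedged sketch of the simplification the question is reaching for: capture the whole function name with one group (a run of word characters after "trt:") and reuse that single backreference twice in the replacement, so the name's length no longer matters:

        echo 'input message="trt:GetProfileRequest"/>' |
          sed 's/input message="trt:\([A-Za-z0-9_]*\)".*/input message="trt:\1 PAUL\/\1"\/>/'

    which prints input message="trt:GetProfileRequest PAUL/GetProfileRequest"/>, and works the same way for GetAudioSourcesRequest or any other name.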

    Read the article

  • Game engine deployment strategy for the Android?

    - by Jeremy Bell
    In college, my senior project was to create a simple 2D game engine complete with a scripting language which compiled to bytecode, which was interpreted. For fun, I'd like to port the engine to android. I'm new to android development, so I'm not sure which way to go as far as deploying the engine on the phone. The easiest way I suppose would be to require the engine/interpreter to be bundled with every game that uses it. This solves any versioning issues. There are two problems with this. One: this makes each game app larger and two: I originally released the engine under the LGPL license (unfortunately), but this deployment strategy makes it difficult to conform to the rules of that license, particularly with respect to allowing users to replace the lib easily with another version. So, my other option is to somehow have the engine stand alone as an Activity or service that somehow responds to intents raised by game apps, and somehow give the engine app permissions to read the scripts and other assets to "run" the game. The user could then be able to replace the engine app with a different version (possibly one they made themselves). Is this even possible? What would you recommend? How could I handle it in a secure way?

    Read the article

  • Best way to convert log files (*.txt) to web-friendly files (*.html, *.jsp, etc)?

    - by prometheus
    I have a bunch of log files which are pure text. Here is an example of one...

        Overall Failures Log
        SW Failures - 03.09.2010 - /logs/swfailures.txt - 23 errors - 24 warnings
        HW Failures - 03.09.2010 - /logs/hwfailures.txt - 42 errors - 25 warnings
        SW Failures - 03.10.2010 - /logs/swfailures.txt - 32 errors - 27 warnings
        HW Failures - 03.10.2010 - /logs/hwfailures.txt - 11 errors - 31 warnings

    These files can get quite large and contain a lot of other information. I'd like to produce an HTML file from this log that will add links to key portions and allow the user to open up other log files as a result...

        SW Failures - 03.09.2010 - <a href="/logs/swfailures.txt">/logs/swfailures.txt</a> - 23 errors - 24 warnings

    This is greatly simplified, as I would like to add many more links and other HTML elements. My question is -- what is the best way to do this? If the files are large, should I generate the HTML before serving it to the user, or will JSP do? Should I use Perl or other scripting languages to do this? What are your thoughts and experiences?
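
    For the simple case shown, a line-oriented tool is enough. A hedged sketch (the file names are assumptions) that wraps every /logs/*.txt path in an anchor tag and adds a minimal HTML skeleton:

        {
          echo '<html><body><pre>'
          sed -E 's|(/logs/[^ ]+\.txt)|<a href="\1">\1</a>|g' overall_failures.log
          echo '</pre></body></html>'
        } > overall_failures.html

    For anything richer (tables, per-section links, very large files rendered on demand), a small Perl or JSP layer that reads the log and emits HTML at request time is the more flexible route, and is usually fast enough unless the logs run to many megabytes.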

    Read the article

  • Creating, using and managing XML component dictionaries quick tutorials

    - by drrwebber
    XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset. These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components. The quick tutorials are aimed at showing the 'how to' of the basic capabilities to jump start use of XML dictionaries with the CAM Editor. The collection of dictionary tutorial videos runs for a total of approximately 20 minutes. Each video can be reviewed individually also. Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schema, SQL database table schema, or simple Excel / Open Office spreadsheets with tables of components listed. Also included are tips and functions relating to use of NIEM exchange development, IEPD and EIEM techniques. These videos should be viewed in conjunction with reviewing the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview. The approach is aligned with OASIS and Core Components Technical Specification (CCTS) standards specifications for XML components and dictionaries. Dictionary collections can be stored locally on the file system, or local network, or collaboratively on the web or cloud deployment, or can be shared and managed securely using the Oracle Enterprise Repository (OER) tool. Also included are techniques relating to the use of the NIEM approach for developing XML exchange schema and IEPD packages. This includes generating reuse scores, wantlist, and cross reference spreadsheets. Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to determine duplicate components, conflicting component definitions, missing component descriptions and so on. This ensures high quality dictionary component specifications. Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary component sets. For a complete guide to using the CAM Editor see the main YouTube video tutorials website and the CAM Editor website.

    Read the article

  • Apache Cordova (PhoneGap): is JSONP needed for cross-site scripting?

    - by DEX
    I've just started using Apache Cordova. I have a library that makes calls (via ajax) to a SOAP server. When I run these on my local machine in Chrome, I get cross-site scripting errors when trying to make calls to the service. When I run the same exact code using the Cordova browser in the iOS emulator, the scripts seem to hit the server fine and the response data is received properly. So my question is: how is the Cordova browser able to make these requests without cross-site scripting permissions and JSONP? One thing I noticed is that when the request is sent from iOS, there is no "Origin" header. Is this allowing the Cordova browser to stealthily circumvent cross-site scripting requirements? Is it possible that the node.js server on the device (I believe this is how Cordova works) is manipulating the headers to allow this? I'd like to avoid enabling cross-site scripting on my site, so I think this "feature" is nice, but I'm wondering if it's a security hole as well. Anyone have experience with this?

    Read the article

  • Run FFmpeg from Shell Script

    - by Abs
    Hello all, I have found a useful shell script that shows all files in a directory recursively. Where it prints the file name (echo "$i"; #Display File name), I would instead like to run an ffmpeg command on non-MP3 files. How can I do this? I have very limited knowledge of shell scripts, so I appreciate being spoon-fed! :)

        //if file is NOT MP3
        ffmpeg -i [the_file] -sameq [same_file_name_with_mp3_extension]
        //delete old file

    Here is the shell script for reference.

        DIR="."

        function list_files()
        {
            if !(test -d "$1")
            then echo $1; return;
            fi

            cd "$1"
            echo; echo `pwd`:; #Display Directory name

            for i in *
            do
                if test -d "$i" #if directory
                then
                    list_files "$i" #recursively list files
                    cd ..
                else
                    echo "$i"; #Display File name
                fi
            done
        }

        if [ $# -eq 0 ]
        then
            list_files .
            exit 0
        fi

        for i in $*
        do
            DIR="$1"
            list_files "$DIR"
            shift 1 #To read next directory/file name
        done
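
    A hedged sketch of the branch to drop into the else part of the script, assuming the goal is to re-encode every non-.mp3 file with ffmpeg's -sameq (as in the pseudocode above) and then delete the original:

        case "$i" in
            *.mp3)
                echo "$i is already MP3, skipping" ;;
            *)
                # encode to a file with the same base name and an .mp3 extension,
                # then remove the source only if ffmpeg succeeded
                ffmpeg -i "$i" -sameq "${i%.*}.mp3" && rm -f "$i" ;;
        esac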

    Read the article

  • Is it worth caching a Dictionary for foreign key values in ASP.net?

    - by user169867
    I have a Dictionary<int, string> cached (for 20 minutes) that has ~120 ID/Name pairs for a reference table. I iterate over this collection when populating dropdown lists, and I'm pretty sure this is faster than querying the DB for the full list each time. My question is more about whether it makes sense to use this cached dictionary when displaying records that have a foreign key into this reference table. Say this cached reference table is an EmployeeType table. If I were to query and display a list of employee names and types, should I query for EmployeeName and EmployeeTypeID and use my cached dictionary to grab the EmployeeTypeID's name as each record is displayed, or is it faster to just have the DB grab the EmployeeName and JOIN to get the EmployeeType string, bypassing the cached Dictionary altogether? I know both will work, but I'm interested in what will perform the fastest. Thanks for any help.

    Read the article

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure whether the port is available, and in case of bind failure I fall back to the next port until the bind succeeds. So far the easiest way I found to achieve this was something like:

        for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
        do
            if [ $DEBUG -eq 1 ] ; then
                echo trying to bind on $i
            fi
            /usr/bin/faucet $i --out --daemon echo test 2>/dev/null
            if [ $? -eq 0 ] ; then  #success?
                port=$i
                if [ $DEBUG -eq 1 ] ; then
                    echo "bound on port $port"
                fi
                break
            fi
        done

    Here I am using faucet from the netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:

        user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
        ...
        user@client:$ curl ip.of.the.server:10020
        curl: (56) Failure when receiving data from the peer

    I think the problem lies in how faucet is printing the response and handling the connection. For example, if I do the server side in netcat, curl works fine:

        user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
        ...
        user@client:$ curl ip.of.the.server:10020
        test
        user@client:$

    I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process so I can run the client from the same base shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (the exit status code) to check whether the bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case, and/or can suggest a solution to this problem? I am married to neither faucet nor netcat, but would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
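
    One hedged workaround, keeping everything in bash: probe each candidate port first, and only then start the backgrounded listener. The nc -z probe and the nc -l invocation below assume the OpenBSD-style netcat used elsewhere in the question (netcat-traditional would need nc -l -p "$port" instead), and there is a small race window between the probe and the listen:

        for (( port=PORT_BASE; port<PORT_BASE+PORT_RANGE; port++ ))
        do
            if ! nc -z 127.0.0.1 "$port" 2>/dev/null; then
                # nothing is listening there yet, so claim it in the background
                printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l "$port" &
                listener_pid=$!
                break
            fi
        done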

    Read the article

  • Pickled my dictionary from ZODB but I got a smaller one?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs', so I opened the root dictionary of that 'database_1.fs' and I (pickle.dump) it into a text file. Then I (pickle.load) it into a dictionary variable, and in the end I update the root dictionary of the other 'database_2.fs' with the dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of the other 'database_2.fs'. They are still copies of each other.

        def openstorage(store):  #opens the database
            data={}
            data['file']=filestorage
            data['db']=DB(data['file'])
            data['conn']=data['db'].open()
            data['root']=data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):  #close the database after Saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then that's what I do:

        import pickle
        loc1='G:\\database_1.fs'
        op1=openstorage(loc1)
        root1=getroot(op1)
        loc2='G:database_2.fs'
        op2=openstorage(loc2)
        root2=getroot(op2)
        >>> len(root1)
        215
        >>> len(root2)
        0
        pickle.dump( root1, open( "save.txt", "wb" ))
        item=pickle.load( open( "save.txt", "rb" ) )  #now item is a dictionary
        root2.update(item)
        closestorage(op1)
        closestorage(op2)
        #after I open both of the databases
        #I get the same keys in both databases
        #But `database_2.fs` is smaller than `database_1.fs` in size, I mean.
        >>> len(root2)==len(root1)==215  #they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs, and (2) both of them have the same length and the same indexes.

    Read the article

  • Compress small strings: what to create the external dictionary with?

    - by Chris
    I want to compress many small strings (about 75-100 characters each, as C# strings). At the time the dictionary is created I already know all the short strings (nearly a trillion). There will be no additional short strings in the future. I need to extract exactly one string without decompressing the other strings. Now I am looking for a library or the best way to do the following: 1) create a dictionary using all the strings I have, 2) use this dictionary to compress each string, 3) a way to compress one string using the dictionary from 1. I found a good related question, but it is not C#-specific. Maybe there is something for C# I do not know about, or a fancy library, or someone has already done this. That is the reason I ask this question.

    Read the article

  • How can you enable forms scripting for Outlook 2010 on Citrix servers?

    - by Florent Courtay
    I'd like to deploy Office 2010 on Citrix servers, but I can't enable form scripting support. With Outlook 2007, it was solved by adding Outlvbs.dll to the Office directory and running msiexec /i {<Outlook GUID>} ADDLOCAL=OutlookVBScript /qb. But it seems this no longer works with Outlook 2010; I get the following error: Error 2711. An internal error has occurred. (OutlookVBScript). I don't get much help from the Microsoft support site, as there isn't a lot of information on Office 2010 yet. Has anyone succeeded in installing and using Outlook 2010 with form scripting in a Citrix environment?

    Read the article
