Search Results

Search found 9083 results on 364 pages for 'startup scripts'.

Page 321/364

  • Upload image from URL to FTP server using PHP

    - by user1807556
    I want to upload a picture from another site to my FTP server using PHP. Example: file to upload ("http://page.mi.fu-berlin.de/krudolph/stuff/stackoverflow.png"), FTP path ("pictures/"). This is what I've already tried:
    1. $image = file_get_contents("http://img.youtube.com/vi/Rz8KW4Tveps/1.jpg"); file_put_contents("imgfolder/imgID.jpg", $image);
    2. copy('http://img.youtube.com/vi/Rz8KW4Tveps/1.jpg', 'imgfolder/imgID.jpg');
    3. <?php set_time_limit(24 * 60 * 60); if (!isset($_POST['submit'])) die(); $file = fopen($url, "rb"); if ($file) { $newf = fopen($newfname, "wb"); if ($newf) while (!feof($file)) { fwrite($newf, fread($file, 1024 * 2000), 1024 * 2000); } } if ($file) { fclose($file); } if ($newf) { fclose($newf); } ?>
    4. http://www.teckdevil.com/php-server-to-server-transfer-script-to-remotely-transfer-files/
    5. Download files directly to my server (much the same as the first link).
    I don't get any errors when I run the scripts, and I have chmod'ed the directory to 777.
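
    A minimal sketch of the download-then-FTP-upload flow described in the question, shown in Python for illustration; the FTP host, credentials and target file name are placeholders, not values from the question.

```python
from ftplib import FTP
from io import BytesIO
from urllib.request import urlopen

SOURCE_URL = "http://img.youtube.com/vi/Rz8KW4Tveps/1.jpg"   # remote image to copy

# Download the image into memory (fine for small files like thumbnails).
image_bytes = urlopen(SOURCE_URL).read()

# Placeholder FTP details -- replace with the real host and credentials.
ftp = FTP("ftp.example.com")
ftp.login("username", "password")
ftp.cwd("pictures")                                           # target directory on the server
ftp.storbinary("STOR imgID.jpg", BytesIO(image_bytes))        # upload as binary
ftp.quit()
```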

    Read the article

  • ASP.Net MVC: Change Routes dynamically

    - by Frank
    Hi, usually when I look at an ASP.NET MVC application, the route table gets configured at startup and is never touched after that. I have a couple of closely related questions on that: Is it possible to change the route table at runtime? How would/should I avoid threading issues? Is there maybe a better way to provide a dynamic URL? I know that IDs etc. can appear in the URL, but I can't see how that applies to what I want to achieve. How can I arrange that, even though I have the default controller/action route defined, the default route does not work for one specific combination, e.g. that the "Post" action on the "Comments" controller is not reachable through the default route? Background: comment spammers usually grab the posting URL from the website and then don't bother to go through the website anymore for their automated spamming. If I regularly change my post URL to some random one, spammers would have to go back to the site and find the correct post URL before they can spam it. If that URL changes constantly, I'd think it would make the spammers' work more tedious, which usually means they give up on the affected URL.
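
    The rotating-URL idea is independent of ASP.NET: keep a random token for the post URL, regenerate it periodically, and accept posts only at the currently advertised path, so any URL a spammer scraped earlier goes stale. The sketch below is purely illustrative Python with invented names; in ASP.NET MVC the equivalent check could live in a custom route constraint rather than in a rebuilt route table.

```python
import secrets
import threading
import time


class RotatingPostRoute:
    """Keep a random URL segment for the comment-post action and rotate it periodically."""

    def __init__(self, rotate_every_seconds=3600):
        self._lock = threading.Lock()            # readers never see a half-updated token
        self._token = secrets.token_urlsafe(8)
        self._rotate_every = rotate_every_seconds
        self._last_rotated = time.monotonic()

    def current_post_url(self):
        """URL to embed in the rendered page; rotated lazily once it is stale."""
        with self._lock:
            if time.monotonic() - self._last_rotated > self._rotate_every:
                self._token = secrets.token_urlsafe(8)
                self._last_rotated = time.monotonic()
            return "/comments/post-" + self._token

    def is_valid_post_path(self, path):
        """Reject posts sent to anything but the currently advertised URL."""
        with self._lock:
            return path == "/comments/post-" + self._token


routes = RotatingPostRoute(rotate_every_seconds=1800)
print(routes.current_post_url())                         # e.g. /comments/post-3q2xk1Yw8sM
print(routes.is_valid_post_path("/comments/post-stale"))  # False once the token has rotated
```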

    Read the article

  • Javascript autocomplete function href

    - by user896692
    I have the following JavaScript function:
        <script type="text/javascript">
        $(function() {
            var data = (<?php include("php/search_new.php"); ?>).Data.Recipes;
            var source = [];
            for (var i in data) {
                source.push({"href": "/php/get_recipe_byID.php?id=" + data[i].ID, "label": data[i].TITLE});
            }
            $("#searchrecipes").autocomplete({
                minLength: 3,
                source: source,
                select: function(event, ui) { window.location.href = ui.item.href; }
            });
        });
        </script>
        <input id="searchrecipes" type="text" name="searchrecipes" style="margin-left: 850px; margin-top: 0px; width:170px; background: #fff url(images/search_icon.png) no-repeat 100%;" onblur="this.style.background='#ffffff'; background: #fff url(images/search_icon.png) no-repeat 100%;" onfocus="this.style.background='#c40606'; background: url(images/search_icon.png) no-repeat 100%;" placeholder="Suchen..."></input>
        <input type="submit" name="buttonsenden" style="display:none;" value="" width: 5px></input>
    The function used to work, but it suddenly stopped working. The problem is that the href in the autocomplete dropdown isn't clickable. The included PHP produces data like this:
        var data = ({"Data":{"Recipes":{"Recipe_5":{"ID":"5","TITLE":"Spaghetti Bolognese"},"Recipe_7":{"ID":"7","TITLE":"Wurstel"},"Recipe_9":{"ID":"9","TITLE":"Schnitzel"},"Recipe_10":{"ID":"10","TITLE":null},"Recipe_19":{"ID":"19","TITLE":null},"Recipe_20":{"ID":"20","TITLE":"Hundefutter"},"Recipe_26":{"ID":"26","TITLE":"Apfelstrudel"},"Recipe_37":{"ID":"37","TITLE":null},"Recipe_38":{"ID":"38","TITLE":"AENDERUNG"},"Recipe_39":{"ID":"39","TITLE":null},"Recipe_40":{"ID":"40","TITLE":"Schnitzel"},"Recipe_42":{"ID":"42","TITLE":"Release-Test"},"Recipe_43":{"ID":"43","TITLE":"Wurstel2"}}},"Message":null,"Code":200}).Data.Recipes;
    All the necessary jQuery scripts are available. What can the problem be?

    Read the article

  • Combining JavaScript for Google Analytics with yours. (Asynchronous tracking.)

    - by lorenzo 72
    I have a JavaScript file which is loaded at the end of my HTML page. Rather than putting Google's asynchronous tracking code in yet another script tag, I would like to combine the two scripts. So instead of this: <html> ... <script src="myScript.js"> <!-- google analytics --> <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-XXXXX-X']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga); })(); </script> </html> ...I would move the code from that second script tag to the end of 'myScript.js'. I have not found a single place in Google's documentation that suggests combining their script with yours.

    Read the article

  • Run FFmpeg from Shell Script

    - by Abs
    Hello all, I have found a useful shell script that lists all files in a directory recursively, printing each file name with echo "$i". I would instead like to run an ffmpeg command on the non-MP3 files; how can I do this? I have very limited knowledge of shell scripts, so I'd appreciate being spoon-fed! :) Roughly:
        // if file is NOT MP3
        ffmpeg -i [the_file] -sameq [same_file_name_with_mp3_extension]
        // delete old file
    Here is the shell script for reference:
        DIR="."
        function list_files()
        {
            if !(test -d "$1")
            then echo $1; return;
            fi
            cd "$1"
            echo; echo `pwd`:;        # Display directory name
            for i in *
            do
                if test -d "$i"       # if directory
                then
                    list_files "$i"   # recursively list files
                    cd ..
                else
                    echo "$i";        # Display file name
                fi
            done
        }
        if [ $# -eq 0 ]
        then
            list_files .
            exit 0
        fi
        for i in $*
        do
            DIR="$1"
            list_files "$DIR"
            shift 1                   # To read the next directory/file name
        done
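
    In case the flow is easier to follow outside of shell, here is an illustrative sketch of the same walk-and-convert logic in Python; the ffmpeg flags are taken from the question, everything else is an assumption.

```python
import os
import subprocess


def convert_non_mp3(root="."):
    """Walk a directory tree and convert every non-MP3 file to MP3 with ffmpeg."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".mp3"):
                continue                              # already an MP3, skip it
            src = os.path.join(dirpath, name)
            dst = os.path.splitext(src)[0] + ".mp3"
            # -sameq mirrors the command in the question; newer ffmpeg builds
            # have dropped it in favour of quality options such as -qscale:a.
            result = subprocess.run(["ffmpeg", "-i", src, "-sameq", dst])
            if result.returncode == 0:
                os.remove(src)                        # delete the old file on success


convert_non_mp3(".")
```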

    Read the article

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump the database into a file, I could check it in and use it as just another type of file. The main conditions are:
    - Works for Oracle 9R2.
    - Human readable, so we can use diff to see the differences (.dmp files don't seem readable).
    - All tables in a batch. We have more than 200 tables.
    - It stores BOTH STRUCTURE AND DATA.
    - It supports CLOB and RAW types.
    - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms.
    - It can be turned into an executable script to rebuild the database on a clean machine.
    - Not limited to really small databases (supports at least 200,000 rows).
    It is not easy. I have downloaded a lot of demos that fail in one way or another. EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in batch mode. By the way, our project has been developed over years; some approaches can be implemented easily when you make a fresh start but seem hard at this point. EDIT: To understand the problem better, let's say that some users can sometimes change the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge them into production.

    Read the article

  • Is Subversion's 'Lazy Copy' still lazy when overwriting a previously deleted file?

    - by JW
    Is Subversion's 'lazy copy' still lazy when overwriting a previously deleted file? I store my externals in a separate folder for each version, e.g. for dojo I'd have webroot\scripts\dojo-v-1.0.0\, webroot\scripts\dojo-v-1.1.0\, etc. Doing this, for me at least, makes it easier to switch over to a new version. But by only adding each new version I am not really giving svn the history it needs to do lazy copies. So one tactic I have used is to svn copy the old version to where the new one will be, then svn delete that whole folder, then unpack my newer version into that place, then svn add the files. The idea is to avoid having a massive amount of duplicated data in my repo. I hope svn looks at the new files and says, "hey, I already had this once, copied, then deleted... so I am only going to lazily store the changes". That was my theory - but does that happen in practice? P.S. Yes, I know an alternative is to set the svn:externals property on the folder - but that's another question.

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros. Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than it could be with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate? (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C too, because I want to get into driver writing eventually. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • JSON Returning Null in PHP

    - by kira423
    Here are the two scripts I have.
    Script 1:
        if(sha1($json+$secret) == $_POST['signature']) {
            $conversion_id = md5(($obj['amount']));
            echo "OK";
            echo $conversion_id;
            mysql_query("INSERT INTO completed (`id`,`uid`,`completedid`) VALUES ('','".$obj['uid']."','".$conversion_id."')");
        } else {
        }
        ?>
    Script 2:
        <?
        $json = $_POST['payload'];
        $secret = "78f12668216b562a79d46b170dc59f695070e532";
        $obj = json_decode($json);
        if(sha1($json+$secret) == $_POST['signature']) {
            print "OK";
        } else {
        }
        ?>
    The problem is that everything is coming back NULL. I am not an expert with JSON, so I have no idea what is going on here. I really have no way of testing it, because the information comes from an outside website that sends data such as this:
        { payload: { uid: "900af657a65e", amount: 50, adjusted_amount: 25 }, signature: "4dd0f5da77ecaf88628967bbd91d9506" }
    The site allows me to test the script, but because json_decode is producing NULL values it never gets through the signature block. Is there a way I can test it myself? Or is there a simple error in this script that I may have just overlooked?
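
    One way to test locally, without waiting for the outside site, is to post a fake payload and signature to the script yourself. Below is an illustrative Python harness; the endpoint URL is a placeholder, and it signs the payload as sha1 of the payload concatenated with the secret, which appears to be the intent of the PHP (note that PHP's + is arithmetic addition, not string concatenation - that alone can make the comparison behave unexpectedly).

```python
import hashlib
import urllib.parse
import urllib.request

# Placeholder URL for the PHP script under test -- not taken from the question.
ENDPOINT = "http://localhost/callback.php"
SECRET = "78f12668216b562a79d46b170dc59f695070e532"

# A payload shaped like the one the outside site sends.
payload = '{"uid": "900af657a65e", "amount": 50, "adjusted_amount": 25}'

# Sign it the way the receiving script seems to expect: sha1(payload . secret).
signature = hashlib.sha1((payload + SECRET).encode()).hexdigest()

# POST the two form fields the PHP reads ($_POST['payload'], $_POST['signature']).
data = urllib.parse.urlencode({"payload": payload, "signature": signature}).encode()
with urllib.request.urlopen(ENDPOINT, data) as response:
    print(response.read().decode())   # should print "OK" if the check passes
```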

    Read the article

  • Multiple jQuery includes in a document

    - by bah
    Hi, I have a document which uses an old jQuery and I need a new jQuery for a particular plug-in. My document structure looks like this:
        <html>
        <head>
            <script type="text/javascript" src="jQuery.old.js"></script>
        </head>
        <body>
            <script> $("#elem").doSomething(); // use old jQuery </script>
            <!-------- My plugin begins -------->
            <script type="text/javascript" src="jQuery.new.js"></script>
            <script type="text/javascript" src="jQuery.doSomething.js"></script>
            <script>
                $().ready(function(){
                    $("#elem").doSomething(); // use new jQuery
                });
            </script>
            <div id="elem"></div>
            <!-------- My plugin ends ---------->
            <script> $("#elem").doSomething(); // use old jQuery </script>
        </body>
        </html>
    I have googled for this but found nothing that looks like my case (I need to load the old jQuery first, in the head, and THEN the new one, in the body). By the way, in Firefox it looks like the old jQuery lib loads and the scripts that depend on it work, but the script that uses the new version does not; in IE and Chrome it is exactly the opposite.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Does the managed main UI thread stay on the same (unmanaged) Operating System thread?

    - by Daniel Rose
    I am creating a managed WPF UI front-end to a legacy Win32 application. The WPF front-end is the executable; as part of its startup routines I start the legacy app as a DLL in a second thread. Any UI operation (including CreateWindowEx, etc.) by the legacy app is invoked back on the main UI thread. As part of the shutdown process of the app I want to clean up properly. Among other things, I want to call DestroyWindow on all unmanaged windows so they can properly clean themselves up. Thus, during shutdown I use EnumWindows to try to find all my unmanaged windows, then call DestroyWindow on the list I generate. These calls run on the main UI thread. After this background knowledge, on to my actual question: in the enumeration procedure of EnumWindows, I have to check whether one of the returned top-level windows is one of my unmanaged windows. I do this by calling GetWindowThreadProcessId to get the process id and thread id of the window's creator. I can compare the process id with Process.GetCurrentProcess().Id to check whether my app created it. For additional safety, I also want to check whether my main UI thread created the window. However, the returned thread id is the OS's thread id (which is different from the managed thread id). As explained in this question, the CLR reserves the right to re-schedule a managed thread onto different OS threads. Can I rely on the CLR being "smart enough" to never do this for the main UI thread (due to the thread affinity of the UI)? Then I could call GetCurrentThreadId to get the main UI thread's unmanaged thread id for comparison.

    Read the article

  • Silverlight Not Rendering On Navigation

    - by Azmath
    I'm trying to create a site that requires login. It's entirely designed in Silverlight. My first page, home.xaml, loads in mysite.aspx and basically contains a login page. After login, the user is redirected to another page, user.aspx. In that page I've embedded another Silverlight control called nav.xaml, so when user.aspx loads it is supposed to load that Silverlight control. I've programmed app.xaml.vb so that it loads nav.xaml in the root layout when the requesting page is user.aspx, but for some reason it's not working. My app.xaml.vb code:
        Private Sub Application_Startup(ByVal o As Object, ByVal e As StartupEventArgs) Handles Me.Startup
            If e.InitParams.ContainsKey("ReqPage") Then
                If e.InitParams("ReqPage") = "userpage" Then
                    Me.RootVisual = New Nav()
                End If
            Else
                Me.RootVisual = New Home()
            End If
        End Sub
    In IE, half of nav.xaml is rendered, but in Firefox nothing is rendered. What's going on exactly? Please help!

    Read the article

  • Phonegap: Will my mobile app 'feel' faster or slower once ported to phonegap?

    - by user15872
    So I'm designing everything in mobile Safari and I know that PhoneGap is essentially a stripped webview, but... Question: will my application run better in PhoneGap? (revised below) a) I imagine my navigation and core app will load faster, as the scripts and images are on the hard drive. Is this true? b) I assume, since they've been working on it for two years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true? (Assuming both HTML5/JS/CSS code bases are pretty much the same and the app is running on iOS.) Update: sorry, I meant to compare apples to slightly different apples. Question 1 revised: will my app see any performance benefit running within a PhoneGap environment vs standard mobile Safari? (compare mobile to mobile) 1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari? Follow-ups: 1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to suffer a serious performance hit vs standard mobile Safari? 2) Can I mix native and webview rendering? (i.e. the top half of my app rendered with HTML/CSS/JS and the bottom half native)

    Read the article

  • How do interpreters written in C and C++ bind identifiers to C(++) functions

    - by sub
    I'm talking about C and/or C++ here as these are the only languages I know of that are used for interpreters where the following could be a problem: if we have an interpreted language X, how can a library written for it add functions to the language which can then be called from within programs written in that language? PHP example: substr( $str, 5, 10 ); How is the function substr added to the "function pool" of PHP so it can be called from within scripts? It would be easy for PHP to store all registered function names in an array and search through it when a function is called in a script. However, as there obviously is no eval in C(++), how can the function then be called? I assume PHP doesn't have 100MB of code like: if( identifier == "substr" ) { return PHP_SUBSTR(...); } else if( ... ) { ... } Ha ha, that would be pretty funny. I hope you have understood my question so far. How do interpreters written in C/C++ solve this problem? How can I solve this for my own experimental toy interpreter written in C++?
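
    The usual answer is a dispatch table: the host registers each native function under a name, and at call time the interpreter looks the name up instead of evaluating anything. Here is the idea sketched in Python for brevity (a C or C++ interpreter does the same thing with a hash map from string to function pointer); the names are invented for the example.

```python
# Table of "native" functions the interpreter can call by name.
native_functions = {}


def register(name, func):
    """What a library does at load time to add a function to the language."""
    native_functions[name] = func


def call(name, *args):
    """What the interpreter does when a script calls `name(args...)`: a table lookup."""
    try:
        return native_functions[name](*args)
    except KeyError:
        raise NameError(f"undefined function: {name}")


# Registering a 'substr' built-in and calling it the way a script would:
register("substr", lambda s, start, length: s[start:start + length])
print(call("substr", "interpreter", 5, 5))   # -> "prete"
```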

    Read the article

  • How do C and C++ interpreters bind identifiers to functions

    - by sub
    I'm talking about C and/or C++ here as these are the only languages I know of that are used for interpreters where the following could be a problem: if we have an interpreted language X, how can a library written for it add functions to the language which can then be called from within programs written in that language? PHP example: substr( $str, 5, 10 ); How is the function substr added to the "function pool" of PHP so it can be called from within scripts? It would be easy for PHP to store all registered function names in an array and search through it when a function is called in a script. However, as there obviously is no eval in C(++), how can the function then be called? I assume PHP doesn't have 100MB of code like: if( identifier == "substr" ) { return PHP_SUBSTR(...); } else if( ... ) { ... } Ha ha, that would be pretty funny. I hope you have understood my question so far. How do C/C++ interpreters solve this problem? How can I solve this for my own experimental toy interpreter?

    Read the article

  • At what point is it worth using a database?

    - by radix07
    I have a question relating to databases and at what point it is worth diving into one. I am primarily an embedded engineer, but I am writing an application using Qt to interface with our controller. We are at an odd point where we have enough data (around 700+ items and growing) that it would be feasible to implement a database to manage everything, but I am not sure it is worth the time to deal with right now. I have no problem implementing the GUI with files generated from Excel and parsed in, but it gets tedious and hard to track even with VBA scripts. I have been playing around with converting our data into something more manageable for the application side with Microsoft Access, and that seems to be working well. If that works out, I am only a step (or several) away from using an SQL database and using the Qt library to access and modify it. I don't have much experience managing data at this level and am curious what the best way to approach this may be. So what are some of the real benefits of using a database, if any, in this case? I realize much of this can be very application-specific, but some general ideas and suggestions on how to straddle the embedded/application programming line would be helpful. Thanks
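
    For a sense of scale: a few hundred items is well within reach of an embedded database such as SQLite, and the amount of code involved is small (Qt wraps the same idea in its QtSql module). A minimal, illustrative sketch in Python, with the table name and fields invented for the example:

```python
import sqlite3

# A single file on disk; no server process to manage.
conn = sqlite3.connect("controller_items.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS items (
           id      INTEGER PRIMARY KEY,
           name    TEXT NOT NULL,
           address INTEGER,          -- e.g. a register address on the controller
           value   REAL
       )"""
)

# Load rows exported from the old Excel sheet (hypothetical data).
rows = [("motor_speed", 0x10, 0.0), ("max_torque", 0x11, 42.5)]
conn.executemany("INSERT INTO items (name, address, value) VALUES (?, ?, ?)", rows)
conn.commit()

# Queries replace the hand-rolled parsing and VBA scripts.
for name, value in conn.execute("SELECT name, value FROM items WHERE address >= ?", (0x10,)):
    print(name, value)

conn.close()
```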

    Read the article

  • Reason for socket.error

    - by August Flanagan
    Hi, I am a complete newbie when it comes to Python, and programming in general. I've been working on a little webapp for the past few weeks trying to improve my coding chops. A few days ago my laptop was stolen, so I went out and got a new MacBook Pro. Thank God I had everything under Subversion control. The problem is that now that I am on my new machine, a script that I was running has stopped working and I have no idea why. This is really the only part of what I have been writing that I borrowed heavily from existing scripts. It is from the widely available whois.py script and I have only slightly modified it as follows (see below). It was running fine on my old system (running Ubuntu), but now a socket.error is being raised. I'm completely lost on this, and would really appreciate any help. Thanks!
        def is_available(domainname, whoisserver="whois.verisign-grs.com", cache=0):
            if whoisserver is None:
                whoisserver = "whois.networksolutions.com"
            s = None
            while s == None:
                try:
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.setblocking(0)
                    try:
                        s.connect((whoisserver, 43))
                    except socket.error, (ecode, reason):
                        if ecode in (115, 150):
                            pass
                        else:
                            raise socket.error, (ecode, reason)
                    ret = select.select([s], [s], [], 30)
                    if len(ret[1]) == 0 and len(ret[0]) == 0:
                        s.close()
                        raise TimedOut, "on connect "
                    s.setblocking(1)
                except socket.error, (ecode, reason):
                    print ecode, reason
                    time.sleep(1)
                    s = None
            s.send("%s \n\n" % domainname)
            page = ""
            while 1:
                data = s.recv(8196)
                if not data:
                    break
                page = page + data
            s.close()
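
    As a side note for anyone reworking this: the non-blocking connect plus select dance above depends on platform-specific errno values (the 115 and 150 checks), and errno numbers differ between Linux and OS X, which may well be what changed on the new machine. A simpler, more portable sketch of the same whois query, written for Python 3 and purely illustrative:

```python
import socket


def whois_lookup(domainname, whoisserver="whois.verisign-grs.com"):
    """Query a whois server with a plain blocking socket and a timeout."""
    # create_connection handles the connect and accepts a timeout directly,
    # so there is no setblocking(0)/select/errno juggling to go wrong.
    with socket.create_connection((whoisserver, 43), timeout=30) as s:
        s.sendall((domainname + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(8192)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")


print(whois_lookup("example.com"))
```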

    Read the article

  • DLL Export C/C++ 6.00 function invoked by VB6

    - by nashth
    Hi all, I have been attempting to create a DLL with C/C++ that can be accessed by VB6, and, that's right, I get error "453 Can't find DLL entry point myFunctionName in myDllName.dll" upon calling the function from a VB6 app. After searching the web, including this site, I see that I am not alone, and I have tried the various solutions posted, but error 453 is inescapable. This is not a COM DLL, and I believe what I want is possible when the DLL is created via C/C++. In any case, please help if you can. Please refer to the following simple test case. The DLL, created as a C/C++ 6.00 Win32 Dynamic-Link Library:
        #include <windows.h>
        // Note that I did try the line below rather than the def file, but to no avail...
        // #pragma comment(linker, "/EXPORT:ibask32=_ibask32@0")
        // Function definition
        extern "C" int __declspec(dllexport) __stdcall ibask32()
        {
            MessageBox(NULL, "String", "Sample Code", NULL);
            return 0L;
        }
    The def file:
        LIBRARY "Gpib-32"
        EXPORTS
        ibask32
    Now for the VB app. The following is the entire content of the startup Form1, Form_Load:
        Option Explicit
        Private Sub Form_Load()
            Call ibask
        End Sub
    The following is a BAS module file that is added to the project:
        Option Explicit
        Declare Function ibask32 Lib "Gpib-32.dll" Alias "ibask" () As Long
        Sub ibask()
            Call ibask32 ' Note: This is the point of failure
        End Sub
    Thanks in advance if a workable solution can be provided, Tom
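
    Before adjusting the VB6 declarations, it can help to confirm which name the DLL actually exports. A quick, illustrative check from Python using ctypes (Windows only; the stdcall convention matches __stdcall in the C source above):

```python
import ctypes

# WinDLL uses the stdcall calling convention.
dll = ctypes.WinDLL("Gpib-32.dll")

# Attribute lookup resolves the name via GetProcAddress and raises AttributeError
# if the export is decorated or named differently than written here.
try:
    result = dll.ibask32()
    print("ibask32 returned", result)
except AttributeError:
    print("no export named 'ibask32' -- inspect the real names with dumpbin /exports")
```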

    Read the article

  • How to map keys in vim differently for different kinds of buffers

    - by Yogesh Arora
    The problem I am facing is that I have mapped some keys and mouse events for searching in vim while editing a file, but those mappings break the functionality of the quickfix buffer. I was wondering if it is possible to map keys depending on the buffer in which they are used. EDIT - I am adding more info to this question. Let us consider a scenario: I want to map <C-F4> to close a buffer/window. This behavior could depend on a number of things. If I am editing a buffer, it should just close that buffer without changing the layout of the windows; I am using the bufkill plugin for this. It does not depend on the extension of the file but on the type of buffer. I saw in the vim documentation that there are unlisted and listed buffers. So if it is a listed buffer, it should close using the bufkill commands; if it is not a listed buffer, it should use the <c-w>c command to close the window, changing the window layout. I am new to writing vim functions/scripts; can someone help me get started on this?

    Read the article

  • Loading a view routed by a URL parameter (e.g., /users/:id) in MEAN stack

    - by Matt Rowles
    I am having difficulties trying to load a user by their id; for some reason my http.get call isn't hitting my controller. I get the following error in the browser console: TypeError: undefined is not a function at new <anonymous> (http://localhost:9000/scripts/controllers/users.js:10:8). Update: I've fixed my code up as per the comments below, but now my code just enters an infinite loop in the Angular users controller (see code below). I am using the Angular Express Generator for reference.
    Backend - nodejs, express, mongo. routes.js:
        // not sure if this is required, but have used it before?
        app.param('username', users.show);
        app.route('/api/users/:username')
            .get(users.show);
    controller.js:
        // This never gets hit
        exports.show = function (req, res, next, username) {
            User.findOne({ username: username })
                .exec(function (err, user) {
                    req.user = user;
                    res.json(req.user || null);
                });
        };
    Frontend - angular. app.js:
        $routeProvider
            .when('/users/:username', {
                templateUrl: function( params ){ return 'users/view/' + params.username; },
                controller: 'UsersCtrl'
            })
    services/user.js:
        angular.module('app')
            .factory('User', function ($resource) {
                return $resource('/api/users/:username', { username: '@username' }, {
                    update: { method: 'PUT', params: {} },
                    get: { method: 'GET', params: { username:'username' } }
                });
            });
    controllers/users.js:
        angular.module('app')
            .controller('UsersCtrl', ['$scope', '$http', '$routeParams', '$route', 'User',
                function ($scope, $http, $routeParams, $route, User) {
                    // this returns the error above
                    $http.get( '/api/users/' + $routeParams.username )
                        .success(function( user ) {
                            $scope.user = user;
                        })
                        .error(function( err) {
                            console.log( err );
                        });
                }]);
    If it helps, I'm using this setup

    Read the article

  • WindowsFormsApplicationBase SplashScreen makes login form ignore keypresses until I click on it - how to debug?

    - by Tom Bushell
    My WinForms app has a simple modal login form, invoked at startup via ShowDialog(). When I run from inside Visual Studio, everything works fine. I can just type in my User ID, hit the Enter key, and get logged in. But when I run a release build directly, everything looks normal (the login form is active, there's a blinking cursor in the User ID MaskedEditBox), yet all keypresses are ignored until I click somewhere on the login form. Very annoying if you are used to doing everything from the keyboard. I've tried to trace through the event handlers, and to set the focus directly with code, to no avail. Any suggestions how to debug this (outside of Visual Studio), or failing that, a possible workaround? Edit: Here's the calling code, in my Main Form:
        private void OfeMainForm_Shown(object sender, EventArgs e)
        {
            OperatorLogon();
        }

        private void OperatorLogon()
        {
            // Modal dialogs should be in a "using" block for proper disposal
            using (var logonForm = new C21CfrLogOnForm())
            {
                var dr = logonForm.ShowDialog(this);
                if (dr == DialogResult.OK)
                    SaveOperatorId(logonForm.OperatorId);
                else
                    Application.Exit();
            }
        }
    Edit 2: Didn't think this was relevant, but I'm using Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase for its splash screen and SingleInstanceController support. I just commented out the splash screen code, and the problem has disappeared. So that's opened up a whole new line of inquiry... Edit 3: Changed the title to reflect a better understanding of the problem.

    Read the article

  • maven ant echoproperties task

    - by user373201
    I am new to Maven. I have written build scripts using Ant. I am trying to display all the env properties, user-defined properties, system properties, etc. in Maven. In Ant I could do this with the <echoproperties/> task. I tried to do the same in Maven with the maven-antrun-plugin, but get the following error: Embedded error: Could not create task or type of type: echoproperties. Ant could not find the task or a class this task relies upon. How can I see all properties in Maven, with or without using echoproperties? This is my configuration in Maven:
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>${maven.plugin.antrun.version}</version>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <tasks>
                  <echo>Displaying value of properties</echo>
                  <echo>[org.junit.version] ${org.junit.version}</echo>
                  <echoproperties prefix="org" />
                </tasks>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Read the article

  • Embedded scripting engine in .Net app

    - by Nate
    I am looking to replace an old control being used for scripting an application. The control used to be called SAX Basic, but is now called WinWrap. It provides us with two primary functions. 1) It's a scripting engine (VB) 2) It has a GUI for developing and debugging scripts that get run in the hosting application. The first feature it provides is actually pretty easy to replace. There are so many great methods of running just about any kind of code at runtime that it's almost a non-issue. Just about any language targeting the .Net runtime will work for us. We've looked at running C#, PowerShell, VB.Net, IronPython, etc. I've also taken a brief look at Lua and F#, but honestly the language isn't the biggest barrier here. Now, for the hard part that seems to keep getting me stuck. We want a code editor, and debugger. Something simple, not unlike PowerShell's ISE would be fine. Just as long as a file could be created, saved, debugged and executed. I'm currently looking into Visual Studio 2010 Shell (Isolated) and I'm also looking at the feasibility of embedding PowerShell ISE in my application. Are there any other editor's I could embed/use in my application? Purchasing a product is not out of the question. It comes down to a combination of ease of use, how well it meets our needs, and how simple deployment and licensing is for developers. Thanks for the pointers

    Read the article

  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes from 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched. My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options? As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.
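
    The cache-warming workaround mentioned at the end can be automated with a small script run at the end of the batch process, so the graphs are already generated before any browser asks for them. A hedged sketch; the URL pattern and graph identifiers below are invented placeholders, not the app's real routes.

```python
import urllib.request

# Hypothetical base URL and graph identifiers -- adjust to the real Kohana routes.
BASE = "http://example.com/reports/graph"
GRAPH_IDS = range(1, 14)           # the ~13 graphs on the page


def warm_graph_cache():
    """Request each graph once so it is rendered and cached before any user visits."""
    for graph_id in GRAPH_IDS:
        url = f"{BASE}/{graph_id}.png"
        try:
            with urllib.request.urlopen(url, timeout=60) as resp:
                resp.read()        # force full generation of the image
        except OSError as err:
            print(f"failed to warm {url}: {err}")


if __name__ == "__main__":
    warm_graph_cache()
```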

    Read the article
