Search Results

Search found 10586 results on 424 pages for 'zend rest route'.


  • shell script passing subset of arguments

    - by arav
    From the wrapper shell script I am calling a Java program. I want the Unix shell script to pass all of its arguments to the Java program except the EMAIL argument, which can come at any position. How can I remove the EMAIL argument and pass the rest of the arguments to the Java program?

        valArgs() {
            until [ $# -eq 0 ]; do
                case $1 in
                    -EMAIL) MAILFLAG=Y
                            shift
                            break ;;
                    *)      shift ;;    # skip arguments we are not interested in
                esac
            done
        }

        main() {
            valArgs "$@"
            $JAVA_HOME/bin/java -d64 -jar WEB-INF/lib/test.jar "$@"
        }
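
    A minimal sketch of one way to do the filtering, assuming bash (arrays are not POSIX sh) and assuming -EMAIL is a standalone flag; the function and variable names are illustrative:

        filterArgs() {
            ARGS=()                           # rebuilt argument list, minus -EMAIL
            for arg in "$@"; do
                if [ "$arg" = "-EMAIL" ]; then
                    MAILFLAG=Y                # remember that the flag was present
                else
                    ARGS+=("$arg")            # keep every other argument as-is
                fi
            done
        }

        filterArgs "$@"
        "$JAVA_HOME/bin/java" -d64 -jar WEB-INF/lib/test.jar "${ARGS[@]}"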

    Read the article

  • How to allow multiple inputs from user using R?

    - by Juan
    For example, suppose I need the user to specify the number of rows and columns of a matrix: PROMPT: Number of rows?: USER INPUT: [a number]. I need R to 'wait' for the input, then save [a number] into a variable v1. Next: PROMPT: Number of columns?: USER INPUT: [another number]. Save [another number] into a variable v2. At the end I will have two variables (v1, v2) that will be used in the rest of the code. readline only works for one input at a time; I can't run the two lines together:

        v1 <- readline("Number of rows?: ")
        v2 <- readline("Number of columns?: ")

    Any ideas or suggestions? Thank you in advance.
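
    A minimal sketch of one workaround, assuming the script is run non-interactively with Rscript (where readline() returns an empty string immediately): read the answers from stdin instead.

        con <- file("stdin", blocking = TRUE)
        open(con)
        cat("Number of rows?: ")
        v1 <- as.integer(readLines(con, n = 1))
        cat("Number of columns?: ")
        v2 <- as.integer(readLines(con, n = 1))
        close(con)
        m <- matrix(0, nrow = v1, ncol = v2)   # the two values used later in the code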

    Read the article

  • Dynamic scoping in Clojure?

    - by j-g-faustus
    Hi, I'm looking for an idiomatic way to get dynamically scoped variables in Clojure (or a similar effect) for use in templates and such. Here is an example problem using a lookup table to translate tag attributes from some non-HTML format to HTML, where the table needs access to a set of variables supplied from elsewhere:

        (def *attr-table*
          ; Key: [attr-key tag-name] or [boolean-function]
          ; Value: [attr-key attr-value] (empty array to ignore)
          ; Context: Variables "tagname", "akey", "aval"
          '( ; translate :LINK attribute in <a> to :href
             [:LINK "a"] [:href aval]
             ; translate :LINK attribute in <img> to :src
             [:LINK "img"] [:src aval]
             ; throw exception if :LINK attribute in any other tag
             [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
             ; ... more rules
             ; ignore string keys, used for internal bookkeeping
             [(string? akey)] [] )) ; ignore

    I want to be able to evaluate the rules (left hand side) as well as the result (right hand side), and need some way to put the variables in scope at the location where the table is evaluated. I also want to keep the lookup and evaluation logic independent of any particular table or set of variables. I suppose there are similar issues involved in templates (for example for dynamic HTML), where you don't want to rewrite the template processing logic every time someone puts a new variable in a template. Here is one approach using global variables and bindings. I have included some logic for the table lookup:

        ;; Generic code, works with any table on the same format.
        (defn rule-match? [rule-val test-val]
          "true if a single rule matches a single argument value"
          (cond
            (not (coll? rule-val)) (= rule-val test-val) ; plain value
            (list? rule-val) (eval rule-val)             ; function call
            :else false))

        (defn rule-lookup [test-val rule-table]
          "looks up rule match for test-val. Returns result or nil."
          (loop [rules (partition 2 rule-table)]
            (when-not (empty? rules)
              (let [[select result] (first rules)]
                (if (every? #(boolean %) (map rule-match? select test-val))
                  (eval result) ; evaluate and return result
                  (recur (rest rules)))))))

        ;; Code specific to *attr-table*
        (def tagname) ; need these globals for the binding in html-attr
        (def akey)
        (def aval)

        (defn html-attr [tagname h-attr]
          "converts to html attributes"
          (apply hash-map
            (flatten
              (map (fn [[k v :as kv]]
                     (binding [tagname tagname akey k aval v]
                       (or (rule-lookup [k tagname] *attr-table*) kv)))
                   h-attr))))

        (defn test-attr []
          "test conversion"
          (prn "a" (html-attr "a" {:LINK "www.google.com" "internal" 42 :title "A link"}))
          (prn "img" (html-attr "img" {:LINK "logo.png"})))

        user=> (test-attr)
        "a" {:href "www.google.com", :title "A link"}
        "img" {:src "logo.png"}

    This is nice in that the lookup logic is independent of the table, so it can be reused with other tables and different variables. (Plus of course that the general table approach is about a quarter of the size of the code I had when I did the translations "by hand" in a giant cond.) It is not so nice in that I need to declare every variable as a global for the binding to work. Here is another approach using a "semi-macro", a function with a syntax-quoted return value, that doesn't need globals:

        (defn attr-table [tagname akey aval]
          `( [:LINK "a"]   [:href ~aval]
             [:LINK "img"] [:src ~aval]
             [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
             ; ... more rules
             [(string? ~akey)] [] ))

    Only a couple of changes are needed to the rest of the code. In rule-match?, when syntax-quoted the function call is no longer a list:

        - (list? rule-val) (eval rule-val)
        + (seq? rule-val) (eval rule-val)

    In html-attr:

        - (binding [tagname tagname akey k aval v]
        -   (or (rule-lookup [k tagname] *attr-table*) kv)))
        + (or (rule-lookup [k tagname] (attr-table tagname k v)) kv)))

    And we get the same result without globals. (And without dynamic scoping.) Are there other alternatives to pass along sets of variable bindings declared elsewhere, without the globals required by Clojure's binding? Is there an idiomatic way of doing it, like Ruby's binding or Javascript's function.apply(context)?
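
    One further alternative, offered here only as an illustration (not from the original question): skip eval entirely and write both sides of each rule as functions of an explicit context map, so the "variables" travel as ordinary map entries rather than dynamic bindings.

        ;; A minimal sketch with hypothetical names, assuming rules are
        ;; [match-fn result-fn] pairs instead of forms to eval.
        (def attr-rules
          [[(fn [{:keys [akey tagname]}] (and (= akey :LINK) (= tagname "a")))
            (fn [{:keys [aval]}] [:href aval])]
           [(fn [{:keys [akey tagname]}] (and (= akey :LINK) (= tagname "img")))
            (fn [{:keys [aval]}] [:src aval])]
           [(fn [{:keys [akey]}] (string? akey))
            (fn [_] [])]])

        (defn ctx-lookup [ctx rules]
          ;; return the result of the first rule whose match-fn accepts ctx
          (some (fn [[match? result]]
                  (when (match? ctx) (result ctx)))
                rules))

        ;; (ctx-lookup {:tagname "img" :akey :LINK :aval "logo.png"} attr-rules)
        ;; => [:src "logo.png"]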

    Read the article

  • Design Patterns: What's the antithesis of Front Controller?

    - by Brian Lacy
    I'm familiar with the Front Controller pattern, in which all events/requests are processed through a single centralized controller. But what would you call it when you wish to keep the various parts of an application separate at the presentation layer as well? My first thought was "Facade" but it turns out that's something entirely different. In my particular case, I'm converting an application from a sprawling procedural mess to a clean MVC architecture, but it's a long-term process -- we need to keep things separated as much as possible to facilitate a slow integration with the rest of the system. Our application is web-based, built in PHP, so for instance we have an "index.php" and an IndexController, an "account.php" and an AccountController, a "dashboard.php" and a DashboardController, and so on.

    Read the article

  • Change color of a table cell using javascript using dropdown menu

    - by Mike Burzycki
    I'd like to use some JavaScript code to change the background color of a single cell within a table. I have some code below which allows me to change the page background color. This is similar in concept to what I would like to do, but I would really like to be able to change just one cell... not the whole page. I have thought about making the rest of the cell borders and background colors white, leaving the cell I want to manipulate transparent, but I think this is probably a brute-force method that will cause me trouble down the road. Does anyone have any advice on doing this with JavaScript? The page background color changing code is here:

        <form name="bgcolorForm">Try it now:
          <select onChange="if(this.selectedIndex!=0) document.bgColor=this.options[this.selectedIndex].value">
            <option value="choose">set background color
            <option value="FFFFCC">light yellow
            <option value="CCFFFF">light blue
            <option value="CCFFCC">light green
            <option value="CCCCCC">gray
            <option value="FFFFFF">white
          </select>
        </form>

    Thanks for the help, Mike
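
    A minimal sketch of one way to target a single cell instead of the whole page, assuming the cell is given an id (both the id and the colors here are illustrative):

        <td id="targetCell">Some cell content</td>

        <select onchange="if (this.selectedIndex != 0)
                            document.getElementById('targetCell').style.backgroundColor =
                              '#' + this.options[this.selectedIndex].value;">
          <option value="choose">set cell color</option>
          <option value="FFFFCC">light yellow</option>
          <option value="CCCCCC">gray</option>
          <option value="FFFFFF">white</option>
        </select>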

    Read the article

  • Importing/Exporting Relationships in MS Access

    - by lamcro
    I have a couple of mdb files with the exact same table structure. I have to change the primary key of the main table from autonumber to number in all of them, which means I have to: drop all the relationships the main table has, change the main table, and create the relationships again... for all the tables. Is there any way to export the relationships from one file and import them into all the rest? I am sure this can be done with some macro/VB code. Does anyone have an example I could use? Thanks.
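
    A rough VBA/DAO sketch of the kind of code that can copy relationships from one mdb to another, assuming both files share the same table and field names (the paths and procedure name are placeholders):

        Sub CopyRelationships(srcPath As String, dstPath As String)
            Dim srcDb As DAO.Database, dstDb As DAO.Database
            Dim rel As DAO.Relation, newRel As DAO.Relation
            Dim fld As DAO.Field, newFld As DAO.Field

            Set srcDb = DBEngine.OpenDatabase(srcPath)
            Set dstDb = DBEngine.OpenDatabase(dstPath)

            For Each rel In srcDb.Relations
                ' recreate each relationship with the same tables and attributes
                Set newRel = dstDb.CreateRelation(rel.Name, rel.Table, rel.ForeignTable, rel.Attributes)
                For Each fld In rel.Fields
                    Set newFld = newRel.CreateField(fld.Name)
                    newFld.ForeignName = fld.ForeignName
                    newRel.Fields.Append newFld
                Next fld
                dstDb.Relations.Append newRel
            Next rel

            srcDb.Close
            dstDb.Close
        End Sub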

    Read the article

  • [ZF & jQuery] How can I access an URL using AJAX, receive no response, but just manipulate HTML?

    - by rasouza
    I don't know if it's better done with AJAX (tell me otherwise), but here is my problem. Assuming I'm using Zend Framework, I have a table with several registries from a database, with a delete button on each row. It's like this:

        [...]
        <tbody>
        <?php foreach ($row as $reg) { ?>
            <tr <?php if ($reg['value'] < 0) { echo "class='error'"; } ?>>
                <td><?php echo $reg['creditor'] ?></td>
                <td><?php echo $reg['debtor'] ?></td>
                <td><?php echo $reg['reason'] ?></td>
                <td>R$ <?php echo number_format(abs($reg['value']), 2, ',', ' ')?></td>
                <td><a href="#" id="<?php echo $reg['id']; ?>" class="delete"><img src="http://192.168.0.102/libraries/css/blueprint/plugins/buttons/icons/cross.png" alt=""/></a></td>
            </tr>
        <?php } ?>
        </tbody>
        [...]

    I would like to .fadeOut() and delete a table row (through the link history/delete/id/ROW_ID) when the respective delete button is clicked. My deleteAction() has no render; it really shouldn't have one, it just deletes a row in the database. Still, how can I make it happen? I tried the following, without success:

        // TR fading when deleted
        $('.delete').click(function() {
            $.ajax({
                type: 'GET',
                url: 'history/delete/id/' + $(this).attr('id'),
                success: function() {
                    $(this).parent().parent().fadeOut();
                }
            });
            return false;
        });
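
    For what it's worth, a likely culprit in a snippet like this is that `this` inside the success callback no longer refers to the clicked link. A minimal sketch of one way around that, capturing the row before the request is sent:

        $('.delete').click(function() {
            var row = $(this).closest('tr');       // remember the row while `this` is still the link
            $.ajax({
                type: 'GET',
                url: 'history/delete/id/' + this.id,
                success: function() {
                    row.fadeOut();                 // safe even though `this` means something else here
                }
            });
            return false;
        });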

    Read the article

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword) but I'm unsure of how to return an array of an object which in itself contains two values. The query is something like:

        SELECT t.field1,
               t.field2,
               ARRAY( -- with each element containing two values, i.e. {'TheName', 1}
               )
        FROM MyTable t

    I read that one way to do this is by selecting the values into a type and then creating an array of that type. Problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .NET data provider like Npgsql?) Any help is much appreciated.
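
    A minimal sketch of the composite-type route mentioned above (the type, table, and column names are illustrative):

        CREATE TYPE name_id AS (name character varying, id numeric);

        SELECT t.field1,
               t.field2,
               ARRAY(
                   SELECT ROW(o.name, o.id)::name_id
                   FROM other_table o
                   WHERE o.parent_id = t.id
               ) AS name_ids
        FROM my_table t;

    On the client side a composite value usually arrives in its text form (for example "(TheName,1)") unless the provider is told how to map the type, so it may need parsing in application code.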

    Read the article

  • How to properly log out of facebook

    - by Gublooo
    This is a repeated question and I have followed the suggestions provided in both of these StackOverflow links: "How to log-out users using FaceBook connect in php and zend" and "Trouble logging out of a FaceBook connect site and destroying sessions". The issue is that the code works 90% of the time. That's the weird part. Out of the 100 times I've logged in and out I've experienced this problem 5-6 times, and 2 of my beta test users have reported the same issue. When it works, clicking the logout link gives you the Facebook popup saying you are being logged out. When it doesn't work, absolutely nothing happens: the page does not refresh, it just sits there doing nothing. This is the JavaScript code that gets called on clicking logout:

        function logout() {
            FB.Connect.get_status().waitUntilReady(function(status) {
                switch(status) {
                    case FB.ConnectState.connected:
                        FB.Connect.logoutAndRedirect("http://www.example.com/login/logout");
                        break;
                    case FB.ConnectState.userNotLoggedIn:
                        window.location = "http://www.example.com/login/logout";
                        break;
                }
            });
            return false;
        }

    This is the PHP code:

        $this->_auth->clearIdentity();
        $face = Zend_Registry::get('facebook');
        $fb = new Facebook($face['appapikey'], $face['appsecret']);
        //$fb->clear_cookie_state();
        $fb->expire_session();

    Has anyone experienced such sporadic issues? Thanks

    Read the article

  • searching between dates in MYSQL in this format 03/17/10.11:22:45

    - by Kelso
    I have a script that automatically populates a MySQL database with data every hour. It populates the date field like 03/17/10.12:34:11 and so on. I'm working on pulling data based on one day at a time from a search script. If I use

        SELECT * FROM call_logs
        WHERE call_initiated BETWEEN '03/17/10.12:00:00' AND '03/17/10.13:00:00'

    it works, but when I try to add the rest of the search params, it ignores the call_initiated field:

        SELECT * FROM call_logs
        WHERE caller_dn='2x9xxx0000' OR called_dn='2x9xxx0000'
        AND call_initiated BETWEEN '03/17/10.12:00:00' AND '03/17/10.13:00:00'

    (I x'd out a couple of the numbers.) I've also tried without the BETWEEN function, and used >= and <= to pull the records, but with the same results. I'm sure it's an oversight, thanks in advance.
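
    For reference, a likely cause in a query like this is operator precedence: AND binds tighter than OR, so the date condition only attaches to the called_dn comparison. A minimal sketch with explicit parentheses (column and table names as in the question):

        SELECT *
        FROM call_logs
        WHERE (caller_dn = '2x9xxx0000' OR called_dn = '2x9xxx0000')
          AND call_initiated BETWEEN '03/17/10.12:00:00' AND '03/17/10.13:00:00';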

    Read the article

  • twitter's profile widget include in a separate javascript file

    - by raulricardo21
    Hi stackoverflowers, I'm just trying to put the custom Twitter profile widget on my page, but I'd like to put the code in a separate JavaScript file, and I don't know how to do that. I mean, I put this in the head tag:

        <script type="text/javascript" src="http://widgets.twimg.com/j/2/widget.js"></script>

    then create a div for the widget, and put the rest of the code in another JavaScript file:

        new TWTR.Widget(json_twitter_options).render().setUser('username').start();

    But how do I "put" the result in that widget? I'm totally lost, thanks in advance.

    Read the article

  • Kill a Perl system call after a timeout

    - by Fergal
    I've got a Perl script I'm using for running a file processing tool which is started using backticks. The problem is that occasionally the tool hangs and it needs to be killed in order for the rest of the files to be processed. What's the best way to apply a timeout after which the parent script will kill the hung process? At the moment I'm using:

        foreach $file (@FILES) {
            $runResult = `mytool $file >> $file.log`;
        }

    But when mytool hangs after n seconds I'd like to be able to kill it and continue to the next file.
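
    A minimal sketch of one common approach, assuming a Unix-like system: fork so the parent knows the child's pid, then use alarm to kill the child after a timeout (the 60-second limit and the @FILES setup are illustrative):

        use strict;
        use warnings;

        my @FILES = @ARGV;    # or however the file list is built

        foreach my $file (@FILES) {
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;

            if ($pid == 0) {
                # child: send output to the log, then run the tool directly (no shell)
                open STDOUT, '>>', "$file.log" or die "cannot open $file.log: $!";
                exec 'mytool', $file or die "exec failed: $!";
            }

            # parent: wait for the child, but give up after 60 seconds
            eval {
                local $SIG{ALRM} = sub { die "timeout\n" };
                alarm(60);
                waitpid($pid, 0);
                alarm(0);
            };
            if ($@ && $@ eq "timeout\n") {
                kill 'KILL', $pid;          # kill the hung tool
                waitpid($pid, 0);           # reap it
                warn "mytool timed out on $file, moving on\n";
            }
        }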

    Read the article

  • Programmer Health - how to avoid going blind and sick!

    - by stefanyko
    Hi all! This is my very first time here, so nice to meet you guys! ;-) When I was starting this job (when I chose to do this for the rest of my life) I also thought that one day, sooner or later, I would go blind, or at least get sick from drinking 5 or more coffees per day and, of course, from sitting at my PC for hours. ;-\ For many years now I've been asking myself how my eyes can stay in front of the monitor for so many hours per day. Well, now I'm at a point of no return! I feel my eyes getting more tired each day, and my productivity is waning, but I can't change work now and I don't want to! What do I need to do to prevent this from becoming a more serious problem for me and for my eyes? Any suggestion will be really appreciated! Thanks!

    Read the article

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

        1 + 2
        OS top layer
        Runtime library/helper/error handler
        A hell of a lot of DLL modules
        OS kernel layer
        "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
        OS kernel layer
        Hardware abstraction
        Hardware
        Go through at least 100 miles of circuits
        Eventually arrive at the CPU
        ADD 1, 2
        Go all the way back to my program

    Nearly all the technical details are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter? (Python|Ruby|PHP) Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? E.g.: a direct connection my binary <-> hardware?

    Read the article

  • What is the basic design idea behind the Scala for-loop implicit box/unboxing of numerical types?

    - by IODEV
    I'm trying to understand the behavior of the Scala for loop's implicit boxing/unboxing of "numerical" types. Why do the first two fail but not the rest?

    1) Fails:

        scala> for (i:Long <- 0 to 10000000L) {}
        <console>:19: error: type mismatch;
         found   : Long(10000000L)
         required: Int
               for (i:Long <- 0 to 10000000L) {}
                                   ^

    2) Fails:

        scala> for (i <- 0 to 10000000L) {}
        <console>:19: error: type mismatch;
         found   : Long(10000000L)
         required: Int
               for (i <- 0 to 10000000L) {}
                              ^

    3) Works:

        scala> for (i:Long <- 0L to 10000000L) {}

    4) Works:

        scala> for (i <- 0L to 10000000L) {}
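
    For context, a hedged sketch of why this happens as I understand it: `0 to 10000000L` calls `to` on the Int 0, and the Int version of `to` only accepts an Int argument, so the Long literal does not fit. Starting from a Long literal builds the range from a Long instead:

        // REPL-style sketch; the point is where the range itself is built
        val intRange  = 0 to 10            // Range of Int
        val longRange = 0L to 10000000L    // NumericRange[Long]

        for (i <- 0L to 10000000L) {}      // works: the range is built from a Long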

    Read the article

  • Need alternative field names for these reserved words

    - by MattSlay
    “type” and “class” are likely reserved or problematic words in C# and/or Ruby, two languages I may use to program against my new database schema in the future. So, in order to avoid potential conflicts with those languages, I’m looking for alternative names for these field names in my tables. In this case, it is from my Machines table, where I have: “class” field (values would be something like “manual” or “computerized”) and “type” field (values would be “lathe” or “mill”) I could call the fields “machineclass” and “machinetype”, but that is inconsistent with naming scheme in the rest of my schema (meaning, I do not re-use the table name in the field… For instance, I use Machine.name, not Machine.machinename) Any thought on this madness?

    Read the article

  • Visual Studio Pre build events and batch set

    - by helloworld922
    Hi, I'm trying to call a batch file which sets a bunch of environment variables prior to building. The batch file looks something like this (it's automatically generated beforehand to detect the ATI Stream SDK or the NVIDIA CUDA toolkit):

        set OCL_LIBS_X86="%ATISTREAMSDKROOT%libs\x86"
        set OCL_LIBS_X64="%ATISTREAMSDKROOT%libs\x86_64"
        set OCL_INCLUDE="%ATISTREAMSDKROOT%include"

    However, the rest of the build doesn't seem to have access to these variables, so when I try to reference $(OCL_INCLUDE) in C/C++ > General > Additional Include Directories, it will first give me a warning that the environment variable $(OCL_INCLUDE) was not found, and when I try to include CL/cl.hpp the compile will fail with:

        fatal error C1083: Cannot open include file: 'CL/cl.hpp': No such file or directory

    I know that I could put these variables into the registry if I wanted to access them from the Visual Studio GUI, but I would really prefer not to do this. Is there a way to get these environment variables to stick after the pre-build events? I can't reference $(ATISTREAMSDKROOT) directly because the project must be able to build for both ATI Stream and NVIDIA CUDA.

    Read the article

  • EntityManager and two DAO with PersistenceContextType.EXTENDED

    - by hsd
    Hi all, I have a problem with the entity manager in my application. I have two DAO classes like this:

        @Repository
        public abstract class DaoA {
            protected ClassA persistentClass;

            @PersistenceContext(name="my.persistence", type=PersistenceContextType.EXTENDED)
            protected EntityManager entityManager;

            // -------------- some typical actions for a DAO --------------
        }

    The second DAO is for ClassB and looks similar to DaoA. The rest is done for me by the Spring framework. When I'm debugging the application I notice that both DAO objects have different instances of EntityManager. As a result, my two different DAOs are connected to different persistence contexts. The question is whether this is correct behaviour or not. I would like to have the same PersistenceContext for all my DAO classes. Please give me a hint if this is possible and whether I have understood JPA correctly. Regards, Hsd

    Read the article

  • Getting "on the wire" Size of Messages in WCF

    - by Mystagogue
    While I'm making SOAP or REST invocations to WCF, I'd like to have the channel stack on either end (client and server) record the on-the-wire size of the data received. So I'm guessing I need to add a custom behavior to the channel stack on either side. That is, on the server side I'd record the IP-header advertised size that was received. On the client side I'd record the IP-header advertised size that was returned from the server. But this presupposes that this information is visible to a custom WCF behavior at the channel stack level. Perhaps it is only visible at the level of ASP.NET (at a layer beneath WCF)? In short, does anyone have any further insight on if and how this information is accessible? I must qualify that this "size" data will be collected in a production environment, as part of regular business logic calls. This question is related to my earlier bandwidth question.

    Read the article

  • Error calling webservice from JQuery

    - by Robban
    I have a strange problem when I'm trying to call a simple webservice method from jQuery. Locally it works fine, but on my test server it does not. The jQuery request looks like this (only showing the actual request and not the rest of the method):

        $.ajax({
            type: "POST",
            url: "/Service/Service.asmx/AddTab",
            data: "tab=" + element.innerHTML,
            success: function(msg) {
                alert('success');
            }
        });

    When I run this locally, rather than from the test server, it works fine, which has me wondering if it could be some setting that I've missed in IIS. If I navigate to the .asmx file and click the AddTab method I get a list of SOAP 1.1 and SOAP 1.2 XML, but not the HTTP POST request. If I navigate to it locally I get all three (SOAP 1.1, SOAP 1.2 and HTTP POST). The service is set up as follows:

        [WebService(Namespace = "mynamespace")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.ComponentModel.ToolboxItem(false)]
        [ScriptService()]
        public class Service : System.Web.Services.WebService
        {
            [WebMethod(EnableSession=true)]
            [ScriptMethod()]
            public void AddTab(string tab)
            {
                // Some code to add a tab which evidently works locally...
            }
        }

    Anyone have a clue what I'm missing here?
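
    For reference, a hedged guess at the kind of IIS/ASP.NET setting involved: if the HTTP POST protocol is only enabled for localhost (the default), the POST binding disappears from the remote .asmx test page, and it can be re-enabled in web.config. This is only a sketch of that configuration, not a confirmed diagnosis:

        <configuration>
          <system.web>
            <webServices>
              <protocols>
                <add name="HttpPost" />
                <!-- HttpGet can be enabled the same way if needed -->
              </protocols>
            </webServices>
          </system.web>
        </configuration>

    Alternatively, since the service is marked [ScriptService], posting with contentType "application/json; charset=utf-8" and a JSON body (e.g. '{"tab":"..."}') goes through the script-service handler instead of the plain HTTP POST protocol.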

    Read the article

  • Is Form validation and Business validation too much?

    - by Robert Cabri
    I've got a question about form validation and business validation. I see a lot of frameworks that use some sort of form validation library: you submit some values and the library validates the values from the form. If something is not OK it will show some errors on your screen. If all goes to plan, the values are set on domain objects, where they will be, or better said should be, validated again, most likely with the same validation as in the validation library. I know two PHP frameworks that have this kind of construction: Zend and Kohana. When I look at principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this doesn't seem like a good way, since it validates twice. Why not create domain objects that do the actual validation? Example: a form with username and email is submitted. The values of the username field and the email field are populated into two different domain objects, Username and Email:

        class Username {}
        class Email {}

    These objects validate their data and, if not valid, throw an exception. Do you agree? What do you think about this approach? Is there a better way to implement validation? I'm confused about how a lot of frameworks/developers handle this stuff. Are they all wrong, or am I missing a point? Edit: I know there should also be some kind of client-side validation. That is a different ballgame in my opinion. If you have comments on this and a way to deal with this kind of stuff, please share.
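
    A minimal PHP sketch of the value-object idea described above, purely illustrative (the class name and the use of filter_var are assumptions, not taken from any particular framework):

        class Email
        {
            private $value;

            public function __construct($value)
            {
                if (!filter_var($value, FILTER_VALIDATE_EMAIL)) {
                    throw new InvalidArgumentException("Invalid email: $value");
                }
                $this->value = $value;
            }

            public function __toString()
            {
                return $this->value;
            }
        }

        // usage: the form layer catches the exception and maps it to a field error
        $email = new Email($_POST['email']);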

    Read the article

  • Why does my custom component raise AVs in the IDE?

    - by Mason Wheeler
    I'm trying to write a simple component that will allow you to embed one or more SDL rendering surfaces on a Delphi window, using the SDL 1.3 APIs. It will compile and install just fine, but when I try to use the component in the form designer, it raises AVs whenever I try to access its properties in the object inspector, save the form, or delete the component, and placing one on a form then trying to run gives a linker error: it apparently can't read the DFM properly for whatever reason. The DLL can be found at http://www.libsdl.org/tmp/SDL-1.3-dll.zip and the source code to my component can be downloaded here. SDL.pas is a JEDI-SDL header file; the rest is my own code. I don't see any reason for this to raise AVs in the form designer. If I dynamically create the control at runtime I don't have any stability issues. Can anyone take a look at this and maybe provide some feedback that might help me clear it up?

    Read the article

  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error:

        File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107

    ... where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only for Doctrine. Currently I am using a filthy hack where I have Loader.php check whether the class is one of these two cases and, if so, simply return rather than trying to load it. Obviously, this is unacceptable longer term. Has anybody encountered this, and how did you solve it? Edited to add: we have:

        class ProjectConfiguration extends sfProjectConfiguration
        {
            public function setup()
            {
                // for compatibility / remove and enable only the plugins you want
                $this->enableAllPluginsExcept(array('sfDoctrinePlugin'));
            }
        }

    And we have a propel.ini file in our top level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.

    Read the article

  • How to setup directories in Visual Studio when using boost?

    - by Rich
    Hi, I have introduced boost into our code base. On my machine I created a boost directory called Thirdparty.Boost and added that as an additional include directory in my Visual Studio settings; all is fine. However, I now want to check in my changes so the rest of the team can get them. In order to build the code they would need to set up boost as I have (problem number 1). In addition we have a build server, which will need changing (problem 2). I have a way of distributing boost to everyone including the build server, so that's not a problem. I need a way of referring to the boost directory without changing the default settings in Visual Studio. "Why don't you change it at the project level?" I hear you cry. The solution has over 200 projects, which would require a lot of changes. I just wondered if there was another way? Cheers, Rich
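
    A hedged sketch of one option: a shared MSBuild property sheet (a VS2010-style .props file; older versions use .vsprops with a UserMacro) that defines the boost path in one checked-in file. The file and property names here are illustrative; adding the sheet to 200 projects is still a one-off change, though it can be scripted or done through the per-machine user property sheet instead.

        <!-- boost.props: attach to projects via the Property Manager -->
        <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <!-- assumed relative location of the checked-in Thirdparty.Boost folder -->
            <BoostRoot>$(SolutionDir)Thirdparty.Boost</BoostRoot>
          </PropertyGroup>
          <ItemDefinitionGroup>
            <ClCompile>
              <AdditionalIncludeDirectories>$(BoostRoot);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
            </ClCompile>
          </ItemDefinitionGroup>
        </Project>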

    Read the article

  • Need help understanding _set_security_error_handler()

    - by Emil D
    So, I've been reading this article: http://msdn.microsoft.com/en-us/library/aa290051%28VS.71%29.aspx and I would like to define my own custom handler. However, I'm not sure I understand the mechanics well. What happens after a call is made to the user-defined function (i.e. the argument of _set_security_error_handler())? Does the program still terminate afterwards? If that is the case, is it possible to terminate only the current thread (assuming that it is not the main thread of the application)? AFAIK each thread has its own stack, so if the stack of a thread gets corrupted, the rest of the application shouldn't be affected. Finally, if it is indeed possible to only terminate the current thread of execution, what potential problems could such an action cause? I'm trying to do all this inside an unmanaged C++ DLL that I would like to use in my C# code.

    Read the article
