Search Results

Search found 5425 results on 217 pages for 'drupal modules'.

Page 193/217

  • Strings exported from a module have changed line breaks

    - by Jesse Millikan
    In a DrScheme project, I'm using a MrEd editor-canvas% with text% and inserting a string from a literal in a Scheme file. This results in an extra blank line in the editor for each line of text I'm trying to insert. I've tracked this down to the apparent fact that string literals from outside modules are getting extra line breaks. Here's a full example. The editor is irrelevant at this point, but it displays the result. ; test-literals.ss (module test-literals scheme (provide (all-defined-out)) (define exported-string "From another module with some more line breaks. ")) ; editor-test.ss (module editor-test scheme (require mred "test-literals.ss") (define w (instantiate frame% ("Editor Test" #f) )) (define c (instantiate editor-canvas% (w) (line-count 12) (min-width 400))) (define editor (instantiate text% ())) (send c set-editor editor) (send w show #t) (send editor erase) (send editor insert "Some text with some line breaks. ") (send editor insert exported-string)) And the result in the editor is Some text with some line breaks. From another module with some more line breaks. I've traced in and figured out that it's changing Unix line breaks to Windows line breaks when strings are imported from another module, but these display as double line breaks. Why is this happening and is there a way to stop it other than changing every imported string?

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now. Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub. What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)

    Read the article

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that my API is stable in terms of specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test:

        WSArray a = WSArrayCreate();
        int foo = 5;
        WSArrayAppendValue(a, &foo);
        int *bar = WSArrayGetValueAtIndex(a, 0);
        if(&foo != bar)
            printf("Erroneous value returned\n");
        else
            printf("Good value returned\n");
        WSRelease(a);

    Of course this tests some facts, like the array actually behaving as wanted with one value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know whether some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking my unit test statically to my library. For example:

        #include <WS/WS.h>
        #include <WS/Collection/Array.c>

        static void TestArray(void) {
            WSArray a = WSArrayCreate();
            /* Structure members are available because we included Array.c */
            printf("%d\n", a->count);
        }

    Is that a good idea? Of course the unit tests won't benefit from encapsulation, but they are here to ensure it's actually working.

    Read the article

  • Delphi App has "No Debug Info" when Debugging

    - by James L.
    We have built an application that uses packages and components. When we debug the application, the "Event Log" in the IDE often shows that our BPLs are being loaded without debug information ("No Debug Info"). This doesn't make sense, because all our packages and EXEs are built with debug:

        (each project) | Options | Compiling
            [ x ] Assertions
            [ x ] Debug information
            [ x ] Local symbols
            Symbol reference info = "Reference info"
            [   ] Use debug .dcus
            [ x ] Use imported data references

        (each project) | Options | Linking
            [ x ] Debug information
            Map file = Detailed

    We have 4 projects, all built with runtime packages:

        1. Core.bpl
        2. Components.bpl
        3. Plugin.bpl (uses both #1 & #2)
        4. MainApp.exe (uses #1)

    Problems observed:

    1) Many times when we debug, Components.bpl is loaded with debug info, but all values in the "Local Variables" window are blank. If you hover your mouse over a variable in the code, there is no popup, and the Evaluate window also shows nothing (the "Result" pane is always blank).

    2) Sometimes the Event Log shows "No Debug Info" for various BPLs. For instance, if we activate the Plugin.bpl project, set its Run | Parameters | Host Application to MainApp.exe and then press F9, all modules seem to load with "Has Debug Info" except for the Plugin.bpl module; when it loads, the Event Log shows "No Debug Info". However, if we close the app and immediately press F9, it will run again without recompiling anything, and this time Plugin.bpl is loaded with debug ("Has Debug Info").

    Questions:

    1) What would cause the "Local Variables" window to not display the values?

    2) Why are BPLs sometimes loaded without debug info when the BPL was compiled with debug and all the debug files (dcu, map, etc.) are available?

    Read the article

  • Binding type variables that only occur in assertions

    - by Giuseppe Maggiore
    Hi! I find it extremely difficult to describe my problem, so here goes nothing: I have a bunch of assertions on the type of a function. These assertions rely on a type variable that is not used for any parameter of the function, but is only used for internal bindings. Whenever I use this function it does not compile because, of course, the compiler has no information from which to guess what type to bind my type variable. Here is the code: {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances, FlexibleContexts, EmptyDataDecls, ScopedTypeVariables, TypeOperators, TypeSynonymInstances #-} class C a a' where convert :: a -> a' class F a b where apply :: a -> b class S s a where select :: s -> a data CInt = CInt Int instance S (Int,String) Int where select (i,_) = i instance F Int CInt where apply = CInt f :: forall s a b . (S s a, F a b) => s -> b f s = let v = select s :: a y = apply v :: b in y x :: Int x = f (10,"Pippo") And here is the generated error: FunctorsProblems.hs:21:4: No instances for (F a Int, S (t, [Char]) a) arising from a use of `f' at FunctorsProblems.hs:21:4-17 Possible fix: add an instance declaration for (F a Int, S (t, [Char]) a) In the expression: f (10, "Pippo") In the definition of `x': x = f (10, "Pippo") Failed, modules loaded: none. Prelude>

    Read the article

  • APC values randomly disappear

    - by Michael
    I'm using APC for storing a map of class names to class file paths. I build the map like this in my autoload function:

        $class_paths = apc_fetch('class_paths');
        // If the class path is stored in application cache - search finished.
        if (isset($class_paths[$class])) {
            return require_once $class_paths[$class];
        // Otherwise search in known places
        } else {
            // List of places to look for class
            $paths = array(
                '/src/',
                '/modules/',
                '/libs/',
            );
            // Search directories and store path in cache if found.
            foreach ($paths as $path) {
                $file = DOC_ROOT . $path . $class . '.php';
                if (file_exists($file)) {
                    echo 'File was found in => ' . $file . '<br />';
                    $class_paths[$class] = $file;
                    apc_store('class_paths', $class_paths);
                    return require_once $file;
                }
            }
        }

    I can see that as more and more classes are loaded, they are added to the map, but at some point apc_fetch returns NULL in the middle of a page request, instead of returning the map:

        Getting => class_paths
        Array
        (
            [MCS\CMS\Helper\LayoutHelper] => /Users/mbl/Documents/Projects/mcs_ibob/core/trunk/src/MCS/CMS/Helper/LayoutHelper.php
            [MCS\CMS\Model\Spot] => /Users/mbl/Documents/Projects/mcs_ibob/core/trunk/src/MCS/CMS/Model/Spot.php
        )

        Getting => class_paths
        {null}

    Many times the cached value will also be gone between page requests. What could be the reason for this? I'm using APC as an extension (PECL), running PHP 5.3.
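
    A defensive variant may be worth running while hunting the real cause (common suspects with APC's user cache are TTL settings such as apc.ttl/apc.user_ttl, the cache filling up and being cleared, or different SAPIs/processes not sharing the same cache). The sketch below is not the original code: the fallback logic and the spl_autoload_register() wiring are additions, and it simply rebuilds map entries whenever apc_fetch() comes back empty.

        <?php
        // Sketch: an APC-backed autoloader that treats a missing or evicted
        // 'class_paths' entry as a cache miss instead of an error.
        // DOC_ROOT and the search paths are taken from the question.
        function autoload_with_apc($class)
        {
            $map = apc_fetch('class_paths', $success);
            if (!$success || !is_array($map)) {
                $map = array();                      // evicted, expired or never stored
            }

            if (isset($map[$class]) && file_exists($map[$class])) {
                require_once $map[$class];
                return;
            }

            foreach (array('/src/', '/modules/', '/libs/') as $path) {
                $file = DOC_ROOT . $path . $class . '.php';
                if (file_exists($file)) {
                    $map[$class] = $file;
                    apc_store('class_paths', $map);  // re-store the whole map
                    require_once $file;
                    return;
                }
            }
        }

        spl_autoload_register('autoload_with_apc');

    If the entries also vanish when the script is run from the CLI or another SAPI, that points at the caches simply not being shared between processes rather than at eviction.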

    Read the article

  • PHP: Retrieving JSON via jQuery ajax help

    - by iamjonesy
    Hey, I have a script that creates and echoes a JSON-encoded array of Magento products. I have another script that calls it using jQuery's ajax function, but I'm not getting a proper response. This is the script that creates the array:

        $collection = Mage::getModel('catalog/product')->getCollection();
        $collection->addAttributeToSelect('name');
        $collection->addAttributeToSelect('price');
        $products = array();
        foreach ($collection as $product){
            $products[$product->getPrice()] = $product->getName();
        }
        header('Content-Type: application/x-json; charset=utf-8');
        echo(json_encode($products));

    Here is my jQuery:

        <select id="products">
            <option value="#">Select</option>
        </select>
        <script type="text/javascript">
        jQuery.noConflict();
        jQuery(document).ready(function(){
            jQuery.ajax({
                type: "GET",
                url: "http://localhost.com/magento/modules/products/get.php",
                dataType: "json",
                success: function(products) {
                    jQuery.each(products,function(price,name) {
                        var opt = jQuery('<option />');
                        opt.val(name);
                        opt.text(price);
                        jQuery('#products').append(opt);
                    });
                }
            });
        });
        </script>

    I'm getting a response from this, but I'm not seeing any JSON. I'm using Firebug: I can see there has been a JSON-encoded response, but the response tab is empty and my select box has no options. Can anyone see any problems with my code? Here is the response I should get:

        {"82.9230":"Dummy","177.0098":"Dummy 2","76.0208":"Dummy 3","470.6054":"Dummy 4","357.0083":"Dummy Product 5"}

    Thanks, Billy
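
    One thing worth ruling out on the PHP side, independent of the jQuery code, is stray output and the content type: if anything is echoed before or after the JSON, or if the page making the call is not served from the same host as localhost.com, the browser's same-origin policy can leave the response unreadable even though the request shows up in Firebug. A minimal, hedged sketch of the tail end of get.php (the hard-coded data is only there so the snippet runs on its own):

        <?php
        // Sketch: emit only the JSON, with the conventional JSON content type.
        // In the real script $products comes from the product collection.
        $products = array('82.9230' => 'Dummy', '177.0098' => 'Dummy 2');

        if (ob_get_length()) {
            ob_clean();                                        // drop stray whitespace/notices
        }
        header('Content-Type: application/json; charset=utf-8');
        echo json_encode($products);
        exit;                                                  // nothing may follow the JSON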

    Read the article

  • Why this Either-monad code does not type check?

    - by pf_miles
        instance Monad (Either a) where
            return = Left
            fail = Right
            Left x >>= f = f x
            Right x >>= _ = Right x

    This code fragment in 'baby.hs' caused the horrible compilation error:

        Prelude> :l baby
        [1 of 1] Compiling Main ( baby.hs, interpreted )

        baby.hs:2:18:
            Couldn't match expected type `a1' against inferred type `a'
              `a1' is a rigid type variable bound by
                   the type signature for `return' at <no location info>
              `a' is a rigid type variable bound by
                  the instance declaration at baby.hs:1:23
            In the expression: Left
            In the definition of `return': return = Left
            In the instance declaration for `Monad (Either a)'

        baby.hs:3:16:
            Couldn't match expected type `[Char]' against inferred type `a1'
              `a1' is a rigid type variable bound by
                   the type signature for `fail' at <no location info>
              Expected type: String
              Inferred type: a1
            In the expression: Right
            In the definition of `fail': fail = Right

        baby.hs:4:26:
            Couldn't match expected type `a1' against inferred type `a'
              `a1' is a rigid type variable bound by
                   the type signature for `>>=' at <no location info>
              `a' is a rigid type variable bound by
                  the instance declaration at baby.hs:1:23
            In the first argument of `f', namely `x'
            In the expression: f x
            In the definition of `>>=': Left x >>= f = f x

        baby.hs:5:31:
            Couldn't match expected type `b' against inferred type `a'
              `b' is a rigid type variable bound by
                  the type signature for `>>=' at <no location info>
              `a' is a rigid type variable bound by
                  the instance declaration at baby.hs:1:23
            In the first argument of `Right', namely `x'
            In the expression: Right x
            In the definition of `>>=': Right x >>= _ = Right x

        Failed, modules loaded: none.
        Prelude>

    Why does this happen? And how could I make this code compile? Thanks for any help~

    Read the article

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this: remote: Counting objects: 4648, done. remote: Compressing objects: 100% (2837/2837), done. error: git-upload-pack: git-pack-objects died with error.B/s fatal: git-upload-pack: aborting due to possible repository corruption on the remote side. remote: aborting due to possible repository corruption on the remote side. fatal: early EOF fatal: index-pack failed I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while local is 1.6.0.4 I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail... Edit: I have a number of windows binaries in the repo, because I just built the python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the windows binaries and push to a new repo, I can clone again, perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.

    Read the article

  • Best practises for Magento Deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage I can't see any requirement for changing the DB, so the three databases will be manually maintained. Specifically:

    - How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
    - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    - Do you restrict developers to editing specific files/folders?
    - Do you restrict theme designers to editing specific files/folders?
    - How do you manage database changes?

    Theme Designer Files/Folders

    Designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension Developer Files/Folders

    Extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management

    As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    - Overriding the base URL in PHP (blog article on setting up dev and staging databases).
    - Changing the base URL in the database after copying. (Where is this stored? See the sketch after this list.)
    - Doing a MySQLDump or backup, then doing a replace on the URL in the SQL file.
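
    On the "Where is this stored?" point: in a stock Magento install the base URLs live in the core_config_data table, under the paths web/unsecure/base_url and web/secure/base_url (per store scope if several stores are configured). The following is only a rough sketch of the "change it after copying" option; the DSN, credentials and target URL are placeholders, not values from the question.

        <?php
        // Point a freshly copied database at the new environment's host.
        // core_config_data and the two web/*/base_url paths are the standard
        // Magento locations; everything else here is a placeholder.
        $pdo = new PDO('mysql:host=localhost;dbname=magento_uat', 'dbuser', 'dbpass');
        $stmt = $pdo->prepare(
            "UPDATE core_config_data
                SET value = :url
              WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url')"
        );
        $stmt->execute(array(':url' => 'http://uat.example.com/'));
        // Clear var/cache afterwards so the old URLs are not served from the
        // configuration cache.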

    Read the article

  • Beginning Haskell: "not in scope" Unprecedented error

    - by user1071838
    So I just started learning Haskell, and this (http://learnyouahaskell.com) nifty book is giving a lot of help. So yesterday I wrote in a text file doubleMe x = x + x and saved it as double.hs. So after saving that I open up my command prompt, CD to the right folder, type in "ghci" to get haskell started, and then type in >doubleMe 5 10 and everything seems to work. Now today, I do the same thing and this happens (actual copy paste from command line) . . . C:\Users\myName\haskell>ghci GHCi, version 7.0.3: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Loading package ffi-1.0 ... linking ... done. Prelude> :l double.hs [1 of 1] Compiling Main ( double.hs, interpreted ) Ok, modules loaded: Main. *Main> doubleMe 5 <interactive>:1:1: Not in scope: `doubleMe' So basically, everything was working fine, but now haskell can't find the function I wrote in double.hs. Can anyone tell what is going on? I'm pretty lost and confused. This is just a guess but does it have to do with *Main at all? Thanks for the help.

    Read the article

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    - Read data directly from the mainframe and produce reports based on it, possibly using SQL queries.
    - Read and write data from custom .NET applications.

    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • WYSIWYG in Doxygen

    - by Adam Shiemke
    I'm working on a fairly large project written in C. The idea was to build a library of modular blocks that can be reused across several platforms. Each module is associated with a Word document in .docx format (a huge pain to diff-merge). In these docs an interface section is specified, listing data types and publicly accessible functions. These were often inconsistent with the actual implementation in code, and wading through all this documentation was a pain. I've been working to switch to Doxygen to simplify document management. I haven't found a good way to embed the previously written documentation into the Doxygen output. I've copy-pasted it into sections and used modules to group the sources together, but the document sections look ugly in the comments (the output is pretty), and since Doxygen takes a while to parse through our code (about 30 minutes), validating formatting is a pain. Is there some way to WYSIWYG large blocks of documentation into Doxygen? I feel this would increase the number of people documenting their code, and the quality of that documentation. I considered linking to HTML, but that splits out the documentation. I also considered putting it inline in HTML, but this also seems like a pain and would mean everyone needs a WYSIWYG HTML editor (or some HTML skillz). Any ideas on how to make things easier and prettier? Thanks loads.

    Read the article

  • How can I use `pipe` to facilitate interprocess communication in Perl?

    - by Shiftbit
    Can anyone explain how I can successfully get my processes communicating? I find the perldoc on IPC confusing. What I have so far is: $| = 1; $SIG{CHLD} = {wait}; my $parentPid = $$; if ($pid = fork();) ) { if ($pid == 0) { pipe($parentPid, $$); open PARENT, "<$parentPid"; while (<PARENT>) { print $_; } close PARENT; exit(); } else { pipe($parentPid, $pid); open CHILD, ">$pid"; or error("\nError opening: childPid\nRef: $!\n"); open (FH, "<list") or error("\nError opening: list\nRef: $!\n"); while(<FH>) { print CHILD, $_; } close FH or error("\nError closing: list\nRef: $!\n"); close CHILD or error("\nError closing: childPid\nRef: $!\n); } else { error("\nError forking\nRef: $!\n"); } First: What does perldoc pipe mean by READHANDLE, WRITEHANDLE? Second: Can I implement a solution without relying on CPAN or other modules?

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: What do you use for configurable (and preferably captured) logging inside your C++ bits in a Python project? Details follow. Say you have a a few compiled .so modules that may need to do some error checking and warn user of (partially) incorrect data. Currently I'm having a pretty simplistic setup where I'm using logging framework from Python code and log4cxx library from C/C++. log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed and I'm thinking how to make it more flexible. Couple of choices that I see: One way to control it would be to have a module-wide configuration call. # foo/__init__.py import sys from _foo import import bar, baz, configure_log configure_log(sys.stdout, WARNING) # tests/test_foo.py def test_foo(): # Maybe a custom context to change the logfile for # the module and restore it at the end. with CaptureLog(foo) as log: assert foo.bar() == 5 assert log.read() == "124.24 - foo - INFO - Bar returning 5" Have every compiled function that does logging accept optional log parameters. # foo.c int bar(PyObject* x, PyObject* logfile, PyObject* loglevel) { LoggerPtr logger = default_logger("foo"); if (logfile != Py_None) logger = file_logger(logfile, loglevel); ... } # tests/test_foo.py def test_foo(): with TemporaryFile() as logfile: assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5 assert logfile.read() == "124.24 - foo - INFO - Bar returning 5" Some other way? Second one seems to be somewhat cleaner, but it requires function signature alteration (or using kwargs and parsing them). First one is.. probably somewhat awkward but sets up entire module in one go and removes logic from each individual function. What are your thoughts on this? I'm all ears to alternative solutions as well. Thanks,

    Read the article

  • general learning methodology

    - by momo
    I just wanted to hear about the different general learning paths people embark on when learning a new language/framework. The one I currently use, which is how I learned bash and am currently learning Python, is:

    1. An instant hacking tutorial (a very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc., and how they are generally used).
    2. An in-depth tutorial with good programming style, slightly topic-specific (e.g. Mark Pilgrim's Dive into Python). Important topics for me personally are regex methods, file I/O, and the ways the different data types are best used (I wrote a very primitive Bayesian spam filter using Python's dictionaries to keep track of word occurrences).
    3. Spaced repetition of syntax or short recipes (I use Anki, with questions like 'create a dictionary with filename and filesize metadata, human-readable', or simpler ones like 'match 0 - 3 occurrences of the letter M in a string', or 'return/create an iterator from two sequences').

    The use of spaced repetition has been invaluable, and I credit it with the ease with which I can recall/create Python algorithms. However, I've recently started looking into Django, and I've found that spaced repetition, at least in my case, doesn't work very well for learning a framework; it works best with short code recipes (either that, or I should start looking into more basic Django framework tutorials). The problem I'm encountering is that framework programming is not only algorithms, but actually learning the API, which can be quite complex, since you have to learn all the methods, modules, the places where they are stored, and the sequence in which things have to be done. For example, in Django, to start a project that deals with polls (from the Django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes that are present in the module models), edit the urls.py file, etc. I found that my spaced-repetition method didn't work very well for this type of learning, so I wanted to ask you guys what method(s) you use for learning different frameworks/APIs.

    Read the article

  • Perl Module from relative path not working in IIS6

    - by Laramie
    Disclaimer: I am not a Perl developer and am learning by the seat of my pants. I am using a Perl module, called from a CGI script in IIS 6, that is bombing. The identical folder structure on an XP machine (IIS 5.1) works fantastic. If I remove the module loading command at line 9, it will print "about to load" and "ok", but when I try to run use Language::Guess; I receive "The specified CGI application misbehaved by not returning a complete set of HTTP headers." in the browser. The folder structure is:

        /cgi-bin/test.pl
        /PerlModules/Language/Guess.pm

    I have tried adjusting the file/folder permissions and have reviewed my IIS configuration time and again. It runs fine from the command line on the IIS machine, or if I copy the module into \Perl\site\lib, but I don't have permission to load modules on the shared server this script is destined for. Am I missing something simple? Here is test.pl:

        use strict;
        use CGI ':standard';
        print header("text/html");

        use lib "..\\PerlModules\\";

        print "about to load<br/>";
        #bombs here
        use Language::Guess;
        print "ok"

    Read the article

  • Problem implementing Interceptor pattern

    - by ph0enix
    I'm attempting to develop an Interceptor framework (in C#) where I can simply implement some interfaces, and through the use of some static initialization, register all my Interceptors with a common Dispatcher to be invoked at a later time. The problem lies in the fact that my Interceptor implementations are never actually referenced by my application so the static constructors never get called, and as a result, the Interceptors are never registered. If possible, I would like to keep all references to my Interceptor libraries out of my application, as this is my way of (hopefully) enforcing loose coupling across different modules. Hopefully this makes some sense. Let me know if there's anything I can clarify... Does anyone have any ideas, or perhaps a better way to go about implementing my Interceptor pattern? Update: I came across Spring.NET. I've heard of it before, but never really looked into it. It sounds like it has a lot of great features that would be very useful for what I'm trying to do. Does anyone have any experience with Spring.NET? TIA, Jeremy

    Read the article

  • Updating Data through Objects

    - by Chacha102
    So, let's say I have a record:

        $record = new Record();

    and let's say I assign some data to that record:

        $record->setName("SomeBobJoePerson");

    How do I get that into the database? Do I...

    A) Have the module do it:

        class Record{
            public function __construct(DatabaseConnection $database)
            {
                $this->database = $database;
            }

            public function setName($name)
            {
                $this->database->query("query stuff here");
                $this->name = $name;
            }
        }

    B) Run through the modules at the end of the script:

        class Record{
            private $changed = false;

            public function __construct(array $data=array())
            {
                $this->data = $data;
            }

            public function setName($name)
            {
                $this->data['name'] = $name;
                $this->changed = true;
            }

            public function isChanged()
            {
                return $this->changed;
            }

            public function toArray()
            {
                return $this->array;
            }
        }

        class Updater {
            public function update(array $records)
            {
                foreach($records as $record) {
                    if($record->isChanged()) {
                        $this->updateRecord($record->toArray());
                    }
                }
            }

            public function updateRecord(){
                // updates stuff
            }
        }
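
    For what it's worth, here is a hedged sketch of what option B's updateRecord() could do with PDO. The table name ("records"), the "id" key and the connection details are assumptions the question doesn't specify, and the column names must come from trusted code (the Record class), never from user input.

        <?php
        // Sketch only: persist a changed record's array representation with PDO.
        class PdoUpdater
        {
            private $pdo;

            public function __construct(PDO $pdo)
            {
                $this->pdo = $pdo;
            }

            public function updateRecord(array $data)
            {
                $assignments = array();
                $params = array(':id' => $data['id']);

                foreach ($data as $column => $value) {
                    if ($column === 'id') {
                        continue;
                    }
                    // Column names come from our own Record class, not user input.
                    $assignments[] = "`$column` = :$column";
                    $params[":$column"] = $value;
                }

                $sql = 'UPDATE records SET ' . implode(', ', $assignments) . ' WHERE id = :id';
                $this->pdo->prepare($sql)->execute($params);
            }
        }

        $updater = new PdoUpdater(new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'));
        $updater->updateRecord(array('id' => 1, 'name' => 'SomeBobJoePerson'));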

    Read the article

  • mod_rewrite with location-based ACL in apache?

    - by Alexey
    Hi. There is a CGI script that provides an API for our customers. The call syntax is:

        script.cgi?module=<str>&func=<str>[&other-options]

    The task is to apply different authentication rules to different modules. Optionally, it would be great to have nice URLs. My config:

        <VirtualHost *:80>
            DocumentRoot /var/www/example
            ServerName example.com

            # Global policy is to deny all
            <Location />
                Order deny,allow
                Deny from all
            </Location>

            # doesn't work :(
            <Location /api/foo>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1
            </Location>

            RewriteEngine On
            # The only allowed type of requests:
            RewriteRule /api/(.+?)/(.+) /cgi-bin/api.cgi?module=$1&func=$2 [PT]
            # All others are forbidden:
            RewriteRule /(.*) - [F]
            RewriteLog /var/log/apache2/rewrite.log
            RewriteLogLevel 5

            ScriptAlias /cgi-bin /var/www/example
            <Directory /var/www/example>
                Options -Indexes
                AddHandler cgi-script .cgi
            </Directory>
        </VirtualHost>

    Well, I know the problem is the order in which these directives are processed: the <Location> blocks are applied after mod_rewrite has done its work. But I believe there is a way to change that. :) Using the standard Order deny,allow + Allow from <something> directives is preferable, because it's commonly used in other places like this. Thank you for your attention. :)

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between arrays and objects all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not disturbing my workflow as much. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error"=>"not found","data"=>$dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan now is to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?

    Edit: Kenaniah's answer suggests I should go the other way, which would mean casting everything to arrays. For all the Ajax/JSON stuff, and also for CouchDB, I would use:

        $myarray = json_decode($json_data, $assoc = true); // EDIT: changed to true, which is what I really meant

    Even more work to change all the CouchDB and Ajax functions, but in the end I'd have better code.
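
    As a tiny illustration of the "arrays everywhere" route (the JSON and variable names here are made up, not from the application): json_decode() with its second argument set to true returns nested associative arrays directly, and a json_encode()/json_decode() round trip is a cheap way to flatten any stdClass graph a library still hands back.

        <?php
        // Sketch: decode to associative arrays, and normalise leftover objects.
        $json = '{"error":"not found","data":{"_id":"abc","title":"demo"}}';

        $asArray = json_decode($json, true);           // true => nested assoc arrays
        echo $asArray['data']['title'], "\n";          // "demo"

        // Flatten an object graph (e.g. from a CouchDB client) into arrays.
        function to_array($value)
        {
            return json_decode(json_encode($value), true);
        }

        $asObjects = json_decode($json);               // stdClass this time
        var_dump(to_array($asObjects) === $asArray);   // bool(true)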

    Read the article

  • Scope of "library" methods

    - by JS
    Hello, I'm apparently laboring under a poor understanding of Python scoping. Perhaps you can help.

    Background: I'm using the 'if __name__ == "__main__"' construct to perform "self-tests" in my module(s). Each self-test makes calls to the various public methods and prints their results for visual checking as I develop the modules. To keep things "purdy" and manageable, I've created a small method to simplify the testing of method calls:

        def pprint_vars(var_in):
            print("%s = '%s'" % (var_in, eval(var_in)))

    Calling pprint_vars with:

        pprint_vars('some_variable_name')

    prints:

        some_variable_name = 'foo'

    All fine and good.

    Problem statement: Not happy to just KISS, I had the brain-drizzle to move my handy-dandy 'pprint_vars' method into a separate file named 'debug_tools.py' and simply import 'debug_tools' whenever I wanted access to 'pprint_vars'. Here's where things fall apart. I would expect

        import debug_tools

        foo = 'bar'
        debug_tools.pprint_vars('foo')

    to continue working its magic and print:

        foo = 'bar'

    Instead, it greets me with:

        NameError: name 'some_var' is not defined

    Irrational belief: I believed (apparently mistakenly) that import puts imported methods (more or less) "inline" with the code, and thus the variable scoping rules would remain similar to those if the method were defined inline.

    Plea for help: Can someone please correct my (mis)understanding of scoping with regard to imports?

    Thanks, JS

    Read the article

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to develop a project that depends on code of other projects deploy the resulting project to the server I am planning to put my code in svn, and have shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read subversion:externals considered to be an anti-pattern, and How do you organize your version control repository, but there is one special thing with php-projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code. Ideally I really want to be able to develop simultaneously on one project and the projects it dependends on. Possible way: Check out a projects' dependency in a sub folder as a working copy of the trunk. Problems I foresee: When you want to deploy a project, you might want to freeze its dependencies, right? The dependency code should not end up as a duplicate in the projects repository, I think. *(update1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks, see my comment) I am still looking for suggestions that do not require the use junction points. They are a sort of unsupported hack in winxp, which may break some programs* This leads me to the last part of the question (as one has influence on the other): how do you deploy apps whith such dependencies? I've looked into BuildOut for Python, but it seems to be tightly related to the python ecosystem (resolving and fetching python modules from the web etc). I am very eager to learn about your best practices.

    Read the article

  • How do I fork correctly in a perl module for znc?

    - by rarbox
    I'm currently writing an IRC bot. The scripts are loaded as Perl modules in ZNC, but the bot gets disconnected with an Input/Output error if I create a forked process. This is a working example script without fork, but this causes the bot to freeze until the script finishes doing its task:

        package imdb;
        use warnings;
        use strict;

        sub new {
            my ($class) = @_;
            my $self = {};
            bless( $self, $class );
            return( $self );
        }

        sub OnChanMsg {
            my ($self, $nick, $channel,$text) = @_;
            #unless (my $pid = fork()) {
            my $result = a_slow_process($text);
            ZNC::PutIRC( "PRIVMSG $channel :$result" );
            # exit;
            #}
            return( ZNC::CONTINUE );
        }

        sub OnShutdown {
            my ( $me ) = @_;
        }

        sub a_slow_process {
            my $input = shift;
            sleep 10;
            return "You said $input.";
        }

        1;

    The fork code that is causing the error is commented out. How do I fix this?

    Edited to add: I was told that ZNC::PutIRC should not be put in the child process.

    Read the article

  • Yii user rights extension

    - by jailed abroad
    i have installed the yii rights extension and here is my code after installation, database tables are created after installation. 'modules'=>array( // uncomment the following to enable the Gii tool 'rights'=>array( 'superuserName'=>'Admin', // Name of the role with super user privileges. 'authenticatedName'=>'Authenticated', //// Name of the authenticated user role. 'userIdColumn'=>'id',// Name of the user id column in the database. 'userNameColumn'=>'username', // Name of the user name column in the database. 'enableBizRule'=>true, // Whether to enable authorization item business rules. 'enableBizRuleData'=>false, //Whether to enable data for business rules. 'displayDescription'=>true, // Whether to use item description instead of name. ' // Key to use for setting success flash messages. 'flashErrorKey'=>'RightsError', / Key to use for setting error flash messages. // 'install'=>true, // Whether to install rights. 'baseUrl'=>'/rights', // Base URL for Rights. Change if module is nested. 'layout'=>'rights.views.layouts.main', // Layout to use for displaying Rights. 'appLayout'=>'application.views.layouts.main', //Application layout. 'cssFile'=>'rights.css', // Style sheet file to use for Rights. ' 'install'=>false, // Whether to enable installer. 'debug'=>false, ), 'gii'=>array( 'class'=>'system.gii.GiiModule', 'password'=>'1234', // If removed, Gii defaults to localhost only. Edit carefully to taste. 'ipFilters'=>array('127.0.0.1','::1'), ), ), But when i type url http://localhost/rightsTest/index.php/rights then it says There must be at least one superuser! I have tried many things but unable to find answer. Thanks for your help.

    Read the article
