Search Results

Search found 6079 results on 244 pages for 'define'.

  • maven generate eclipse project for custom packaging

    - by Riduidel
    Hi, for a project I'm working on, I've defined a custom Maven packaging and the associated lifecycle (through the definition of a components.xml and the definition of a LifecycleMapping). Obviously, this packaging corresponds to a specific development, for which a plugin has been created in Eclipse. What I would like to do is configure Eclipse according to my pom.xml content. I've obviously looked at Customizable build lifecycle, but I'm more than confused by the provided information. From what I understand, I must define in my target project a build plugin, in which I'll add configuration elements specific to my project. As an example, having a configurator called mycompany.mydev.MyEclipseConfigurator, I'll have to write:

        <build>
          <plugins>
            <plugin>
              <groupId>org.maven.ide.eclipse</groupId>
              <artifactId>lifecycle-mapping</artifactId>
              <version>0.9.9-SNAPSHOT</version>
              <configuration>
                <mappingId>customizable</mappingId>
                <configurators>
                  <configurator id='mycompany.mydev.MyEclipseConfigurator'/>
                </configurators>
              </configuration>
            </plugin>
          </plugins>
        </build>

    Am I right?

  • Blackberry horizontal scroll in bottom menu bar

    - by Dachmt
    Hi, I have a MainScreen that has an overlay menu at the bottom of the screen, over a map. To do that, I have a MenuHoster (extends HorizontalFieldManager) that takes the full width of the screen, is 45px high, and sits at the bottom of the screen, overlaying the map. In my MenuHoster I add another HorizontalFieldManager that has a background image (my menu bar image) and focusable buttons. I have too many buttons to fit on the screen, so I want a horizontal scroll. When I create the second HorizontalFieldManager that hosts the buttons, I define Manager.HORIZONTAL_SCROLL as its style. My buttons are focusable, but nothing happens... The focus changes to the next button, but my HorizontalFieldManager does not scroll... I tried putting the same parameter on my MenuHoster HorizontalFieldManager; the scroll works, but my whole screen goes right or left, and I want only the content of my menu bar to move. So my map and my menu go left, with my buttons at the bottom and the rest of the screen white. Any idea why my second HorizontalFieldManager hosting the buttons does not scroll? Thank you!
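
    A minimal sketch of the nesting being described, assuming the standard RIM UI API (button labels are hypothetical):

        import net.rim.device.api.ui.Field;
        import net.rim.device.api.ui.Manager;
        import net.rim.device.api.ui.component.ButtonField;
        import net.rim.device.api.ui.container.HorizontalFieldManager;

        // The inner row owns the buttons and the horizontal scrolling; the
        // outer MenuHoster is created WITHOUT a scroll style so that only
        // the row moves, not the whole screen.
        HorizontalFieldManager buttonRow = new HorizontalFieldManager(
                Manager.HORIZONTAL_SCROLL | Manager.HORIZONTAL_SCROLLBAR);
        for (int i = 0; i < 10; i++) {
            buttonRow.add(new ButtonField("Btn " + i, Field.FOCUSABLE));
        }
        menuHoster.add(buttonRow); // menuHoster: the 45px-high bottom bar

    If it still refuses to scroll, a common culprit on this platform is the inner manager being laid out at its full preferred width, so its virtual extent equals its visible extent; overriding its sublayout to clamp the width to Display.getWidth() forces a scrollable extent.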

  • Which pattern should be used for editing properties with modal view controller on iPhone?

    - by Matthew Daugherty
    I am looking for a good pattern for performing basic property editing via a modal view on the iPhone. Assume I am putting together an application that works like the Contacts application. The "detail" view controller displays all of the contact's properties in a UITableView. When the UITableView goes into edit mode a disclosure icon is displayed in the cells. Clicking a cell causes a modal "editor" view controller to display a view that allows the user to modify the selected property. This view will often contain only a single text box or picker. The user clicks Cancel/Save and the "editor" view is dismissed and the "detail" view is updated. In this scenario, which view is responsible for updating the model? The "editor" view could update the property directly using Key-Value Coding. This appears in the CoreDataBooks example. This makes sense to me on some level because it treats the property as the model for the editor view controller. However, this is not the pattern suggested by the View Controller Programming Guide. It suggests that the "editor" view controller should define a protocol that the "detail" controller adopts. When the user indicates they are done with the edit, the "detail" view controller is called back with the entered value and it dismisses the "editor" view. Using this approach the "detail" controller updates the model. This approach seems problematic if you are using the same "editor" view for multiple properties since there is only a single call-back method. Would love to get some feedback on what approach works best.
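
    For what it's worth, the two approaches can be combined: a delegate protocol whose callback carries the edited key path keeps a single "editor" reusable across properties while still letting the "detail" controller own the model update. A sketch with hypothetical names throughout:

        // EditorViewController.h (names are illustrative, not from any SDK)
        @class EditorViewController;

        @protocol EditorViewControllerDelegate <NSObject>
        - (void)editor:(EditorViewController *)editor
            didFinishWithValue:(id)value
                    forKeyPath:(NSString *)keyPath;
        - (void)editorDidCancel:(EditorViewController *)editor;
        @end

        // In the "detail" controller's @implementation, one callback covers
        // every property, because the key path comes back with the value:
        - (void)editor:(EditorViewController *)editor
            didFinishWithValue:(id)value
                    forKeyPath:(NSString *)keyPath
        {
            [self.contact setValue:value forKeyPath:keyPath]; // Key-Value Coding
            [self dismissModalViewControllerAnimated:YES];
            [self.tableView reloadData];
        }

    This keeps the CoreDataBooks-style KVC update but moves it behind the protocol the View Controller Programming Guide recommends, so the editor never touches the model directly.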

  • Conditional alternative table row styles in HTML5

    - by Budda
    Are there any changes regarding the question "Conditional alternative table row styles" since HTML5 came in? Here is a copy of the original question: Is it possible to style alternate table rows without defining classes on alternate tags? With the following table, can CSS define alternate row styles WITHOUT having to give the alternate rows the class "row1/row2"? row1 can be the default, so row2 is the issue.

        <style>
          .altTable td { }
          .altTable .row2 td { background-color: #EEE; }
        </style>
        <table class="altTable">
          <thead><tr><td></td></tr></thead>
          <tbody>
            <tr><td></td></tr>
            <tr class="row2"><td></td></tr>
            <tr><td></td></tr>
            <tr class="row2"><td></td></tr>
          </tbody>
        </table>

    Thanks a lot!
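
    For reference, the CSS3 structural pseudo-classes that arrived alongside the HTML5 era make the classes unnecessary; a minimal sketch (not supported by IE8 and earlier):

        /* stripe alternate body rows without any row1/row2 classes */
        .altTable tbody tr:nth-child(even) td {
            background-color: #EEE;
        }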

  • Polynomial operations using operator overloading

    - by Vlad
    I'm trying to use operator overloading to define the basic operations (+, -, *, /) for my polynomial class, but when I run the program it crashes and my computer freezes. Update 3: I successfully did the first two operations (+, -). Now for multiplication: after multiplying each term of the first polynomial with each term of the second, I want to sort the poly list descending, and then, if there is more than one term with the same power, merge them into a single term. But for some reason it doesn't compile, because of the sort function, which doesn't work. Here's what I got:

        polinom operator*(const polinom& P) const
        {
            polinom Result;
            constIter i, j, lastItem = Result.poly.end();
            Iter it1, it2;
            int nr_matches;
            for (i = poly.begin(); i != poly.end(); i++) {
                for (j = P.poly.begin(); j != P.poly.end(); j++)
                    Result.insert(i->coef * j->coef, i->pow + j->pow);
            }
            sort(Result.poly.begin(), Result.poly.end(), SortDescending());
            lastItem--;
            while (true) {
                nr_matches = 0;
                for (it1 = Result.poly.begin(); it1 < lastItem; it1++) {
                    for (it2 = it1 + 1; it2 <= lastItem; it2++) {
                        if (it2->pow == it1->pow) {
                            it1->coef += it2->coef;
                            nr_matches++;
                        }
                    }
                    Result.poly.erase(it1 + 1, it1 + (nr_matches + 1));
                }
                return Result;
            }
        }

    Also, here's SortDescending:

        struct SortDescending {
            bool operator()(const term& t1, const term& t2) {
                return t2.pow < t1.pow;
            }
        };

    What did I do wrong? Thanks!
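
    A note on the compile error, hedged since the container type isn't shown: if poly is a std::list, std::sort cannot be applied to it at all, because std::sort requires random-access iterators and list iterators are only bidirectional; the list-native call is Result.poly.sort(SortDescending()). Separately, representing a polynomial as a map from exponent to coefficient makes the whole sort-and-merge pass unnecessary, since accumulating into the map merges like powers automatically. A sketch as a standalone function, not the original class:

        #include <map>

        // Multiply two polynomials stored as (power -> coefficient) maps.
        // result[...] += ... both merges equal powers and keeps the keys
        // ordered, so no post-multiplication sort/merge pass is needed.
        std::map<int, double> multiply(const std::map<int, double>& a,
                                       const std::map<int, double>& b)
        {
            std::map<int, double> result;
            std::map<int, double>::const_iterator i, j;
            for (i = a.begin(); i != a.end(); ++i)
                for (j = b.begin(); j != b.end(); ++j)
                    result[i->first + j->first] += i->second * j->second;
            return result;
        }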

  • gcc: Do I need -D_REENTRANT with pthreads?

    - by stefanB
    On Linux (kernel 2.6.5) our build system calls gcc with -D_REENTRANT. Is this still required when using pthreads? How is it related to the gcc -pthread option? I understand that I should use -pthread with pthreads; do I still need -D_REENTRANT? On a side note, is there any difference that you know of between the usage of _REENTRANT between gcc 3.3.3 and gcc 4.x.x? When I use the -pthread gcc option I can see that _REENTRANT gets defined. Will omitting -D_REENTRANT from the command line make any difference? For example, could some objects be compiled without multithreaded support and then linked into a binary that uses pthreads, causing problems? I assume it should be OK just to use g++ -pthread:

        > echo | g++ -E -dM -c - > singlethreaded
        > echo | g++ -pthread -E -dM -c - > multithreaded
        > diff singlethreaded multithreaded
        39a40
        > #define _REENTRANT 1

    We're compiling multiple static libraries and applications that link with the static libraries; both the libraries and the applications use pthreads. I believe it was required at some stage in the past, but I want to know if it is still required. Googling hasn't returned any recent information mentioning -D_REENTRANT with pthreads. Could you point me to links or references discussing its use with recent versions of kernel/gcc/pthread? Clarification: at the moment we're using -D_REENTRANT and -lpthread; I assume I can replace them with just g++ -pthread. Looking at man gcc, it sets the flags for both the preprocessor and the linker. Any thoughts?

  • Facebooker Causing Problems with Rails Integration Testing

    - by Eric Lubow
    I am (finally) attempting to write some integration tests for my application (because every deploy is getting scarier). Since testing is a horribly documented feature of Rails, this was the best I could get going with shoulda:

        class DeleteBusinessTest < ActionController::IntegrationTest
          context "log skydiver in and" do
            setup do
              @skydiver = Factory(:skydiver)
              @skydiver_session = SkydiverSession.create(@skydiver)
              @biz = Factory(:business, :ownership => Factory(:ownership, :skydiver => @skydiver))
            end
            context "delete business" do
              setup do
                @skydiver_session = SkydiverSession.find
                post '/businesses/destroy', :id => @biz.id
              end
              should_redirect_to('businesses_path()'){businesses_path()}
            end
          end
        end

    In theory, this test seems like it should pass. My factories seem like they are pushing the right data in:

        Factory.define :skydiver do |s|
          s.sequence(:login) { |n| "test#{n}" }
          s.sequence(:email) { |n| "test#{n}@example.com" }
          s.crypted_password '1805986f044ced38691118acfb26a6d6d49be0d0'
          s.password 'secret'
          s.password_confirmation { |u| u.password }
          s.salt 'aowgeUne1R4-F6FFC1ad'
          s.firstname 'Test'
          s.lastname 'Salt'
          s.nickname 'Mr. Password Testy'
          s.facebook_user_id '507743444'
        end

    The problem seems to come from Facebooker and only seems to happen on login attempts. When the test runs, I am getting the error "The error occurred while evaluating nil.set_facebook_session". I believe that error is to be expected in a certain sense, since I am not using Facebook here for this session. Can anyone provide any insight as to how to either get around this or at least help me out with what is going wrong?

  • How do I work around the GCC "error: cast from ‘SourceLocation*’ to ‘int’ loses precision" error when compiling cmockery.c?

    - by Daryl Spitzer
    I need to add unit tests using Cmockery to an existing build environment that uses a hand-crafted Makefile. So I need to figure out how to build cmockery.c (without automake). When I run:

        g++ -DHAVE_CONFIG_H -DPIC -I ../cmockery-0.1.2 -I /usr/include/malloc -c ../cmockery-0.1.2/cmockery.c -o obj/cmockery.o

    I get a long list of errors like this:

        ../cmockery-0.1.2/cmockery.c: In function ‘void initialize_source_location(SourceLocation*)’:
        ../cmockery-0.1.2/cmockery.c:248: error: cast from ‘SourceLocation*’ to ‘int’ loses precision

    Here are lines 247-248 of cmockery.c:

        static void initialize_source_location(SourceLocation * const location) {
            assert_true(location);

    assert_true is defined on line 154 of cmockery.h:

        #define assert_true(c) _assert_true((int)(c), #c, __FILE__, __LINE__)

    So the problem (as the error states) is GCC doesn't like the cast from ‘SourceLocation*’ to ‘int’. I can build Cmockery using ./configure and make (on Linux, and on Mac OS X if I export CFLAGS=-I/usr/include/malloc first) without any errors. I've tried looking at the command line that compiles cmockery.c when I run make (after ./configure):

        gcc -DHAVE_CONFIG_H -I. -I. -I./src -I./src -Isrc/google -I/usr/include/malloc -MT libcmockery_la-cmockery.lo -MD -MP -MF .deps/libcmockery_la-cmockery.Tpo -c src/cmockery.c -fno-common -DPIC -o .libs/libcmockery_la-cmockery.o

    ...but I don't see any options that might work around this error. In "error: cast from 'void*' to 'int' loses precision", I see I could change (int) in cmockery.h to (intptr_t), and I've confirmed that works. But since I can build Cmockery with ./configure and make, there must be a way to get it to build without modifying the source.
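
    An observation grounded in the two commands quoted above: the autotools build invokes gcc, compiling the file as C, where a pointer-to-int cast is merely dubious, while the hand-written command uses g++, compiling it as C++, where a cast that loses precision is a hard error. Compiling cmockery.c with gcc instead of g++ should therefore avoid touching the source at all. Otherwise, the header patch the question already confirmed works looks like this (a local sketch against cmockery.h, not an upstream change):

        #include <stdint.h>  /* intptr_t: an integer wide enough to hold a pointer */

        /* local patch: intptr_t keeps the truthiness test working on both
           32- and 64-bit builds without a narrowing cast */
        #define assert_true(c) _assert_true((intptr_t)(c), #c, __FILE__, __LINE__)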

  • How to get virtual com-port number if DBT_DEVNODES_CHANGED event occurs?

    - by Nick Toverovsky
    Hi! Previously I defined the com-port number using DBT_DEVICEARRIVAL:

        procedure TMainForm.WMDEVICECHANGE(var Msg: TWMDeviceChange);
        var
          lpdb: PDevBroadcastHdr;
          lpdbpr: PDevBroadCastPort;
          S: AnsiString;
        begin
          lpdb := PDevBroadcastHdr(Msg.dwData);
          case Msg.Event of
            DBT_DEVICEARRIVAL:
              begin
                if lpdb^.dbch_devicetype = DBT_DEVTYP_PORT {DBT_DEVTYP_DEVICEINTERFACE} then
                begin
                  lpdbpr := PDevBroadCastPort(Msg.dwData);
                  S := StrPas(PWideChar(@lpdbpr.dbcp_name));
                  GetSystemController.Init(S);
                end;
              end;
            DBT_DEVICEREMOVECOMPLETE:
              begin
                if lpdb^.dbch_devicetype = DBT_DEVTYP_PORT then
                begin
                  lpdbpr := PDevBroadCastPort(Msg.dwData);
                  S := StrPas(PWideChar(@lpdbpr.dbcp_name));
                  GetSystemController.ProcessDisconnect(S);
                end;
              end;
          end;
        end;

    Unfortunately, the hardware part of the device I was working with changed, and now Msg.Event has the value DBT_DEVNODES_CHANGED. I've read MSDN; it says I should use RegisterDeviceNotification to get any additional information. But, if I got it right, it can't be used for serial ports: "The DBT_DEVICEARRIVAL and DBT_DEVICEREMOVECOMPLETE events are automatically broadcast to all top-level windows for port devices. Therefore, it is not necessary to call RegisterDeviceNotification for ports, and the function fails if the dbch_devicetype member is DBT_DEVTYP_PORT." So I am confused. How can I define the com-port of a device if I get DBT_DEVNODES_CHANGED in the WMDEVICECHANGE event?

  • GNU Make: How to call $(wildcard) within $(eval)

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I can't seem to get the wildcard function to work within an eval. The basic code I'm having issues with looks like this:

        SRC_DIR = ./src/
        PROG_NAME = test

        define PROGRAM_template
        $(1)_SRC_DIR = $(join $(SRC_DIR), $(1)/)
        $(1)_SRC_FILES = $(wildcard $$($(1)_SRC_DIR)*.c)
        endef

        $(eval $(call PROGRAM_template, $(PROG_NAME)))

        all:
            @echo $(test_SRC_DIR)
            @echo $(test_SRC_FILES)
            @echo $(wildcard $(test_SRC_DIR)*.c)

    When I run make with this, the output is:

        ./src/test

        [correct list of all .c files in ./src/test/]

    Basically, the wildcard call within PROGRAM_template is not being eval'd as I expect it; the call results in an empty list (hence the blank second echo). The join call is being eval'd correctly, though. So, what am I doing wrong? My guess is that $$($(1)_SRC_DIR) is not correct, but I can't figure out the right way to do it. EDIT: Once this was solved, it didn't take long for me to hit another issue with eval. I posted it as a new question at http://stackoverflow.com/questions/2428506/workaround-for-gnu-make-3-80-eval-bug
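
    A sketch of the usual remedy: inside a define that is expanded first by $(call) and then again by $(eval), every function that must run at eval time or later needs its dollar sign doubled. As written, $(wildcard ...) runs during the $(call) expansion, when its argument is still the literal text $(test_SRC_DIR)*.c, which matches no files; that is why the list comes back empty while the third echo (whose wildcard runs at recipe time) works:

        define PROGRAM_template
        $(1)_SRC_DIR = $$(join $$(SRC_DIR),$(1)/)
        # $$(wildcard ...) survives $(call) as $(wildcard ...), so it is
        # expanded later, once $(1)_SRC_DIR has actually been assigned
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        endef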

  • Emacs recursive project search

    - by hekevintran
    I am switching to Emacs from TextMate. One feature of TextMate that I would really like to have in Emacs is the "Find in Project" search box that uses fuzzy matching. Emacs sort of has this with ido, but ido does not search recursively through child directories; it searches only within one directory. Is there a way to give ido a root directory and have it search everything under it?

    Update: The questions below pertain to find-file-in-project.el from Michal Marczyk's answer. If anything in this message sounds obvious, it's because I have used Emacs for less than one week. :-)

    1. As I understand it, project-local-variables lets me define things in a .emacs-project file that I keep in my project root. How do I point find-file-in-project to my project root?

    2. I am not familiar with regex syntax in Emacs Lisp. The default value for ffip-regexp is:

        ".*\\.\\(rb\\|js\\|css\\|yml\\|yaml\\|rhtml\\|erb\\|html\\|el\\)"

    I presume that I can just switch the extensions to the ones appropriate for my project.

    3. Could you explain the ffip-find-options? From the file:

        (defvar ffip-find-options ""
          "Extra options to pass to `find' when using find-file-in-project.

        Use this to exclude portions of your project: \"-not -regex \\".vendor.\\"\"")

    What does this mean exactly, and how do I use it to exclude files/directories?

    4. Could you share an example .emacs-project file?

  • What was Tim Sweeney thinking? (How does this C++ parser work?)

    - by Frank Krueger
    Tim Sweeney of Epic MegaGames is the lead developer of Unreal and a programming-language geek. Many years ago he posted the following screen shot to VoodooExtreme. As a C++ programmer and Sweeney fan, I was captivated by it. It shows generic C++ code that implements some kind of scripting language, where that language itself seems to be generic in the sense that it can define its own grammar. Mr. Sweeney never explained himself. :-) It's rare to see this level of template programming, but you do see it from time to time when people want to push the compiler to generate great code or because they want to create generic code (for example, Modern C++ Design). Tim seems to be using it to create a grammar in Parser.cpp - you can see what look like prioritized binary operators. If that is the case, then why does Test.ae look like it's also defining a grammar? Obviously this is a puzzle that needs to be solved. Victory goes to the answer with a working version of this code, or the most plausible explanation, or to Tim Sweeney himself if he posts an answer. :-)

  • Diff between <head id="Head1" runat="server"> and <asp:ContentPlaceHolder runat="server" ID="HeadContent">

    - by justSteve
    I'm looking for a better understanding of how an MVC project should define JavaScript and CSS includes. I'm working with sample code where includes are defined like this:

        <head id="Head1" runat="server">
            <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" />Affiliate Checkout</title>
            <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
            <meta http-equiv="pragma" content="no-cache">
            <script type="text/javascript" src="/Scripts/jquery.js"></script>
            <script type="text/javascript" src="/Scripts/jquery-ui-1.7.2.custom.min.js"></script>
            . . .
            <asp:ContentPlaceHolder runat="server" ID="HeadContent"></asp:ContentPlaceHolder>
        </head>

    I read that to mean that all pages looking at this MasterPage will get jQuery and jQuery UI, and, additionally, each page has the opportunity to add head elements thanks to the HeadContent ContentPlaceHolder tag. The specific problem I'm troubleshooting is an instance where my rendered page is not including the 'pragma no-cache' tag - as you see, it's defined in the upper-level head section. Other .js and .css elements are making it into the rendered page, so it is very confusing that the no-cache tag isn't. When I execute a 'View Generated Source', the 'charset' meta is present but the 'no-cache' one is not.

  • Android Studio Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    - by akenawell85x
    I recently decided to switch from Eclipse to Android Studio. I imported a project I was working on and am now getting this error when I try to run the project:

        Gradle: Execution failed for task ':project:dexDebug'.
        > Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    I've been cruising this site for 2 days now and trying different suggestions, to no avail. I did run gradlew compileDebug --stacktrace, and this is what I got:

        C:\Users\adam\AndroidStudioProjects\projectProject>gradlew compileDebug --stacktrace
        Relying on packaging to define the extension of the main artifact has been deprecated and is scheduled to be removed in Gradle 2.0
        :project:preBuild UP-TO-DATE
        :project:preDebugBuild UP-TO-DATE
        :project:preReleaseBuild UP-TO-DATE
        :project:prepareComAndroidSupportAppcompatV71800Library UP-TO-DATE
        :project:prepareComGoogleAndroidGmsPlayServices3225Library UP-TO-DATE
        :project:prepareDebugDependencies
        :project:compileDebugAidl UP-TO-DATE
        :project:compileDebugRenderscript UP-TO-DATE
        :project:generateDebugBuildConfig UP-TO-DATE
        :project:mergeDebugAssets UP-TO-DATE
        :project:mergeDebugResources UP-TO-DATE
        :project:processDebugManifest UP-TO-DATE
        :project:processDebugResources UP-TO-DATE
        :project:generateDebugSources UP-TO-DATE
        :project:compileDebug UP-TO-DATE

        BUILD SUCCESSFUL

        Total time: 10.459 secs

    However, I am still getting that error when I try to actually run the project. Here is my build.gradle (I do have a 'libs' folder in my project with all the jars for a Google Maps/Places app):

        buildscript {
            repositories {
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.6.+'
            }
        }
        apply plugin: 'android'

        repositories {
            mavenCentral()
        }

        android {
            compileSdkVersion 18
            buildToolsVersion "18.1.1"

            defaultConfig {
                minSdkVersion 8
                targetSdkVersion 18
            }
        }

        dependencies {
            compile fileTree(dir: 'libs')
            compile 'com.google.android.gms:play-services:3.2.25'
            compile 'com.android.support:support-v4:18.0.0'
            compile 'com.android.support:appcompat-v7:+'
        }

    and my settings.gradle:

        include ':project',
                ':project:libs:android-support-v4',
                ':project:libs:google-api-client-1.10.3-beta',
                ':project:libs:google-api-client-android2-1.10.3-beta',
                ':project:libs:google-http-client-1.10.3-beta',
                ':project:libs:google-http-client-android2-1.10.3-beta',
                ':project:libs:google-oauth-client-1.10.1-beta',
                ':project:libs:gson-2.1',
                ':project:libs:guava-11.0.1',
                ':project:libs:jackson-core-asl-1.9.4',
                ':project:libs:jsr305-1.3.9',
                ':project:libs:protobuf-java-2.2.0',
                ':project:libs:GoogleAdMobAdsSdk-6.4.1'

    As I said, I've tried pretty much everything I have read on here about this error and am having no luck. Any help would be greatly appreciated.

  • error C2504: 'BASECLASS' : base class undefined

    - by numerical25
    I checked out a post similar to this, but the linkage was different and the issue was never resolved. The problem with mine is that for some reason the compiler expects there to be a definition for the base class, but the base class is just an interface. Below is the error in its entirety:

        c:\users\numerical25\desktop\intro todirectx\godfiles\gxrendermanager\gxrendermanager\gxrendermanager\gxdx.h(2) : error C2504: 'GXRenderer' : base class undefined

    Below is the code that shows how the headers link with one another. GXRenderManager.h:

        #ifndef GXRM
        #define GXRM

        #include <windows.h>
        #include "GXRenderer.h"
        #include "GXDX.h"
        #include "GXGL.h"

        enum GXDEVICE {
            DIRECTX,
            OPENGL
        };

        class GXRenderManager {
        public:
            static int Ignite(GXDEVICE);
        private:
            static GXRenderer *renderDevice;
        };

        #endif

    At the top of GXRenderManager.h are the GXRenderer, windows, GXDX, and GXGL headers. I am assuming that by including them all in this document, they all link to one another as if they were all in the same document. Correct me if I am wrong, because that's how I view headers. Moving on... GXRenderer.h:

        class GXRenderer {
        public:
            virtual void Render() = 0;
            virtual void StartUp() = 0;
        };

    GXGL.h:

        class GXGL : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXDX.h:

        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXGL.cpp and GXDX.cpp respectively:

        #include "GXGL.h"

        void GXGL::Render() {
        }
        void GXGL::StartUp() {
        }

        //...Next document

        #include "GXDX.h"

        void GXDX::Render() {
        }
        void GXDX::StartUp() {
        }

    Not sure what's going on. I think it's how I am linking the documents, but I am not sure.
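
    A sketch of the usual fix, assuming the layout above: headers do not link to one another just because some other file includes them all; each .cpp sees only what it (transitively) includes. GXDX.cpp includes GXDX.h alone, so when the compiler reaches class GXDX : public GXRenderer there, the base class has never been declared in that translation unit. Giving each header its own guard and its own includes resolves it:

        // GXDX.h -- the same pattern applies to GXGL.h
        #ifndef GXDX_H
        #define GXDX_H

        #include "GXRenderer.h"  // the base class must be visible wherever GXDX is defined

        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

        #endif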

  • JSF: SelectOneRadio shortcuts for options

    - by pakore
    I have the following h:selectOneRadio:

        <h:selectOneRadio style="text-align:left" layout="pageDirection"
                          id="#{prefix}_rating" value="#{examinationPanel.tempResult}">
            <f:selectItems value="#{examinationPanel.options}"></f:selectItems>
        </h:selectOneRadio>

    And the backing bean loads the options based on some user preferences:

        public List<SelectItem> loadOptions(Set<ResultEnum> possibleResults) {
            messages = ResourceBundle.getBundle("com.eyeprevent.resources.Messages",
                    FacesContext.getCurrentInstance().getViewRoot().getLocale());
            List<SelectItem> options = new ArrayList<SelectItem>();
            for (ResultEnum result : possibleResults) {
                SelectItem option = new SelectItem(result, messages.getString(result.name()));
                options.add(option);
            }
            return options;
        }

    How can I define shortcuts for the options in the radio group? For example, typing "1" selects the first option, typing "2" the second, etc. Thanks in advance! P.S. I'm using JSF 1.2, RichFaces, PrimeFaces and Facelets.

  • Need help modifying C++ application to accept continuous piped input in Linux

    - by GreeenGuru
    The goal is to mine packet headers for URLs visited, using tcpdump. So far, I can save a packet header to a file using:

        tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | tee -a ./Desktop/packets.txt

    And I've written a program to parse through the header and extract the URL when given the following command:

        cat ~/Desktop/packets.txt | ./packet-parser.exe

    But what I want to be able to do is pipe tcpdump directly into my program, which will then log the data:

        tcpdump "dst port 80 and tcp[13] & 0x08 = 8" -A -s 300 | ./packet-parser.exe

    Here is the program as it is. The question is: how do I need to change it to support continuous input from tcpdump?

        #include <boost/regex.hpp>
        #include <fstream>
        #include <cstdio>  // Needed to define ios::app
        #include <string>
        #include <iostream>

        int main() {
            // Make sure to open the file in append mode
            std::ofstream file_out("/var/local/GreeenLogger/url.log", std::ios::app);
            if (not file_out)
                std::perror("/var/local/GreeenLogger/url.log");
            else {
                std::string text;
                // Get multiple lines of input -- raw
                std::getline(std::cin, text, '\0');

                const boost::regex pattern("GET (\\S+) HTTP.*?[\\r\\n]+Host: (\\S+)");
                boost::smatch match_object;
                bool match = boost::regex_search(text, match_object, pattern);
                if (match) {
                    std::string output;
                    output = match_object[2] + match_object[1];
                    file_out << output << '\n';
                    std::cout << output << std::endl;
                }
                file_out.close();
            }
        }

    Thank you ahead of time for the help!
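
    A sketch of one way to make the reader continuous, under the assumption that each request's GET and Host lines arrive close together: the single getline with a '\0' delimiter blocks until tcpdump closes the pipe, so instead read line by line, accumulate into a buffer, and scan after every line (reusing the pattern and file_out names from the program above):

        std::string buffer, line;
        while (std::getline(std::cin, line)) {  // returns as each line arrives
            buffer += line;
            buffer += '\n';
            boost::smatch match_object;
            if (boost::regex_search(buffer, match_object, pattern)) {
                std::string output = match_object[2] + match_object[1];
                file_out << output << '\n' << std::flush;  // flush: don't wait for exit
                std::cout << output << std::endl;
                buffer.clear();  // reset so the same request is not logged twice
            }
        }

    Note that tcpdump itself buffers its output when writing to a pipe; its -l flag makes stdout line-buffered, which keeps the matches timely.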

  • Scheme Depth-first search of a graph function

    - by John Retallack
    This is a homework question. I'm trying to write a depth-first search function in Scheme. Here's the code I've written so far:

        (define explore
          (lambda (node visited)
            (let* ([neighbors (force (cdr node))]
                   [next (nextNode visited neighbors)]
                   [is-visited (member? node visited)])
              (cond
                ;; if I have no unvisited neighbours, print current node and go up one level
                [(equal? next #f)
                 (begin (display (car node)) (display " "))]
                ;; if current node is not visited and I have unvisited neighbors,
                ;; print current node, mark it as visited and visit its neighbors
                [(and (equal? is-visited #f) (not (equal? next #f)))
                 (begin (display (car node))
                        (display " ")
                        (explore next (cons node visited)))])
              ;; go and visit the next neighbor
              (if (not (equal? (nextNode (cons next visited) neighbors) #f))
                  (explore (nextNode (cons next visited) neighbors)
                           (cons node visited))))))

    'node' is the current node; 'visited' is a list in which I keep track of the nodes I've visited; 'nextNode' is a function that returns the first unvisited neighbor if any, or #f otherwise; 'member?' tests whether a node is in the visited list. The graph representation uses adjacency built from references to nodes with letrec, which is why I use force on 'neighbors'. E.g.:

        (letrec ([node1 (list "NY" (delay (list node2 node3)))] ...)

    where node2 and node3 are defined like node1. The problem I'm dealing with is that my visited list loses track of some of the nodes I visited when I come out of the recursion. How can I fix this?
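
    A sketch of the usual remedy for exactly this symptom, assuming the same node representation as above (car is the label, the forced cdr is the neighbor list) and the standard member in place of member?: make explore return the updated visited list, so that marks made deep in the recursion flow back out to the remaining siblings instead of being lost when a call returns:

        (define (explore node visited)
          (if (member node visited)
              visited                ; already seen: pass the list through unchanged
              (begin
                (display (car node))
                (display " ")
                ;; walk the neighbors, threading the growing visited list
                (let loop ([ns (force (cdr node))]
                           [vis (cons node visited)])
                  (if (null? ns)
                      vis
                      (loop (cdr ns) (explore (car ns) vis)))))))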

  • Nunit Relative Path failing

    - by levi.siebens
    I'm having an issue with NUnit where it cannot find an image file when I run my tests: each time it looks for images, it looks in the NUnit folder instead of the folder where the binary resides. Below is a detailed description of what's happening. I'm building a binary under test which contains the definition for some game elements and png files which define the sprites I'm using (for sanity's sake, call it Binary1). NUnit runs tests from a separate binary (Binary1Test), executing test methods against the first binary (Binary1). All tests pass unless the test executes code in Binary1 which then requires Binary1 to use one of the image files (which are defined via a relative path). When such a method is called, NUnit throws a file-not-found exception stating that it is looking inside of the Program Files\Nunit.net 2.0 folder. So I have no idea why the code is doing this, and to make matters more confusing, when I pull up Environment.CurrentDirectory it gives me the correct path (the path to my debug folder) and not the path to NUnit. Also, if I use this instead of the relative path, my tests run without issue. So my question is: in the case of loading relative paths from within my binary, why does NUnit use its own directory instead of the directory where the binary is located and where the images are stored? Thanks.
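
    A sketch of the usual workaround, assuming the standard .NET reflection APIs (the asset name is hypothetical): a test runner is free to point the process working directory at its own install folder, so relative paths are safer anchored to the assembly's location instead:

        using System.IO;
        using System.Reflection;

        // Resolve assets against the folder Binary1.dll actually lives in,
        // not the process working directory (which NUnit sets to itself).
        string baseDir = Path.GetDirectoryName(
            Assembly.GetExecutingAssembly().Location);
        string spritePath = Path.Combine(baseDir, "player.png"); // hypothetical asset

    If NUnit's shadow copying gets in the way (Location then points at the shadow-copy cache), Assembly.CodeBase gives a URI to the original location, or shadow copying can be disabled in the runner.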

  • Auto-sizing and positioning in Flex

    - by Addsy
    I am working on a Flex app that uses XML templates to dynamically create DisplayObjects. These templates define different layouts that can be used for each page of content in the app (2 columns, 3 columns, etc.). The administrator can select one of these and populate each area with their content. The templates add one of three types of DisplayObject - HBox, VBox, or a third component, LibraryContentContainer (an mxml component defined as part of the app), which is effectively a Canvas element with a TextArea inside. The problem is that I need each of these areas to automatically resize to fit the length of the content, but I can't seem to find an effective way to do so. In the LibraryContentContainer, when the value of the TextArea is set, I call .validateNow() on the LibraryContentContainer. I then set the height property on both the TextArea and the LibraryContentContainer to match the textHeight property of the TextArea. In the following example, this is the LibraryContentContainer, viewer is the TextArea, the value property of the TextArea is bound to this.__Value, and v is the variable containing the content for the TextArea:

        this.__Value = v;
        this.validateNow();
        this.viewer.height = this.viewer.textHeight;
        this.height = this.viewer.height;

    This works to a degree, in that the TextArea grows or shrinks depending on the length of content, but it's still not great - sometimes there are still vertical scrollbars even though the size of the TextArea has grown. Anyone got any ideas? Thanks, Adam

  • Using Essential Use Cases to design a UI-centric Application

    - by Bruno Brant
    Hello all, I'm beginning a new project (oh, how I love the fresh taste of a new project!) and we are just starting to design it. In short: the application is a UI that will enable users to model an execution flow (a Visio-like drag & drop interface). So our greatest concern is usability and features that will help the users model the execution flow quickly and clearly. Our established methodology makes extensive use of Use Cases in order to create a harmonious view of the application between the programmers and users. This is a business concern, really: I'd prefer to use an agile method with User Stories rather than Use Cases, but we need to define a clear scope to sell the product to our clients. However, Use Cases have a number of flaws, most of which are related to the fact that they include technical details, like UI, etc., as can be seen here. But, since we can't use User Stories and a fully interactive design, I've decided that we compromise: I will be using Essential Use Cases in order to hide those details. Now I have another problem: it's essential (no pun intended) to have a clear description of UI interaction, so how should I document it? In other words, how do I specify an application through the use of Essential Use Cases when the UI interaction is vital to it? I can see some alternatives:

    1. Abandon the use of Use Cases, since they don't correctly represent the problem.
    2. Do not include interface descriptions in the use cases, but create other documentation (storyboards) and link it to the Essential Use Cases.
    3. Include UI interaction descriptions in the Essential Use Cases, since they are part of the business rules from the perspective of the users and the application itself.

  • Constant NSDictionary/NSArray for class methods.

    - by Jeff B
    I am trying to code a global lookup table of sorts. I have game data that is stored in character/string format in a plist, but which needs to be in integer/id format when it is loaded. For instance, in the level data file a "p" means player; in the game code a player is represented as the integer 1. This lets me do some bitwise operations, etc. I am simplifying greatly here, but trying to get the point across. Also, there is a conversion to coordinates for the sprite on a sprite sheet. Right now this string-integer, integer-string, integer-coordinate, etc. conversion is taking place in several places in code using a case statement. This stinks, of course, and I would rather do it with a dictionary lookup. I created a class called LevelInfo, and want to define the dictionary for this conversion, and then class methods to call when I need to do a conversion or otherwise deal with level data.

        NSString *levelObjects = @"empty,player,object,thing,doohickey";
        int levelIDs[] = {0, 1, 2, 4, 8}; // etc., etc.

        @implementation LevelInfo

        + (int)crateIDfromChar:(char)crateChar {
            int idx = [[crateTypes componentsSeparatedByString:@","]
                          indexOfObject:crateChar];
            return levelIDs[idx];
        }

        + (NSString *)crateStringFromID:(int)crateID {
            return [[crateTypes componentsSeparatedByString:@","]
                       objectAtIndex:crateID];
        }

        @end

    Is there a better way to do this? It feels wrong to basically build these temporary arrays, or dictionaries, or whatever, for each call to do this translation. And I don't know of a way to declare a constant NSArray or NSDictionary. Please, tell me a better way...
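
    There is indeed no compile-time constant NSArray or NSDictionary, but the usual idiom (sketched below with names echoing the question; the table contents are illustrative) is to build the collections once in the class's +initialize method and keep them in static variables, so no per-call allocation happens:

        static NSArray *crateNames = nil;

        @implementation LevelInfo

        + (void)initialize {
            if (self == [LevelInfo class]) {  // guard against subclass re-entry
                crateNames = [[NSArray alloc] initWithObjects:
                    @"empty", @"player", @"object", @"thing", @"doohickey", nil];
            }
        }

        + (NSString *)crateStringFromID:(int)crateID {
            return [crateNames objectAtIndex:crateID];
        }

        @end

    The runtime sends +initialize before the class receives its first message, so the tables exist before any lookup is made.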

  • Mapping a collection of enums with NHibernate

    - by beaufabry
    I am mapping a collection of enums with NHibernate - specifically, using attributes for the mappings. Currently I have this working by mapping the collection as type Int32, and NHibernate seems to take care of it, but it's not exactly ideal. The error I receive is "Unable to determine type" when trying to map the collection as the type of the enum I am trying to map. I found a post that said to define a class as:

        public class CEnumType : EnumStringType {
            public CEnumType() : base(MyEnum) { }
        }

    and then map the enum as CEnumType, but this gives "CEnumType is not mapped" or something similar. So has anyone got experience doing this? Anyway, here is a simple reference snippet to give an example:

        [NHibernate.Mapping.Attributes.Class(Table = "OurClass")]
        public class CClass : CBaseObject
        {
            public enum EAction { do_action, do_other_action };

            private IList<EAction> m_class_actions = new List<EAction>();

            [NHibernate.Mapping.Attributes.Bag(0, Table = "ClassActions", Cascade = "all",
                Fetch = CollectionFetchMode.Select, Lazy = false)]
            [NHibernate.Mapping.Attributes.Key(1, Column = "Class_ID")]
            [NHibernate.Mapping.Attributes.Element(2, Column = "EAction", Type = "Int32")]
            public virtual IList<EAction> Actions
            {
                get { return m_class_actions; }
                set { m_class_actions = value; }
            }
        }

    So, has anyone got the correct attributes for me to map this collection of enums as actual enums? It would be really nice if they were stored in the db as strings instead of ints too, but that's not completely necessary.

  • rails with fcgi error

    - by qichunren
    LoadError (Expected /web/zhao_backend2/app/controllers/admin_controller.rb to define AdminController):

        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/dependencies.rb:249:in `load_missing_constant'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/dependencies.rb:453:in `const_missing'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/dependencies.rb:465:in `const_missing'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/inflector.rb:257:in `constantize'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/core_ext/string/inflections.rb:148:in `constantize'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/actionpack-2.0.2/lib/action_controller/routing.rb:1426:in `recognize'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/actionpack-2.0.2/lib/action_controller/dispatcher.rb:170:in `handle_request'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/actionpack-2.0.2/lib/action_controller/dispatcher.rb:115:in `dispatch'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/actionpack-2.0.2/lib/action_controller/dispatcher.rb:126:in `dispatch_cgi'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/actionpack-2.0.2/lib/action_controller/dispatcher.rb:9:in `dispatch'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:101:in `process_request'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:149:in `with_signal_handler'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:99:in `process_request'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:77:in `process_each_request'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/fcgi-0.8.8/lib/fcgi.rb:612:in `each_cgi'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/fcgi-0.8.8/lib/fcgi.rb:609:in `each'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/fcgi-0.8.8/lib/fcgi.rb:609:in `each_cgi'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:76:in `process_each_request'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:50:in `process!'
        /usr/local/ruby-1.8.7-p248/lib/ruby/gems/1.8/gems/rails-2.0.2/lib/fcgi_handler.rb:24:in `process!'
        /public/dispatch.fcgi:24

    Rendering /web/zhao_backend2/public/500.html (500 Internal Server Error)

  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop):

        # dummy data
        set.seed(1985)
        lst <- list(a=1:10, b=11:15, c=16:20)
        m <- matrix(round(runif(200, 1, 7)), 10)
        m <- as.data.frame(m)

        dfsub <- function(dt, lst, fun) {
          # check whether dt is a data.frame
          stopifnot(is.data.frame(dt))
          # check if vectors in lst are "whole" / integer
          # vector elements should be column indexes
          is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
          # fail if any non-integers in list
          idx <- rapply(lst, is.wholenumber)
          stopifnot(idx)
          # check for list length
          stopifnot(ncol(dt) == length(idx))
          # subset the data
          subs <- list()
          for (i in 1:length(lst)) {
            # apply function on each part, by row
            subs[[i]] <- apply(dt[, lst[[i]]], 1, fun)
          }
          # preserve names
          names(subs) <- names(lst)
          # convert to data.frame
          subs <- as.data.frame(subs)
          # guess what =)
          return(subs)
        }

    And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of code from a function that accompanies data manipulation in psychological research, you can consider m as the results from a personality questionnaire (10 subjects, 20 vars). The vectors in the list hold column indexes that define the questionnaire subscales (e.g. personality traits). Each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of row values (results on that part of the questionnaire for each subject), you could run:

        > dfsub(m, lst, sum)
            a  b  c
        1  46 20 24
        2  41 24 21
        3  41 13 12
        4  37 14 18
        5  57 18 25
        6  27 18 18
        7  28 17 20
        8  31 18 23
        9  38 14 15
        10 41 14 22

    I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!
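
    A sketch of the loop-free core, using the same inputs as above: sapply iterates over the list of index vectors, and since the list elements are named, the subscale names carry over to the columns for free (the validation checks from dfsub are omitted here for brevity):

        # same semantics as the for loop in dfsub: one column of row-wise
        # aggregates per index vector in lst
        dfsub2 <- function(dt, lst, fun) {
          as.data.frame(sapply(lst, function(cols) apply(dt[, cols], 1, fun)))
        }

        dfsub2(m, lst, sum)  # same output as dfsub(m, lst, sum)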
