Search Results

Search found 31973 results on 1279 pages for 'network library'.

  • Why is my Map broken?

    - by Kirk
    Scenario: I am creating a server which has Room objects that contain User objects. I want to store the rooms in a Map of some sort, keyed by id (a string). Desired behavior: when a user makes a request via the server, I should be able to look up the Room by id from the library and then add the user to the room, if that's what the request needs. Currently I use static functions in my Library.java class, where the Map is stored, to retrieve Rooms:

        public class Library {
            private static Hashtable<String, Room> myRooms = new Hashtable<String, Room>();

            public static void addRoom(String s, Room r) {
                myRooms.put(s, r);
            }

            public static Room getRoomById(String s) {
                return myRooms.get(s);
            }
        }

    In another class I'll do the equivalent of myRoom.addUser(user); What I'm observing with Hashtable is that no matter how many times I add a user to the Room returned by getRoomById, the user is not in the room later. I thought that in Java the returned object is essentially a reference to the data, the same object that is in the Hashtable; but it isn't behaving like that. Is there a way to get this behavior? Maybe with a wrapper of some sort? Am I just using the wrong variant of map? Help?
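
    A quick way to sanity-check the reference semantics is a stand-alone test. The snippet below is a minimal sketch (the Room class is a simplified stand-in for the one in the question, not the real code) showing that an object retrieved from a Hashtable is the same instance that was stored, so mutations made through it are visible on the next lookup. If this passes, the problem is more likely that a different Library class or a second map instance is being consulted elsewhere.

        import java.util.ArrayList;
        import java.util.Hashtable;
        import java.util.List;

        public class MapReferenceCheck {
            // Simplified stand-in for the Room class in the question.
            static class Room {
                private final List<String> users = new ArrayList<String>();
                void addUser(String name) { users.add(name); }
                int userCount()           { return users.size(); }
            }

            public static void main(String[] args) {
                Hashtable<String, Room> rooms = new Hashtable<String, Room>();
                rooms.put("lobby", new Room());

                rooms.get("lobby").addUser("kirk");                  // mutate the retrieved reference
                System.out.println(rooms.get("lobby").userCount());  // prints 1: same object as stored
                System.out.println(rooms.get("lobby") == rooms.get("lobby")); // prints true
            }
        }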

  • Creating and using a static lib in xcode

    - by Alasdair Morrison
    I am trying to create a static library in Xcode and link to that static library from another program. So as a test I have created a BSD static C library project and just added the following code:

        // Test.h
        int testFunction();

        // Test.cpp
        #include "Test.h"
        int testFunction() { return 12; }

    This compiles fine and creates a .a file (libTest.a). Now I want to use it in another program, so I create a new Xcode project (Cocoa application) with the following code:

        // main.cpp
        #include <iostream>
        #include "Testlib.h"

        int main (int argc, char * const argv[]) {
            // insert code here...
            std::cout << "Result:\n" << testFunction();
            return 0;
        }

        // Testlib.h
        extern int testFunction();

    I right-clicked on the project - Add - Existing Framework - Add Other, selected the .a file, and it added it into the project view. I always get this linker error:

        Build TestUselibrary of project TestUselibrary with configuration Debug
        Ld build/Debug/TestUselibrary normal x86_64
        cd /Users/myname/location/TestUselibrary
        setenv MACOSX_DEPLOYMENT_TARGET 10.6
        /Developer/usr/bin/g++-4.2 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk -L/Users/myname/location/TestUselibrary/build/Debug -L/Users/myname/location/TestUselibrary/../Test/build/Debug -F/Users/myname/location/TestUselibrary/build/Debug -filelist /Users/myname/location/TestUselibrary/build/TestUselibrary.build/Debug/TestUselibrary.build/Objects-normal/x86_64/TestUselibrary.LinkFileList -mmacosx-version-min=10.6 -lTest -o /Users/myname/location/TestUselibrary/build/Debug/TestUselibrary
        Undefined symbols:
          "testFunction()", referenced from:
              _main in main.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    I am new to Mac OS X development and fairly new to C++. I am probably missing something fairly obvious; all my experience comes from creating DLLs on the Windows platform. I really appreciate any help.
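
    Two things commonly produce exactly this symptom: the archive was built for a different architecture than the application (checkable with lipo -info libTest.a), or the function in the library was compiled with C linkage while the app declares it as a C++ function, so the mangled names don't match. The following is only a sketch of a header guard for the second case, assuming libTest.a really does expose an unmangled C symbol:

        // Testlib.h -- hedged sketch: if libTest.a was built from a C source file
        // (or as a "C library" target), its symbol is the unmangled _testFunction,
        // so a C++ caller must declare it with C linkage to match.
        #ifdef __cplusplus
        extern "C" {
        #endif

        int testFunction();

        #ifdef __cplusplus
        }
        #endif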

  • Shoulda and Paperclip testing

    - by trobrock
    I am trying to test a couple of models that have an attachment with Paperclip. I have all of my validations passing except for the content-type check.

        # myapp/test/unit/project_test.rb
        should_have_attached_file :logo
        should_validate_attachment_presence :logo
        should validate_attachment_size(:logo).less_than(1.megabyte)
        should_validate_attachment_content_type :logo,
          :valid => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

        # myapp/app/models/project.rb
        has_attached_file :logo, :styles => { :small => "100x100>", :medium => "200x200>" }
        validates_attachment_presence :logo
        validates_attachment_size :logo, :less_than => 1.megabyte
        validates_attachment_content_type :logo,
          :content_type => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

    The errors I am getting:

        1) Failure:
        test: Client should validate the content types allowed on attachment logo. (ClientTest)
        [/Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/assertions.rb:55:in `assert_accepts'
         vendor/plugins/paperclip/shoulda_macros/paperclip.rb:44:in `__bind_1276100387_499280'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: Client should validate the content types allowed on attachment logo. ']:
        Content types image/png, image/jpeg, image/pjpeg, image/x-png should be accepted and rejected by logo

    This happens on two different models that are set up the same way.
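
    The failure message ("should be accepted and rejected by logo") hints that the macro also checks that some content types are rejected. A hedged sketch, assuming the allowing/rejecting matcher style that Paperclip's shoulda macros provide (the same style already used for the size check above), is to state both sides explicitly; the rejected types listed here are just illustrative:

        # Sketch, not a verified fix: spell out both accepted and rejected types
        # so the matcher has something to reject.
        should validate_attachment_content_type(:logo).
          allowing("image/png", "image/jpeg", "image/pjpeg", "image/x-png").
          rejecting("text/plain", "application/xml")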

  • Purge complete Python installation on OS X

    - by Konrad Rudolph
    I’m working on a recently upgraded OS X Snow Leopard system with MacPorts, and I’m running into problems at every corner. The first problem is the sheer number of installed Python versions: altogether, there are four:

        2.5, 2.6 and 3.0 in /Library/Frameworks/Python.framework
        2.6 in /opt/local/Library/Frameworks/Python.framework/ (MacPorts installation)

    So there are at least two useless/redundant versions: 2.5 and the redundant 2.6. Additionally, the pre-installed Python is giving me severe problems because some of the pre-installed libraries (in particular scipy, numpy and matplotlib) don’t work properly. I am sorely tempted to purge the complete /Library/Frameworks/Python.framework path, as well as the MacPorts Python installation. After that, I’ll start from a clean slate by installing a properly configured Python, e.g. the one from Enthought. Am I running headlong into trouble? Or is this a sane undertaking? (In particular, I need a working Python in the next few days, and if I end up with a non-working Python this would be a catastrophe of medium proportions. On the other hand, some features I need from matplotlib aren’t working now.)

  • Does anyone know of good Delphi docking components?

    - by X-Ray
    We'd like to add movable panels to an application. Presently we've used the DevExpress docking library but have found it to be disappointingly quirky and difficult to work with. It also has some limitations that aren't so great. Auto-hide, pinning, and moving of pages by drag-and-drop are all features we'd like to use. The built-in Delphi docking doesn't seem to be full-featured enough to do the things we need (also see sample below). Perhaps I should dig deeper into Delphi's docking abilities; my initial impression is that they seem very toolbar-oriented rather than something I can drop a frame into. I'm not experienced with docking topics. My only experience has been with the DevExpress docking library, where I needed to programmatically create and dock panels. Is it my imagination, or are DevExpress's products unduly difficult to use and learn? The DevExpress Ribbon Bar component compared to the D2009 Ribbon Bar was certainly a useful experience: I will migrate to the D2009 Ribbon Bar as soon as it is convenient to do so. It was refreshingly straightforward to learn and use, a sharp contrast to the DevExpress equivalent. If it takes four times as long to build something with the DevExpress equivalent, it's time to change direction. What would you suggest in regard to a docking library? Thank you for your suggestions/comments!

  • Strange File Upload issue with asp.net site on a web farm

    - by Coov
    I have a basic ASP.NET file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production web farm, it behaves strangely. If I access the site from off the network, I can load file after file without issue. If I access the site from within our network, I can load the first file just fine, but any subsequent files result in a "bad sequence of commands" error. I'm not sure if this is a web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly, but it doesn't make sense why everything works fine remotely.

    Markup:

        <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" />
        <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" />

    Code:

        if (FileUpload1.HasFile)
        {
            FtpWebRequest ftpRequest;
            FtpWebResponse ftpResponse;

            ftpRequest = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName));
            ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
            ftpRequest.Proxy = null;
            ftpRequest.UseBinary = true;
            ftpRequest.Credentials = new NetworkCredential("username", "password");
            ftpRequest.KeepAlive = false;

            byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength];

            using (Stream fr = FileUpload1.PostedFile.InputStream)
            {
                fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength);
            }

            using (Stream writer = ftpRequest.GetRequestStream())
            {
                writer.Write(fileContents, 0, fileContents.Length);
            }

            ftpResponse = (FtpWebResponse)ftpRequest.GetResponse();
            Response.Write(ftpResponse.StatusDescription);
        }
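
    One detail worth noting in the code above is that the FtpWebResponse is never closed, so the FTP control connection can linger and be reused in a half-finished state, which is one plausible source of a "bad sequence of commands" reply. Below is a hedged sketch of the same upload with the response disposed; the host name and credentials are placeholders, and this is not claimed to be the confirmed fix:

        // Hedged sketch: same upload as above, but the response is disposed so the
        // FTP control connection is released cleanly. Host/credentials are placeholders.
        FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create(
            new Uri("ftp://ftp.example.com/" + FileUpload1.FileName));
        ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
        ftpRequest.UseBinary = true;
        ftpRequest.KeepAlive = false;
        ftpRequest.Credentials = new NetworkCredential("username", "password");

        byte[] buffer = new byte[4096];
        int read;
        using (Stream writer = ftpRequest.GetRequestStream())
        {
            Stream source = FileUpload1.PostedFile.InputStream;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                writer.Write(buffer, 0, read);   // stream the upload in chunks
            }
        }

        using (FtpWebResponse ftpResponse = (FtpWebResponse)ftpRequest.GetResponse())
        {
            Response.Write(ftpResponse.StatusDescription);
        }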

  • Define a swig interface file for generation of wrapper to every type from some header file

    - by Dmitriy Matveev
    Hi! We're using a C library in our Java project. Several years ago another developer, who retired a few years ago (as always), created all the wrappers for us. The wrappers were generated by SWIG, but the interface file is lost now. The basic idea of the library and its wrappers is the following: there is only one function, which returns a pointer to some complex object, and there is a wrapper for that function. The complex object is a tree-like structure with dozens of node kinds and types (C structures) used to represent them. There are hundreds of wrappers for every field of every type, and we're trying to use them all. The library was updated some time ago, and now there is some new data we aren't aware of yet but would like to use. This data is contained in some of the objects indirectly contained in or referenced from the object created by the function we call (some new fields and types were added). I know that I shouldn't make any changes to the wrappers by hand and should rather modify the interface, but as I already wrote, it's missing. For now I only want to generate wrappers for the few types which were added or changed and add them to our old wrappers, but later I want to start creating an interface file which will define what should be wrapped and how. All the definitions necessary for us are defined in a single header file. Is it possible to tell SWIG to generate wrappers for every type in this header? If so, how can I write such an interface file?
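
    For reference, SWIG can wrap everything declared in a header by %include-ing it from the interface file. The following is a minimal sketch only; the module name and header name are placeholders for whatever the real library uses, and typemaps for the tree-like structures would still need to be added on top:

        /* example.i -- hedged sketch of a minimal SWIG interface that wraps every
           declaration in one header; "example" and "library.h" are placeholder names. */
        %module example

        %{
        #include "library.h"   /* pulled verbatim into the generated wrapper code */
        %}

        /* Ask SWIG to parse the header itself and generate wrappers for all the
           types and functions it declares. */
        %include "library.h"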

  • std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

    - by fixermark
    I have a C++ application that I am porting to Mac OS X (specifically, 10.6). The app makes heavy use of the C++ standard library and Boost. I recently observed some breakage in the app that I'm having difficulty understanding. Basically, the Boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:

        #include <locale>

        int main ( int argc, char *argv [] )
        {
            std::locale::global(std::locale(""));
            return 0;
        }

    This program fails when I run it through g++ and execute the resulting program in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised I'm seeing this breakage in the default configuration. My questions are: Is this expected behavior for this code on Mac OS 10.6? What would a proper workaround be? I can't really re-write the function, because the version of the Boost libraries we are using executes this statement internally as part of the filesystem library. For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder) but not when Xcode runs the program in Debug mode.

    Edit: The error given by the above code on 10.6.1 is:

        $ ./locale
        terminate called after throwing an instance of 'std::runtime_error'
          what():  locale::facet::_S_create_c_locale name not valid
        Abort trap
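
    The usual explanation for the _S_create_c_locale error (hedged; worth verifying against the exact toolchain in use) is that the libstdc++ shipped with Apple's g++ at the time only supports the "C"/"POSIX" named locales, so std::locale("") throws whenever LANG names anything else. That does not help for the call buried inside Boost, but for code you do control a defensive sketch is to fall back to the classic locale:

        #include <iostream>
        #include <locale>
        #include <stdexcept>

        int main()
        {
            try {
                // Ask for the user-preferred locale (reads LANG/LC_* at runtime).
                std::locale::global(std::locale(""));
            } catch (const std::runtime_error& e) {
                // A libstdc++ without named-locale support throws here; fall back
                // to the classic "C" locale so the program can continue.
                std::cerr << "locale \"\" unavailable (" << e.what()
                          << "), falling back to \"C\"\n";
                std::locale::global(std::locale::classic());
            }
            return 0;
        }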

  • Mono doesn't write settings defaults

    - by Petar Minchev
    Hi guys! Here is my problem. If I use only one Windows Forms project and call only Settings.Default.Save() when running it, Mono creates a user.config file with the default value for each setting. That is fine; so far so good. But now I add a class library project, which is referenced from the Windows Forms project, and I move the settings from the Windows Forms project to the class library. Now I do the same - Settings.Default.Save() - and to my great surprise, Mono creates a user.config file with EMPTY values (NOT the default ones) for each setting. What's the difference between having the settings in the Windows Forms project and having them in the class library? And by the way, it is not an operating system issue: it is a Mono issue, because it fails both under Windows and under Linux. If I don't use Mono, everything is fine, but I have to port my application to Linux, so I have to use Mono. I am really frustrated; it is blocking a project. Thanks in advance for any suggestions you have. Regards, Petar

  • Cookie add in the Global.asax warning in application log

    - by Ioxp
    In my Global.asax file I have the following:

        System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess");
        isAccess.Expires = DateTime.Now.AddDays(-1);
        isAccess.Value = "";
        System.Web.HttpContext.Current.Response.Cookies.Add(isAccess);

    Every time this code runs, the following is logged in the application event log as a warning:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 5/25/2010 12:23:20 PM
        Event time (UTC): 5/25/2010 4:23:20 PM
        Event ID: c515e27a28474eab8d99720c3f5a8e90
        Event sequence: 4148
        Event occurrence: 332
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/2100509645/Root-1-129192259222289896
            Trust level: Full
            Application Virtual Path: /
            Application Path: <PathRemoved>\www\
            Machine name: TIPPER

        Process information:
            Process ID: 6936
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: NullReferenceException
            Exception message: Object reference not set to an instance of an object.

        Request information:
            Request URL:
            Request path:
            User host address:
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

        Thread information:
            Thread ID: 7
            Thread account name: NT AUTHORITY\NETWORK SERVICE
            Is impersonating: False
            Stack trace: at ASP.global_asax.Session_End(Object sender, EventArgs e) in <PathRemoved>\Global.asax:line 113

    Any idea why this code would cause this error?
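
    The stack trace shows the code running inside Session_End. That event fires on the server when a session times out, with no active request, so HttpContext.Current (and therefore Response) is null there, which matches the NullReferenceException. A minimal sketch of a guard (or the logic can be moved to an event that runs during a request, such as Session_Start or a logout handler):

        // Hedged sketch: Session_End fires without an active request, so
        // HttpContext.Current is null there. Guard the access before touching it.
        void Session_End(object sender, EventArgs e)
        {
            var context = System.Web.HttpContext.Current;
            if (context == null)
            {
                return; // session expired server-side; there is no response to attach a cookie to
            }

            var isAccess = new System.Web.HttpCookie("IsAccess");
            isAccess.Expires = DateTime.Now.AddDays(-1);
            isAccess.Value = "";
            context.Response.Cookies.Add(isAccess);
        }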

  • Where should you put 3rd party .NET dlls when using git submodules to avoid duplication

    - by Tim Abell
    I have two .NET library projects in Visual Studio 2008 that both make use of the MySql Connector for .NET (MySql.Data.dll). These libraries are then in turn both used by a .NET command line application which also uses the Connector. The library projects are pulled into the application's solution as git submodules and referenced by project in Visual Studio. I'm looking for the most effective strategy for storing and referencing the MySql Connector library. I have tried having the MySql.Data.dll checked in to all three projects (in their root folder); this was problematic when one project changed to a newer version of the connector DLL. Although each project had its own version of the DLL, only one was packaged into the resultant application, leading to an API mismatch which was hard to pin down. This has put me off this approach. I have tried having the command line application reference the connector DLL that is held in a submodule; however, this only removes the possibility of version mismatches when there is only one submodule, rather than two as in this case. I am contemplating putting the DLL in the global assembly cache (GAC) of all machines that need to build or use the application, but I'm wary of not having all dependencies for an application available in source control.

  • Is it possible to use 2 versions of jQuery on the same page?

    - by Ben McCormack
    NOTE: I know similar questions have already been asked here and here, but I'm looking for additional clarification as to how to make this work. I'm adding functionality to an existing web site that is already using an older version of the jQuery library (1.1.3.1). I've been writing my added functionality against the newest version of the jQuery library (1.4.2). I've tested the website using only the newer version of jQuery and it breaks functionality, so now I'm looking at using both versions on the same page. How is this possible? What do I need to do in my code to specify that I'm using one version of jQuery instead of another? For example, I'll put <script> tags for both versions of jQuery in the header of my page, but what do I need to do so that I know for sure in my calling code that I'm calling one version of the library or the other? Maybe something like this:

        //Put some code here to specify a variable that will be using the newer
        //version of jquery:
        var $NEW = jQuery.theNewestVersion();

        //Now when I use $NEW, I'll know it's the newest version and won't
        //conflict with the older version.
        $NEW('#personName').text('Ben');

        //And when I use the original $ in code, or simply 'jquery', I'll know
        //it's the older version.
        $('#personName').doSomethingWithTheOlderVersion();
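
    For reference, jQuery ships a relinquish mechanism for exactly this situation: jQuery.noConflict(true) hands $ and jQuery back to whatever was loaded before and returns the newer copy. A minimal sketch follows; the script file names are placeholders, and load order matters (old version first, new version second):

        <!-- Hedged sketch: load the existing 1.1.3.1 first, then 1.4.2, then hand
             $ and jQuery back to the old copy and keep the new one under $NEW. -->
        <script src="jquery-1.1.3.1.js"></script>
        <script src="jquery-1.4.2.js"></script>
        <script>
          // noConflict(true) restores the previously loaded jQuery to $ / jQuery
          // and returns the newer one, which we capture under a separate name.
          var $NEW = jQuery.noConflict(true);

          $NEW('#personName').text('Ben');   // runs against jQuery 1.4.2
          // $(...) and jQuery(...) continue to refer to 1.1.3.1 here.
        </script>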

  • Quartz.Net scheduler works locally but not on remote host

    - by Glinkot
    Hi. I have a timed Quartz.Net job working fine on my dev machine, but once deployed to a remote server it is not triggering. I believe the job is scheduled OK, because if I post back, it tells me the job already exists (I normally check for postback, however). The email code definitely works, as the 'button1_click' event sends emails successfully. I understand I have full or medium trust on the remote server. My host says they don't apply restrictions that they know of which would affect it. Are there any other things I need to do to get it running?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using Quartz;
        using Quartz.Impl;
        using Quartz.Core;
        using Aspose.Network.Mail;
        using Aspose.Network;
        using Aspose.Network.Mime;
        using System.Text;

        namespace QuartzTestASP
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    if (!Page.IsPostBack)
                    {
                        ISchedulerFactory schedFact = new StdSchedulerFactory();
                        IScheduler sched = schedFact.GetScheduler();
                        JobDetail jobDetail = new JobDetail("testJob2", null, typeof(testJob));

                        //Trigger trigger = TriggerUtils.MakeMinutelyTrigger(1, 3);
                        Trigger trigger = TriggerUtils.MakeSecondlyTrigger(10, 5);
                        trigger.StartTimeUtc = DateTime.UtcNow;
                        trigger.Name = "TriggertheTest";

                        sched.Start();
                        sched.ScheduleJob(jobDetail, trigger);
                    }
                }

                protected void Button1_Click1(object sender, EventArgs e)
                {
                    myutil.sendEmail();
                }
            }

            class testJob : IStatefulJob
            {
                public testJob() { }

                public void Execute(JobExecutionContext context)
                {
                    myutil.sendEmail();
                }
            }

            public static class myutil
            {
                public static void sendEmail()
                {
                    // tested code lives here and works fine when called from elsewhere
                }
            }
        }
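
    A commonly suggested arrangement for in-process Quartz.Net schedulers in ASP.NET, offered here only as a sketch and not as the confirmed fix, is to create the scheduler once in Application_Start and hold a long-lived reference instead of creating it in Page_Load. Note that on shared hosting, IIS may still unload an idle worker process, which stops any in-process scheduler regardless of where it was started.

        // Hedged sketch for Global.asax.cs: one scheduler per application, kept in
        // a static field so it is not tied to a single page request.
        public class Global : System.Web.HttpApplication
        {
            private static IScheduler sched;

            protected void Application_Start(object sender, EventArgs e)
            {
                ISchedulerFactory schedFact = new StdSchedulerFactory();
                sched = schedFact.GetScheduler();

                JobDetail jobDetail = new JobDetail("testJob2", null, typeof(testJob));
                Trigger trigger = TriggerUtils.MakeSecondlyTrigger(10, 5);
                trigger.StartTimeUtc = DateTime.UtcNow;
                trigger.Name = "TriggertheTest";

                sched.Start();
                sched.ScheduleJob(jobDetail, trigger);
            }

            protected void Application_End(object sender, EventArgs e)
            {
                if (sched != null) sched.Shutdown(true); // wait for running jobs to finish
            }
        }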

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a web test which makes a simple call to a web service and looks like this:

        MyWebService webService = new MyWebService();
        webService.Timeout = 180000;
        webService.myMethod();

    I am not using think times, and the run duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this:

        Tests Total: 4500
        Network Interface\Bytes sent (agent machine): 35,500

    Then I ran the same test, but this time simulating 2 users, and I got something like this:

        Tests Total: 2225
        Network Interface\Bytes sent (agent machine): 30,500

    So when I increased the number of users, the tests/sec was half of what it was with only 1 user, and the bytes sent by the agent were also lower. I think this is strange, because it doesn't seem that I have a bottleneck on my agent machine: CPU is never higher than 30%, I have over 1.5GB of RAM free, and my network utilization is about 0.5% of its capacity. In order to troubleshoot this I ran a test using a step pattern, where the simulated users went from 20 to 800. The requests/sec is practically constant through the whole test, so it is clear there is something in my test or my environment which is preventing the number of requests from getting higher. It would be expected behavior if the response time were getting higher, because that would tell me the requests weren't being processed properly, but the strange thing is that the response time is practically constant the whole time, and it is actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.

  • Need help in reading callgrind output

    - by n179911
    Hi, I have run callgrind with my application like this:

        valgrind --tool=callgrind MyApplication

    and then called:

        callgrind_annotate --auto=yes ./callgrind.out.2489

    I see output like this:

        768,097,560  PROGRAM TOTALS
        --------------------------------------------------------------------------------
        Ir  file:function
        --------------------------------------------------------------------------------
        18,624,794  /build/buildd/eglibc-2.11.1/elf/dl-lookup.c:do_lookup_x [/lib/ld-2.11.1.so]
        18,149,492  /src/js/src/jsgc.cpp:JS_CallTracer'2 [/src/firefox-debug-objdir/js/src/libmozjs.so]
        16,328,897  /src/layout/style/nsCSSDataBlock.cpp:nsCSSExpandedDataBlock::DoAssertInitialState() [/src/firefox-debug-objdir/toolkit/library/libxul.so]
        13,376,634  /build/buildd/eglibc-2.11.1/nptl/pthread_getspecific.c:pthread_getspecific [/lib/libpthread-2.11.1.so]
        13,005,623  /build/buildd/eglibc-2.11.1/malloc/malloc.c:_int_malloc [/lib/libc-2.11.1.so]
        10,404,453  ???:0x0000000000009190 [/usr/lib/libpangocairo-1.0.so.0.2800.0]
        10,358,646  /src/xpcom/io/nsFastLoadFile.cpp:NS_AccumulateFastLoadChecksum(unsigned int*, unsigned char const*, unsigned int, int) [/src/firefox-debug-objdir/toolkit/library/libxul.so]
        8,543,634  /src/js/src/jsscan.cpp:js_GetToken [/src/firefox-debug-objdir/js/src/libmozjs.so]
        7,451,273  /src/xpcom/typelib/xpt/src/xpt_arena.c:XPT_ArenaMalloc [/src/firefox-debug-objdir/toolkit/library/libxul.so]
        7,335,131  ???:g_type_check_instance_is_a [/usr/lib/libgobject-2.0.so.0.2400.0]

    I have a few questions:

    What does the number next to each entry mean? Does it mean the program spent cumulatively that long calling the function listed? How can I tell how many times that function has been called, and does that figure include the time spent in the functions called by that function?

    What do the lines with ??? mean, e.g. ???:0x0000000000009190 [/usr/lib/libpangocairo-1.0.so.0.2800.0]?

    Thank you.

  • Basic compile issue with QT4

    - by Cobus Kruger
    I've been trying to get a dead simple listing from a university textbook to compile with the newest Qt SDK for Windows, which I downloaded last night. After struggling through the usual nonsense (no make.bat, needing to manually add environment variables, and so on), I am finally at the point where I can build. But only one of the two libraries seems to work. The .pro file I use is dead simple:

        SUBDIRS += utils \
            dataobjects
        TEMPLATE = subdirs

    In each of these two subfolders I have the source for a library. Running qmake generates a makefile, and running make runs through all the preliminaries and then fails on the g++ call:

        g++ -enable-stdcall-fixup -Wl,-enable-auto-import -Wl,-enable-runtime-pseudo-reloc --out-implib,libdataobjects.a -shared -mthreads -Wl -Wl,--out-implib,c:\Users\Cobus\workspace\lib\libdataobjects.a -o ..\..\lib\dataobjects.dll object_script.dataobjects.Debug -L"c:\Users\Cobus\Portab~1\Qt\2010.02.1\qt\lib" -LC:\Users\Cobus\workspace\lib -lutils -lQtXmld4 -lQtGuid4 -lQtCored4
        c:/users/cobus/portab~1/qt/2010.02.1/mingw/bin/../lib/gcc/mingw32/4.4.0/../../../../mingw32/bin/ld.exe: cannot find -lutils

    The problem seems to be right near the end of the command line, where -lutils is added, indicating that there should be a library by the name of utils. While I would have expected to see that, you'll notice the library names after --out-implib include lib in the name, so they become libutils and libdataobjects. I have tried to figure out why this is happening, to no avail. Does anyone have an idea what's going on?
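
    For what it's worth, ld is looking for libutils.a in the -L paths and not finding it, which usually means the utils library either hasn't been built yet when dataobjects links, or isn't written to the directory the link line searches. A hedged qmake sketch follows; the directory layout is an assumption based on the paths in the error output, not the textbook's actual project files:

        # Top-level .pro -- hedged sketch: build utils before dataobjects.
        TEMPLATE = subdirs
        CONFIG  += ordered          # process SUBDIRS in the listed order
        SUBDIRS += utils \
                   dataobjects

        # utils/utils.pro -- put the library where the linker will look for it
        # (DESTDIR here matches the -LC:\Users\Cobus\workspace\lib in the link line).
        TEMPLATE = lib
        CONFIG  += staticlib
        DESTDIR  = ../../lib

        # dataobjects/dataobjects.pro -- link against it from that same directory.
        LIBS    += -L../../lib -lutils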

  • Is Form validation and Business validation too much?

    - by Robert Cabri
    I've got a question about form validation versus business validation. I see a lot of frameworks that use some sort of form validation library: you submit some values and the library validates the values from the form. If something is not OK, it shows some errors on your screen. If all goes to plan, the values are set into domain objects, where the values will be, or better said should be, validated again; most likely with the same rules as in the validation library. I know two PHP frameworks that have this kind of construction: Zend and Kohana. When I look at programming principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this isn't a good way, since it validates twice. Why not create domain objects that do the actual validation? Example: a form with a username field and an email field is submitted. The values of the username field and the email field will be populated into two different domain objects, Username and Email:

        class Username {}
        class Email {}

    These objects validate their data and, if not valid, throw an exception. Do you agree? What do you think about this approach? Is there a better way to implement validation? I'm confused about how a lot of frameworks/developers handle this stuff. Are they all wrong, or am I missing a point?

    Edit: I know there should also be some kind of client-side validation. That is a different ballgame in my opinion. If you have comments on this and a way to deal with this kind of thing, please share them.
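
    To make the proposed approach concrete, here is a minimal sketch of a self-validating value object. The class and exception names are only illustrative and are not taken from Zend, Kohana, or any other framework:

        <?php
        // Minimal sketch of a self-validating value object, as proposed above.
        // Swap InvalidArgumentException for a domain-specific exception if the
        // application defines one.
        class Email
        {
            private $value;

            public function __construct($value)
            {
                if (!filter_var($value, FILTER_VALIDATE_EMAIL)) {
                    throw new InvalidArgumentException("Not a valid email address: $value");
                }
                $this->value = $value;
            }

            public function __toString()
            {
                return $this->value;
            }
        }

        // Usage: construction either yields a valid object or throws, so any code
        // holding an Email instance can trust it without re-validating.
        try {
            $email = new Email($_POST['email']);
        } catch (InvalidArgumentException $e) {
            // report the error back to the form layer
        }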

  • gethostbyname fails for local hostname after resuming from hibernate (Vista+7?)

    - by John
    Just wondering if anyone else has spotted this: on some users' machines running our software, occasionally the call to the Win32 Winsock gethostbyname fails with error code 11004. For the argument to gethostbyname, I'm passing in the result from gethostname. Now, the docs say 11004 is WSANO_DATA. None of the descriptions seem to be relevant (it occurs if you pass in an IPv6 address, but as I say, I'm passing in a hostname). Even more interesting is that MSDN suggests that this combination (gethostname followed by gethostbyname) should never fail, not even if there is no IP address (in that case it would just return an empty list of IPs). Here is the quote from the gethostname MSDN entry: "...it is guaranteed that the name returned will be successfully parsed by gethostbyname and WSAAsyncGetHostByName." It only ever happens after resuming from hibernate, in that short period when the network is restarting, and only on Vista/7 (well, I've only seen it on Vista and 7). One theory I had was that it is related to IPv6: maybe for a short period the network reports an IPv6 address but not the corresponding IPv4 address (I'm pretty sure that all the client machines are dual IP stack, but I could be wrong). I tried to reproduce it by turning off my network card (to force no IP addresses) and couldn't. Has anyone seen this before? Any ideas? John

  • 'Stack level too deep' error in engine-like plugin with globalize

    - by nutsmuggler
    Hello folks. I have built an engine-like plugin thanks to the new features of Rails 2.3. It's a 'Product' module for a CMS, extrapolated from a previously existing (and working) model/controller. The plugin relies on easy_fckeditor and on globalize (the description and title fields are localised), and I suspect that globalize could be the culprit here. Everything works fine except for the update action, where I get the following error message (posting just the first lines; the whole trace is about attribute_methods):

        stack level too deep
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:64:in `generated_methods?'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:241:in `method_missing'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:249:in `method_missing'

    For reference, the full error stack is here: http://pastie.org/596546

    I've tried to debug by eliminating the input fields one by one, but I keep getting the error. FCKeditor doesn't seem to be the culprit (the error occurs even without it). This is the action:

        def update
          params[:product][:term_ids] ||= []
          @product = Product.find(params[:id])
          respond_to do |format|
            if @product.update_attributes(params[:product])
              flash[:notice] = t(:Product_was_successfully_updated)
              format.html { redirect_to products_path }
              format.xml  { head :ok }
            else
              format.html { render :action => "edit" }
              format.xml  { render :xml => @product.errors, :status => :unprocessable_entity }
            end
          end
        end

    As you can see, it's quite straightforward. Of course I am not expecting someone to solve this question straight away; I'd just like a heads-up, a suggestion about where to look to solve this issue. Thanks in advance, Davide

  • PHP: Exception not caught by try ... catch

    - by Christian Brenner
    I am currently working on an autoloader class for one of my projects. Below is the code for the controller library:

        public static function includeFileContainingClass($classname)
        {
            $classname_rectified = str_replace(__NAMESPACE__.'\\', '', $classname);
            $controller_path = ENVIRONMENT_DIRECTROY_CONTROLLERS.strtolower($classname_rectified).'.controller.php';

            if (file_exists($controller_path)) {
                include $controller_path;
                return true;
            } else {
                // TODO: Implement gettext('MSG_FILE_CONTROLLER_NOTFOUND')
                throw new Exception('File '.strtolower($classname_rectified).'.controller.php not found.');
                return false;
            }
        }

    And here is the code of the file from which I try to invoke the autoloader:

        try {
            spl_autoload_register(__NAMESPACE__.'\\Controller::includeFileContainingClass');
        } catch (Exception $malfunction) {
            die($malfunction->getMessage());
        }

        // TESTING ONLY
        $test = new Testing();

    When I try to force a malfunction, I get the following message:

        Fatal error: Uncaught exception 'Exception' with message 'File testing.controller.php not found.' in D:\cerophine-0.0.1-alpha1\application\libraries\controller.library.php:51
        Stack trace:
        #0 [internal function]: application\Controller::includeFileContainingClass('application\Tes...')
        #1 D:\cerophine-0.0.1-alpha1\index.php(58): spl_autoload_call('application\Tes...')
        #2 {main}
          thrown in D:\cerophine-0.0.1-alpha1\application\libraries\controller.library.php on line 51

    What seems to be wrong?
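
    One thing visible in the posted code: the try/catch only wraps the spl_autoload_register() call itself, which never throws. The exception is raised later, when new Testing() triggers the autoloader, and that statement sits outside any try block. A minimal sketch of moving the guard (and, since the file appears to live in a namespace, catching the global \Exception rather than a namespaced one):

        <?php
        // Hedged sketch: the catch block must surround the statement that actually
        // triggers autoloading (the "new"), because that is where the autoloader
        // runs and throws.
        spl_autoload_register(__NAMESPACE__.'\\Controller::includeFileContainingClass');

        try {
            $test = new Testing();        // autoloader fires here and may throw
        } catch (\Exception $malfunction) {
            // Leading backslash: inside a namespace, a bare "Exception" would
            // refer to a class in that namespace, not the built-in one.
            die($malfunction->getMessage());
        }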

  • Problems with Console.SetOut in Release Mode?

    - by Matt Jacobsen
    I have a bunch of Console.WriteLine calls in my code that I can observe at runtime. I communicate with a native library that I also wrote. I'd like to stick some printf calls in the native library and observe them too, but I don't see them at runtime. I've created a convoluted hello-world app to demonstrate my problem. When the app runs, I can debug into the native library and see that the hello-world function is called. The output never lands in the TextWriter, though. Note that if the same code is run as a console app, then everything works fine.

    C#:

        [DllImport("native.dll")]
        static extern void Test();

        StreamWriter writer;

        public Form1()
        {
            InitializeComponent();
            writer = new StreamWriter(@"c:\output.txt");
            writer.AutoFlush = true;
            System.Console.SetOut(writer);
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Test();
        }

    and the native part:

        __declspec(dllexport) void Test()
        {
            printf("Hello World");
        }

    Update: hamishmcn below started talking about debug/release builds. I removed the native call in the above button1_Click method and just replaced it with a standard Console.WriteLine call. When I compiled and ran this in debug mode, the messages were redirected to the output file. When I switched to release mode, however, the calls weren't redirected. Console redirection only seems to work in debug mode. How do I get around this?

  • Importing from referenced assembly - MEF

    - by cmaduro
    I have the following simplified code:

        namespace Silverbits.Applications
        {
            public partial class SilverbitsApplication : Application
            {
                [Import("MainPage")]
                public UserControl MainPage
                {
                    get { return RootVisual as UserControl; }
                    set { RootVisual = value; }
                }

                public SilverbitsApplication()
                {
                    this.Startup += this.SilverbitsApplication_StartUp;
                    this.Exit += new EventHandler(SilverbitsApplication_Exit);
                    this.UnhandledException += this.SilverbitsApplication_UnhandledException;
                    InitializeComponent();
                }

                private void SilverbitsApplication_StartUp(object sender, StartupEventArgs e)
                {
                    CompositionInitializer.SatisfyImports(this);
                }
            }
        }

        namespace Manpower4U
        {
            public class App : SilverbitsApplication
            {
                public App() : base() { }
            }
        }

        namespace Manpower4U
        {
            [Export("MainPage")]
            public partial class MainPage : UserControl
            {
                public MainPage()
                {
                    InitializeComponent();
                }
            }
        }

    The idea is that I have a Silverbits library, which is a completely different solution, and I have a Manpower4U Silverlight application that references my Silverbits library. I want to export MainPage from Manpower4U and set it as the RootVisual in my SilverbitsApplication class. The SilverbitsApplication class is basically App.xaml/App.cs from the Silverlight application, only I put it in a class library and subclassed the App.cs file in Manpower4U, which is now the entry point of Manpower4U. MEF cannot resolve the import. How do I get this to work?
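
    If the default composition host is not picking up the export from the referencing assembly, one possible sketch is to compose explicitly against a container whose catalog includes the application assembly. This follows the desktop System.ComponentModel.Composition API; whether CompositionInitializer or an explicit CompositionContainer is the right entry point depends on the MEF preview drop being used for Silverlight, so treat the types below as an assumption to verify:

        // Hedged sketch: build the container explicitly over the assemblies that
        // contain the exports, instead of relying on the default host.
        private void SilverbitsApplication_StartUp(object sender, StartupEventArgs e)
        {
            var catalog = new AggregateCatalog(
                new AssemblyCatalog(typeof(SilverbitsApplication).Assembly), // the Silverbits library
                new AssemblyCatalog(this.GetType().Assembly));               // the derived App (Manpower4U)
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);   // satisfies [Import("MainPage")]
        }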

  • JavaScript snippet to read and output XML file on page load?

    - by Banderdash
    Hey guys, hoping I might get some help. I have an XML file here with a list of books, each with a unique id and a numeric value for whether or not it is checked out. I need a JavaScript snippet that requests the XML file after the page loads and displays the content of the XML file. The XML file looks like this:

        <?xml version="1.0" encoding="UTF-8" ?>
        <response>
          <library name="My Library">
            <book id="1" checked-out="1">
              <authors>
                <author>John Resig</author>
              </authors>
              <title>Pro JavaScript Techniques (Pro)</title>
              <isbn-10>1590597273</isbn-10>
            </book>
            <book id="2" checked-out="0">
              <authors>
                <author>Erich Gamma</author>
                <author>Richard Helm</author>
                <author>Ralph Johnson</author>
                <author>John M. Vlissides</author>
              </authors>
              <title>Design Patterns: Elements of Reusable Object-Oriented Software</title>
              <isbn-10>0201633612</isbn-10>
            </book>
            ...
          </library>
        </response>

    Would LOVE any and all help!
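
    A minimal sketch of one way to do this with a plain XMLHttpRequest; the file name "books.xml" and the target element id "book-list" are placeholders, the XML must be served from the same origin, and older IE would need an ActiveX fallback:

        // Hedged sketch: fetch the XML after load and list each book's title,
        // authors and checked-out flag.
        window.onload = function () {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "books.xml", true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4 || xhr.status !== 200) return;
                var doc = xhr.responseXML;
                var books = doc.getElementsByTagName("book");
                var out = [];
                for (var i = 0; i < books.length; i++) {
                    var book = books[i];
                    var title = book.getElementsByTagName("title")[0].firstChild.nodeValue;
                    var authors = book.getElementsByTagName("author");
                    var names = [];
                    for (var j = 0; j < authors.length; j++) {
                        names.push(authors[j].firstChild.nodeValue);
                    }
                    var checkedOut = book.getAttribute("checked-out") === "1" ? " (checked out)" : "";
                    out.push(title + " - " + names.join(", ") + checkedOut);
                }
                document.getElementById("book-list").innerHTML = out.join("<br>");
            };
            xhr.send(null);
        };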

  • How do I make the manifest available during a Maven/Surefire unittest run "mvn test" ?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the source repository with the Maven project: http://github.com/znerd/logdoc

    My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

        Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file). Now my challenge is that the manifest file is not available when I run the unit tests with "mvn test". Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check for the JAR, I find that it has not even been generated: Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run? Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.
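
    One pragmatic option, offered only as a sketch and not as the project's actual approach, is to accept that getImplementationVersion() legitimately returns null when the code runs from unpacked target/classes (no jar manifest there) and turn that case into a skipped test rather than a failure, using JUnit's Assume support. The test and class names below follow the question; the assertion on the non-null path is illustrative:

        import static org.junit.Assert.assertFalse;

        import org.junit.Assume;
        import org.junit.Test;

        import org.znerd.logdoc.Library;

        public class LibraryVersionTest {

            @Test
            public void testGetVersion() {
                String version = Library.getVersion();

                // Running from unpacked target/classes means there is no jar manifest,
                // so a null Implementation-Version is expected; skip instead of failing.
                Assume.assumeNotNull(version);

                // When a manifest is present, the version should at least be non-empty.
                assertFalse("version must not be empty", version.trim().isEmpty());
            }
        }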

  • Api/plugins for open source libraries?

    - by fayer
    Whenever I use an open source library, e.g. Doctrine, I always end up coding a class (a so-called facade) to use the library. So the next time I want to create a user, I just type:

        $fields = array('name' => 'peter', 'email' => '[email protected]');
        Doctrine_Facade::create_entity($entity, $fields);

    and it creates an entity with the provided information. So I guess all coders will create their own "facade". I wonder how common it is for such open source facades to be available for download, for interacting with the open source libraries? Is this rare, or have I just not seen any of them? In some frameworks I have seen them called plugins, e.g. plugins for the Twitter API or the Facebook API. So whenever you download a library, should you search for plugins/facades on the net, or is it better to just code your own? I just thought it would be great for everyone not to reinvent the wheel. Thanks.
