Search Results

Search found 9715 results on 389 pages for 'bad passwords'.

Page 326/389 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • Using Hibernate with MS ACCESS 2007 Database (Free JDBC Driver)

    - by Quentin T.
    1. I want to do reverse engineering with the Hibernate plugin for Eclipse against an MS Access 2007 database; I'm forced to use an existing MS Access 2007 db. An easy solution would be to buy the HXTT driver, but I want to use a free driver for this work, so I tried to apply this post: http://www.programmingforfuture.com/2011/06/how-to-use-ms-access-with-hibernate.html (it uses the SQL Server dialect and the driver sun.jdbc.odbc.JdbcOdbcDriver). Unfortunately I get an error that nobody on the internet seems to have hit: Exception while generating code Reason : org.hibernate.exception.GenericJDBCException: Error while reading primary key meta data for `c:/myaccessdb.mdb`.TableTest1 I have tried changing the primary keys in my MS Access DB (deleting all of them) and running the reverse engineering against an MS Access database with only one table and no primary key, but I get the same problem every time. 2. The purpose of my job is to load an Oracle 11g database daily (or weekly) with data from an existing MS Access 2007 database, and I thought of using a Java procedure (Hibernate EJB) launched automatically every week to do the data transfer. Is this the best solution? Configuration: sun.jdbc.odbc.JdbcOdbcDriver v???, Hibernate v3.4, Eclipse. PS: if you are an HXTT developer or seller, please be indulgent with my post ;) making money by making people believe that you are helping is bad! One solution is to use the Derby Client driver, as in the post "Does anyone know if Hibernate and java will work effectively with Access?", but Rich Seller's answer there needs clarification. Could you explain your answer and describe your configuration (hibernate.cfg.xml, persistence.xml and the URL you use in the property name="hibernate.connection.url"), using the free Derby driver rather than the paid HXTT one?

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (so as to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases: 1) inside the function a copy is made of the argument, like in ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp) { ... m_sp_member=sp; //This will copy the object, incrementing refcount ... } 2) inside the function the argument is only used, like in Class::only_work_with_sp(boost::shared_ptr<foo> &sp) //Again, no copy here { ... sp->do_something(); ... } In neither case can I see a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea. EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first profile and then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.
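
    A minimal sketch of the two calling conventions under discussion, reusing the question's foo/ClassA names but using std::shared_ptr in place of boost::shared_ptr (the trade-off is identical); passing by const reference skips the atomic reference-count bump, while passing by value also guarantees the pointee stays alive for the whole call even if the caller's copy is reset in the meantime:

        #include <memory>
        #include <iostream>

        struct foo {
            void do_something() const { std::cout << "working\n"; }
        };

        class ClassA {
        public:
            // Stores a copy: the refcount is bumped exactly once, inside the function.
            void take_copy_of_sp(const std::shared_ptr<foo>& sp) { m_sp_member = sp; }

            // Only uses the pointee: a const reference causes no refcount traffic at all.
            void only_work_with_sp(const std::shared_ptr<foo>& sp) const { sp->do_something(); }

        private:
            std::shared_ptr<foo> m_sp_member;
        };

        int main() {
            auto p = std::make_shared<foo>();
            ClassA a;
            a.take_copy_of_sp(p);    // use_count goes 1 -> 2
            a.only_work_with_sp(p);  // use_count unchanged
        }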

    Read the article

  • How do I write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like how I write tasks rather than thread code, and I like how it works under the hood, with my limited understanding (tasks sit in a pool, there won't be 100 threads on 4 cores, a task isn't guaranteed to run immediately because it isn't on its own thread and may be far back in the pool, but it may be run alongside another related task, so you can't do bad things the way typical thread-unsafe code can). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here: http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link; warning, it isn't 'great'). My favourite part was 'solving the maze is better done in parallel', which is around the 48-minute mark (you can click the link on the left side). However, I'd like to see more code examples and some API details on how to write tasks. Does anyone have a good resource? I have no idea what a class or a piece of code looks like after being pushed onto a pool, how strange the code gets when you need to make a copy of everything, or how much of everything actually gets pushed onto the pool.
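
    To give a feel for what task code can look like, here is a minimal sketch using TBB's task_group (the sum_range/parallel_sum names are invented for the example); each lambda is a task handed to the scheduler's pool, not a thread of its own:

        #include <tbb/task_group.h>
        #include <vector>
        #include <numeric>
        #include <cstddef>

        // Invented work unit: sum part of a vector.
        long sum_range(const std::vector<long>& v, std::size_t lo, std::size_t hi) {
            return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
        }

        long parallel_sum(const std::vector<long>& v) {
            tbb::task_group g;
            long left = 0, right = 0;
            const std::size_t mid = v.size() / 2;

            // Each task captures what it needs; the scheduler decides when and
            // on which worker thread it actually runs.
            g.run([&] { left  = sum_range(v, 0,   mid);      });
            g.run([&] { right = sum_range(v, mid, v.size()); });
            g.wait();   // block until both tasks have finished

            return left + right;
        }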

    Read the article

  • NSOperations or NSThread for bursts of smaller tasks that continuously cancel each other?

    - by RickiG
    Hi, I would like to see if I can make a "search as you type" implementation, against a web service, that is optimized enough to run on an iPhone. The idea is that the user starts typing a word, e.g. "Foo"; after each new letter I wait XXX ms to see if they type another letter, and if they don't, I call the web service using the word as a parameter. The web service call and the subsequent parsing of the result I would like to move to a different thread. I have written a simple SearchWebService class; it has only one public method: - (void) searchFor:(NSString*) str; This method tests whether a search is already in progress (the user has paused typing for XXX ms) and, if so, stops that search and starts a new one. When a result is ready a delegate method is called: - (NSArray*) resultsReady; I can't figure out how to make this functionality 'threaded'. If I keep spawning new threads each time the user pauses for XXX ms, I end up in a bad spot with many threads, especially because I don't need any search other than the last one. Instead of spawning threads continuously, I have tried keeping one thread running in the background the whole time with: - (void) keepRunning { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; SearchWebService *searchObj = [[SearchWebService alloc] init]; [[NSRunLoop currentRunLoop] run]; //keeps it alive [searchObj release]; [pool release]; } But I can't figure out how to access the "searchFor" method of the "searchObj" object so that the above code works and keeps running; I just can't message searchObj or retrieve the resultsReady objects. I hope someone can point me in the right direction; threading is giving me grief :) Thank you.

    Read the article

  • Animation is slow on iPhone

    - by Anthony Chan
    I'm developing an app that displays images and changes them according to the user's actions. I've created a subclass of UIView that contains an image, an index number and an array of image names. The code is like this: @interface CustomPic : UIView { UIImageView *pic; NSInteger index; NSMutableArray *picNames; //<-- an array of NSString } In the implementation, it has a method that changes the image using a dissolve effect. - (void)nextPic { index++; if (index >= [picNames count]) { index = 0; } UIImageView *temp = pic; pic.image = [UIImage imageNamed:[picNames objectAtIndex:index]]; temp.alpha = 0; [UIView beginAnimations:nil context:nil]; [UIView setAnimationCurve:UIViewAnimationCurveEaseOut]; [UIView setAnimationDuration:0.25]; pic.alpha = 0; temp.alpha = 1; [UIView commitAnimations]; } In the view controller there are several CustomPics which change their images depending on the user's choice. The images change as expected with the fade in/out effect, but the animation performance is really bad. I've tested it on an iPhone 3G, and Instruments shows that the animation runs at only 2-3 FPS! I've tried many ways to simplify and modify the code, but with no luck. Is there something wrong with my code or with my approach? Thanks for any help. P.S. All the images are 320*480 PNGs with a max size of 15 KB.

    Read the article

  • How can we post a video and an image with ASIHTTPRequest?

    - by GhostRider
    I want to post one image and one video to my web service, but the problem is that when it gets to the video part it gives me an EXC_BAD_ACCESS error. NSString *url = [NSString stringWithFormat:@"http://example.com/add_videoxml.php"]; networkQueue = [[ASINetworkQueue alloc] init]; [networkQueue cancelAllOperations]; [networkQueue setShowAccurateProgress:YES]; //[networkQueue setUploadProgressDelegate:progressBar]; [networkQueue setDelegate:self]; [networkQueue setRequestDidFinishSelector:@selector(requestFinished:)]; [networkQueue setRequestDidFailSelector: @selector(requestFailed:)]; request= [[ASIFormDataRequest alloc] initWithURL:[NSURL URLWithString:url]] ; [request setPostValue:@"284" forKey:@"id"]; [request setPostValue:@"show" forKey:@"show"]; [request addRequestHeader:@"Content-Type" value:@"multipart/form-data;boundary=---------------------------1842378953296356978857151853"]; NSData *imgData=UIImageJPEGRepresentation(userImage, 0.9); if(imgData != nil){ [request setFile:imgData withFileName:@"Loveatnight" andContentType:@"image/jpeg" forKey:@"image"]; } //[request addRequestHeader:@"Content-Type" // value:@"multipart/form-data;boundary=---------------------------1842378953296356978857151853"]; if(videoData != nil){ [request setFile:videoData withFileName:@"Loveishard" andContentType:@"image/jpeg" forKey:@"uploadfile"]; } // the error occurs on this line [request setTimeOutSeconds:500]; //NSLog(@"%@",request); [networkQueue addOperation:request]; [networkQueue go]; Added by the OP: [request setFile:videoData withFileName:@"Loveishard" andContentType:@"video/quicktime" forKey:@"uploadfile"]; I used this because my video format is .mov, but it still gives the error.

    Read the article

  • How to test method call order with Moq

    - by Finglas
    At the moment I have: [Test] public void DrawDrawsAllScreensInTheReverseOrderOfTheStack() { // Arrange. var screenMockOne = new Mock<IScreen>(); var screenMockTwo = new Mock<IScreen>(); var screens = new List<IScreen>(); screens.Add(screenMockOne.Object); screens.Add(screenMockTwo.Object); var stackOfScreensMock = new Mock<IScreenStack>(); stackOfScreensMock.Setup(s => s.ToArray()).Returns(screens.ToArray()); var screenManager = new ScreenManager(stackOfScreensMock.Object); // Act. screenManager.Draw(new Mock<GameTime>().Object); // Assert. screenMockOne.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(), "Draw was not called on screen mock one"); screenMockTwo.Verify(smo => smo.Draw(It.IsAny<GameTime>()), Times.Once(), "Draw was not called on screen mock two"); } But the order in which the production code draws my objects does not affect this test: it passes whether screen one or screen two is drawn first. It should matter, though, because the draw order is important. How do you (using Moq) ensure that methods are called in a certain order? Edit: I got rid of that test. The draw method has been removed from my unit tests; I'll just have to test manually that it works. The reversing of the order, though, was moved into a separate test class where it is tested, so it's not all bad. Thanks for the link about the feature they are looking into. I sure hope it gets added soon; it would be very handy.
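
    For comparison only: this is what such a test looks like in a framework that has ordered verification built in. The sketch below uses C++ and Google Mock's InSequence (a different language and library, shown purely to illustrate the idea; the manual Draw calls stand in for exercising a real ScreenManager):

        #include <gmock/gmock.h>
        #include <gtest/gtest.h>   // link against gtest_main for main()

        struct IScreen {
            virtual ~IScreen() = default;
            virtual void Draw(int gameTime) = 0;
        };

        struct MockScreen : IScreen {
            MOCK_METHOD(void, Draw, (int), (override));
        };

        TEST(ScreenManagerTest, DrawsScreensInReverseStackOrder) {
            MockScreen one, two;

            ::testing::InSequence seq;              // expectations below must be met in this order
            EXPECT_CALL(two, Draw(::testing::_));   // screen two must be drawn first
            EXPECT_CALL(one, Draw(::testing::_));   // then screen one

            // Stand-in for screenManager.Draw(...): satisfies the expectations in order.
            two.Draw(0);
            one.Draw(0);
        }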

    Read the article

  • Did I implement clock drift properly?

    - by David Titarenco
    I couldn't find any clock drift RNG code for Windows anywhere so I attempted to implement it myself. I haven't run the numbers through ent or DIEHARD yet, and I'm just wondering if this is even remotely correct... void QueryRDTSC(__int64* tick) { __asm { xor eax, eax cpuid rdtsc mov edi, dword ptr tick mov dword ptr [edi], eax mov dword ptr [edi+4], edx } } __int64 clockDriftRNG() { __int64 CPU_start, CPU_end, OS_start, OS_end; // get CPU ticks -- uses RDTSC on the Processor QueryRDTSC(&CPU_start); Sleep(1); QueryRDTSC(&CPU_end); // get OS ticks -- uses the Motherboard clock QueryPerformanceCounter((LARGE_INTEGER*)&OS_start); Sleep(1); QueryPerformanceCounter((LARGE_INTEGER*)&OS_end); // CPU clock is ~1000x faster than mobo clock // return raw return ((CPU_end - CPU_start)/(OS_end - OS_start)); // or // return a random number from 0 to 9 // return ((CPU_end - CPU_start)/(OS_end - OS_start)%10); } If you're wondering why I Sleep(1), it's because if I don't, OS_end - OS_start returns 0 consistently (because of the bad timer resolution, I presume). Basically, (CPU_end - CPU_start)/(OS_end - OS_start) always returns around 1000 with a slight variation based on the entropy of CPU load, maybe temperature, quartz crystal vibration imperfections, etc. Anyway, the numbers have a pretty decent distribution, but this could be totally wrong. I have no idea.
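
    One small aside: the inline __asm block only builds with 32-bit MSVC (x64 MSVC has no inline assembly), so a sketch of the same measurement using the compiler intrinsics instead might look like this; the structure mirrors the code above and nothing else is changed:

        #include <windows.h>
        #include <intrin.h>   // __rdtsc(), __cpuid()

        __int64 clockDriftSample() {
            int regs[4];
            __cpuid(regs, 0);                        // serialize, as the original cpuid does
            unsigned __int64 cpuStart = __rdtsc();
            Sleep(1);
            unsigned __int64 cpuEnd = __rdtsc();

            LARGE_INTEGER osStart, osEnd;
            QueryPerformanceCounter(&osStart);
            Sleep(1);
            QueryPerformanceCounter(&osEnd);

            // Same ratio as above (~1000); the low digits carry the jitter.
            return static_cast<__int64>((cpuEnd - cpuStart) /
                static_cast<unsigned __int64>(osEnd.QuadPart - osStart.QuadPart));
        }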

    Read the article

  • Easy way to lock a file on a remote machine (Windows)?

    - by roufamatic
    I've tracked down an error in my logs, and am trying to reproduce it. My theory is that a file sometimes gets locked in a specific folder, and when the application (ASP.NET) tries to delete that folder it hangs. I don't have the application running on my own machine, so I'm debugging this on a remote server. But for the life of me, I can't seem to figure out a way to lock a file that prevents it from being deleted by the process. My first thought was to map the network path to a local drive and just leave a command prompt open to that folder. Locally that always fouls up my folder deletes, but apparently SMB is a bit more robust and doesn't grant me a lock. After that I created an infinite-loop VBScript in the folder and executed it remotely. The file was deleted out from underneath the executing code. Man! I then tried creating a file on the server in that folder and removing all permissions. That didn't do the trick. I don't have access to the IIS settings, so perhaps it's running under a privileged system account. So: what's a program that you know is free and that I can quickly use to create an exclusive lock on a file so I can test my delete theory? Like a really, really bad Notepad clone or something. :-)
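
    For what it's worth, a tiny Win32 program that holds exactly this kind of lock is only a few lines: open the file with a share mode of 0 and keep the handle open, and no other process can read, write or delete it until the handle is closed. A minimal sketch (the UNC path is a placeholder):

        #include <windows.h>
        #include <iostream>

        int main() {
            // dwShareMode = 0: no sharing at all, so delete attempts fail
            // with a sharing violation while this handle stays open.
            HANDLE h = CreateFileW(L"\\\\server\\share\\folder\\lockme.txt",
                                   GENERIC_READ,
                                   0,                    // exclusive
                                   nullptr,
                                   OPEN_EXISTING,
                                   FILE_ATTRIBUTE_NORMAL,
                                   nullptr);
            if (h == INVALID_HANDLE_VALUE) {
                std::wcerr << L"CreateFileW failed: " << GetLastError() << L"\n";
                return 1;
            }
            std::wcout << L"File locked; press Enter to release...\n";
            std::wcin.get();
            CloseHandle(h);   // lock released here
        }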

    Read the article

  • Show NSSegmentedControl menu when segment clicked, despite having set action

    - by Justin Williams
    I have an NSSegmentedControl on my UI with 4 buttons. The control is connected to a method that calls different methods depending on which segment is clicked: - (IBAction)performActionFromClick:(id)sender { NSInteger selectedSegment = [sender selectedSegment]; NSInteger clickedSegmentTag = [[sender cell] tagForSegment:selectedSegment]; switch (clickedSegmentTag) { case 0: [self showNewEventWindow:nil]; break; case 1: [self showNewTaskWindow:nil]; break; case 2: [self toggleTaskSplitView:nil]; break; case 3: [self showGearMenu]; break; } } Segment 4 has a menu attached to it in the awakeFromNib method. I'd like this menu to drop down when the user clicks the segment. At this point it will only drop down if the user clicks and holds down on the segment. From my research online this is because of the connected action. I'm presently working around it by using some code to get the origin point of the segmented control and popping up the context menu using NSMenu's popUpContextMenu:withEvent:forView:, but this is pretty hacktastic and looks bad compared to the standard behavior of having the menu drop down below the segmented control cell. Is there a way I can have the menu drop down as it should after a single click rather than doing the hacky context menu thing?

    Read the article

  • Is there any danger in committing to a component library such as SmartGwt or Swing?

    - by Banang
    Since February this year I have been working on an app that's built using SmartGWT components. Generally, I find the components very nice to work with, and the fact that they're open source and free to use is just fantastic. However, I can't seem to shake the feeling that it's not a durable way of developing, but I can't quite explain why. Maybe it's because I know that any minute now the team developing it could decide to stop, which would leave me and my team in a bit of a pickle, but I'm sure there must be something more. I have been trying to find ways of explaining this feeling to myself, but to no avail. Therefore I turn to you, dear community, to ask if you can come up with a good reason why committing to building your app (that's supposed to be around for many more years to come) using a component library such as SmartGWT is a bad idea? Is there any reason I should just have developed the components myself? Or did I make the right choice when deciding not to reinvent the wheel and just go for what was readily available?

    Read the article

  • What tasks aren't easy for PHP, ColdFusion and ASP?

    - by boost
    PHP, ColdFusion, and ASP (among many others) are usually sold on their strengths. What are their weaknesses? If one were to develop a niche product to handle the things that these products weren't so good at, what should it focus on? EDIT I'm trying to figure out what things PHP etc are bad at. They're all good at doing the nuts and bolts stuff, if one is looking with a bottom-to-top mindset. I'm thinking a little more globally, more top-to-bottom; what's difficult to achieve in PHP/ASP/CF without thousands of lines of code and twenty minutes of server time? EDIT Suppose company A comes up to you and says, "We want you to do x in PHP." What values of x will cause you to say, "Forget it, buddy, no one in their right mind would use PHP for that"? (swap PHP in the above quote for your favourite tool) EDIT Have we got to the point where everyone's needs can be met with PHP frameworks, Rails and ... er ... Java?

    Read the article

  • Access modifiers - Property on business objects - getting and setting

    - by Mike
    Hi, I am using LINQ to SQL for the data access layer, and I have business objects similar to the ones in that layer. The data provider fetches message #23. When the message is instantiated, its constructor gets the MessageType, creates a new instance of the MessageType class, and fills in the MessageType information from the database. I therefore want this to retrieve the name of the message's MessageType: user.Messages[23].MessageType.Name I also want an administrator to be able to set the MessageType: user.Messages[23].MessageType = MessageTypes.LoadType(3); but I don't want the user to be able to set MessageType.Name publicly. However, when I create a new MessageType instance, the access modifier for the Name property is public, because I want to set it from an external class (my data access layer). I could change this property to internal, so that my class can access it like a public member while my other application cannot modify it. That still doesn't feel right, as it behaves like a public property. Are public access modifiers bad in this situation? Any tips or suggestions would be appreciated. Thanks.

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor was selected such that most numbers outputted were small and whole (or fractions of small whole numbers), because that makes it easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often led to very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
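
    A rough sketch of one way to make the GCD approach tolerate a single bad apple: take the GCD of the whole list first and, if it has collapsed to 1, retry with each element left out in turn and keep the best result (function names are invented; assumes the offsets are integers, as pixel values are):

        #include <numeric>     // std::gcd (C++17)
        #include <algorithm>   // std::max
        #include <vector>
        #include <cstdlib>     // std::abs

        // GCD of all offsets, optionally skipping one index; zeroes are ignored.
        int gcd_excluding(const std::vector<int>& v, int skip = -1) {
            int g = 0;
            for (int i = 0; i < static_cast<int>(v.size()); ++i) {
                if (i == skip || v[i] == 0) continue;
                g = std::gcd(g, std::abs(v[i]));
            }
            return g == 0 ? 1 : g;
        }

        // Conversion factor: the plain GCD unless one outlier has collapsed it,
        // in which case drop whichever single element gives the largest GCD.
        int conversion_factor(const std::vector<int>& offsets) {
            int best = gcd_excluding(offsets);
            if (best > 1) return best;
            for (int i = 0; i < static_cast<int>(offsets.size()); ++i)
                best = std::max(best, gcd_excluding(offsets, i));
            return best;
        }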

    Read the article

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example: public interface ITask { void Execute(); } public class LoggingTaskRunner : ITask { private readonly ITask _taskToDecorate; private readonly MessageBuffer _messageBuffer; public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer) { _taskToDecorate = taskToDecorate; _messageBuffer = messageBuffer; } public void Execute() { _taskToDecorate.Execute(); Log(_messageBuffer); } private void Log(MessageBuffer messageBuffer) {} } public class TaskRunner : ITask { public TaskRunner(MessageBuffer messageBuffer) { } public void Execute() { } } public class MessageBuffer { } public class Configuration { public void Configure() { IWindsorContainer container = null; container.Register( Component.For<MessageBuffer>() .LifeStyle.Transient); container.Register( Component.For<ITask>() .ImplementedBy<LoggingTaskRunner>() .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate"))); container.Register( Component.For<ITask>() .ImplementedBy<TaskRunner>() .Named("task.to.decorate")); } } How can I make Windsor instantiate the "shared" transient component so that both "Decorator" and "Decorated" get the same instance? Edit: since the design is being critiqued, I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design).

    Read the article

  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8 MB (if I'm wrong, PLEASE correct me), and, incidentally, 1 MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64 MB of space is used JUST for threads! The worst part is, I'm never using more than 100 kB of stack per thread (I abuse the heap a LOT ;)). My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked in Windows builds, and posix_* is linked in Linux builds): // windows_stack_limiter.c int limit_stack_size() { // Windows impl. return 0; } // posix_stack_limiter.c int limit_stack_size() { // Linux impl. return 0; } // stack_limiter.cpp int limit_stack_size(); static volatile int placeholder = limit_stack_size(); How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.
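
    One note that may help: Boost.Thread 1.50 and later expose a thread::attributes type with set_stack_size, so if upgrading Boost is an option the stack size can be set at each boost::thread construction site with no #ifdefs at all. A minimal sketch (256 kB is chosen here as comfortable headroom over the ~100 kB actually used):

        #include <boost/thread.hpp>
        #include <iostream>

        void worker() {
            std::cout << "running on a small stack\n";
        }

        int main() {
            boost::thread::attributes attrs;
            attrs.set_stack_size(256 * 1024);   // 256 kB instead of the 8 MB default
            // The attribute applies only to threads created with it, so it still
            // has to be threaded through wherever boost::thread objects are made.
            boost::thread t(attrs, worker);
            t.join();
        }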

    Read the article

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, disk full, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs and maybe even the error can be echoed (depending on whether you can tell if they're a technical guy as opposed to a business user). For the moment though let's not think too hard about this part. My question is, to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors? Of course if there's something that's unique to the runtime context this should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
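
    As a concrete illustration of a middle ground for that particular example: the AD-specific "data NNN" sub-code in an LDAP error 49 message is stable enough to translate into a one-line hint while always keeping the raw error, and leaving unknown codes alone. A sketch (in C++ purely for illustration; the code-to-meaning table reflects the commonly documented Active Directory bind sub-codes):

        #include <map>
        #include <string>
        #include <iostream>

        // Well-known Active Directory "data" sub-codes from LDAP error 49
        // (AcceptSecurityContext) bind failures. Unknown codes get no hint.
        std::string adHintFor(const std::string& dataCode) {
            static const std::map<std::string, std::string> hints = {
                {"525", "user not found - check the user DN in the LDAP config"},
                {"52e", "invalid credentials - check the bind password"},
                {"532", "password expired"},
                {"533", "account disabled"},
                {"775", "account locked out"},
            };
            const auto it = hints.find(dataCode);
            return it == hints.end() ? std::string{} : it->second;
        }

        void logBindFailure(const std::string& rawLdapMessage, const std::string& dataCode) {
            std::cerr << "LDAP bind failed: " << rawLdapMessage << "\n";  // always log the raw error
            const std::string hint = adHintFor(dataCode);
            if (!hint.empty())
                std::cerr << "Hint: " << hint << "\n";                    // add a hint only when known
        }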

    Read the article

  • How should I progressively enhance this content with JavaScript?

    - by Sean Dunwoody
    The background to this problem is that I'm doing a computing project which involves some drop-down boxes for input and a text input where the user can enter a date. I've used YUI to enhance the form, so the calendar input uses the YUI calendar widget and the drop-down list is converted into a horizontal list of inputs where the user only has to click once to select any input, as opposed to two clicks with the drop-down box (hope that makes sense; it's hard to explain clearly). The problem is that in the design section of my project I stated that I would follow progressive enhancement principles. However, I am struggling to ensure that users without JavaScript can still see the drop-down box and text input on that page. This is not because I don't know how, but because the two methods I've tried seem unsatisfactory. Method 1 - I tried using YUI to hide the text box and drop-down list. This seemed like the ideal solution, but there was quite a noticeable lag when loading the page (especially the first time): the text box and drop-down list were visible for at least a second. I included the script for this just before the end of the body tag; is there any way I can run it on load with YUI? Would that help? Method 2 - Use the noscript tag... however, I am loath to do this because, while it would be a simple solution, I have read various bad things about the noscript tag. Is there a way of making method one work? Or is there a better way of doing this that I have yet to encounter?

    Read the article

  • Combining the jQuery calendar picker with jQuery validation and getting them to play nice?

    - by MBonig
    I'm just starting to get into jQuery/JavaScript programming, so this may be a beginner question, but I sure can't seem to find anything about it. I have a text field on a form. I've used the jQuery UI calendar picker to make data entry better. I've also added jQuery validation to ensure that if the user enters the date by hand, it's still valid. Here is the HTML: <input class="datepicker" id="Log_ServiceStartTime_date" name="Log_ServiceStartTime_date" type="text" value="3/16/2010"></input> and the JavaScript: $('form').validate(); $('#Log_ServiceStartTime_date').rules('add','required'); $('#Log_ServiceStartTime_date').rules('add','date'); The problem I'm having is this: if a user puts in a bad date, or no date at all, the validation correctly kicks in and the error description is displayed. However, if the user clicks on the text box to bring up the calendar picker and selects a date, the date is filled into the field but the validation message does not go away. If the user clicks into the text box and then back out, the validation correctly goes away. How can I get the calendar picker to kick off the validation routine once a date is selected? Thanks.

    Read the article

  • Why doesn't my test run tearDownAfterTestClass when it fails?

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally, when I finish the tests, I should get rid of this test customer in tearDownAfterTestClass so that I can create them anew when I rerun the tests and not get any false positives. When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this: try { if(!mysqli_query($connection,$query)) { $this->assertTrue(false); } } catch (Exception $exc) { $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL; $msg .= 'Could not run query - '.mysqli_error($connection). PHP_EOL;; $this->fail($msg); } The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed it didn't run tearDownAfterTestClass. Now, I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming, since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails? NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one; the backup files for the database are almost 2 GB and take almost half an hour to restore. The purpose of the teardown is to remove any data we have added because of the tests, so that we don't have to restore the database every time we run them.

    Read the article

  • Problems selecting a multiple select value from the database in Rails

    - by Ramy
    From inside a form_for in Rails, I'm inserting multiple select values into the database, like this: <div class="new-partner-form"> <%= form_for [:admin, matching_profile.partner, matching_profile], :html => {:id => "edit_profile", :multipart => true} do |f| %> <%= f.submit "Submit", :class => "hidden" %> <div class="rounded-block quarter-wide radio-group"> <h4>Exclude customers from source:</h4> <%= f.select :source, User.select(:source).group(:source).order(:source).map {|u| [u.source,u.source]}, {:include_blank => false}, {:multiple => true} %> <%= f.error_message_on :source %> </div> I'm then trying to pull the value from the database like this: def does_not_contain_source(matching_profiles) Expression.select(matching_profiles, :source) do |keyword| Rails.logger.info("Keyword is : " + keyword) @customer_source_tokenizer ||= Tokenizer.new(User.select(:source).where("id = ?", self.owner_id).map {|u| u.source}[0]) #User.select("source").where("id = ?", self.owner_id).to_s) @customer_source_tokenizer.divergent?(keyword) end end but I'm getting this: ExpressionErrors: Bad syntax: --- - "" - B - "" This is what the value is in the database, but it seems to choke when I access it this way. What's the right way to do this?

    Read the article

  • Drupal advanced ACLs for "untrusted" administrators

    - by redShadow
    I have a multi-site Drupal 6 installation containing websites of different customers. On each site there is an "administrator" role that mainly covers the customer's account. We want to give as many permissions as possible to this privileged user, but with just the Drupal core permissions system this could lead to security leaks. The main thing to avoid is the customer account being able to run PHP code on the server (that would be like being logged in on the server as the www-data user, which sounds really bad). To avoid that, it is not sufficient to deny PHP code evaluation for the role: since the administrator role must have permission to manage users, the customer could also change the password of user #1 and log in to the site as the superadmin. The second goal would be to also deny access to some "confusing" administrative pages (such as module selection) but not others (such as the site information configuration, theme selection, etc.). I found the User One module, which seems to fix the first problem, but I have no idea how to solve the second one. I found some modules around, but none seems to fit; it seems like most ACL modules are designed to protect the content rather than the site itself, as if the site administrator were always the server owner.

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris, Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me, because I am a believer in zero required environment fiddling to make a product work. So I am wondering if there are times where this sort of requirement is a necessary evil, or if we are doing something wrong. Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code; we do not do this programmatically. And this is not a situation where they MIGHT run out: under normal load our programs WILL run out and segfault. So, rewording the question: is requiring the customer to change these ulimit values in order to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going? Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
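
    On the "can you set these programmatically?" point: on the POSIX platforms listed, a process may raise its own soft limits up to the hard limit without any privileges, so a call like the sketch below at startup removes the "remember to run ulimit -n" step whenever the hard limit is already high enough (going beyond the hard limit still needs root or an /etc/security change, and some platforms cap the usable value further):

        #include <sys/resource.h>
        #include <cstdio>

        // Raise the soft file-descriptor limit to the hard limit at startup.
        bool raise_fd_limit() {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
                return false;
            rl.rlim_cur = rl.rlim_max;   // soft limit up to, but not past, the hard limit
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                std::perror("setrlimit(RLIMIT_NOFILE)");
                return false;
            }
            return true;
        }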

    Read the article

  • Circular dependency with generics

    - by devoured elysium
    I have defined the following interface: public interface IStateSpace<State, Action> where State : IState where Action : IAction<State, Action> // <-- this is the line that bothers me { void SetValueAt(State state, Action action); Action GetValueAt(State state); } Basically, an IStateSpace interface should be something like a chess board, and at each position on the chess board you have a set of possible moves; those moves are what I call IActions here. I have defined the interface this way so I can accommodate different implementations: I can then define concrete classes that implement a 2D matrix, a 3D matrix, graphs, etc. public interface IAction<State, Action> { IStateSpace<State, Action> StateSpace { get; } } An IAction would be, for example, to move up (that is, if at (2, 2), move to (2, 1)), to move down, etc. Now, I want each action to have access to a StateSpace so it can do some checking logic. Is this implementation correct? Or is this a bad case of circular dependency? If so, how would I accomplish "the same" in a different way? Thanks

    Read the article

  • NSTableView selection & highlights

    - by Christian
    I have an NSTableView as a very central part of my application and want it to integrate better with the rest of it. It has only one column (it's a list) and I draw all cells (normal NSTextFieldCells) myself. The first problem is the highlighting. I draw the highlight myself and want to get rid of the blue background. I now fill the whole cell with the original background color to hide the blue background, but this looks bad when dragging the cell around. I tried overriding highlight:withFrame:inView: and highlightColorWithFrame:inView: of NSCell but nothing happened. How can I disable automatic highlighting? I also want all rows/cells to be deselected when I click somewhere outside my NSTableView. Since the background/highlight of the selected cell turns gray, there must be an event for this, but I can't find it. I let my cells expand on a double click and may need to undo this, so getting rid of the gray highlight is not enough. EDIT: I add a subview to the NSTableView when a cell gets double-clicked, and then resignFirstResponder of the NSTableView gets called. I tried this: - (BOOL)resignFirstResponder { if (![[self subviews] containsObject:[[self window] firstResponder]]) { [self deselectAll:self]; ... } return YES; } Besides the fact that it's not working, I would need to implement this method for every object in the view hierarchy. Is there another way to find out when the first responder leaves a certain view hierarchy?

    Read the article
