Search Results

Search found 25284 results on 1012 pages for 'test driven'.


  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error: A copy of ApplicationController has been removed from the module tree but is still active! From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController (and I checked; it's just whichever filter runs first)", then offers the following: /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant' /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing' which I'm assuming is a generic response and doesn't really say much. Google seems to tell me that people developing Rails Engines will encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3. What are some possible causes for this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful? More importantly: I tried doing a test run in a "production" environment just now, and the error doesn't seem to persist. Does this only affect development, then, and need I not worry too much?

    Read the article

  • Rails: Custom template for email "deliver_" method?

    - by neezer
    I'm building an email system that stores my different emails in the database and calls the appropriate "deliver_" method via method_missing (since I can't explicitly declare methods since they're user-generated). My problem is that my rails app still tries to render the template for whatever the generated email is, though those templates don't exist. I want to force all emails to use the same template (views/test_email.html.haml), which will be setup to draw their formatting from my database records. How can I accomplish this? I tried adding render :template => 'test_email' in the test_email method in emailer_controller with no luck. models/emailer.rb: class Emailer < ActionMailer::Base def method_missing(method, *args) # not been implemented yet logger.info "method missing was called!!" end end controller/emailer_controller.rb: class EmailerController < ApplicationController def test_email @email = Email.find(params[:id]) Emailer.send("deliver_#{@email.name}") end end views/emails/index.html.haml: %h1 Listing emails %table{ :cellspacing => 0 } %tr %th Name %th Subject - @emails.each do |email| %tr %td=h email.name %td=h email.subject %td= link_to 'Show', email %td= link_to 'Edit', edit_email_path(email) %td= link_to 'Send Test Message', :controller => 'emailer', :action => 'test_email', :params => { :id => email.id } %td= link_to 'Destroy', email, :confirm => 'Are you sure?', :method => :delete %p= link_to 'New email', new_email_path Error I'm getting with the above: Template is missing Missing template emailer/name_of_email_in_database.erb in view path app/views

    Read the article

  • Ruby: what is the pitfall in this simple code excerpt that tests variable existence

    - by zipizap
    I'm starting with Ruby, and while making some test samples I've stumbled on an error in the code that I don't understand. The code is meant to test whether a variable finn is defined?() and, if it is defined, increment it. If it isn't defined, then it will define it with the value 0 (zero). As the code threw an error, I started to decompose it into small pieces and run them, to better trace where the error was coming from. The code was run in IRB irb 0.9.5(05/04/13), using ruby 1.9.1p378. First I verify that the variable finn is not yet defined, and all is ok: ?> finn NameError: undefined local variable or method `finn' for main:Object from (irb):134 from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>' >> Then I verify that the following inline condition executes as expected, and all is ok: ?> ((defined?(finn)) ? (finn+1):(0)) => 0 And now comes the code that throws the error: ?> finn=((defined?(finn)) ? (finn+1):(0)) NoMethodError: undefined method `+' for nil:NilClass from (irb):143 from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>' I was expecting that the code would not throw any error, and that after executing it the variable finn would be defined with a first value of 0 (zero). But instead, the code throws the error, and finn gets defined but with a value of nil. >> finn => nil Where might the error come from?!? Why does the inline condition work alone, but not when used for the finn assignment? Any help appreciated :)

    Read the article

  • Can't get a location

    - by user1891806
    I am trying to get a location (latitude and longitude), but I can't figure out what I'm doing wrong; my location keeps returning null. Does anybody know what I am doing wrong? public class ParseJSON extends Activity implements LocationListener { private TextView latituteField; private TextView longitudeField; LocationManager lm; String provider; int lat; int lon; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.test); lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE); Criteria crit = new Criteria(); provider = lm.getBestProvider(crit, false); Location location = lm.getLastKnownLocation(provider); if (location != null){ lat = (int) location.getLatitude(); lon = (int) location.getLongitude(); latituteField = (TextView) findViewById(R.id.lattest); longitudeField = (TextView) findViewById(R.id.lontest); latituteField.setText(lat); longitudeField.setText(String.valueOf(lon)); } else { Toast.makeText(ParseJSON.this, "Couldnt get location", Toast.LENGTH_SHORT); } String readWeatherFeed = readWeatherFeed(); try { JSONArray jsonArray = new JSONArray(readWeatherFeed); Log.i(ParseJSON.class.getName(), "Number of entries " + jsonArray.length()); for (int i = 0; i < jsonArray.length(); i++) { JSONObject jsonObject = jsonArray.getJSONObject(i); Log.i(ParseJSON.class.getName(), jsonObject.getString("text")); } } catch (Exception e) { e.printStackTrace(); } } I don't have a clue; thanks in advance

    Read the article

  • Passing an array of command options in a bash script

    - by dewaforex
    Hi, sorry for my bad English. I'm stuck on a bash script that builds an array of command options. The script extracts attachments from an mkv file and, at the end, merges those attachments back into the mkv after the video/audio has been encoded. This is the extraction part: #find the total of attachments A=$(mkvmerge -i input.mkv | grep -i attachment | awk '{printf $3 "\n"}' | sed 's;\:;;' | awk 'END { print NR }') #extract them for (( i=1; i<=$A; i++ )) do font[${i}]="$(mkvmerge -i input.mkv | grep -i attachment | awk '{for (i=11; i <= NF; i++) printf($i"%c" , (i==NF)?ORS:OFS) }' | sed "s/'//g" | awk "NR==$i")" mkvextract attachments input.mkv $i:"${font[${i}]}" done And now for merging the attachments back in: for (( i=1; i<=$A; i++ )) do #search for spaces in the file name and put '\' before each space, because some attachments have spaces in the filename font1[${i}]=$(echo ${font[${i}]} | sed 's/ /\\ /g') #build the option that adds the attachment attachment[${i}]=$"--attach-file ${font1[${i}]}" done mkvmerge -o output.mkv -d 1 -S test.mp4 sub.ass ${attachment[*]} The problem is that it still doesn't work for file names with spaces. When I echo ${attachment[*]}, it seems all right: --attach-file Beach.ttf --attach-file Candara.ttf --attach-file CASUCM.TTF --attach-file Complete\ in\ Him.ttf --attach-file CURLZ_.TTF --attach-file Frostys\ Winterland.TTF --attach-file stilltim.ttf But mkvmerge still only recognizes the first word of a file name containing spaces: mkvmerge v3.0.0 ('Hang up your Hang-Ups') built on Dec 6 2010 19:19:04 Automatic MIME type recognition for 'Beach.ttf': application/x-truetype-font Automatic MIME type recognition for 'Candara.ttf': application/x-truetype-font Automatic MIME type recognition for 'CASUCM.TTF': application/x-truetype-font Error: The file 'Complete\' cannot be attached because it does not exist or cannot be read. I hope somebody can help me. Thanks

    Read the article

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing: new Thread("upd-" + id){ @Override public void run(){ try{ doSomething(); } catch (Throwable e){ LOG.error("error", e); } finally{ LOG.debug("thread death"); } } }.start(); I know I should be using a thread pool, but I need to understand the following problem before I change it: I'm using Eclipse's debugger and looking at the threads in the debug pane which lists active threads. Many of them complete as you would expect, and are removed from the debug pane; however, some seem to stay in the list of active threads even though the log shows the "thread death" entry for these. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...". There is some synchronization going on within the doSomething() call, but I'm fairly sure it's ok, and since the "thread death" log is being called I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; however, I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks. EDIT: I created this test to check the garbage collection theory: Thread thread = new Thread("!!!!!!!!!!!!!!!!") { @Override public void run() { System.out.println("running"); ThreadUs.sleepQuiet(5000); System.out.println("finished"); // <-- thread removed from list here } }; thread.start(); ThreadUs.sleepQuiet(10000); System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd ThreadUs.sleepQuiet(10000); This proves that it has nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be de-referenced/GC'd.

    Read the article

  • Why does my DataTemplate break the WPF designer?

    - by PRINCESS FLUFF
    Why does the DataTemplate line break the WPF designer in Visual Studio 2008? The program compiles and runs properly. The DataTemplate is applied as it should. However the entire DataTemplate block of code is underlined in red, and when I simply "build" the program without running, I get the error "Type reference cannot find public type named 'Character'" How come it can't find it in the designer yet the program applies the template properly? <UserControl x:Class="WPF_Tests.Tests.TwoCollecViews.TwoViews" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:DetailsPane="clr-namespace:WPF_Tests.Tests.DetailsPane" > <UserControl.Resources> <DataTemplate DataType="{x:Type DetailsPane:Character}"> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Path=Name}"></TextBlock> </StackPanel> </DataTemplate> </UserControl.Resources> <Grid> <ListBox ItemsSource="{Binding Path=Characters}" /> </Grid> </UserControl> EDIT: I am being told that this may be a bug in Visual Studio 2008, as it worked correctly in 2010. You can download the code here: http://www.mediafire.com/?z1myytvwm4n - The Test/TwoCollec xaml file's designer will break with this code.

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC / DI (structuremap) and the repository pattern and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call but of course this means that I can not benefit from lazy loading. Therefore I wish to implement session-per-request but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions on beginrequest/endrequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx) Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project? Having the web project dependent on NHibernate prompts my next (few) questions - why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just add my NHibernate persistance code inside the services? And finally, is there really any need to split out into so many projects. Is a web project and an infrastructure project sufficient? I realise that I have veered off a bit from my original question but it seems that everyone seems to have their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people stick their mapping files with the related classes, others have a separate project for this. Many thanks, Ben

    Read the article

  • IPreInsertEventListener makes object dirty, causes invalid update

    - by Groxx
    In NHibernate 2.1.2: I'm attempting to set a created timestamp on insert, as demonstrated here. I have this: public bool OnPreInsert(PreInsertEvent @event) { if (@event.Entity is IHaveCreatedTimestamp) { DateTime dt = DateTime.Now; string Created = ((IHaveCreatedTimestamp)@event.Entity).CreatedPropertyName; SetState(@event.Persister, @event.State, Created, dt); @event.Entity.GetType().GetProperty(Created).SetValue(@event.Entity, dt, null); } // return true to veto the insert return false; } The problem is that doing this (or duplicating Ayende's example precisely, or reordering or removing lines) causes an update after the insert. The insert uses the correct "now" value, @p6 = 3/8/2011 5:41:22 PM, but the update tries to set the Created column to @p6 = 1/1/0001 12:00:00 AM, which is outside MSSQL's range: Test 'CanInsertAndDeleteInserted' failed: System.Data.SqlTypes.SqlTypeException : SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM. I've tried the same thing with the PerformSaveOrUpdate listener, described here, but that also causes an update and an insert, and elsewhere there have been mentions of avoiding it because it is called regardless of if the object is dirty or not :/ Searching around elsewhere, there are plenty of claims of success after setting both the state and the object to the same value, or by using the save listener, or of the cause coming from collections or other objects, but I'm not having any success.

    Read the article

  • What version control system is best designed to *prevent* concurrent editing?

    - by Fred Hamilton
    We've been using CVS (with the TortoiseCVS interface) for years for both source control and wide-ranging document control (including binaries such as Word, Excel, Framemaker, test data, simulation results, etc.). Unlike typical version control systems, 99% of the time we want to prevent concurrent editing - when a user starts editing a file, the pre-edit version of the file becomes read-only to everyone else. Many of the people who will be using this are not programmers or even that computer savvy, so we're also looking for a system that lets people simply add documents to the repository, check out and edit a document (unless someone else is currently editing it), and check it back in with a minimum of fuss. We've gotten this to work reasonably well with CVS + TortoiseCVS, but we're now considering Subversion and Mercurial (and are open to others if they're a better fit) for their better version tracking, so I was wondering which one supports locking files most transparently. For example, we'd like exclusive locking enabled as the default, and we want to make it as difficult as possible for someone to accidentally start editing a file that someone else has checked out. For example, when someone checks out a file for editing, it should check with the master database first even if they have not recently updated their sandbox. Maybe it even won't let a user check out a document if the user is off the network and can't check in with the mothership.

    Read the article

  • Always get exception when trying to Fill data to DataTable

    - by Sambath
    The code below is just a test to connect to an Oracle database and fill data into a DataTable. After executing the statement da.Fill(dt);, I always get the exception "Exception of type 'System.OutOfMemoryException' was thrown.". Has anyone run into this kind of error? My project is running on VS 2005, and my Oracle database version is 11g. My computer runs Windows Vista. If I copy this code to run on Windows XP, it works fine. Thank you. using System.Data; using Oracle.DataAccess.Client; ... string cnString = "data source=net_service_name; user id=username; password=xxx;"; OracleDataAdapter da = new OracleDataAdapter("select 1 from dual", cnString); try { DataTable dt = new DataTable(); da.Fill(dt); // Got error here Console.Write(dt.Rows.Count.ToString()); } catch (Exception e) { Console.Write(e.Message); // Exception of type 'System.OutOfMemoryException' was thrown. } Update: I have no idea what happened to my computer. I just reinstalled Oracle 11g, and now my code works normally.

    Read the article

  • Calling ASP.NET Web API using JQuery ajax - cross site scripting issue

    - by SimonF
    I have a Web API which I am calling using the JQuery ajax function. When I test the service directly (using the Chrome RESTEasy extension) it works fine; however, when I call it using the JQuery ajax function I get an error. I'm calling it on port 81: $.ajax({ url: "http://127.0.0.1:81/api/people", data: JSON.stringify(personToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newPerson) { callback(newPerson); } }, success: function (newPerson) { alert("New person created with an Id of " + newPerson.Id); }, error: function (jqXHR, textStatus, errorThrown) { alert('Error. '+textStatus+'. '+errorThrown); } }); ...but when I trace it using FireBug Lite the response comes from port 82: {"Message":"No HTTP resource was found that matches the request URI 'http://127.0.0.1:82/api/people'.","MessageDetail":"No action was found on the controller 'People' that matches the request."} I think the error is, effectively, due to cross-site scripting being blocked, but I'm not actually cross-site scripting, if you see what I mean. Has anyone else come across this and been able to fix it? Edit: Routing config (global.asax.vb) is: RouteTable.Routes.MapHttpRoute(name:="DefaultApi", routeTemplate:="api/{controller}/{id}", defaults:=New With {Key .id = System.Web.Http.RouteParameter.Optional}) Controller: Public Function PostValue(ByVal departmentid As Integer, ByVal emailaddress As String, ByVal firstname As String, ByVal lastname As String) As Guid Dim context As New WSMModelDataContext Dim bllPeople As New PeopleBLL(context) Return bllPeople.Create(firstname, lastname, emailaddress, departmentid) End Function When I debug it, it doesn't get as far as running the controller, although when calling it through RESTEasy it routes correctly and the controller executes successfully. The only difference seems to be that when called through RESTEasy it is (correctly) using http://127.0.0.1:81 but for some reason when called via JQuery/ajax it seems to be using http://127.0.0.1:82.

    Read the article

  • Memory leak for NSDictionary loaded by plist file

    - by Pask
    I have a memory leak problem that I just cannot understand! Watch this initialization method: - (id)initWithNomeCompositore:(NSString *)nomeCompositore nomeOpera:(NSString *)nomeOpera { if (self = [super init]) { NSString *pathOpere = [[NSBundle mainBundle] pathForResource:kNomeFilePlistOpere ofType:kTipoFilePlist]; NSDictionary *dicOpera = [NSDictionary dictionaryWithDictionary: [[[NSDictionary dictionaryWithContentsOfFile:pathOpere] objectForKey:nomeCompositore] objectForKey:nomeOpera]]; self.nomeCompleto = [[NSString alloc] initWithString:nomeOpera]; self.compositore = [[NSString alloc] initWithString:nomeCompositore]; self.tipologia = [[NSString alloc] initWithString:[dicOpera objectForKey:kKeyTipologia]]; } return self;} Then this little variation (note self.tipologia): - (id)initWithNomeCompositore:(NSString *)nomeCompositore nomeOpera:(NSString *)nomeOpera { if (self = [super init]) { NSString *pathOpere = [[NSBundle mainBundle] pathForResource:kNomeFilePlistOpere ofType:kTipoFilePlist]; NSDictionary *dicOpera = [NSDictionary dictionaryWithDictionary: [[[NSDictionary dictionaryWithContentsOfFile:pathOpere] objectForKey:nomeCompositore] objectForKey:nomeOpera]]; self.nomeCompleto = [[NSString alloc] initWithString:nomeOpera]; self.compositore = [[NSString alloc] initWithString:nomeCompositore]; self.tipologia = [[NSString alloc] initWithString:@"Test"]; } return self;} The first variant generates a memory leak, the second does not! And I just cannot understand why! The memory leak is reported by Instruments, which highlights the line: [NSDictionary dictionaryWithContentsOfFile:pathOpere] This is the dealloc method: - (void)dealloc { [tipologia release]; [compositore release]; [nomeCompleto release]; [super dealloc];}

    Read the article

  • Capturing exit status from STDIN in Perl

    - by zigdon
    I have a perl script that is run with a command like this: /path/to/binary/executable | /path/to/perl/script.pl The script does useful things with the output of the binary, then exits once STDIN runs out (<STDIN> returns undef). This is all well and good, except if the binary exits with a non-zero code. From the script's point of view the input simply ended cleanly, so it cleans up and exits with a code of 0. Is there a way for the perl script to see what the exit code was? Ideally, I'd want something like this to work: # close STDIN, and if there was an error, exit with that same error. unless (close STDIN) { print "error closing STDIN: $! ($?)\n"; exit $?; } But unfortunately, this doesn't seem to work: $ (date; sleep 3; date; exit 1) | /path/to/perl/script.pl /tmp/test.out Mon Jun 7 14:43:49 PDT 2010 Mon Jun 7 14:43:52 PDT 2010 $ echo $? 0 Is there a way to have it Do What I Mean?
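
    The underlying limitation is not Perl-specific: a process reading from a pipe is never told the writer's exit status, so closing STDIN cannot reveal it. The usual workarounds are to check bash's PIPESTATUS array in the calling shell, or to have the script launch the producer itself so it can wait on it. A rough sketch of that second idea (shown in Python rather than Perl; the binary path is the placeholder one from the question):

        import subprocess
        import sys

        # Start the producer ourselves instead of being fed by a shell pipeline.
        proc = subprocess.Popen(["/path/to/binary/executable"],
                                stdout=subprocess.PIPE, text=True)

        for line in proc.stdout:
            pass  # ... do useful things with each line of output ...

        proc.stdout.close()
        status = proc.wait()   # the producer's exit code is now visible
        sys.exit(status)       # propagate it as our own exit code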

    Read the article

  • Overwriting lines in file in C

    - by KáGé
    Hi, I'm doing a project on filesystems for a university operating systems course. My C program should simulate a simple filesystem in a human-readable file, so the file should be based on lines; a line will be a "sector". I've learned that lines must be of the same length to be overwritten, so I'll pad them with ASCII zeroes to the end of the line and leave a certain number of lines of ASCII zeroes that can be filled later. Now I'm making a test program to see if it works like I want it to, but it doesn't. The critical part of my code: file = fopen("irasproba_tesztfajl.txt", "r+"); //it is previously loaded with 10 copies of the line I'll print later in reverse order /* this finds the 3rd line */ int count = 0; //how much have we gone yet? char c; while(count != 2) { if((c = fgetc(file)) == '\n') count++; } fflush(file); fprintf(file, "- . , M N B V C X Y Í U Á É L K J H G F D S A Ú O P O I U Z T R E W Q Ó Ü Ö 9 8 7 6 5 4 3 2 1 0\n"); fflush(file); fclose(file); Now it does nothing; the file stays the same. What could be the problem? Thank you.

    Read the article

  • Updating a Minimum spanning tree when a new edge is inserted

    - by Lynette
    Hello, I've been presented the following problem at university: Let G = (V, E) be an (undirected) graph with costs c_e ≥ 0 on the edges e ∈ E. Assume you are given a minimum-cost spanning tree T in G. Now assume that a new edge is added to G, connecting two nodes v, w ∈ V with cost c. a) Give an efficient algorithm to test if T remains the minimum-cost spanning tree with the new edge added to G (but not to the tree T). Make your algorithm run in time O(|E|). Can you do it in O(|V|) time? Please note any assumptions you make about what data structure is used to represent the tree T and the graph G. b) Suppose T is no longer the minimum-cost spanning tree. Give a linear-time algorithm (time O(|E|)) to update the tree T to the new minimum-cost spanning tree. This is the solution I found: let e1 = (a, b) be the new edge added; find the path from a to b in T (BFS); if e1 is the most expensive edge in the resulting cycle then T remains the MST, else T is not the MST. It seems to work, but I can easily make this run in O(|V|) time, while the problem asks for O(|E|) time. Am I missing something? By the way, we are authorized to ask for help from anyone, so I'm not cheating :D Thanks in advance
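
    The path-based check above runs in O(|V|) against an adjacency-list representation of T, because a tree on |V| nodes has only |V| - 1 edges (which is why it beats the requested O(|E|) bound). A rough sketch, with illustrative function and variable names that are not from the original problem statement:

        from collections import deque

        def mst_still_optimal(tree_adj, a, b, c):
            # tree_adj: dict mapping each node of T to a list of (neighbour, edge_cost) pairs
            # (a, b, c): the newly added edge and its cost
            # BFS from a through the tree, remembering the edge used to reach each node.
            parent = {a: (None, 0)}
            queue = deque([a])
            while queue and b not in parent:
                u = queue.popleft()
                for v, w in tree_adj[u]:
                    if v not in parent:
                        parent[v] = (u, w)
                        queue.append(v)
            # Heaviest edge on the unique a-b path in T.
            heaviest, node = 0, b
            while node != a:
                node, w = parent[node]
                heaviest = max(heaviest, w)
            # T stays minimal iff the new edge is no cheaper than that heaviest edge;
            # otherwise swapping the heaviest path edge for (a, b) yields a cheaper tree.
            return c >= heaviest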

    Read the article

  • Where to put data management rules for complex data validation in ASP.NET MVC?

    - by TheRHCP
    Hello, I am currently working on an ASP.NET MVC2 project. This is the first time I am working on a real MVC web application. The ASP.NET MVC website really helped me get started fast, but some parts of data model validation are still unclear to me. My problem is that I do not really know where to handle my filled data model when it comes to complex validation rules. For example, validating a string field with a Regex is quite easy, and I know that I just have to decorate my field with a specific attribute, so those data management rules live in the model. But if I have multiple fields that I need to validate against each other, for example multiple datetime fields that need to be set consistently according to a specific time rule, where do I need to validate them? I know that I could create my own validation attributes, but sometimes validation requires a specific validation path which is too complex to express using attributes. This first question also leads me to a related one: is it right to validate a model in the controller? For the moment that is the only way I have found for complex validation, but I find it a bit dirty; I feel it does not really fit the controller's role and it is much harder to test (multiple code paths). Thanks.

    Read the article

  • Learning Hibernate: too many connections

    - by stivlo
    I'm trying to learn Hibernate, so I wrote the simplest Person entity and was trying to insert 2000 of them. I know I'm using deprecated methods; I will figure out the new ones later. First, here is the class Person: @Entity public class Person { private int id; private String name; @Id @GeneratedValue(strategy = GenerationType.TABLE, generator = "person") @TableGenerator(name = "person", table = "sequences", allocationSize = 1) public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } } Then I wrote a small App class that inserts 2000 entities in a loop: public class App { private static AnnotationConfiguration config; public static void insertPerson() { SessionFactory factory = config.buildSessionFactory(); Session session = factory.getCurrentSession(); session.beginTransaction(); Person aPerson = new Person(); aPerson.setName("John"); session.save(aPerson); session.getTransaction().commit(); } public static void main(String[] args) { config = new AnnotationConfiguration(); config.addAnnotatedClass(Person.class); config.configure("hibernate.cfg.xml"); //is the default already new SchemaExport(config).create(true, true); //print and execute for (int i = 0; i < 2000; i++) { insertPerson(); } } } What I get after a while is: Exception in thread "main" org.hibernate.exception.JDBCConnectionException: Cannot open connection Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections Now I know that if I put the transaction outside the loop it would probably work, but mine was a test to see what happens when executing multiple transactions. And since there is only one open at a time, it should work. I tried to add session.close() after the commit, but I got Exception in thread "main" org.hibernate.SessionException: Session was already closed So how do I solve the problem?

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test whether ZTable uses fetching or not. We may migrate our smaller system to PGSQL in the future, and it currently uses "Table" components (like the BDE, but with an SQL-like server). These tables use real cursors, a "Window" of N records, so lookup is very fast: the Locate/Lookup is started on the server and only these N records are refreshed, no matter how many records are in the lookup table. PGSQL uses fetching as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. ID, seconds to find, allocated memory in bytes on the client: MySQL 500000 2,761 113 196 344 1000000 3,214 225 471 232 313800 0,437 225 471 232 328066 0,468 225 471 232 276374 0,390 225 471 232 905984 1,264 225 471 232 260253 0,359 225 471 232 PGSQL 500000 3,042 113 188 184 1000000 3,744 225 463 064 313800 0,436 225 463 064 328066 0,452 225 463 064 276374 0,375 225 463 064 905984 1,295 225 463 064 260253 0,359 225 463 064 142023 0,203 225 463 064 As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow, depending on where the record we must find is located. I want to ask a few more things: a.) Does PGDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds? b.) Or can the PostgreSQL ODBC driver help with this problem via ADO? (As far as I know, ADO can use server-side cursors.) c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (Including client memory usage.) d.) If there is no way to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for changing the lookup field without a real lookup? Thanks for your help: dd

    Read the article

  • python calendar to calculate month backwards

    - by Suhail
    Hi, we are trying to create a calendar function in Python. We have created a small content management system; the requirement is that there will be a drop-down list in the top right-hand corner of the website giving the options (1 month, 2 months, 3 months and so on). If the user selects 8 months, it should show the posts count for the last 8 months. The issue is that we wrote a small piece of code to do the month calculations, but the bug is that it does not consider months beyond the current year; it shows the posts count only for months of the current year. For example, if the user selects 3 months, it shows the count for the last 3 months, i.e. the present month and the previous 2 months, but if the user selects an option of more than 4 months, it does not consider the months from the previous year; it still shows only months of the present year. I am pasting the code below:- def __getSpecifiedMailCount__(request, value): dbconnector= DBConnector() CdateList= "select cdate from mail_records" DateNow= datetime.datetime.today() DateNow= DateNow.strftime("%Y-%m") DateYear= datetime.datetime.today() DateYear= DateYear.strftime("%Y") DateMonth= datetime.datetime.today() DateMonth= DateMonth.strftime("%m") #print DateMonth def getMonth(value): valueDic= {"01": "Jan", "02": "Feb", "03": "Mar", "04": "Apr", "05": "May", "06": "Jun", "07": "Jul", "08": "Aug", "09": "Sep", "10": "Oct", "11": "Nov", "12": "Dec"} return valueDic[value] def getMonthYearandCount(yearmonth): MailCount= "select count(*) as mailcount from mail_records where cdate like '%s%s'" % (yearmonth, "%") MailCountResult= MailCount[0]['mailcount'] return MailCountResult MailCountList= [] MCOUNT= getMonthYearandCount(DateNow) MONTH= getMonth(DateMonth) MailCountDict= {} MailCountDict['monthyear']= MONTH + ' ' + DateYear MailCountDict['mailcount']= MCOUNT var_monthyear= MONTH + ' ' + DateYear var_mailcount= MCOUNT MailCountList.append(MailCountDict) i=1 k= int(value) hereMONTH= int(DateMonth) while (i < k): hereMONTH= int(hereMONTH) - 1 if (hereMONTH < 10): hereMONTH = '0' + str(hereMONTH) if (hereMONTH == '00') or (hereMONTH == '0-1'): break else: PMONTH= getMonth(hereMONTH) hereDateNow= DateYear + '-' + PMONTH hereDateNowNum= DateYear + '-' + hereMONTH PMCOUNT= getMonthYearandCount(hereDateNowNum) MailCountDict= {} MailCountDict['monthyear']= PMONTH + ' ' + DateYear MailCountDict['mailcount']= PMCOUNT var_monthyear= PMONTH + ' ' + DateYear var_mailcount= PMCOUNT MailCountList.append(MailCountDict) i = i + 1 #print MailCountList MailCountDict= {'monthmailcount': MailCountList} reportdata = MailCountDict['monthmailcount'] #print reportdata return render_to_response('test.html', locals())
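
    The core bug is that the loop only decrements the month and never adjusts the year, so nothing before January of the current year is ever queried. A minimal sketch of the missing rollover logic (the function name is hypothetical, not part of the original code):

        import datetime

        def last_n_months(n, today=None):
            # Yield (year, month) pairs for the current month and the n-1 months before it,
            # rolling back into previous years when needed.
            today = today or datetime.date.today()
            year, month = today.year, today.month
            for _ in range(n):
                yield year, month
                month -= 1
                if month == 0:      # stepped past January: move to December of the previous year
                    month = 12
                    year -= 1

        # Each pair can then be turned into the 'YYYY-MM' prefix used by the LIKE query,
        # e.g. "%04d-%02d" % (year, month).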

    Read the article

  • Python regex look-behind requires fixed-width pattern

    - by invictus
    Hi, when trying to extract the title of an HTML page I have always used the following regex: (?<=<title.*>)([\s\S]*)(?=</title>) which extracts everything between the tags in a document and ignores the tags themselves. However, when trying to use this regex in Python it raises the following exception: Traceback (most recent call last): File "test.py", line 21, in <module> pattern = re.compile('(?<=<title.*>)([\s\S]*)(?=</title>)') File "C:\Python31\lib\re.py", line 205, in compile return _compile(pattern, flags) File "C:\Python31\lib\re.py", line 273, in _compile p = sre_compile.compile(pattern, flags) File "C:\Python31\lib\sre_compile.py", line 495, in compile code = _code(p, flags) File "C:\Python31\lib\sre_compile.py", line 480, in _code _compile(code, p.data, flags) File "C:\Python31\lib\sre_compile.py", line 115, in _compile raise error("look-behind requires fixed-width pattern") sre_constants.error: look-behind requires fixed-width pattern The code I am using is: pattern = re.compile('(?<=<title.*>)([\s\S]*)(?=</title>)') m = pattern.search(f) If I make some minimal adjustments it works: pattern = re.compile('(?<=<title>)([\s\S]*)(?=</title>)') m = pattern.search(f) This will, however, not take into account title tags that for some reason have attributes or similar. Anyone know a good workaround for this issue? Any tips are appreciated.
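
    One common workaround (a sketch, not the only option) is to drop the look-behind entirely and capture the title text with a group, which also copes with attributes on the opening tag; for anything less regular than a title tag, an HTML parser is the safer tool:

        import re

        html = "<html><head><title lang='en'>Example page</title></head></html>"

        # Non-greedy capture between the tags instead of a variable-width look-behind.
        pattern = re.compile(r'<title[^>]*>([\s\S]*?)</title>', re.IGNORECASE)

        match = pattern.search(html)
        if match:
            print(match.group(1))   # -> Example page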

    Read the article

  • Code Golf: Morse code

    - by LiraNuna
    The challenge The shortest code by character count, that will input a string using only alphabetical characters (upper and lower case), numbers, commas, periods and question mark, and returns a representation of the string in Morse code. The Morse code output should consist of a dash (-, ascii 0x2D) for a long beep (aka 'dah') and a dot (., ascii 0x2E) for short beep (aka 'dit'). Each letter should be separated by a space (' ', ascii 0x20), and each word should be separated by a forward slash (/, ascii 0x2F). Morse code table: Test cases: Input: Hello world Output: .... . .-.. .-.. --- / .-- --- .-. .-.. -.. Input: Hello, Stackoverflow. Output: .... . .-.. .-.. --- --..-- / ... - .- -.-. -.- --- ...- . .-. ..-. .-.. --- .-- .-.-.- Code count includes input/output (i.e full program).
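
    For reference, an ungolfed Python version that reproduces the test cases above (competing entries would then compress the same idea):

        MORSE = {
            'A': '.-',    'B': '-...',  'C': '-.-.',  'D': '-..',   'E': '.',
            'F': '..-.',  'G': '--.',   'H': '....',  'I': '..',    'J': '.---',
            'K': '-.-',   'L': '.-..',  'M': '--',    'N': '-.',    'O': '---',
            'P': '.--.',  'Q': '--.-',  'R': '.-.',   'S': '...',   'T': '-',
            'U': '..-',   'V': '...-',  'W': '.--',   'X': '-..-',  'Y': '-.--',
            'Z': '--..',
            '0': '-----', '1': '.----', '2': '..---', '3': '...--', '4': '....-',
            '5': '.....', '6': '-....', '7': '--...', '8': '---..', '9': '----.',
            ',': '--..--', '.': '.-.-.-', '?': '..--..',
        }

        def to_morse(text):
            # Letters are joined by spaces, words by ' / '.
            return ' / '.join(' '.join(MORSE[c] for c in word) for word in text.upper().split())

        print(to_morse('Hello world'))            # .... . .-.. .-.. --- / .-- --- .-. .-.. -..
        print(to_morse('Hello, Stackoverflow.'))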

    Read the article

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok"). Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7. Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.) So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
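
    What IIS7 is doing for those Content files is the standard HTTP conditional-GET handshake: it sends a Last-Modified header with each static file, the browser replays it as If-Modified-Since, and the server answers 304 with no body when the file has not changed. A rough, server-agnostic illustration of that handshake (a Python sketch, not Cassini-specific; the Content directory name is taken from the question):

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from email.utils import formatdate, parsedate_to_datetime
        import os

        CONTENT_DIR = "Content"

        class ConditionalGetHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                path = os.path.join(CONTENT_DIR, self.path.lstrip("/"))
                if not os.path.isfile(path):
                    self.send_error(404)
                    return
                mtime = int(os.path.getmtime(path))
                ims = self.headers.get("If-Modified-Since")
                if ims and int(parsedate_to_datetime(ims).timestamp()) >= mtime:
                    self.send_response(304)        # unchanged: headers only, no body re-sent
                    self.end_headers()
                    return
                with open(path, "rb") as f:
                    body = f.read()
                self.send_response(200)
                self.send_header("Last-Modified", formatdate(mtime, usegmt=True))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), ConditionalGetHandler).serve_forever()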

    Read the article

  • Parse XML function names and call within whole assembly

    - by Matt Clarkson
    Hello all, I have written an application that unit tests our hardware via an internet browser. I have command classes in the assembly that are wrappers around individual web browser actions such as ticking a checkbox or selecting from a dropdown box: BasicConfigurationCommands EventConfigurationCommands StabilizationCommands and a set of test classes that use the command classes to perform scripted tests: ConfigurationTests StabilizationTests These are then invoked via the GUI to run prescripted tests by our QA team. However, as the firmware changes quite quickly between releases, it would be great if a developer could write an XML file that could invoke either the tests or the commands: <?xml version="1.0" encoding="UTF-8" ?> <testsuite> <StabilizationTests> <StressTest repetition="10" /> </StabilizationTests> <BasicConfigurationCommands> <SelectConfig number="2" /> <ChangeConfigProperties name="Weeeeee" timeOut="15000" delay="1000"/> <ApplyConfig /> </BasicConfigurationCommands> </testsuite> I have been looking at the System.Reflection namespace and have seen examples using GetMethod and then Invoke. This requires me to create the class object at compile time, and I would like to do all of this at runtime. I would need to scan the whole assembly for the class name and then scan for the method within the class. This seems like a large solution, so any information pointing me (and future readers of this post) towards an answer would be great! Thanks for reading, Matt
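
    The shape of the solution is the same in any language: walk the XML, map each element name to a class, create an instance, and resolve each child element to a method by name, passing its attributes as arguments. In .NET this would be assembled from Assembly.GetTypes(), Activator.CreateInstance and MethodInfo.Invoke; the sketch below shows the same pattern in Python with hypothetical stand-in classes:

        import xml.etree.ElementTree as ET

        class BasicConfigurationCommands:
            # stand-in for the real command wrapper class
            def SelectConfig(self, number):
                print("selecting config", number)
            def ApplyConfig(self):
                print("applying config")

        COMMANDS = {cls.__name__: cls for cls in (BasicConfigurationCommands,)}

        def run_suite(xml_text):
            for class_node in ET.fromstring(xml_text):       # e.g. <BasicConfigurationCommands>
                instance = COMMANDS[class_node.tag]()
                for call in class_node:                      # e.g. <SelectConfig number="2" />
                    method = getattr(instance, call.tag)     # resolve the method by name
                    method(**call.attrib)                    # attributes arrive as strings

        run_suite("""
        <testsuite>
          <BasicConfigurationCommands>
            <SelectConfig number="2" />
            <ApplyConfig />
          </BasicConfigurationCommands>
        </testsuite>
        """)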

    Read the article

  • Why doesn't Perl file glob() work outside of a loop in scalar context?

    - by Rob
    According to the Perl documentation on file globbing, the <*> operator or glob() function, when used in a scalar context, should iterate through the list of files matching the specified pattern, returning the next file name each time it is called or undef when there are no more files. But, the iterating process only seems to work from within a loop. If it isn't in a loop, then it seems to start over immediately before all values have been read. From the Perl docs: In scalar context, glob iterates through such filename expansions, returning undef when the list is exhausted. http://perldoc.perl.org/functions/glob.html However, in scalar context the operator returns the next value each time it's called, or undef when the list has run out. http://perldoc.perl.org/perlop.html#I/O-Operators Example code: use warnings; use strict; my $filename; # in scalar context, <*> should return the next file name # each time it is called or undef when the list has run out $filename = <*>; print "$filename\n"; $filename = <*>; # doesn't work as documented, starts over and print "$filename\n"; # always returns the same file name $filename = <*>; print "$filename\n"; print "\n"; print "$filename\n" while $filename = <*>; # works in a loop, returns next file # each time it is called In a directory with 3 files...file1.txt, file2.txt, and file3.txt, the above code will output: file1.txt file1.txt file1.txt file1.txt file2.txt file3.txt Note: The actual perl script should be outside the test directory, or you will see the file name of the script in the output as well. Am I doing something wrong here, or is this how it is supposed to work?

    Read the article
