Search Results

Search found 10556 results on 423 pages for 'practical approach'.

Page 345/423

  • Foreign/accented characters in SQL query

    - by FromCanada
    I'm using Java and Spring's JdbcTemplate class to build an SQL query in Java that queries a Postgres database. However, I'm having trouble executing queries that contain foreign/accented characters. For example, the (trimmed) code:

        JdbcTemplate select = new JdbcTemplate( postgresDatabase );
        String query = "SELECT id FROM province WHERE name = 'Ontario';";
        Integer id = select.queryForObject( query, Integer.class );

    will retrieve the province id, but if instead I use name = 'Québec' then the query fails to return any results (this value is in the database, so the problem isn't that it's missing). I believe the source of the problem is that the database I am required to use has the default client encoding set to SQL_ASCII, which according to this prevents automatic character set conversions. (The Java environment's encoding is set to 'UTF-8', while I'm told the database uses 'LATIN1' / 'ISO-8859-1'.) I was able to manually indicate the encoding when the result sets contained values with foreign characters, as a solution to a previous problem of a similar nature. Example:

        String provinceName = new String( resultSet.getBytes( "name" ), "ISO-8859-1" );

    But now that the foreign characters are part of the query itself, this approach hasn't been successful. (I suppose that since the query has to be stored in a String before being executed anyway, breaking it down into bytes and then changing the encoding only muddles the characters further.) Is there a way around this without having to change the properties of the database or reconstruct it? PostScript: I found a function on StackOverflow while making up a title for this question, but it didn't seem to work (I might not have used it correctly, but even if it did work it doesn't seem like it could be the best solution).

    Read the article

  • Associate new Authlogic Model to existing Models

    - by BriteLite
    Hello. While playing around with Rails (I am a newbie) and reading the Agile Rails book, I came across an issue with the Authlogic gem that I don't know how to address. I have a simple business model. The table stores the following information: Name, Address, Latitude, and Longitude. This approach has been working fine: using the console I can enter the information and it shows up where I need it to. My issue now is that I want to add authentication to it, as in assigning the records in the table to individual accounts. Since Authlogic is an authentication gem, can this be done? What I am trying to get at is that I enter a few records and leave it at that. A few days later, I want to assign those individual rows in the table to an Authlogic model so that the person to whom the record belongs can authenticate and make changes. Any code samples or blog posts to help me understand this better would be great! Thank you.

    Read the article

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I would very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. All that is fine so far; I've managed to write a highlighter for the given log types. So what I do is use logrotate's prerotate hook to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it runs the command for every log file specified in the stanza, and to my knowledge there's no way to know which of the log files is currently being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate configuration up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.

    Read the article

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've recently been working on a program that converts FLAC files to MP3 in C# using flac.exe and lame.exe. Here is the code that does the job:

        ProcessStartInfo piFlac = new ProcessStartInfo( "flac.exe" );
        piFlac.CreateNoWindow = true;
        piFlac.UseShellExecute = false;
        piFlac.RedirectStandardOutput = true;
        piFlac.Arguments = string.Format( flacParam, SourceFile );

        ProcessStartInfo piLame = new ProcessStartInfo( "lame.exe" );
        piLame.CreateNoWindow = true;
        piLame.UseShellExecute = false;
        piLame.RedirectStandardInput = true;
        piLame.RedirectStandardOutput = true;
        piLame.Arguments = string.Format( lameParam, QualitySetting, ExtractTag( SourceFile ) );

        Process flacp = null, lamep = null;
        byte[] buffer = BufferPool.RequestBuffer();

        flacp = Process.Start( piFlac );

        lamep = new Process();
        lamep.StartInfo = piLame;
        lamep.OutputDataReceived += new DataReceivedEventHandler( this.ReadStdout );
        lamep.Start();
        lamep.BeginOutputReadLine();

        int count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
        while ( count != 0 )
        {
            lamep.StandardInput.BaseStream.Write( buffer, 0, count );
            count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length );
        }

    Here I set the command line parameters to tell lame.exe to write its output to stdout, and I make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary data. But DataReceivedEventArgs.Data is of type "string", and I have to convert it to byte[] before putting it in the cache. I think this is ugly, and I tried this approach but the result is incorrect. Is there any way that I can read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason why I don't use lame to write to disk directly is that I'm trying to convert several files in parallel, and direct writing to disk would cause severe fragmentation. Thanks a lot!
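    A rough sketch of the kind of workaround being asked about (illustrative only, not taken from the linked article): instead of wiring up OutputDataReceived, the child process's StandardOutput.BaseStream can be read directly as raw bytes, so no string conversion ever happens. The lame.exe arguments and buffer size below are assumptions made up for the example, and feeding stdin is only indicated by a comment.

        // Minimal sketch: read a redirected stdout as raw bytes via BaseStream,
        // bypassing the string-based OutputDataReceived event entirely.
        using System.Diagnostics;
        using System.IO;

        class RawStdoutSketch
        {
            static void Main()
            {
                var psi = new ProcessStartInfo("lame.exe", "--silent - -")   // assumed args: stdin in, stdout out
                {
                    UseShellExecute = false,
                    CreateNoWindow = true,
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true
                };

                using (Process lamep = Process.Start(psi))
                using (Stream rawOut = lamep.StandardOutput.BaseStream)
                using (var cache = new MemoryStream())
                {
                    // (Feed the FLAC-decoded bytes into lamep.StandardInput.BaseStream
                    // from another thread so the two pipes cannot deadlock each other.)
                    var buffer = new byte[32 * 1024];
                    int count;
                    while ((count = rawOut.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        cache.Write(buffer, 0, count);   // raw bytes, no text decoding
                    }
                    lamep.WaitForExit();
                }
            }
        }

    Reading stdout on the caller's thread (or a dedicated one) while another thread writes stdin is the usual way to keep both pipes draining.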

    Read the article

  • Implementing a normal website inside ASP.NET MVC 2

    - by cc0
    I have a website consisting of an index.html, a number of style sheets and some JavaScript files. I needed a way for this site to communicate efficiently with a Microsoft SQL Server, and I was recommended the MVC framework to facilitate that kind of communication. I created the C#.NET controller code needed to output the necessary information from the database using URL parameters, so now I am trying to put the whole website together inside the MVC framework. I started from an empty project template in the MVC 2 framework. I'm sure there must be a good way to fit the current code into this framework, but I am very uncertain as to what the best approach would be. Could anyone point me in the right direction here? I'm not sure whether I need to change any of the current HTML, or exactly what to add to it. I'd love to see some kind of guide or tutorial, or just any advice I can get as I try to learn this. Any help is very much appreciated!
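    A minimal sketch of the kind of arrangement being asked about (illustrative, not from the linked article): the markup from index.html typically becomes the view returned by a controller action, the stylesheets and scripts stay as static files (e.g. under Content/ and Scripts/), and the data endpoints remain controller actions returning JSON. The controller, action and route names below are assumptions.

        // Hypothetical HomeController for an ASP.NET MVC 2 project.
        using System.Web.Mvc;

        public class HomeController : Controller
        {
            // GET / -> renders the markup that used to live in index.html
            // (moved into Views/Home/Index.aspx).
            public ActionResult Index()
            {
                return View();
            }

            // GET /Home/Items?categoryId=3 -> example data endpoint for the page's JavaScript.
            public ActionResult Items(int categoryId)
            {
                var items = new[] { "sample", "data" };            // placeholder data
                return Json(items, JsonRequestBehavior.AllowGet);  // MVC 2 requires AllowGet for GET requests
            }
        }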

    Read the article

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc.). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any with an edit distance of less than 2. However, this is painstakingly slow. Is there a more efficient approach? The test code I'm using now is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;

        namespace LevenshteinAnalyzer
        {
            class Program
            {
                const string INPUT_FILE = @"C:\Input.txt";
                const string OUTPUT_FILE = @"C:\Output.txt";

                static void Main(string[] args)
                {
                    var inputWords = File.ReadAllLines(INPUT_FILE);
                    var outputWords = new SortedSet<string>();

                    for (var i = 0; i < inputWords.Length; i++)
                    {
                        if (i % 100 == 0)
                            Console.WriteLine("Processing record #" + i);

                        var word1 = inputWords[i].ToLower();

                        for (var n = i + 1; n < inputWords.Length; n++)
                        {
                            if (i == n) continue;

                            var word2 = inputWords[n].ToLower();

                            if (word1 == word2) continue;
                            if (outputWords.Contains(word1)) continue;
                            if (outputWords.Contains(word2)) continue;

                            var distance = LevenshteinAlgorithm.Compute(word1, word2);

                            if (distance <= 2)
                            {
                                outputWords.Add(word1);
                                outputWords.Add(word2);
                            }
                        }
                    }

                    File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray());
                    Console.WriteLine("Found {0} words", outputWords.Count);
                }
            }
        }
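    A rough sketch of one faster direction sometimes suggested for this kind of problem (illustrative, not from the linked article): group addresses under a normalized key first - lower-cased, with dots and "+tag" suffixes stripped from the local part - so only addresses sharing a key are flagged (or passed on to the expensive edit-distance check). The normalization rules and sample addresses below are assumptions about what counts as "similar".

        // Hypothetical pre-grouping step: bucket addresses by a normalized key so the
        // O(n^2) Levenshtein pass only runs inside small buckets, or not at all.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class EmailBuckets
        {
            static string Normalize(string email)
            {
                var parts = email.Trim().ToLowerInvariant().Split('@');
                if (parts.Length != 2) return email.Trim().ToLowerInvariant();

                var local = parts[0];
                int plus = local.IndexOf('+');
                if (plus >= 0) local = local.Substring(0, plus);  // drop "+tag" suffix
                local = local.Replace(".", "");                   // treat dots as noise

                return local + "@" + parts[1];
            }

            static void Main()
            {
                var addresses = new[] { "a.b+1@example.com", "ab@example.com", "c@example.com" };

                var suspicious = addresses
                    .GroupBy(Normalize)
                    .Where(g => g.Count() > 1);   // same normalized key => likely duplicates

                foreach (var group in suspicious)
                    Console.WriteLine("{0}: {1}", group.Key, string.Join(", ", group.ToArray()));
            }
        }

    Addresses that survive this cheap pass can still be compared with Levenshtein, but within buckets the pair count stays tiny.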

    Read the article

  • Is NUnit's ExpectedExceptionAttribute the only way to test whether something raises an exception?

    - by Dariusz Walczak
    Hello, I'm completely new to C# and NUnit. In Boost.Test there is a family of BOOST_*_THROW macros. In Python's test module there is the TestCase.assertRaises method. As far as I understand it, in C# with NUnit (2.4.8) the only way to test for exceptions is to use the ExpectedExceptionAttribute. Why should I prefer ExpectedExceptionAttribute over - let's say - Boost.Test's approach? What reasoning can stand behind this design decision? Why is that better in the case of C# and NUnit? Finally, if I decide to use ExpectedExceptionAttribute, how can I do some additional tests after the exception was raised and caught? Let's say that I want to test the requirement that the object has to be valid after some setter raises System.IndexOutOfRangeException. How would you fix the following code to compile and work as expected?

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new SomeClass();
            // Following statement won't compile.
            Assert.Raises( "System.IndexOutOfRangeException",
                           obj.SetValueAt( -1, "foo" ) );
            Assert.IsTrue( obj.IsValid() );
        }

    Edit: Thanks for your answers. Today I found an It's the Tests blog entry where all three methods described by you are mentioned (and one more minor variation). It's a shame that I couldn't find it before :-(.
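    A minimal sketch of one alternative that works without any attribute (illustrative, not from the linked article, and not tied to a particular NUnit version's helpers): catch the exception manually, fail if it never arrives, and then keep asserting on the object. SomeClass, SetValueAt and IsValid are the question's own hypothetical names; a tiny stub is included so the sketch stands alone.

        using System;
        using NUnit.Framework;

        // Stand-in for the question's hypothetical class.
        class SomeClass
        {
            public void SetValueAt(int index, string value)
            {
                if (index < 0) throw new IndexOutOfRangeException();
            }
            public bool IsValid() { return true; }
        }

        [TestFixture]
        public class SetterExceptionTests
        {
            [Test]
            public void SetterThrowsAndObjectStaysValid()
            {
                var obj = new SomeClass();

                try
                {
                    obj.SetValueAt(-1, "foo");
                    Assert.Fail("Expected IndexOutOfRangeException was not thrown.");
                }
                catch (IndexOutOfRangeException)
                {
                    // expected - fall through to the follow-up assertions
                }

                Assert.IsTrue(obj.IsValid());
            }
        }

    Later NUnit releases also ship a delegate-based Assert.Throws, which collapses the try/catch into a single call while still allowing assertions afterwards.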

    Read the article

  • What is the most efficient/elegant way to parse a flat table into a tree?

    - by Tomalak
    Assume you have a flat table that stores an ordered tree hierarchy:

        Id   Name          ParentId   Order
        1    'Node 1'      0          10
        2    'Node 1.1'    1          10
        3    'Node 2'      0          20
        4    'Node 1.1.1'  2          10
        5    'Node 2.1'    3          10
        6    'Node 1.2'    1          20

    What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay, this is purely a conceptual question. Bonus question: Is there a fundamentally better way to store a tree structure like this in an RDBMS?

    EDITS AND ADDITIONS - To answer one commenter's (Mark Bessey's) question: a root node is not necessary, because it is never going to be displayed anyway. ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted. The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology). For my example it was meant to already be there. Some answers go the extra mile and construct it first, but that's okay. The tree can be arbitrarily deep. Each node can have N children. I did not exactly have a "millions of entries" tree in mind, though. Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob'; no naming structure is implied, this was merely to make it readable. I have posted my own solution so you guys can pull it to pieces.
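    A compact sketch of one common answer to this kind of question (illustrative, not the poster's own solution): bucket the rows by ParentId in a dictionary, then walk recursively from the ParentId = 0 roots, sorting siblings by Order and indenting by depth.

        // One pass to group rows by parent, one recursive walk to print the tree.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Row { public int Id; public string Name; public int ParentId; public int Order; }

        class FlatToTree
        {
            static void Main()
            {
                var rows = new List<Row>
                {
                    new Row { Id = 1, Name = "Node 1",     ParentId = 0, Order = 10 },
                    new Row { Id = 2, Name = "Node 1.1",   ParentId = 1, Order = 10 },
                    new Row { Id = 3, Name = "Node 2",     ParentId = 0, Order = 20 },
                    new Row { Id = 4, Name = "Node 1.1.1", ParentId = 2, Order = 10 },
                    new Row { Id = 5, Name = "Node 2.1",   ParentId = 3, Order = 10 },
                    new Row { Id = 6, Name = "Node 1.2",   ParentId = 1, Order = 20 },
                };

                // ParentId -> children sorted by Order.
                var byParent = rows
                    .GroupBy(r => r.ParentId)
                    .ToDictionary(g => g.Key, g => g.OrderBy(r => r.Order).ToList());

                Print(byParent, 0, 0);   // start at the conventional ParentId = 0 roots
            }

            static void Print(Dictionary<int, List<Row>> byParent, int parentId, int depth)
            {
                if (!byParent.ContainsKey(parentId)) return;   // no children -> leaf
                foreach (var row in byParent[parentId])
                {
                    Console.WriteLine(new string(' ', depth * 2) + row.Name);
                    Print(byParent, row.Id, depth + 1);
                }
            }
        }

    The same walk can emit nested <ul>/<li> markup instead of indented text; the result set is only traversed once to build the dictionary.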

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on) but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors and I see quite a few entries which look like:

        java    18510    root    8556u    sock    0,4    1555182    can't identify protocol
        java    18510    root    8557u    sock    0,4    1555320    can't identify protocol
        java    18510    root    8558u    sock    0,4    1555736    can't identify protocol
        java    18510    root    8559u    sock    0,4    1555883    can't identify protocol

    If I do a count of open file descriptors every minute I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? thanks, chuck

    Read the article

  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their master pages, between release and debug builds. The particular scenario this is applicable to is handling concatenated JS and CSS files. I'm currently using the .NET port of YUI Compressor to produce a single site.css and site.js from a large collection of separate files. One thought that occurred to me was to place the JS and CSS include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly. Something along the lines of:

        #if DEBUG
            pnlDebugIncludes.Visible = true;
        #else
            pnlReleaseIncludes.Visible = true;
        #endif

    The panel is really not very nice semantically - wrapping <script> tags in a <div> is a bit gross; there must be a better approach. I would also think that a block level element like a <div> within <head> would be invalid HTML. Another idea was that this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.
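    A small sketch of the user-control idea with less markup baggage (illustrative, not from the linked article): PlaceHolder controls render no wrapper element at all, so nothing block-level ends up inside <head>. The control and class names are assumptions; in a real project the two fields would normally come from the designer file.

        // Hypothetical master-page code-behind toggling two PlaceHolder controls
        // based on the build flavor of the assembly.
        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class SiteMaster : MasterPage
        {
            // Normally declared in the generated .designer.cs file.
            protected PlaceHolder phDebugIncludes;
            protected PlaceHolder phReleaseIncludes;

            protected void Page_Load(object sender, EventArgs e)
            {
            #if DEBUG
                phDebugIncludes.Visible = true;      // individual .css/.js <link>/<script> tags
                phReleaseIncludes.Visible = false;
            #else
                phDebugIncludes.Visible = false;
                phReleaseIncludes.Visible = true;    // single combined site.css / site.js
            #endif
            }
        }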

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)). Background: Basically, a previous developer left my codebase in absolute shit, migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing a rake db:test:prepare, as in the case of:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (
          `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
          `member_id` int(11) DEFAULT 0 NOT NULL,
          `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
          `hobbies` text,
          `sports` text,
          `AStar_activities` text,
          `how_know_IRC` varchar(100),
          `IRC_referral` varchar(200),
          `IRC_others` varchar(100),
          `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns of type ENUM and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not very sure how to write it such that it would loop through my database tables and replace all ENUM column types with VARCHAR.

    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832

    Read the article

  • Returning large collections from a WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items) I am exceeding the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side -- this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications - .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.
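    A rough sketch of the paging-contract idea mentioned above (illustrative; the type and member names are assumptions, not an established WCF pattern from the article). Each call returns one bounded page plus enough metadata for the client to ask for the next one, so no single message ever approaches the binding's maxReceivedMessageSize.

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class Product
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [DataContract]
        public class PagedResult<T>
        {
            [DataMember] public IList<T> Items { get; set; }
            [DataMember] public int PageNumber { get; set; }
            [DataMember] public int PageSize { get; set; }
            [DataMember] public int TotalCount { get; set; }

            // Computed locally on either side; not serialized.
            public bool HasNext { get { return (PageNumber + 1) * PageSize < TotalCount; } }
        }

        [ServiceContract]
        public interface IProductService
        {
            // Web/WPF clients page on demand; console batch apps just loop until HasNext is false.
            [OperationContract]
            PagedResult<Product> GetProducts(int pageNumber, int pageSize);
        }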

    Read the article

  • A question about writing a background/automatic/silent downloader/installer for an app in C#.

    - by Mike Webb
    Background: I have a main application that needs to be able to go to the web and download DLL files associated with it (ones that we write, located on our server). It really needs to be able to download these DLL files to the application folder in "C:\Program Files\". In the past I have used System.Net.WebClient to download whatever files I wanted from the web.

    The Issue: I have had a lot of trouble downloading data in the past and saving it to files on a user's hard drive. I get many reports from users saying that this does not work, and it is generally because of user rights issues in the program. In the cases where it was an issue with program user rights, every user could go to the exact file location on the web, download it, and then save it to the right place manually. I want this to work like all the other programs I have seen download/install in this fashion (i.e. Firefox plugin updates, Flash Player, Java, Adobe Reader, etc.). All of these work without a hitch.

    The Question: Is there some code I need to use to give my downloader program special rights to the Program Files folder? Can I even do this? Is there a better class or library that I should use? Is there a different approach to downloading files I should take, such as using threads or something else to download the data? Any help here is appreciated. I want to try to stay away from third-party apps/libraries if at all possible, other than Microsoft of course, due to licensing issues, but still send any suggestions my way. Again, other programs seem to have the rights issues and download capability figured out. I want that same capability.
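    A hedged sketch of one common way around the rights problem (illustrative, not from the linked article): download to a per-user folder that is always writable instead of Program Files, which requires elevation under UAC on Vista/7. The URL, folder and file names below are made up.

        using System;
        using System.IO;
        using System.Net;

        class DllUpdater
        {
            static void Main()
            {
                // Per-user, always-writable location (no admin rights needed).
                string targetDir = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                    @"MyApp\Modules");
                Directory.CreateDirectory(targetDir);

                using (var client = new WebClient())
                {
                    // Hypothetical update location on the vendor's server.
                    client.DownloadFile(
                        "http://example.com/updates/PluginA.dll",
                        Path.Combine(targetDir, "PluginA.dll"));
                }

                // The main application can Assembly.LoadFrom() the DLL from targetDir, or a
                // separately elevated updater/installer can copy it into Program Files.
            }
        }

    Writing directly into Program Files generally means running the updater elevated (for example via a requestedExecutionLevel of requireAdministrator in its manifest) or shipping updates through an installer.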

    Read the article

  • Can I use my Ninject .NET project within Orchard CMS?

    - by Mattias Z
    I am creating a website using Orchard CMS, and I have an external .NET project written with Ninject for dependency injection which I would like to use together with a module within Orchard CMS. I know that Orchard uses Autofac for dependency injection, and this is causing me problems since I have never worked with DI before. I have created an Autofac module UserModule which registers the source UserRegistrationSource, and I configured Orchard to load the module like this:

    OrchardStarter.cs

        ....
        builder.RegisterModule(new UserModule());
        ....

    UserModule.cs

        public class UserModule : Module
        {
            protected override void Load(ContainerBuilder builder)
            {
                builder.RegisterSource(new UserRegistrationSource());
                base.Load(builder);
            }
        }

    UserRegistrationSource.cs

        public class UserRegistrationSource : IRegistrationSource
        {
            public bool IsAdapterForIndividualComponents
            {
                get { return false; }
            }

            public IEnumerable<IComponentRegistration> RegistrationsFor(Service service, Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor)
            {
                var serviceWithType = service as IServiceWithType;
                if (serviceWithType == null)
                    yield break;

                var serviceType = serviceWithType.ServiceType;
                if (!serviceType.IsInterface || !typeof(IUserServices).IsAssignableFrom(serviceType) || serviceType == typeof(IUserServices))
                    yield break;

                var registrationBuilder = ...
                yield return registrationBuilder.CreateRegistration();
            }
        }

    But now I'm stuck... Should I somehow try to resolve the dependencies in UserRegistrationSource by getting the requested types from the Ninject container (kernel) with Autofac, or is this the wrong approach? Or should the Ninject kernel do the resolving for Autofac for the external project?
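    A heavily hedged sketch of the bridging idea (illustrative; it assumes Autofac's RegistrationBuilder.ForDelegate and Ninject's kernel.Get extension are available in the versions in use, and that IUserServices comes from the external project). The Ninject kernel is built once from the external project's modules and handed to the registration source, which tells Autofac to activate matching interfaces by resolving them from Ninject.

        using System;
        using System.Collections.Generic;
        using Autofac.Builder;
        using Autofac.Core;
        using Ninject;

        public class UserRegistrationSource : IRegistrationSource
        {
            private readonly IKernel _kernel;   // e.g. new StandardKernel(new ExternalProjectModule())

            public UserRegistrationSource(IKernel kernel)
            {
                _kernel = kernel;
            }

            public bool IsAdapterForIndividualComponents { get { return false; } }

            public IEnumerable<IComponentRegistration> RegistrationsFor(
                Service service,
                Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor)
            {
                var serviceWithType = service as IServiceWithType;
                if (serviceWithType == null) yield break;

                var serviceType = serviceWithType.ServiceType;
                if (!serviceType.IsInterface ||
                    !typeof(IUserServices).IsAssignableFrom(serviceType) ||
                    serviceType == typeof(IUserServices))
                    yield break;

                // When Autofac is asked for the type, delegate activation to Ninject.
                var registrationBuilder = RegistrationBuilder.ForDelegate(
                    serviceType, (context, parameters) => _kernel.Get(serviceType));

                yield return registrationBuilder.CreateRegistration();
            }
        }

    The alternative - re-registering the external project's bindings directly with Autofac and dropping Ninject - is usually simpler if the external project is small.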

    Read the article

  • Advice on Method overloads.

    - by Muhammad Kashif Nadeem
    Please see the following methods:

        public static ProductsCollection GetDummyData(int? customerId, int? supplierId)
        {
            try
            {
                if (customerId != null && customerId > 0)
                {
                    Filter.Add(Customers.CustomerId == customerId);
                }
                if (supplierId != null && supplierId > 0)
                {
                    Filter.Add(Suppliers.SupplierId == supplierId);
                }
                ProductsCollection products = new ProductsCollection();
                products.FetchData(Filter);
                return products;
            }
            catch
            {
                throw;
            }
        }

        public static ProductsCollection GetDummyData(int? customerId)
        {
            return GetDummyData(customerId, (int?)null);
        }

        public static ProductsCollection GetDummyData()
        {
            return GetDummyData((int?)null);
        }

    1. Please advise how I can make overloads for both CustomerId and SupplierId, because only one overload can be created with GetDummyData(int?). Should I add another argument to indicate whether the first argument is CustomerId or SupplierId, for example GetDummyData(int?, string)? Or should I use an enum as the 2nd argument to indicate whether the first argument is CustomerId or SupplierId?
    2. Is this condition correct, or is just checking > 0 sufficient: if (customerId != null && customerId > 0)?
    3. Is using try/catch like this correct?
    4. Is passing (int?)null correct, or is there a better approach?

    Edit: I have found some other posts like this, and because I have no knowledge of generics that is why I am facing this problem. Am I right? Following is the post: http://stackoverflow.com/questions/422625/overloaded-method-calling-overloaded-method
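    A small sketch of one commonly suggested direction (a shape sketch only - it reuses the question's own types ProductsCollection, Filter, Customers and Suppliers, so treat it as a design outline rather than standalone compilable code): let the method name carry the meaning of the id instead of the argument position, and drop the try/catch that only rethrows.

        public static class ProductQueries
        {
            public static ProductsCollection GetAll()
            {
                return Get(null, null);
            }

            public static ProductsCollection GetByCustomer(int customerId)
            {
                return Get(customerId, null);
            }

            public static ProductsCollection GetBySupplier(int supplierId)
            {
                return Get(null, supplierId);
            }

            private static ProductsCollection Get(int? customerId, int? supplierId)
            {
                if (customerId > 0)   // lifted comparison: evaluates to false when null
                    Filter.Add(Customers.CustomerId == customerId);
                if (supplierId > 0)
                    Filter.Add(Suppliers.SupplierId == supplierId);

                ProductsCollection products = new ProductsCollection();
                products.FetchData(Filter);
                return products;      // no need for try { } catch { throw; }
            }
        }

    An enum second argument (e.g. GetDummyData(int id, IdKind kind)) also works, but separate named methods keep call sites self-explanatory.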

    Read the article

  • Python C API from C++ app - know when to lock

    - by Alex
    Hi everyone, I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout) at once. The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes different ones. Obviously I tried to apply the approach for Python calls in multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock, or just global locks. According to the documentation here, when a thread is already locked a deadlock ensues. When my class is called from the main thread, or another one that blocks Python execution, I have a deadlock. Python calls Cfunc1() - a C++ function that creates threads internally which lead to calls into "my class". It gets stuck on PyEval_AcquireLock; obviously Python is already locked, i.e. waiting for the C++ Cfunc1 call to complete... It completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the thread is calling the functions in the background - not inside of a native call. I am looking for a workaround. I need to distinguish whether or not taking the global lock is allowed, i.e. whether Python is not locked and is ready to receive the next command... I tried PyGIL_Ensure; unfortunately I see a hang. Any known API or solution for this? (Python 2.4)

    Read the article

  • Javascript Rich Display WYSIWYG Component/Methodology

    - by Laramie
    Quick back story: I am working on an ASP.NET-based template editor that lets authors create text templates using JavaScript-inserted placeholder tags that will be filled in with dynamic text when the templates are used to display the final results. For example, the author might create a template like "The word [%12#add] was generated dynamically." The application would eventually replace the tag with a dynamic word down the road (though it's not specifically relevant to this post): "The word foo was generated dynamically." Depending on the circumstances, the template may be created in a text input, a textarea or a modified version of the Ajax Control Toolkit HTML Editor. There might be 40 or more of these editable elements on the page, so using lots of stripped-down or modified HTML editors would probably bog the page down too much. The problem is that tags such as [%12#add] are displayed inline with the user text, and the result is confusing and aesthetically gross. The goal is to parse the contents of the source element and, when tags such as [%12#add] are encountered, display something prettier and less cryptic to the user, such as a stylable element or image, wherever such tags occur. The application still needs the template text with the tags on postback. So the user might see "The word tag placeholder was generated dynamically." but the original template would still be the value of the text input box: "The word [%12#add] was generated dynamically." It seems HTML editors like the ACT version and FCKeditor accomplish this by rendering their output in an IFrame, but rather than kill myself trying to roll a lighter specialized version, I thought I'd ask if anyone knows of an existing free component or approach that has already tackled this. With good reason, I don't think S.O. allows HTML formatting, but the bold "tag placeholder" above would ideally be something like tag placeholder.

    Read the article

  • Simple continuously running XMPP client in python

    - by tom
    I'm using python-xmpp to send jabber messages. Everything works fine except that every time I want to send messages (every 15 minutes) I need to reconnect to the jabber server, and in the meantime the sending client is offline and cannot receive messages. So I want to write a really simple, indefinitely running XMPP client that is online the whole time and can send (and receive) messages when required. My trivial (non-working) approach:

        import time
        import xmpp

        class Jabber(object):
            def __init__(self):
                server = 'example.com'
                username = 'bot'
                passwd = 'password'
                self.client = xmpp.Client(server)
                self.client.connect(server=(server, 5222))
                self.client.auth(username, passwd, 'bot')
                self.client.sendInitPresence()
                self.sleep()

            def sleep(self):
                self.awake = False
                delay = 1
                while not self.awake:
                    time.sleep(delay)

            def wake(self):
                self.awake = True

            def auth(self, jid):
                self.client.getRoster().Authorize(jid)
                self.sleep()

            def send(self, jid, msg):
                message = xmpp.Message(jid, msg)
                message.setAttr('type', 'chat')
                self.client.send(message)
                self.sleep()

        if __name__ == '__main__':
            j = Jabber()
            time.sleep(3)
            j.wake()
            j.send('[email protected]', 'hello world')
            time.sleep(30)

    The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so how would I best go about that? EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.

    Read the article

  • Adding and removing elements efficiently from Collection object

    - by user569125
    Hi, below is a working sample, but I am still not happy with this code performance-wise. Please have a look and let me know if there is a better approach. Thanks in advance.

    Adding items to the ArrayList object:

        String resultItems[] = paging.getMoveLeftArray().split(",");
        String fields[] = {"id", "name", "name1"};
        leftObj = new ArrayList();
        for (int i = 0; i < resultItems.length; i++) {
            // below line mea
            TestVO bean = new TestVO();
            String resultItem = resultItems[i];
            String idANDname[] = resultItem.split("@");
            String id = idANDname[0];
            // name or id should not contain "-"
            String name[] = idANDname[1].split("-");
            // values and fields are always having same length
            for (int j = 0; j < name.length; j++) {
                PropertyUtils.setProperty(bean, fields[j], name[j]);
            }
            leftObj.add(bean);
        }

    Removing items from the ArrayList object (availableList contains all the TestVO objects):

        String[] removeArray = paging.getMoveRightArray().split(",");
        tempList = new CopyOnWriteArrayList();
        newTempList = new CopyOnWriteArrayList();
        for (int i = 0; i < availableList.size(); i++) {
            boolean flag = false;
            TestVO tempObj = (TestVO) availableList.get(i);
            int id = (Integer) tempObj.getId();
            // System.out.println("id value" + id);
            // availableList.get(i).getClass().getField(name);
            for (int j = 0; j < removeArray.length; j++) {
                String resultItem = removeArray[j];
                String idandname[] = resultItem.split("@");
                for (int k = 0; k < idandname.length; k++) {
                    String ids[] = idandname[0].split("-");
                    if (id == Integer.parseInt(ids[0])) {
                        flag = true;
                        break;
                    }
                }
            }
            if (flag) {
                tempList.add(tempObj);
            } else {
                newTempList.add(tempObj);
            }
        }

    Read the article

  • Risking the exception anti-pattern.. with some modifications

    - by Sridhar Iyer
    Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in place for events like this. One approach would be to write wrapper functions that encapsulate each API:

        returnCode = DEFAULT;
        try
        {
            returnCode = libraryAPI1();
        }
        catch(...)
        {
            returnCode = BAD;
        }
        return returnCode;

    The caller of the library then restarts the whole thread and reinitializes the module if the returnCode is bad. Things CAN go horribly wrong. E.g. if the try block (or libraryAPI1()) had:

        func1();
        char *x = malloc(1000);
        func2();

    and func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome. Could you please tell me what other things can possibly go wrong in this scenario?

    Read the article

  • ASP.Net MVC: Showing the same data using different layouts...

    - by vdh_ant
    Hi guys, I'm wanting to create a page that allows users to select how they would like to view their data - i.e. summary (which supports grouping), grid (which supports grouping), table (which supports grouping), map, timeline, XML, JSON, etc. Now each layout would probably use a different view model, which would inherit from a common base class/view model. The reason is that each layout needs the object structure that it deals with to be different (some need a hierarchical structure, others a flatter structure). Each layout would call the same repository method, and each layout would support the same functionality, i.e. searching and filtering (hence these controls would be shared between layouts). The main exception to this would be sorting, which only the grid and table views would need to support. Now my question is, given this, what do people think is the best approach? Using DisplayFor to handle the rendering of the different types? Also, how do I work this with the actions... I would imagine that I would use the one action and pass in the layout type, but then how does this support the grouping required for the summary, grid and table views? Do I treat each grouping as just a layout type? Also, how would this work from a URL point of view - what do people think is the right URL template to support this layout functionality? Cheers, Anthony
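    A rough sketch of the single-action idea floated above (illustrative; ItemRepository, the view models and the route values are all assumptions): the layout arrives as a parameter, the controller makes the one shared repository call, and then picks the view and view-model shape to match.

        // Hypothetical controller: /Items/List?layout=grid&groupBy=category
        using System.Web.Mvc;

        public class ItemsController : Controller
        {
            public ActionResult List(string layout, string groupBy)
            {
                layout = layout ?? "summary";
                var items = ItemRepository.Search(groupBy);   // same repository call for every layout

                switch (layout)
                {
                    case "grid":
                    case "table":
                        // flat, sortable shape; the view name matches the layout value
                        return View(layout, new TabularViewModel(items, groupBy));
                    case "json":
                        return Json(items, JsonRequestBehavior.AllowGet);
                    default:
                        // hierarchical shape for the summary layout
                        return View("summary", new SummaryViewModel(items, groupBy));
                }
            }
        }

    A route such as {controller}/{action}/{layout} keeps the chosen layout visible in the URL if query strings are not wanted.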

    Read the article

  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a library that I can re-use to add various methods involved in navigating/searching through a graph (nodes/relationships, or if you like, vertices/edges). The generic requirements would be: There are existing classes in the main project that already implement the equivalent of the graph class (which contains the lists of nodes/relationships), the node class and the relationship class (which links nodes together) - the main project likely already has persistence mechanisms for the info (e.g. these classes might be built using Entity Framework for persistence). Methods would need to be added to each of these 3 classes: (a) graph class - methods like "search all nodes", (b) node class - methods such as "find all children to depth i", (c) relationship class - methods like "return relationship type", "get parent node", "get child node". I assume there would be a need to inform the library of the class names used for the graph/node/relationship classes (as different projects might use different names). To some extent it would need to work like a generic collection does (where you pass the classes to the collection so it knows what they are). There would also need to be a way to inform the library of which node property to use for equality checks (e.g. if it were a graph of webpages, the equality field to use might be the URI path). I'm assuming that using abstract base classes wouldn't really work, as this would tie usage down to having to use the same persistence approach and the same class names, etc. Whereas really I want to be able to, for a project that has "graph-like" characteristics, add graph searching/walking methods to it.
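    One hedged sketch of how this is sometimes structured (illustrative, not from the linked article): the library defines small generic interfaces plus extension methods, and the host project's existing entity classes opt in by implementing them, including a key property for equality. All names below are invented.

        using System;
        using System.Collections.Generic;

        // Library-side contract: the consuming project's graph/node/relationship
        // classes (EF entities or otherwise) implement this to gain the extensions.
        public interface INode<TNode, TKey> where TNode : INode<TNode, TKey>
        {
            TKey Key { get; }                      // property used for equality checks (e.g. a URI path)
            IEnumerable<TNode> Children { get; }   // adapted from the relationship class
        }

        public static class GraphExtensions
        {
            // "find all children to depth i" from the requirements above.
            public static IEnumerable<TNode> Descendants<TNode, TKey>(this TNode node, int depth)
                where TNode : INode<TNode, TKey>
            {
                if (depth <= 0) yield break;
                foreach (var child in node.Children)
                {
                    yield return child;
                    foreach (var grandChild in child.Descendants<TNode, TKey>(depth - 1))
                        yield return grandChild;
                }
            }
        }

    A webpage node class would then implement INode<PageNode, string> with Key returning the URI path, and the search/walk methods come along without the library ever knowing the concrete class names up front.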

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.04, and the resources available are 8 GB RAM, 500 GB of free disk space and 4 CPUs at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.

    Read the article

  • On-Demand thumbnail creation with django and nginx

    - by sharjeel
    I want to generate thumbnails of images on the fly. My site is built with Django and deployed using nginx, which serves all the static content and communicates with Django/Apache through a reverse proxy. Right now, for every image on my site, I generate all required thumbnail sizes ahead of time and deliver them when required. The problem is that whenever I change the size of a thumbnail, I have to regenerate all of them (and there are tons). However, now I'd like to generate a thumbnail the first time it is accessed, and later on nginx would deliver the same file over and over. If I delete a thumbnail file because it is rarely accessed, it should get regenerated automatically the next time it is requested. Thumbnails in my case also have watermarks, which require some computation logic from my application, so a webserver thumbnail module might not work very well. The size of the thumbnail can be embedded in the URL, so http://www.example.com/thumbnail/abc_320x240.jpg gets the 320x240 version of the thumbnail. The approach I'm looking at right now is to let nginx look up the file and, if it doesn't exist, forward the request to my Django application, which would create the thumbnail and send either the response or a redirect. However, I'm not sure about the concurrency issues and any other issues which might pop up later. What is the appropriate way to achieve this?

    Read the article

  • Add additional content to the middle of string from within a method

    - by Sammy T
    I am working with a log file and I have a method which creates a generic entry in the log. The generic log entry looks like this:

        public StringBuilder GetLogMessage(LogEventType logType, object message)
        {
            StringBuilder logEntry = new StringBuilder();

            logEntry.AppendFormat("DATE={0} ", DateTime.Now.ToString("dd-MMM-yyyy", new CultureInfo(CommonConfig.EnglishCultureCode)));
            logEntry.AppendFormat("TIME={0} ", DateTime.Now.ToString("HH:mm:ss", new CultureInfo(CommonConfig.EnglishCultureCode)));
            logEntry.AppendFormat("ERRORNO={0} ", base.RemoteIPAddress.ToString().Replace(".", string.Empty));
            logEntry.AppendFormat("IP={0}", base.RemoteIPAddress.ToString());
            logEntry.AppendFormat("LANG={0} ", base.Culture.TwoLetterISOLanguageName);
            logEntry.AppendFormat("PNR={0} ", this.RecordLocator);
            logEntry.AppendFormat("AGENT={0} ", base.UserAgent);
            logEntry.AppendFormat("REF={0} ", base.Referrer);
            logEntry.AppendFormat("SID={0} ", base.CurrentContext.Session.SessionID);
            logEntry.AppendFormat("LOGTYPE={0} ", logType.ToString());
            logEntry.AppendFormat("MESSAGE={0} ", message);

            return logEntry;
        }

    What would be the best approach for adding additional parameters before "MESSAGE="? For example, if I wanted to add "MODULE=" from a derived class when GetLogMessage is being run. Would a delegate be what I am looking for, or should I mark the method as virtual and override it, or do I need something entirely different? Any help would be appreciated.
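    A brief sketch of the virtual-hook option mentioned above (purely illustrative; the class and member names are invented, and the entry is trimmed to the relevant fields): the base method calls an overridable AppendExtraFields immediately before MESSAGE, so a derived class can slot in MODULE= (or anything else) at exactly that point.

        using System.Text;

        public class LogWriter
        {
            public StringBuilder GetLogMessage(string logType, object message)
            {
                var logEntry = new StringBuilder();
                logEntry.AppendFormat("LOGTYPE={0} ", logType);

                AppendExtraFields(logEntry);                  // extension point before MESSAGE

                logEntry.AppendFormat("MESSAGE={0} ", message);
                return logEntry;
            }

            protected virtual void AppendExtraFields(StringBuilder logEntry)
            {
                // base class adds nothing extra
            }
        }

        public class ModuleAwareLogWriter : LogWriter
        {
            private readonly string _module;

            public ModuleAwareLogWriter(string module)
            {
                _module = module;
            }

            protected override void AppendExtraFields(StringBuilder logEntry)
            {
                logEntry.AppendFormat("MODULE={0} ", _module);
            }
        }

    A delegate or an IDictionary<string, string> of extra fields passed into GetLogMessage would achieve the same thing without subclassing.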

    Read the article
