Search Results

Search found 19878 results on 796 pages for 'multiple dispatch'.


  • Python File Search Line And Return Specific Number of Lines after Match

    - by Simos Anderson
    I have a text file that has lines representing some data sets. The file itself is fairly long but it contains certain sections of the following format:

        Series_Name INFO Number of teams : n1
        | Team       | #    | wins |
        | TeamName1  | x    | y    |
        .
        .
        .
        | TeamNamen1 | numn | numn |

        Some Irrelevant lines

        Series_Name2 INFO Number of teams : n1
        | Team       | #    | wins |
        | TeamName1  | num1 | num2 |
        .

    where each section has a header that begins with the Series_Name. Each Series_Name is different. The line with the header also includes the number of teams in that series, n1. Following the header line is a set of lines that represents a table of data. For each series there are n1+1 rows in the table, where each row shows an individual team name and associated stats.

    I have been trying to implement a function that will allow the user to search for a Team name and then print out the line in the table associated with that team. However, certain team names show up under multiple series. To resolve this, I am currently trying to write my code so that the user can search for the header line with the series name first and then print out just the following n1+1 lines that represent the data associated with the series. Here's what I have come up with so far:

        import re
        print
        fname = raw_input("Enter filename: ")
        seriesname = raw_input("Enter series: ")

        def findcounter(fname, seriesname):
            logfile = open(fname, "r")
            pat = 'INFO Number of teams :'
            for line in logfile:
                if seriesname in line:
                    if pat in line:
                        s = line
                        pattern = re.compile(r"""(?P<name>.*?)     #starting name
                            \s*INFO                                #whitespace and success
                            \s*Number\s*of\s*teams                 #whitespace and strings
                            \s*\:\s*(?P<n1>.*)""", re.VERBOSE)
                        match = pattern.match(s)
                        name = match.group("name")
                        n1 = int(match.group("n1"))
                        print name + " has " + str(n1) + " teams"
                        lcount = 0
                        for line in logfile:
                            if line.startswith(name):
                                if pat in line:
                                    while lcount <= n1:
                                        s.append(line)
                                        lcount += 1
            return result

    The first part of my code works; it matches the header line that the person searches for, parses the line, and then prints out how many teams are in that series. Since the header line basically tells me how many lines are in the table, I thought that I could use that information to construct a loop that would continue printing each line until a set counter reached n1. But I've tried running it, and I realize that the way I've set it up so far isn't correct. So here's my question: How do you return a number of lines after a matched line when given the number of desired lines that follow the match? I'm new to programming, and I apologize if this question seems silly. I have been working on this quite diligently with no luck and would appreciate any help.
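
    The following is a minimal sketch of the general pattern being asked about (reading a fixed count of lines that follow a matched header). The function name, the file name, and the regex used to pull out n1 are illustrative assumptions, not part of the original code:

        from itertools import islice
        import re

        def lines_after_match(path, seriesname, pat='INFO Number of teams :'):
            """Return the matched header line plus the n1 + 1 table lines after it."""
            with open(path) as logfile:
                for line in logfile:
                    if seriesname in line and pat in line:
                        # assumes the team count is the run of digits after the colon
                        n1 = int(re.search(r':\s*(\d+)', line).group(1))
                        # islice keeps pulling from the same file iterator,
                        # so it yields exactly the lines that follow the match
                        return [line.rstrip('\n')] + \
                               [l.rstrip('\n') for l in islice(logfile, n1 + 1)]
            return []

    Used as, for example, lines_after_match('teams.log', 'Series_Name'), where 'teams.log' stands in for whatever file name the user enters.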

    Read the article

  • If array is thread safe, what is the issue with this function?

    - by Ajay Sharma
    I am totally lost with the things that is happening with my code.It make me to think & get clear with Array's thread Safe concept. Is NSMutableArray OR NSMutableDictionary Thread Safe ? While my code is under execution, the values for the MainArray get's changes although, that has been added to Array. Please try to execute this code, onyour system its very much easy.I am not able to get out of this Trap. It is the function where it is returning Array. What I am Looking to do is : -(Array) (Main Array) --(Dictionary) with Key Value (Multiple Dictionary in Main Array) ----- Above dictionary has 9 Arrays in it. This is the structure I am developing for Array.But even before #define TILE_ROWS 3 #define TILE_COLUMNS 3 #define TILE_COUNT (TILE_ROWS * TILE_COLUMNS) -(NSArray *)FillDataInArray:(int)counter { NSMutableArray *temprecord = [[NSMutableArray alloc] init]; for(int i = 0; i <counter;i++) { if([temprecord count]<=TILE_COUNT) { NSMutableDictionary *d1 = [[NSMutableDictionary alloc]init]; [d1 setValue:[NSString stringWithFormat:@"%d/2011",i+1] forKey:@"serial_data"]; [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"]; [d1 setValue:@"Description Details " forKey:@"details_data"]; [d1 setValue:@"Subject Line" forKey:@"subject_data"]; [temprecord addObject:d1]; d1= nil; [d1 release]; if([temprecord count]==TILE_COUNT) { NSMutableDictionary *holderKey = [[NSMutableDictionary alloc]initWithObjectsAndKeys:temprecord,[NSString stringWithFormat:@"%d",[casesListArray count]+1],nil]; [self.casesListArray addObject:holderKey]; [holderKey release]; holderKey =nil; [temprecord removeAllObjects]; } } else { [temprecord removeAllObjects]; NSMutableDictionary *d1 = [[NSMutableDictionary alloc]init]; [d1 setValue:[NSString stringWithFormat:@"%d/2011",i+1] forKey:@"serial_data"]; [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"]; [d1 setValue:@"Description Details " forKey:@"details_data"]; [d1 setValue:@"Subject Line" forKey:@"subject_data"]; [temprecord addObject:d1]; d1= nil; [d1 release]; } } return temprecord; [temprecord release]; } What is the problem with this Code ? Every time there are 9 records in Array, it just replaces the whole Array value instead of just for specific key Value.

    Read the article

  • jQuery bind efficiency

    - by chelfers
    I'm having issue with load speed using multiple jQuery binds on a couple thousands elements and inputs, is there a more efficient way of doing this? The site has the ability to switch between product lists via ajax calls, the page cannot refresh. Some lists have 10 items, some 100, some over 2000. The issue of speed arises when I start flipping between the lists; each time the 2000+ item list is loaded the system drags for about 10 seconds. Before I rebuild the list I am setting the target element's html to '', and unbinding the two bindings below. I'm sure it has something to do with all the parent, next, and child calls I am doing in the callbacks. Any help is much appreciated. loop 2500 times <ul> <li><input type="text" class="product-code" /></li> <li>PROD-CODE</li> ... <li>PRICE</li> </ul> end loop $('li.product-code').bind( 'click', function(event){ selector = '#p-'+ $(this).prev('li').children('input').attr('lm'); $(selector).val( ( $(selector).val() == '' ? 1 : ( parseFloat( $(selector).val() ) + 1 ) ) ); Remote.Cart.lastProduct = selector; Remote.Cart.Products.Push( Remote.Cart.customerKey, { code : $(this).prev('li').children('input').attr('code'), title : $(this).next('li').html(), quantity : $('#p-'+ $(this).prev('li').children('input').attr('lm') ).val(), price : $(this).prev('li').children('input').attr('price'), weight : $(this).prev('li').children('input').attr('weight'), taxable : $(this).prev('li').children('input').attr('taxable'), productId : $(this).prev('li').children('input').attr('productId'), links : $(this).prev('li').children('input').attr('productLinks') }, '#p-'+ $(this).prev('li').children('input').attr('lm'), false, ( parseFloat($(selector).val()) - 1 ) ); return false; }); $('input.product-qty').bind( 'keyup', function(){ Remote.Cart.lastProduct = '#p-'+ $(this).attr('lm'); Remote.Cart.Products.Push( Remote.Cart.customerKey, { code : $(this).attr('code') , title : $(this).parent().next('li').next('li').html(), quantity : $(this).val(), price : $(this).attr('price'), weight : $(this).attr('weight'), taxable : $(this).attr('taxable'), productId : $(this).attr('productId'), links : $(this).attr('productLinks') }, '#p-'+ $(this).attr('lm'), false, previousValue ); });

    Read the article

  • Classes to Entities; Like-class inheritance problems

    - by Stacey
    Beyond work, some friends and I are trying to build a game of sorts; The way we structure some of it works pretty well for a normal object oriented approach, but as most developers will attest this does not always translate itself well into a database persistent approach. This is not the absolute layout of what we have, it is just a sample model given for sake of representation. The whole project is being done in C# 4.0, and we have every intention of using Entity Framework 4.0 (unless Fluent nHibernate can really offer us something we outright cannot do in EF). One of the problems we keep running across is inheriting things in database models. Using the Entity Framework designer, I can draw the same code I have below; but I'm sure it is pretty obvious that it doesn't work like it is expected to. To clarify a little bit; 'Items' have bonuses, which can be of anything. Therefore, every part of the game must derive from something similar so that no matter what is 'changed' it is all at a basic enough level to be hooked into. Sounds fairly simple and straightforward, right? So then, we inherit everything that pertains to the game from 'Unit'. Weights, Measures, Random (think like dice, maybe?), and there will be other such entities. Some of them are similar, but in code they will each react differently. We're having a really big problem with abstracting this kind of thing into a database model. Without 'Enum' support, it is proving difficult to translate into multiple tables that still share a common listing. One solution we've depicted is to use a 'key ring' type approach, where everything that attaches to a character is stored on a 'Ring' with a 'Key', where each Key has a Value that represents a type. This works functionally but we've discovered it becomes very sluggish and performs poorly. We also dislike this approach because it begins to feel as if everything is 'dumped' into one class; which makes management and logical structure difficult to adhere to. I was hoping someone else might have some ideas on what I could do with this problem. It's really driving me up the wall; To summarize; the goal is to build a type (Unit) that can be used as a base type (Table per Type) for generic reference across a relatively global scope, without having to dump everything into a single collection. I can use an Interface to determine actual behavior so that isn't too big of an issue. This is 'roughly' the same idea expressed in the Entity Framework.

    Read the article

  • Reliable session faulting for unknown reason

    - by Scarfman007
    I am trying to achieve the following - one client-side proxy instance (kept open) accessed by multiple threads using a reliable session. What I have managed so far is to have either A) a reliable session with a client-side proxy which is created and disposed per call or B) what I aim for, but without a reliable session. When I enable reliable sessions on my binding however, the following behaviour is exhibited: Client-side Upon application startup everything appears to work fine until roughly 18 messages in to the WCF session. I firstly get the proxy.InnerChannel.Faulted event raised, then an exception is caught at the point where I am calling the method on the proxy. The exception is a System.TimeoutException, with message: "The request channel timed out while waiting for a reply after 00:00:59.9062512. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout." The inner exception has a similar message: "The request operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout." With the method at the top of the inner stack trace being: System.ServiceModel.Channels.ReliableRequestSessionChannel.SyncRequest.WaitForReply(TimeSpan timeout) I then call proxy.Close followed by proxy.Abort (catching and ignoring exceptions). If I utilize the default settings (i.e. have simply <reliableSession/>), then calling proxy. Close results in another System.Timeout exception (although this time the allotted timeout is 00:00:00), however if I override the defaults as specified above no exception is thrown. Service-side Utilizing WCF tracing I get a System.ServiceModel.CommunicationException, with message: "The sequence has been terminated by the remote endpoint. The session has stopped waiting for a particular reply. Because of this the reliable session cannot continue. The reliable session was faulted." And a stack trace ending at: System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result) When remotely attaching to the server I get the same message, which occurs when code execution steps over the return statement of my service in the service call which causes the error. The puzzling thing to me is that the service is stable and runs with options A) or B) as decribed at the beginning of my post, and occurs after a varying number of messages (around 18). The former fact points to there being nothing wrong with the code (indeed I have checked that no exceptions are thrown), and the latter just serves to confuse me and is why I modified the settings on the reliable session binding. I am quite stuck on this. Can anyone suggest why the reliable session would fault in such a way?

    Read the article

  • Using MySQL to generate daily sales reports with filled gaps, grouped by currency

    - by Shane O'Grady
    I'm trying to create what I think is a relatively basic report for an online store, using MySQL 5.1.45. The store can receive payment in multiple currencies. I have created some sample tables with data and am trying to generate a straightforward tabular result set grouped by date and currency so that I can graph these figures. I want to see each currency that is available per date, with a 0 in the result if there were no sales in that currency for that day. If I can get that to work I want to do the same but also grouped by product id. In the sample data I have provided there are only 3 currencies and 2 product ids, but in practice there can be any number of each. I can correctly group by date, but then when I add a grouping by currency my query does not return what I want. I based my work off this article.

    My reporting query, grouped only by date:

        SELECT calendar.datefield AS date,
               IFNULL(SUM(orders.order_value),0) AS total_value
        FROM orders
        RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield)
        WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders)
                                      AND (SELECT MAX(DATE(order_date)) FROM orders))
        GROUP BY date

    Now grouped by date and currency:

        SELECT calendar.datefield AS date,
               orders.currency_id,
               IFNULL(SUM(orders.order_value),0) AS total_value
        FROM orders
        RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield)
        WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders)
                                      AND (SELECT MAX(DATE(order_date)) FROM orders))
        GROUP BY date, orders.currency_id

    The results I am getting (grouped by date and currency):

        +------------+-------------+-------------+
        | date       | currency_id | total_value |
        +------------+-------------+-------------+
        | 2009-08-15 |           3 |       81.94 |
        | 2009-08-15 |          45 |       25.00 |
        | 2009-08-15 |          49 |      122.60 |
        | 2009-08-16 |        NULL |        0.00 |
        | 2009-08-17 |          45 |       25.00 |
        | 2009-08-17 |          49 |      122.60 |
        | 2009-08-18 |           3 |       81.94 |
        | 2009-08-18 |          49 |      245.20 |
        +------------+-------------+-------------+

    The results I want:

        +------------+-------------+-------------+
        | date       | currency_id | total_value |
        +------------+-------------+-------------+
        | 2009-08-15 |           3 |       81.94 |
        | 2009-08-15 |          45 |       25.00 |
        | 2009-08-15 |          49 |      122.60 |
        | 2009-08-16 |           3 |        0.00 |
        | 2009-08-16 |          45 |        0.00 |
        | 2009-08-16 |          49 |        0.00 |
        | 2009-08-17 |           3 |        0.00 |
        | 2009-08-17 |          45 |       25.00 |
        | 2009-08-17 |          49 |      122.60 |
        | 2009-08-18 |           3 |       81.94 |
        | 2009-08-18 |          45 |        0.00 |
        | 2009-08-18 |          49 |      245.20 |
        +------------+-------------+-------------+

    The schema and data I am using in my tests:

        CREATE TABLE orders (
          id INT PRIMARY KEY AUTO_INCREMENT,
          order_date DATETIME,
          order_id INT,
          product_id INT,
          currency_id INT,
          order_value DECIMAL(9,2),
          customer_id INT
        );

        INSERT INTO orders (order_date, order_id, product_id, currency_id, order_value, customer_id)
        VALUES ('2009-08-15 10:20:20', '123', '1', '45', '12.50', '322'),
               ('2009-08-15 12:30:20', '124', '1', '49', '122.60', '400'),
               ('2009-08-15 13:41:20', '125', '1', '3', '40.97', '324'),
               ('2009-08-15 10:20:20', '126', '2', '45', '12.50', '345'),
               ('2009-08-15 13:41:20', '131', '2', '3', '40.97', '756'),
               ('2009-08-17 10:20:20', '3234', '1', '45', '12.50', '1322'),
               ('2009-08-17 10:20:20', '4642', '2', '45', '12.50', '1345'),
               ('2009-08-17 12:30:20', '23', '2', '49', '122.60', '3142'),
               ('2009-08-18 12:30:20', '2131', '1', '49', '122.60', '4700'),
               ('2009-08-18 13:41:20', '4568', '1', '3', '40.97', '3274'),
               ('2009-08-18 12:30:20', '956', '2', '49', '122.60', '3542'),
               ('2009-08-18 13:41:20', '443', '2', '3', '40.97', '7556');

        CREATE TABLE currency (
          id INT PRIMARY KEY,
          name VARCHAR(255)
        );

        INSERT INTO currency (id, name) VALUES (3, 'Euro'), (45, 'US Dollar'), (49, 'CA Dollar');

        CREATE TABLE calendar (datefield DATE);

        DELIMITER |
        CREATE PROCEDURE fill_calendar(start_date DATE, end_date DATE)
        BEGIN
          DECLARE crt_date DATE;
          SET crt_date = start_date;
          WHILE crt_date < end_date DO
            INSERT INTO calendar VALUES(crt_date);
            SET crt_date = ADDDATE(crt_date, INTERVAL 1 DAY);
          END WHILE;
        END |
        DELIMITER ;

        CALL fill_calendar('2008-01-01', '2011-12-31');
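
    The missing rows come from grouping only the date/currency pairs that actually have orders; the usual fix is to build every calendar date and currency combination first (a CROSS JOIN of the calendar and currency tables) and LEFT JOIN the orders onto that. As a language-neutral illustration of the same idea, here is a small Python sketch; the hard-coded days, currencies, and orders are placeholders for the tables above:

        from collections import defaultdict
        from datetime import date, timedelta
        from decimal import Decimal
        from itertools import product

        days = [date(2009, 8, 15) + timedelta(n) for n in range(4)]   # calendar
        currencies = [3, 45, 49]                                      # currency ids
        orders = [(date(2009, 8, 15), 3, Decimal('81.94')),
                  (date(2009, 8, 15), 45, Decimal('25.00'))]          # ... and so on

        totals = defaultdict(Decimal)
        for order_date, currency_id, value in orders:
            totals[(order_date, currency_id)] += value

        # The cross product guarantees one output row per date/currency pair,
        # so dates or currencies with no sales still appear with 0.00.
        for d, c in product(days, currencies):
            print(d, c, totals[(d, c)])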

    Read the article

  • Generate MetaData with ANT

    - by Neil Foley
    I have a folder structure that contains multiple javascript files, each of these files need a standard piece of text at the top = //@include "includes.js" Each folder needs to contain a file named includes.js that has an include entry for each file in its directory and and entry for the include file in its parent directory. I'm trying to achive this using ant and its not going too well. So far I have the following, which does the job of inserting the header but not without actually moving or copying the file. I have heard people mentioning the <replace> task to do this but am a bit stumped. <?xml version="1.0" encoding="UTF-8"?> <project name="JavaContentAssist" default="start" basedir="."> <taskdef resource="net/sf/antcontrib/antcontrib.properties"> <classpath> <pathelement location="C:/dr_workspaces/Maven Repository/.m2/repository/ant-contrib/ant-contrib/20020829/ant-contrib-20020829.jar"/> </classpath> </taskdef> <target name="start"> <foreach target="strip" param="file"> <fileset dir="${basedir}"> <include name="**/*.js"/> <exclude name="**/includes.js"/> </fileset> </foreach> </target> <target name="strip"> <move file="${file}" tofile="${a_location}" overwrite="true"> <filterchain> <striplinecomments> <comment value="//@" /> </striplinecomments> <concatfilter prepend="${basedir}/header.txt"> </concatfilter> </filterchain> </move> </target> </project> As for the generation of the include files in the dir I'm not sure where to start at all. I'd appreciate if somebody could point me in the right direction.

    Read the article

  • Separating positive values from 'zero' values in an XSLT for-each statement

    - by danielle
    I am traversing an XML file (that contains multiple tables) using XSLT. Part of the job of the page is to get the title of each table, and present that title with along with the number of items that table contains (i.e. "Problems (5)"). I am able to get the number of items, but I now need to separate the sections with 0 (zero) items in them, and put them at the bottom of the list of table titles. I'm having trouble with this because the other items with positive numbers need to be left in their original order/not sorted. Here is the code for the list of titles: <ul> <xsl:for-each select="n1:component/n1:structuredBody/n1:component/n1:section/n1:title"> <li style="list-style-type:none;"> <div style = "padding:3px"><a href="#{generate-id(.)}"> <xsl:variable name ="count" select ="count(../n1:entry)"/> <xsl:choose> <xsl:when test = "$count != 0"> <xsl:value-of select="."/> (<xsl:value-of select="$count"/>) </xsl:when> <xsl:otherwise> <div id = "zero"><xsl:value-of select="."/> (<xsl:value-of select="$count"/>)</div> </xsl:otherwise> </xsl:choose> </a> </div> </li> </xsl:for-each> </ul> Right now, the "zero" div just marks each link as gray. Any help regarding how to place the "zero" divs at the bottom of the list would be greatly appreciated. Thank you!

    Read the article

  • HTML5 <audio> Safari live broadcast vs not

    - by Peter Parente
    I'm attempting to embed an HTML5 audio element pointing to MP3 or OGG data served by a PHP file . When I view the page in Safari, the controls appear, but the UI says "Live Broadcast." When I click play, the audio starts as expected. Once it ends, however, I can't start it playing again by clicking play. Even using the JS API on the audio element and setting currentTime to 0 fails with an index error exception. I suspected the headers from the PHP script were the problem, particularly missing a content length. But that's not the case. The response headers include a proper Content- Length to indicate the audio has finite size. Furthermore, everything works as expected in Firefox 3.5+. I can click play on the audio element multiple times to hear the sound replay. If I remove the PHP script from the equation and serve up a static copy of the MP3 file, everything works fine in Safari. Does this mean Safari is treating audio src URLs with query parameters differently than URLs that don't have them? Anyone have any luck getting this to work? My simple example page is: <!DOCTYPE html> <html> <head></head> <body> <audio controls autobuffer> <source src="say.php?text=this%20is%20a%20test&format=.ogg" /> <source src="say.php?text=this%20is%20a%20test&format=.mp3" /> </audio> </body> </html> HTTP Headers from PHP script: HTTP/1.x 200 OK Date: Sun, 03 Jan 2010 15:39:34 GMT Server: Apache X-Powered-By: PHP/5.2.10 Content-Length: 8993 Keep-Alive: timeout=2, max=98 Connection: Keep-Alive Content-Type: audio/mpeg HTTP Headers from direct file access: HTTP/1.x 200 OK Date: Sun, 03 Jan 2010 20:06:59 GMT Server: Apache Last-Modified: Sun, 03 Jan 2010 03:20:02 GMT Etag: "a404b-c3f-47c3a14937c80" Accept-Ranges: bytes Content-Length: 8993 Keep-Alive: timeout=2, max=100 Connection: Keep-Alive Content-Type: audio/mpeg I tried hard-coding the Accept-Ranges header into the script too, but no luck.

    Read the article

  • How is "clean" testing done on the Macintosh without virtualization?

    - by Schnapple
    One of the things I've run across on Windows is when a web browser plugin or program you're developing makes an assumption that something is installed that, by default, isn't always present on Windows. A perfect example would be .NET - a whole lot of people running Windows XP have never installed any versions of .NET and so the installer needs to detect and remedy this if necessary. The way I've been testing this in Windows is to have a virtual machine with a snapshot of a clean, patched, but otherwise untouched install of XP or Vista or 7 or whatever. When I'm done testing I just discard any changes since the snapshot. Works great. I'm now developing something for the Macintosh, a platform which is very new to me, and I'm seeing that virtualization does not appear to be an option. It's explicitly forbidden in the EULA of Mac OS X, it's only allowed from Mac OS X Server, which seeing as how I'm targeting an end product is of no use to me, and the one program I see which can virtualize it - VirtualBox - only supports the server and actively nukes any discussion of running the consumer/client version of Mac OS X. And the only instructions I find anywhere on the topic seem to involve the use of "hacking" programs which is very much incompatible with the full-time gig I'm trying to do this for. So it looks like virtualization is out, but at various points I'm going to want or need to simulate what it's like to install and run this software on a "clean" Macintosh. How do people usually do this? Just buy multiple Macintoshes and use Time Machine? Am I thinking about this all wrong and everything Just Works? To be clear I'm not trying to run Mac OS X on a Windows machine. I have a Macintosh, I'm fine with virtualizing Mac OS X on Apple hardware, I'm just not seeing a route to making the non-Server version do this. I'm aware that Mac OS X Server can be virtualized but that's not what I'm going for. I'm aware that there are unsanctioned/unsupported methods of making Mac OS X run in virtualization programs like VirtualBox but for legal reasons I am not interested in those. My question is not "how can I do this?" but rather "so this thing I do on Windows seems to not be possible, generally, on the Macintosh, so what do people do to achieve what I'm going for?"

    Read the article

  • Reducer getting fewer records than expected

    - by sathishs
    We have a scenario of generating unique key for every single row in a file. we have a timestamp column but the are multiple rows available for a same timestamp in few scenarios. We decided unique values to be timestamp appended with their respective count as mentioned in the below program. Mapper will just emit the timestamp as key and the entire row as its value, and in reducer the key is generated. Problem is Map outputs about 236 rows, of which only 230 records are fed as an input for reducer which outputs the same 230 records. public class UniqueKeyGenerator extends Configured implements Tool { private static final String SEPERATOR = "\t"; private static final int TIME_INDEX = 10; private static final String COUNT_FORMAT_DIGITS = "%010d"; public static class Map extends Mapper<LongWritable, Text, Text, Text> { @Override protected void map(LongWritable key, Text row, Context context) throws IOException, InterruptedException { String input = row.toString(); String[] vals = input.split(SEPERATOR); if (vals != null && vals.length >= TIME_INDEX) { context.write(new Text(vals[TIME_INDEX - 1]), row); } } } public static class Reduce extends Reducer<Text, Text, NullWritable, Text> { @Override protected void reduce(Text eventTimeKey, Iterable<Text> timeGroupedRows, Context context) throws IOException, InterruptedException { int cnt = 1; final String eventTime = eventTimeKey.toString(); for (Text val : timeGroupedRows) { final String res = SEPERATOR.concat(getDate( Long.valueOf(eventTime)).concat( String.format(COUNT_FORMAT_DIGITS, cnt))); val.append(res.getBytes(), 0, res.length()); cnt++; context.write(NullWritable.get(), val); } } } public static String getDate(long time) { SimpleDateFormat utcSdf = new SimpleDateFormat("yyyyMMddhhmmss"); utcSdf.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles")); return utcSdf.format(new Date(time)); } public int run(String[] args) throws Exception { conf(args); return 0; } public static void main(String[] args) throws Exception { conf(args); } private static void conf(String[] args) throws IOException, InterruptedException, ClassNotFoundException { Configuration conf = new Configuration(); Job job = new Job(conf, "uniquekeygen"); job.setJarByClass(UniqueKeyGenerator.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(Text.class); job.setMapperClass(Map.class); job.setReducerClass(Reduce.class); job.setInputFormatClass(TextInputFormat.class); job.setOutputFormatClass(TextOutputFormat.class); // job.setNumReduceTasks(400); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); job.waitForCompletion(true); } } It is consistent for higher no of lines and the difference is as huge as 208969 records for an input of 20855982 lines. what might be the reason for reduced inputs to reducer?

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user-base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be. For example, the site has a large social component to it, and a public sales interface. But at the same time, there's back office tasks, bulk upload processing, dashboards (with long running queries), and customer relations tools in the admin section that I would like to not be effected by spikes in public traffic (or effect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating: 1) Create a slave database of the public facing database on another machine. Extract out all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure that it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application, rather than having multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network. 2) Create new more task/department-specific applications and use a message oriented middleware to integrate them. I read Enterprise Integration Patterns awhile back and they seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But, I have nightmares about data synchronization issues and the massive re-architecting that this would entail. 3) Some mixture of the two. For example, the only public information necessary for some of the back office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to public? Meanwhile, the user/group admin functionality would be run on a separate system sharing the database? The downside is, this seems to keep many of the concerns I have with the first two, especially the re-architecting. I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • Microsoft Word Document Controls not accepting carriage returns

    - by Scott
    So, I have a Microsoft Word 2007 Document with several Plain Text Format (I have tried Rich Text Format as well) controls which accept input via XML. For carriage returns, I had the string being passed through XML containing "\r\n" when I wanted a carriage return, but the word document ignored that and just kept wrapping things on the same line. I also tried replacing the \r\n with System.Environment.NewLine in my C# mapper, but that just put in \r\n anyway, which still didn't work. Note also that on the control itself I have set it to "Allow Carriage Returns (Multiple Paragrpahs)" in the control properties. This is the XML for the listMapper <Field id="32" name="32" fieldType="SimpleText"> <DataSelector path="/Data/DB/DebtProduct"> <InputField fieldType="" path="/Data/DB/Client/strClientFirm" link="" type=""/> <InputField fieldType="" path="strClientRefDebt" link="" type=""/> </DataSelector> <DataMapper formatString="{0} Account Number: {1}" name="SimpleListMapper" type=""> <MapperData> </MapperData> </DataMapper> </Field> Note that this is the listMapper C# where I actually map the list (notice where I try and append the system.environment.newline) namespace DocEngine.Core.DataMappers { public class CSimpleListMapper:CBaseDataMapper { public override void Fill(DocEngine.Core.Interfaces.Document.IControl control, CDataSelector dataSelector) { if (control != null && dataSelector != null) { ISimpleTextControl textControl = (ISimpleTextControl)control; IContent content = textControl.CreateContent(); CInputFieldCollection fileds = dataSelector.Read(Context); StringBuilder builder = new StringBuilder(); if (fileds != null) { foreach (List<string> lst in fileds) { if (CanMap(lst) == false) continue; if (builder.Length > 0 && lst[0].Length > 0) builder.Append(Environment.NewLine); if (string.IsNullOrEmpty(FormatString)) builder.Append(lst[0]); else builder.Append(string.Format(FormatString, lst.ToArray())); } content.Value = builder.ToString(); textControl.Content = content; applyRules(control, null); } } } } } Does anybody have any clue at all how I can get MS Word 2007 (docx) to quit ignoring my newline characters??

    Read the article

  • How to select all parent objects into DataContext using single LINQ query ?

    - by too
    I am looking for an answer to a specific problem: fetching a whole LINQ object hierarchy using a single SELECT. At first I was trying to fill as many LINQ objects as possible using LoadOptions, but AFAIK this method allows only a single table to be linked in one query using LoadWith. So I have invented a solution that forcibly sets all parent objects of the entity whose list is to be fetched, although there is a problem of multiple SELECTs going to the database - a single query results in two SELECTs with the same parameters in the same LINQ context. For this question I have simplified the query to the popular invoice example:

        public static class Extensions
        {
            public static IEnumerable<T> ForEach<T>(this IEnumerable<T> collection, Action<T> func)
            {
                foreach(var c in collection)
                {
                    func(c);
                }
                return collection;
            }
        }

        public IEnumerable<Entry> GetResults(AppDataContext context, int CustomerId)
        {
            return (
                from entry in context.Entries
                join invoice in context.Invoices on entry.EntryInvoiceId equals invoice.InvoiceId
                join period in context.Periods on invoice.InvoicePeriodId equals period.PeriodId
                // LEFT OUTER JOIN, store is not mandatory
                join store in context.Stores on entry.EntryStoreId equals store.StoreId into condStore
                from store in condStore.DefaultIfEmpty()
                where (invoice.InvoiceCustomerId == CustomerId)
                orderby entry.EntryPrice descending
                select new { Entry = entry, Invoice = invoice, Period = period, Store = store }
            ).ForEach(x =>
            {
                x.Entry.Invoice = x.Invoice;
                x.Invoice.Period = x.Period;
                x.Entry.Store = x.Store;
            }).Select(x => x.Entry);
        }

    When calling this function and traversing through the result set, for example:

        var entries = GetResults(this.Context);
        int withoutStore = 0;
        foreach(var k in entries)
        {
            if(k.EntryStoreId == null)
                withoutStore++;
        }

    the resulting query to the database looks like this (a single result is fetched):

        SELECT [t0].[EntryId], [t0].[EntryInvoiceId], [t0].[EntryStoreId], [t0].[EntryProductId],
               [t0].[EntryQuantity], [t0].[EntryPrice], [t1].[InvoiceId], [t1].[InvoiceCustomerId],
               [t1].[InvoiceDate], [t1].[InvoicePeriodId], [t2].[PeriodId], [t2].[PeriodName],
               [t2].[PeriodDateFrom], [t4].[StoreId], [t4].[StoreName]
        FROM [Entry] AS [t0]
        INNER JOIN [Invoice] AS [t1] ON [t0].[EntryInvoiceId] = [t1].[InvoiceId]
        INNER JOIN [Period] AS [t2] ON [t2].[PeriodId] = [t1].[InvoicePeriodId]
        LEFT OUTER JOIN (
            SELECT 1 AS [test], [t3].[StoreId], [t3].[StoreName]
            FROM [Store] AS [t3]
        ) AS [t4] ON [t4].[StoreId] = ([t0].[EntryStoreId])
        WHERE (([t1].[InvoiceCustomerId]) = @p0)
        ORDER BY [t0].[InvoicePrice] DESC
        -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [186]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1

    followed immediately by a second SELECT that is identical down to the column list, joins, WHERE clause, and @p0 value. The question is why there are two queries and how can I fetch the LINQ objects without such hacks?
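
    The likely culprit (an inference, not something stated in the excerpt) is deferred execution: ForEach enumerates the query once to set the parent references, then returns the still-deferred query, and the caller's foreach enumerates it again, so the same SELECT is issued twice; materializing the rows once (for example with ToList) would avoid the second round trip. A rough Python analogy of the effect, with a re-iterable object standing in for the deferred query:

        class Query:
            """Stands in for a deferred LINQ query: every enumeration re-runs it."""
            def __iter__(self):
                print("SELECT issued")        # marks a round trip to the database
                return iter([1, 2, 3])

        def for_each(iterable, func):
            for item in iterable:             # first enumeration -> first SELECT
                func(item)
            return iterable                   # hands back the query itself, not the fetched rows

        results = for_each(Query(), lambda row: None)
        for row in results:                   # second enumeration -> second, identical SELECT
            pass

        # Materializing once, e.g. rows = list(Query()), issues the SELECT a single time.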

    Read the article

  • How do I replace values within a data frame with a string in R?

    - by Arturito
    short version: How do I replace values within a data frame with a string found within another data frame? longer version: I'm a biologist working with many species of bees. I have a data set with many thousands of bees. Each row has a unique bee ID # along with all the relevant info about that specimen (data of capture, GPS location, etc). The species information for each bee has not been entered because it takes a long time to ID them. When IDing, I end up with boxes of hundred of bees, all of the same species. I enter these into a separate data frame. I am trying to write code that will update the original data file with species information (family, genus, species, sex, etc) as I ID the bees. Currently, in the original data file, the species info is blank and is interpreted as NA within R. I want to have R find all unique bee ID #'s and fill in the species info, but I am having trouble figuring out how to replace the NA values with a string (e.g. "Andrenidae") Here is a simple example of what I am trying to do: rawData<-data.frame(beeID=c(1:20),family=rep(NA,20)) speciesInfo<-data.frame(beeID=seq(1,20,3),family=rep("Andrenidae",7)) rawData[rawData$beeID == 4,"family"] <- speciesInfo[speciesInfo$beeID == 4,"family"] So, I am replacing things as I want, but with a number rather than the family name (a string). What I would eventually like to do is write a little loop to add in all the species info, e.g.: for (i in speciesInfo$beeID){ rawData[rawData$beeID == i,"family"] <- speciesInfo[speciesInfo$beeID == i,"family"] } Thanks in advance for any advice! Cheers, Zak EDIT: I just noticed that the first two methods below add a new column each time, which would cause problems if I needed to add species info multiple times (which I typically do). For example: rawData<-data.frame(beeID=c(1:20),family=rep(NA,20)) Andrenidae<-data.frame(beeID=seq(1,20,3),family=rep("Andrenidae",7)) Halictidae<-data.frame(beeID=seq(1,20,3)+1,family=rep("Halictidae",7)) # using join library(plyr) rawData <- join(rawData, Andrenidae, by = "beeID", type = "left") rawData <- join(rawData, Halictidae, by = "beeID", type = "left") # using merge rawData <- merge(x=rawData,y=Andrenidae,by='beeID',all.x=T,all.y=F) rawData <- merge(x=rawData,y=Halictidae,by='beeID',all.x=T,all.y=F) Is there a way to either collapse the columns so that I have one, unified data frame? Or a way to update the rawData rather than adding a new column each time? Thanks in advance!

    Read the article

  • How to remove the explicit dependencies to other projects' libraries in Eclipse launch configuration

    - by euluis
    In Eclipse it is possible to create launch configurations in a project, specifying the runtime dependencies from another project. A problem I found was that if you have a multiple project workspace, being possible that each project has its own libraries, it is easy to add explicit dependencies in a secondary project to libraries that are of another project and therefore subject to change. An example of this problem follows: proj1 +-- src +-- lib +-- jar1-v1.0.jar +-- jar2-v1.0.jar proj2 +-- src +-- proj2-tests.launch I don't have a dependency from the code in proj2/src to the libraries in proj1/lib. Nevertheless, I do have a dependency from proj2/src to proj1/src, although since there is an internal dependency in the code in proj1/src to its libraries jar1-v1.0.jar and jar2.v1.0.jar, I have to add a dependency in proj2-tests.launch to the libraries in proj1/lib. This translates to the following ugly lines in proj2-tests.launch: <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?> <runtimeClasspathEntry path="3" projectName="proj1" type="1"/> "/> <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?> <runtimeClasspathEntry internalArchive="/proj1/lib/jar1-v1.0.jar" path="3" type="2"/> "/> <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?> <runtimeClasspathEntry internalArchive="/proj1/lib/jar2-v1.0.jar" path="3" type="2"/> "/> This wouldn't be a big problem if there wasn't the need from time to time to evolve the software, upgrade the libraries and etc. Consider the common need to upgrade the libraries jar1-v1.0.jar and jar2-v1.0.jar to their versions v1.1. Consider that you have about 10 projects in one workspace, having about 5 libraries each and about 4 launch configurations. You get a maintenance overhead in doing a simple upgrade of a library, which normally must imply changes in files for which there wasn't the need for. Or maybe I'm doing something wrong... What I would like to state is proj2 depends on proj1 and on its libraries and having this translated to simply that in the *.launch files. Is that possible?

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows: item_names: id | name | description | picture | common(BOOL) items: id | item_name_id | picture | price | description | picture item_synonyms: id | item_name_id | name | error(BOOL) Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza") I think the bright side of this schema is: Optimized searching & Handling Synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10") Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson") The down side would be: Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both. Weird Item Names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this: items: id | name | picture | price | description | picture (... with item_names and item_synonyms as utility tables that I could query) Is there a better schema you would suggested? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a stl::pair<string,string> to act as a key value pair structure, and have a stl::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have a consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use List<List<KeyValuePair<String,String>>, and the another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key value pairs, with a foreign key that maps the key value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data), then a background thread will dump the information to the database. My specific questions regarding this are: Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice? Is there a more .NETish way to implement the key value pairs, other than List<List<KeyValuePair<String,String>>>? Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing? Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low level information. Perhaps there should be a DB with a fixed schema only for the low level stuff, and then another DB that's more flexible for reporting information back to users.

    Read the article

  • difference between calling javascript function on body load or directly from script.

    - by Abbas
    i am using a javascript where in i am creating multiple div (say 5) at runtime, using javascript function, all the divs contain some text, which is again set at runtime, now i want to disable all the divs at runtime and have the page numbers in the bottom, so that whenever user clicks on the page number only that div should get visible else other should get disable, i have created a function, which accepts parameter, as page number, i enable the div whose page number is clicked and using a for loop, i disable all the other divs, now here my problem is i have created two functions, 1st (for adding divs and disabling all the divs except 1st) and writing content to it, and other for enabling the div whose page number is clicked, and i have called the Adding div function on body onload; now first time when i run, page everthing goes well, but next time when i click on any of the page number, it just gets enabled and again that AddDiv function, runs and re-enables all the divs.. Please reply why this is happening and how should i resolve my issue... Below is my script, content for the div are coming using Json. <body onload="JsonScript();"> <script language="javascript" type="text/javascript"> function JsonScript() { var existingDiv = document.getElementById("form1"); var newAnchorDiv = document.createElement("div"); newAnchorDiv.id = "anchorDiv"; var list = { "Article": articleList }; for(var i=0; i < list.Article.length; i++) { var newDiv = document.createElement("div"); newDiv.id = "div"+(i+1); newDiv.innerHTML = list.Article[i].toString(); newAnchorDiv.innerHTML += "<a href='' onclick='displayMessage("+(i+1)+")'>"+(i+1)+"</a>&nbsp;"; existingDiv.appendChild(newDiv); existingDiv.appendChild(newAnchorDiv); } for(var j = 2; j < list.Article.length + 1; j ++) { var getDivs = document.getElementById("div"+j); getDivs.style.display = "none"; } } function displayMessage(currentId) { var list = {"Article" : articleList} document.getElementById("div"+currentId).style.display = 'block'; for(var i = 1; i < list.Article.length + 1; i++) { if (i != currentId) { document.getElementById("div"+i).style.display = 'none'; } } } </script> Thanks and Regards

    Read the article

  • How Do I Prevent Rails From Treating Edit Fields_For Differently From New Fields_For

    - by James
    I am using rails3 beta3 and couchdb via couchrest. I am not using active record. I want to add multiple "Sections" to a "Guide" and add and remove sections dynamically via a little javascript. I have looked at all the screencasts by Ryan Bates and they have helped immensely. The only difference is that I want to save all the sections as an array of sections instead of individual sections. Basically like this: "sections" => [{"title" => "Foo1", "content" => "Bar1"}, {"title" => "Foo2", "content" => "Bar2"}] So, basically I need the params hash to look like that when the form is submitted. When I create my form I am doing the following: <%= form_for @guide, :url => { :action => "create" } do |f| %> <%= render :partial => 'section', :collection => @guide.sections %> <%= f.submit "Save" %> <% end %> And my section partial looks like this: <%= fields_for "sections[]", section do |guide_section_form| %> <%= guide_section_form.text_field :section_title %> <%= guide_section_form.text_area :content, :rows => 3 %> <% end %> Ok, so when I create the guide with sections, it is working perfectly as I would like. The params hash is giving me a sections array just like I would want. The problem comes when I want edit guide/sections and save them again because rails is inserting the id of the guide in the id and name of each form field, which is screwing up the params hash on form submission. Just to be clear, here is the raw form output for a new resource: <input type="text" size="30" name="sections[][section_title]" id="sections__section_title"> <textarea rows="3" name="sections[][content]" id="sections__content" cols="40"></textarea> And here is what it looks like when editing an existing resource: <input type="text" value="Foo1" size="30" name="sections[cd2f2759895b5ae6cb7946def0b321f1][section_title]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_section_title"> <textarea rows="3" name="sections[cd2f2759895b5ae6cb7946def0b321f1][content]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_content" cols="40">Bar1</textarea> How do I force rails to always use the new resource behavior and not automatically add the id to the name and value. Do I have to create a custom form builder? Is there some other trick I can do to prevent rails from putting the id of the guide in there? I have tried a bunch of stuff and nothing is working. Thanks in advance!

    Read the article

  • JPA entity design / cannot delete entity

    - by timaschew
    I though its simple what I want, but I cannot find any solution for my problem. I'm using playframework 1.2.3 and it's using Hibernate as JPA. So I think playframework has nothing to do with the problem. I have some classes (I omit the nonrelevant fields) public class User { ... } public class Task { public DataContainer dataContainer; } public class DataContainer { public Session session; public User user; } public class Session { ... } So I have association from Task to DataContainer and from DataContainer to Sesssion and the DataContainer belongs to a User. The DataContainers can have always the same User, but the Session have to be different for each instance. And the DataContainer of a Task have also to be different in each instance. A DataContainer can have a Sesesion or not (it's optinal). I use only unidirectional assoc. It should be sufficient. In other words: Every Task must has one DataContainer. Every DataContainer must has one/the same User and can have one Session. To create a DB schema I use JPA annotations: @Entity public class User extends Model { ... } @Entity public class Task extends Model { @OneToOne(optional = false, cascade = CascadeType.ALL) public DataContainer dataContainer; } @Entity public class DataContainer extends Model { @OneToOne(optional = true, cascade = CascadeType.ALL) public Session session; @ManyToOne(optional = false, cascade = CascadeType.ALL) public User user; } @Entity public class Session extends Model { ... } BTW: Model is a play class and provides the primary id as long type. When I create some for each entity a object and 'connect them', I mean the associations, it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want that the Session (field) of the DataContainer will be set to null respectively the foreign key (session_id) should be unset in the database. This will be okay, because its optional. I don't know, I think I have multiple problems. Am I using the right annotation @OneToOne ? I found on the internet some additional annotation and attributes: @JoinColumn and a mappedBy attribute for the inverse relationship. But I don't have it, because its not bidirectional. Or is a bidirectional assoc. essentially? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE) the the contraint changed from NO ACTIONs when update or delete to: ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE; But in this case, when I delete a session, the DataContainer and User is deleted. That's wrong for me. EDIT: I'm using postgresql 9, the jdbc stuff is included in play, my only db config is db=postgres://app:app@localhost:5432/app

    Read the article

  • Decode base64 data as array in Python

    - by skerit
    I'm using this handy Javascript function to decode a base64 string and get an array in return. This is the string:

        base64_decode_array('6gAAAOsAAADsAAAACAEAAAkBAAAKAQAAJgEAACcBAAAoAQAA')

    This is what's returned:

        234,0,0,0,235,0,0,0,236,0,0,0,8,1,0,0,9,1,0,0,10,1,0,0,38,1,0,0,39,1,0,0,40,1,0,0

    The problem is I don't really understand the javascript function:

        var base64chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.split("");
        var base64inv = {};
        for (var i = 0; i < base64chars.length; i++) {
            base64inv[base64chars[i]] = i;
        }

        function base64_decode_array (s) {
            // remove/ignore any characters not in the base64 characters list
            // or the pad character -- particularly newlines
            s = s.replace(new RegExp('[^'+base64chars.join("")+'=]', 'g'), "");

            // replace any incoming padding with a zero pad (the 'A' character is zero)
            var p = (s.charAt(s.length-1) == '=' ?
                    (s.charAt(s.length-2) == '=' ? 'AA' : 'A') : "");
            var r = [];
            s = s.substr(0, s.length - p.length) + p;

            // increment over the length of this encrypted string, four characters at a time
            for (var c = 0; c < s.length; c += 4) {
                // each of these four characters represents a 6-bit index in the base64 characters list
                // which, when concatenated, will give the 24-bit number for the original 3 characters
                var n = (base64inv[s.charAt(c)] << 18) + (base64inv[s.charAt(c+1)] << 12) +
                        (base64inv[s.charAt(c+2)] << 6) + base64inv[s.charAt(c+3)];

                // split the 24-bit number into the original three 8-bit (ASCII) characters
                r.push((n >>> 16) & 255);
                r.push((n >>> 8) & 255);
                r.push(n & 255);
            }
            // remove any zero pad that was added to make this a multiple of 24 bits
            return r;
        }

    What's the function of those "<<" and ">>>" operators? And is there a function like this for Python?
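
    Python's standard library can produce the same flat list of byte values directly, so the bit shifting does not have to be reimplemented. A short sketch (the variable names are illustrative, and the struct line only applies if the bytes really are little-endian 32-bit integers, as the 234,0,0,0,... pattern suggests):

        import base64
        import struct

        encoded = '6gAAAOsAAADsAAAACAEAAAkBAAAKAQAAJgEAACcBAAAoAQAA'
        raw = base64.b64decode(encoded)

        # the same flat array of byte values the JavaScript function returns
        byte_values = list(bytearray(raw))
        print(byte_values)      # [234, 0, 0, 0, 235, 0, 0, 0, 236, ...]

        # interpreting the bytes as little-endian unsigned 32-bit integers
        numbers = struct.unpack('<%dI' % (len(raw) // 4), raw)
        print(numbers)          # (234, 235, 236, 264, 265, 266, 294, 295, 296)

    In the JavaScript, << and >>> are the bitwise left shift and unsigned right shift used to pack four 6-bit base64 digits into a 24-bit number and split it back into three 8-bit values.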

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

        class Dataset:
            individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys

            def field_set(self):
                #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals)

            def classified(self, predicted_value):
                #Returns True if all the individuals have the same value for predicted_value

            def fields_exhausted(self, predicted_value):
                #Returns True if all the individuals are identical except for predicted_value

            def lowest_entropy_value(self, predicted_value):
                #Returns the field that will reduce entropy (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29) the most

            def __init__(self, individuals=[]):

    and

        class Node:
            ds = Dataset() #The data that is associated with this Node
            links = [] #List of Nodes, the offspring Nodes of this node
            level = 0 #Tree depth of this Node
            split_value = '' #Field used to split out this Node from the parent node
            node_value = '' #Value used to split out this Node from the parent Node

            def split_dataset(self, split_value):
                fields = [] #List of options for split_value amongst the individuals
                datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key
                for field in self.ds.field_set()[split_value]: #Populates the keys of fields[]
                    fields.append(field)
                    datasets[field] = Dataset()
                for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value
                    datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit
                for field in fields: #Creates subnodes from each of the datasets.Dataset options
                    self.add_subnode(datasets[field],split_value,field)

            def add_subnode(self, dataset, split_value='', node_value=''):

            def __init__(self, level, dataset=Dataset()):

    My initialisation code is currently:

        if __name__ == '__main__':
            filename = (sys.argv[1]) #Takes in a CSV file
            predicted_value = "# class" #Identifies the field from the CSV file that should be predicted
            base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists
            parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries
            root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset
            root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes
            n = root.links[0]
            n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
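
    One thing worth flagging (an inference about the cause, since the __init__ bodies aren't shown): individuals = [] is a class attribute, so every Dataset created without an explicit list shares one list object, and the individuals=[] default argument is shared the same way; appending through any instance then also grows the list being iterated over. A minimal sketch of the pitfall and the usual per-instance fix:

        class Shared:
            items = []                     # class attribute: one list shared by every instance

        class PerInstance:
            def __init__(self, items=None):
                # a mutable default argument would also be shared, hence the None check
                self.items = items if items is not None else []

        a, b = Shared(), Shared()
        a.items.append(1)
        print(b.items)                     # [1]  -- b sees a's append

        c, d = PerInstance(), PerInstance()
        c.items.append(1)
        print(d.items)                     # []   -- each instance owns its own list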

    Read the article

  • Abstract mapping with custom JiBX marshaller

    - by aweigold
    I have created a custom JiBX marshaller and verified it works. It works by doing something like the following: <binding xmlns:tns="http://foobar.com/foo" direction="output"> <namespace uri="http://foobar.com/foo" default="elements"/> <mapping class="java.util.HashMap" marshaller="com.foobar.Marshaller1"/> <mapping name="context" class="com.foobar.Context"> <structure field="configuration"/> </mapping> </binding> However I need to create multiple marshallers for different HashMaps. So I tried to reference it with abstract mapping like this: <binding xmlns:tns="http://foobar.com/foo" direction="output"> <namespace uri="http://foobar.com/foo" default="elements"/> <mapping abstract="true" type-name="configuration" class="java.util.HashMap" marshaller="com.foobar.Marshaller1"/> <mapping abstract="true" type-name="overrides" class="java.util.HashMap" marshaller="com.foobar.Marshaller2"/> <mapping name="context" class="com.foobar.Context"> <structure map-as="configuration" field="configuration"/> <structure map-as="overrides" field="overrides"/> </mapping> </binding> However when doing so, when I attempt to build the binding, I receive the following: Error during code generation for file "E:\project\src\main\jibx\foo.jibx" - this may be due to an error in your binding or classpath, or to an error in the JiBX code My guess is that either I'm missing something I need to implement to enable my custom marshaller for abstract mapping, or custom marshallers do not support abstract mapping. I have found the IAbstractMarshaller interface in the JiBX API (http://jibx.sourceforge.net/api/org/jibx/runtime/IAbstractMarshaller.html), however the documentation seems unclear to me on if this is what I need to implement, as well as how it works if so. I have not been able to find an implementation of this interface to work off of as an example. My question is, how do you do abstract mapping with custom marshallers (if it's possible)? If it is done via the IAbstractMarshaller interface, how does it work and/or how should I implement it?

    Read the article

  • multithreading with database

    - by Darsin
    I am looking out for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading so i will outline my scenario first. This synchronous operation right now is done for one set of data (portfolio) based on the the parameters provided. The (psudeo-code) implementation is given below: public DataSet DoTests(int fundId, DateTime portfolioDate) { // Get test results for the portfolio // Call the database adapter method, which in turn is a stored procedure, // which in turns runs a series of "rule" stored procs and fills a local temp table and returns it back. DataSet resultsDataSet = GetTestResults(fundId, portfolioDate); try { // Do some local processing on the results DoSomeProcessing(resultsDataSet); // Save the results in Test, TestResults and TestAllocations tables in a transaction. // Sets a global transaction which is provided to all the adapter methods called below // It is defined in the Base class StartTransaction("TestTransaction"); // Save Test and get a testId int testId = UpdateTest(resultsDataSet); // Adapter method, uses the same transaction // Update testId in the other tables in the dataset UpdateTestId(resultsDataSet, testId); // Update TestResults UpdateTestResults(resultsDataSet); // Adapter method, uses the same transaction // Update TestAllocations UpdateTestAllocations(resultsDataSet); // Adapter method, uses the same transaction // It is defined in the base class CommitTransaction("TestTransaction"); } catch { RollbackTransaction("TestTransaction"); } return resultsDataSet; } Now the requirement is to do it for multiple set of data. One way would be to call the above DoTests() method in a loop and get the data. I would prefer doing it in parallel. But there are certain catches: StartTransaction() method creates a connection (and transaction) every time it is called. All the underlying database tables, procedures are the same for each call of DoTests(). (obviously). Thus my question are: Will using multithreading anyway improve performance? What are the chances of deadlock especially when new TestId's are being created and the Tests, TestResults and TestAllocations are being saved? How can these deadlocked be handled? Is there any other more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?

    Read the article
