Search Results

Search found 13969 results on 559 pages for 'word count'.


  • Java: Ignoring escapes when parsing XML

    - by Personman
    I'm using a DocumentBuilder to parse XML files. However, the specification for the project requires that within text nodes, strings like &quot; and &lt; be returned literally, and not turned into the corresponding characters. A previous similar question, http://stackoverflow.com/questions/1979785/read-escaped-quote-as-escaped-quote-from-xml, received one answer that seems to be specific to Apache, and another that appears to simply not do what it says it does. I'd love to be proven wrong on either count, however :) For reference, here is some code:

        file = new File(fileName);
        DocBderFac = DocumentBuilderFactory.newInstance();
        DocBder = DocBderFac.newDocumentBuilder();
        doc = DocBder.parse(file);
        NodeList textElmntLst = doc.getElementsByTagName(text);
        Element textElmnt = (Element) textElmntLst.item(0);
        NodeList txts = textElmnt.getChildNodes();
        String txt = ((Node) txts.item(0)).getNodeValue();
        System.out.println(txt);

    I would like that println() to produce things like &quot;3&gt;2&quot; instead of "3>2", which is what currently happens. Thanks!
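
    One workaround, sketched below on the assumption that the parsed text only needs to be re-escaped on output (reEscape is a hypothetical helper, not part of the DOM API): undo the parser's entity expansion before printing.

        // Hypothetical helper: restore the five predefined XML entities,
        // so the parsed "3>2" prints as &quot;3&gt;2&quot; again.
        static String reEscape(String s) {
            return s.replace("&", "&amp;")   // must run first
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;")
                    .replace("'", "&apos;");
        }

        // usage: System.out.println(reEscape(txt));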

    Read the article

  • where did the _syscallN macros go in <linux/unistd.h>?

    - by Evan Teran
    It used to be the case that if you needed to make a system call directly in Linux without the use of an existing library, you could just include <linux/unistd.h> and it would define a macro similar to this:

        #define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
        type name(type1 arg1,type2 arg2,type3 arg3) \
        { \
            long __res; \
            __asm__ volatile ("int $0x80" \
                : "=a" (__res) \
                : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \
                  "d" ((long)(arg3))); \
            if (__res>=0) \
                return (type) __res; \
            errno=-__res; \
            return -1; \
        }

    Then you could just put somewhere in your code:

        _syscall3(ssize_t, write, int, fd, const void *, buf, size_t, count);

    which would define a write function for you that properly performed the system call. It seems that this system has been superseded by something more robust (I am guessing that "[vsyscall]" page that every process gets). So what is the proper way (please be specific) for a program to perform a system call directly on newer Linux kernels? I realize that I should be using libc and let it do the work for me. But let's assume that I have a decent reason for wanting to know how to do this :-).
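
    For comparison, a minimal sketch of the usual present-day alternative when you don't want to hand-roll the assembly: glibc's syscall(2) wrapper, declared in <unistd.h>, with the call numbers coming from <sys/syscall.h>.

        #include <unistd.h>
        #include <sys/syscall.h>   /* SYS_write */

        int main(void)
        {
            const char msg[] = "hello\n";
            /* like the old macros, returns -1 and sets errno on failure */
            long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
            return n < 0 ? 1 : 0;
        }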

    Read the article

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this...

        bool UseLocalCache = true;
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;
        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount, LocalCacheDefaultTimeout, LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically. We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric - we installed it via Web Platform Installer, I think? (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!

    Read the article

  • Rails: using find method to access joined tables for polymorphic relationships

    - by DJTripleThreat
    OK, I have a generic TimeSlot model that deals with a start_at and an end_at for time spans. A couple of models derive from this, but I'm referring to one in this question: AppointmentBlock, which is a collection of Appointments. I want to validate an AppointmentBlock such that no other AppointmentBlocks have been scheduled for a particular Employee in the same time frame. Since AppointmentBlock has a polymorphic association with TimeSlot, you have to access the AppointmentBlock's start_at and end_at through the TimeSlot, like so:

        appt_block.time_slot.start_at

    This means that I need to have some kind of join in my :conditions for my find() method call. Here is my code so far:

        # inside my appointment_block.rb model
        validate :employee_not_double_booked

        def employee_not_double_booked
          unless self.employee_id
            # this find's condition is incorrect because I need to join time_slots to get access
            # to start_at and end_at. How can I do this?
            blocks = AppointmentBlock.find(:first,
              :conditions => ['employee_id = ? and (start_at between ? and ? or end_at between ? and ?)',
                              self.employee_id,
                              self.time_slot.start_at, self.time_slot.end_at,
                              self.time_slot.start_at, self.time_slot.end_at])
            # pseudo code:
            # collect a list of appointment blocks that end after this
            # appointment block starts or start before this appointment
            # block ends that are also associated with this appointment
            # block's assigned employee
            # if the count is greater than 0 the employee has been double
            # booked.
            # if a block was found that means this employee is getting
            # double booked so raise an error
            errors.add "AppointmentBlock", "has already been scheduled during this time" if blocks
          end
        end

    Since AppointmentBlock doesn't have a start_at or an end_at, how can I join with the time_slots table to get those conditions to work?
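
    For what it's worth, a sketch of one way to phrase that finder with a join on time_slots (this assumes AppointmentBlock declares an association named time_slot and uses the Rails 2.x finder syntax shown in the question; table and column names may need adjusting):

        blocks = AppointmentBlock.find(:first,
          :joins      => :time_slot,
          :conditions => ['appointment_blocks.employee_id = ? AND ' \
                          '(time_slots.start_at BETWEEN ? AND ? OR time_slots.end_at BETWEEN ? AND ?)',
                          employee_id,
                          time_slot.start_at, time_slot.end_at,
                          time_slot.start_at, time_slot.end_at])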

    Read the article

  • Dynamically add data stored in php to nested json

    - by HoGo
    I am trying to dynamically generate data in JSON for the jQuery Gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format:

        var data = [{
            name: "Sprint 0",
            desc: "Analysis",
            values: [{
                from: "/Date(1320192000000)/",
                to: "/Date(1322401600000)/",
                label: "Requirement Gathering",
                customClass: "ganttRed"
            }]
        },{
            name: " ",
            desc: "Scoping",
            values: [{
                from: "/Date(1322611200000)/",
                to: "/Date(1323302400000)/",
                label: "Scoping",
                customClass: "ganttRed"
            }]
        },
        <!-- Some more data -->
        }];

    Now I have all the data in a PHP DB result. Here it goes:

        $rows = $db->fetchAllRows($result);
        $rowsNum = count($rows);

    And this is how I wanted to create the JSON out of it:

        var data = '';
        <?php foreach ($rows as $row){ ?>
        data['name'] = "<?php echo $row['name'];?>";
        data['desc'] = "<?php echo $row['desc'];?>";
        data['values'] = {"from" : "/Date(<?php echo $row['from'];?>)/",
                          "to" : "/Date(<?php echo $row['to'];?>)/",
                          "label" : "<?php echo $row['label'];?>",
                          "customClass" : "ganttOrange"};
        }

    However this does not work. I have tried without the loop, and replacing the PHP variables with plain text just to check, but it did not work either. It displays the chart without the added items. If I add a new item by adding it to the list of values, it works, so there is no problem with the Gantt itself or the paths. Based on all the above, I assume the problem is with how I add the data to the JSON. Can anyone please help me fix it?
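
    A sketch of the approach that usually avoids hand-building the literal: assemble a PHP array and let json_encode emit it in one go (field names follow the question; the chart variable is still called data):

        <?php
        $rows = $db->fetchAllRows($result);
        $data = array();
        foreach ($rows as $row) {
            $data[] = array(
                'name'   => $row['name'],
                'desc'   => $row['desc'],
                'values' => array(array(
                    'from'        => '/Date(' . $row['from'] . ')/',
                    'to'          => '/Date(' . $row['to'] . ')/',
                    'label'       => $row['label'],
                    'customClass' => 'ganttOrange',
                )),
            );
        }
        ?>
        var data = <?php echo json_encode($data); ?>;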

    Read the article

  • GROUP BY ID range?

    - by d0ugal
    Given a data set like this:

        +-----+---------------------+--------+
        | id  | date                | result |
        +-----+---------------------+--------+
        | 121 | 2009-07-11 13:23:24 | -1     |
        | 122 | 2009-07-11 13:23:24 | -1     |
        | 123 | 2009-07-11 13:23:24 | -1     |
        | 124 | 2009-07-11 13:23:24 | -1     |
        | 125 | 2009-07-11 13:23:24 | -1     |
        | 126 | 2009-07-11 13:23:24 | -1     |
        | 127 | 2009-07-11 13:23:24 | -1     |
        | 128 | 2009-07-11 13:23:24 | -1     |
        | 129 | 2009-07-11 13:23:24 | -1     |
        | 130 | 2009-07-11 13:23:24 | -1     |
        | 131 | 2009-07-11 13:23:24 | -1     |
        | 132 | 2009-07-11 13:23:24 | -1     |
        | 133 | 2009-07-11 13:23:24 | -1     |
        | 134 | 2009-07-11 13:23:24 | -1     |
        | 135 | 2009-07-11 13:23:24 | -1     |
        | 136 | 2009-07-11 13:23:24 | -1     |
        | 137 | 2009-07-11 13:23:24 | -1     |
        | 138 | 2009-07-11 13:23:24 | 1      |
        | 139 | 2009-07-11 13:23:24 | 0      |
        | 140 | 2009-07-11 13:23:24 | -1     |
        +-----+---------------------+--------+

    How would I go about grouping the results, say, 5 records at a time? The above is part of the live data; there are over 100,000 result rows in the table and it's growing. Basically I want to measure the change over time, so I want to take a SUM of the result every X records. In the real data I'll be doing it every 100 or 1000, but for the data above perhaps every 5. If I could sort it by date I would do something like this:

        SELECT DATE_FORMAT(date, '%h%i') ym,
               COUNT(result) 'Total Games',
               SUM(result) as 'Score'
        FROM nn_log
        GROUP BY ym;

    I can't figure out a way of doing something similar with numbers. The order is sorted by the date, but I hope to split the data up every X results. It's safe to assume there are no blank rows. With the data above you could do multiple selects, like:

        SELECT SUM(result) FROM table LIMIT 0,5;
        SELECT SUM(result) FROM table LIMIT 5,5;
        SELECT SUM(result) FROM table LIMIT 10,5;

    That's obviously not a very good way to scale up to a bigger problem. I could just write a loop, but I'd like to reduce the number of queries.
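
    A sketch of one way to bucket by id instead of by date, on the assumption that the ids are roughly contiguous so that integer-dividing the id approximates "every N records":

        SELECT FLOOR(id / 5)  AS bucket,
               COUNT(result)  AS 'Total Games',
               SUM(result)    AS 'Score'
        FROM   nn_log
        GROUP  BY bucket
        ORDER  BY bucket;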

    Read the article

  • How to populate Range variable from a Sub/Function call?

    - by Ken Ingram
    I am trying to get this sub to work, but the operationalRange variable is not being assigned, despite the fact that the function selectBodyRow(bodyName) works fine.

        Sub sortRows(bodyName As String, ByRef wksht As Worksheet)
            Dim operationalRange As Range
            Set operationalRange = selectBodyRow(bodyName)
            Debug.Print "Sorting Worksheet: " & wksht.Name
            If Not operationalRange Is Nothing Then
                operationalRange.Select
                Debug.Print "Sorting " & operationalRange.Count & "Rows."
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Clear
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                With ActiveWorkbook.Worksheets(wksht.Name).Sort
                    .SetRange operationalRange
                    .Header = xlGuess
                    .MatchCase = False
                    .Orientation = xlTopToBottom
                    .SortMethod = xlPinYin
                    .Apply
                End With
            Else
                MsgBox "Body is not being Set"
            End If
        End Sub

    The function being called by the above sub is:

        Function selectBodyRow(bodyName As String) As Range
            Dim rangeStart As String, rangeEnd As String
            Dim selectionStart As Range, selectionEnd As Range
            Dim result As Range, srchRng As Range, cngrs As Variant
            If bodyName = "WEST" Then
                rangeStart = "<-WEST START->"
                rangeEnd = "<-WEST END->"
            ElseIf bodyName = "EAST" Then
                rangeStart = "<-EAST START->"
                rangeEnd = "<-EAST END->"
            End If
            Set srchRng = Range("A:A")
            srchRng.Select
            Set selectionStart = srchRng.Find(What:=rangeStart, After:=ActiveCell, LookIn _
                :=xlValues, LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:= _
                xlNext, MatchCase:=False, SearchFormat:=False)
            Set selectionEnd = srchRng.Find(What:=rangeEnd, After:=ActiveCell, LookIn _
                :=xlValues, LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:= _
                xlNext, MatchCase:=False, SearchFormat:=False)
            Set result = Range(selectionStart.Offset(1, 0), selectionEnd.Offset(-1, 0))
            result.EntireRow.Select
        End Function
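
    One detail worth noting (a sketch of the missing last step, not a full rewrite): a VBA Function only returns something if a value is assigned to the function's own name, so as written selectBodyRow hands back Nothing even though result is built correctly.

        ' at the end of selectBodyRow, after result has been set:
        Set selectBodyRow = result   ' without this the caller's Set receives Nothing
        End Function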

    Read the article

  • get_post_meta returns an empty string

    - by Jean-philippe Emond
    I guess it is a small issue, but I am running a SQL query to get some post ids:

        $result = $wpdb->get_results("SELECT wppm.post_id FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'");

    (count: 302) After that, I take each id and run get_post_meta like this:

        foreach($result as $id){
            $activity = get_post_meta($id);
            var_dump($activity);
            foreach($activity as $key=>$value){
                if(is_array($value) && $key=="age"){
                    var_dump($value);
                }
            }
        }

    (var_dump result: string "") Same thing if I run it with:

        $activity = get_post_meta($id,'activity',true);

    where we should be getting a result. What is wrong? Thank you for your help!!!

    [Bonus question] If the "activity" meta_key has an array value and I fetch it directly, like:

        $result = $wpdb->get_results("SELECT wppm.meta_value FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'");

    how do I parse it? Thanks again!
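
    One thing worth checking, sketched below: by default $wpdb->get_results() returns row objects, so each element of $result is an object carrying a post_id property, not the bare ID that get_post_meta() expects.

        foreach ($result as $row) {
            // $row->post_id is the integer ID; passing the whole row object yields ""
            $age = get_post_meta($row->post_id, 'age', true);
            var_dump($age);
        }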

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90GB) to work with. There are data files (tab delimited) for each hour of each day, and I need to perform operations on the entire data set - for example, get the share of OSes that are given in one of the columns. I tried merging all the files into one huge file and performing the simple count operation, but it was simply too huge for the server memory. So I guess I need to perform the operation one file at a time and then add up at the end. I am new to Perl and am especially naive about the performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:

        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows

    Let's do something simple: counting the share of the OSes in the data set. Each .txt file has millions of these lines and there are many such files. What would be the most efficient way to operate on all the files?
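
    A sketch of the file-at-a-time approach in Perl - tally into a hash one line at a time so only the running totals ever sit in memory (the glob pattern and the column position are assumptions; adjust them to the real layout):

        use strict;
        use warnings;

        my %count;
        for my $file (glob '*.txt') {
            open my $fh, '<', $file or die "$file: $!";
            while (my $line = <$fh>) {
                chomp $line;
                my (undef, $os) = split /\t/, $line;   # OS assumed to be the second column
                $count{$os}++ if defined $os;
            }
            close $fh;
        }
        my $total = 0;
        $total += $_ for values %count;
        printf "%-10s %d (%.1f%%)\n", $_, $count{$_}, 100 * $count{$_} / $total
            for sort keys %count;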

    Read the article

  • Prolog singleton variables in Python

    - by Rubens
    I'm working on a little set of scripts in Python, and I came to this:

        line = "a b c d e f g"
        a, b, c, d, e, f, g = line.split()

    I'm quite aware of the fact that these are decisions taken during implementation, but shouldn't (or does) Python offer something like:

        _, _, var_needed, _, _, another_var_needed, _ = line.split()

    as Prolog does, in order to exclude the famous singleton variables? I'm not sure, but wouldn't it avoid unnecessary allocation? Or does creating references to the result of the split call not count as overhead?

    EDIT: Sorry, my point here is: in Prolog, as far as I'm concerned, in an expression like:

        test(L, N) :- test(L, 0, N).
        test([], N, N).
        test([_|T], M, N) :- V is M + 1, test(T, V, N).

    the variable represented by _ is not accessible, and I suppose the reference to the value in the list [_|T] is not even created. But in Python, if I use _, I can use the last value assigned to _, and also, I suppose the assignment occurs for each of the variables _ - which may be considered an overhead. My question here is whether there should be (or whether there is) a syntax to avoid such unnecessary assignments.
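
    For illustration, a short sketch of the idioms usually used (in CPython the list from split() is built either way, so the difference is in what gets a name, not in what gets allocated):

        line = "a b c d e f g"

        # _ is an ordinary name in Python; it is simply rebound on each assignment
        _, _, var_needed, _, _, another_var_needed, _ = line.split()

        # indexing, or operator.itemgetter, skips binding the unused fields at all
        from operator import itemgetter
        var_needed, another_var_needed = itemgetter(2, 5)(line.split())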

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, do you ask? I'll tell you right away, using C as an example.

    1: Stack local scope memory allocation. Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated before reaching an actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on, and access them just by mov 0x00000000,#10? You won't actually access memory blocks 0x00000000 and 0x00000004, but whatever your OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity. When you run an application, the loader loads its code into RAM. When you create a variable, or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and that actual number in memory - two copies of the same value in RAM, one in the form of an instruction, the second in the form of actual bytes in RAM. But why? Why not just decide, when declaring the variable, which memory block it will occupy, and then, when it is used, simply refer to that memory location?

    Read the article

  • Need to reload current_cart to get the test passed

    - by leomayleomay
    I'm testing my online store app with RSpec. Here's what I'm doing:

        # spec/controllers/line_items_controller_spec.rb
        require 'spec_helper'

        describe LineItemsController do
          describe "POST 'create'" do
            before do
              @current_cart = Factory(:cart)
              controller.stub!(:current_cart).and_return(@current_cart)
            end

            it 'should merge two same line_items into one' do
              @product = Factory(:product, :name => "Tee")
              post 'create', {:product_id => @product.id}
              post 'create', {:product_id => @product.id}
              assert LineItem.count.should == 1
              assert LineItem.first.quantity.should == 2
            end
          end
        end

        # app/controllers/line_items_controller.rb
        class LineItemsController < ApplicationController
          def create
            current_cart.line_items.each do |line_item|
              if line_item.product_id == params[:product_id]
                line_item.quantity += 1
                if line_item.save
                  render :text => "success"
                else
                  render :text => "failed"
                end
                return
              end
            end
            @line_item = current_cart.line_items.new(:product_id => params[:product_id])
            if @line_item.save
              render :text => "success"
            else
              render :text => "failed"
            end
          end
        end

    The problem right now is that it never merges two line_items with the same product into one, because the second time I enter line_items_controller#create, current_cart.line_items is []. I have to run current_cart.reload to get the test to pass. Any idea what's going wrong?

    Read the article

  • How to select LI except first and second?

    - by Wazdesign
    Here is the structure of the content. I want to select all LI except the first two (i.e. the "No Link" ones).

        jQuery(document).ready(function(){
            var nosubnav = jQuery('.first-level li:not(:has(ul))');
            var nosubnavsize = jQuery('.first-level li:not(:has(ul))').size();
            jQuery(nosubnav).css('border' , '1px solid red');
            alert('List item which does not have submenu '+nosubnavsize);
        });

        <div class="navigation-container">
            <ul class="first-level">
                <li><a href="#">No Link</a></li>
                <li><a href="#">No Link</a></li>
                <li><a href="#">Link 1</a></li>
                <li><a href="#">Link 2</a>
                    <ul>
                        <li><a href="#">Link2.1</a></li>
                        <li><a href="#">Link2.2</a>
                            <ul>
                                <li><a href="#">Link 2.2.1</a></li>
                            </ul>
                        </li>
                    </ul>
                </li>
                <li><a href="#">Link </a></li>
            </ul>
        </div>

    Related question: http://stackoverflow.com/questions/2771801/how-to-count-li-which-does-not-have-ul
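
    A sketch of two common ways to skip the first two items, assuming only the direct children of .first-level should be considered:

        // :gt() is zero-based, so :gt(1) matches everything after the second item
        jQuery('.first-level > li:gt(1)').css('border', '1px solid red');

        // .slice(2) reads the same way and avoids the selector extension
        jQuery('.first-level > li').slice(2).css('border', '1px solid red');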

    Read the article

  • vb.net sqlite how to loop through selected records and pass each record as a parameter to another fu

    - by mazrabul
    Hi, I have a SQLite table with the following fields:

        Language  level  hours
        German    2      50
        French    3      40
        English   1      60
        German    1      10
        English   2      50
        English   3      60
        German    1      20
        French    2      40

    I want to loop through the records based on language and other conditions, and then pass the current selected record to a different function. So I have the following mixture of actual code and pseudo code. I need help with converting the pseudo code to actual code, please; I am finding it difficult to do so. Here is what I have:

        Private Sub mainp()
            Dim oslcConnection As New SQLite.SQLiteConnection
            Dim oslcCommand As SQLite.SQLiteCommand
            Dim langs() As String = {"German", "French", "English"}
            Dim i As Integer = 0

            oslcConnection.ConnectionString = "Data Source=" & My.Settings.dbFullPath & ";"
            oslcConnection.Open()
            oslcCommand = oslcConnection.CreateCommand

            Do While i <= langs.count
                If langs(i) = "German" Then
                    oslcCommand.CommandText = "SELECT * FROM table WHERE language = '" & langs(i) & "';"
                    For each record selected               'pseudo code
                        If level = 1 Then                  'pseudo code
                            update level to 2              'pseudo code
                            minorp(currentRecord)          'pseudo code: call minorp, passing the whole record as a parameter
                        End If                             'pseudo code
                        If level = 2 Then                  'pseudo code
                            update level to 3              'pseudo code
                            minorp(currentRecord)          'pseudo code: call minorp, passing the whole record as a parameter
                        End If                             'pseudo code
                    Next                                   'pseudo code
                End If
                If langs(i) = "French" Then
                    oslcCommand.CommandText = "SELECT * FROM table WHERE language = '" & langs(i) & "';"
                    For each record selected               'pseudo code
                        If level = 1 Then                  'pseudo code
                            update level to 2              'pseudo code
                            minorp(currentRecord)          'pseudo code: call minorp, passing the whole record as a parameter
                        End If                             'pseudo code
                        If level = 2 Then                  'pseudo code
                            update level to 3              'pseudo code
                            minorp(currentRecord)          'pseudo code: call minorp, passing the whole record as a parameter
                        End If                             'pseudo code
                    Next                                   'pseudo code
                End If
            Loop
        End Sub

    Many thanks for your help.
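
    For what it's worth, a sketch of how the pseudo code usually becomes a data-reader loop (the table and column names here are assumptions taken from the question, and minorp's signature is invented for illustration):

        oslcCommand.CommandText = "SELECT language, level, hours FROM langtable WHERE language = @lang;"
        oslcCommand.Parameters.AddWithValue("@lang", langs(i))

        Using reader As SQLite.SQLiteDataReader = oslcCommand.ExecuteReader()
            While reader.Read()
                Dim level As Integer = Convert.ToInt32(reader("level"))
                If level = 1 OrElse level = 2 Then
                    ' the level change itself would be a separate UPDATE command;
                    ' here the current record is simply handed on to minorp
                    minorp(reader("language").ToString(), level + 1, Convert.ToInt32(reader("hours")))
                End If
            End While
        End Using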

    Read the article

  • c++-to-python swig caused memory leak! Related to Py_BuildValue and SWIG_NewPointerObj

    - by usfree74
    Hey gurus, I have the following SWIG code that caused a memory leak:

        PyObject* FindBestMatch(const Bar& fp) {
            Foo* ptr(new Foo());
            float match;
            // call a function to fill the foo pointer
            return Py_BuildValue(
                "(fO)",
                match,
                SWIG_NewPointerObj(ptr, SWIGTYPE_p_Foo, 0 /* own */));
        }

    I figured that ptr is not freed properly. So I did the following:

        PyObject* FindBestMatch(const Bar& fp) {
            Foo* ptr(new Foo());
            float match;
            // call a function to fill the foo pointer
            PyObject *o = SWIG_NewPointerObj(ptr, SWIGTYPE_p_Foo, 1 /* own */);  // 1 means pass the ownership to Python
            PyObject *result = Py_BuildValue("(fO)", match, o);
            Py_XDECREF(o);
            return result;
        }

    But I am not very sure whether this will cause memory corruption. Here, Py_XDECREF(o) will decrease the ref count, which can free the memory used by object "o". But o is part of the return value "result". Freeing "o" could cause data corruption, I guess? I tried my change. It works fine and the caller (Python code) does see the expected data. But this could be because nobody else has overwritten that memory area yet. So what's the right way to deal with memory management in the above code? I searched the SWIG docs but don't see a very concrete description. Please help! Thanks, xin

    Read the article

  • Why are my connections not closed even if I explicitly dispose of the DataContext?

    - by Chris Simpson
    I encapsulate my LINQ to SQL calls in a repository class which is instantiated in the constructor of my overloaded controller. The constructor of my repository class creates the data context, so that for the life of the page load only one data context is used. In the destructor of the repository class I explicitly call Dispose on the DataContext, though I do not believe this is necessary. Using Performance Monitor, if I watch my User Connections count and repeatedly load a page, the number increases once per page load. Connections do not get closed or reused (for about 20 minutes). I tried putting Pooling=false in my config to see if this had any effect, but it did not. In any case, with pooling I wouldn't expect a new connection for every load; I would expect it to reuse connections. I've tried putting a break point in the destructor to make sure the dispose is being hit, and sure enough it is. So what's happening? Some code to illustrate what I said above. The controller:

        public class MyController : Controller
        {
            protected MyRepository rep;

            public MyController()
            {
                rep = new MyRepository();
            }
        }

    The repository:

        public class MyRepository
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            ~MyRepository()
            {
                if (dc != null)
                {
                    //if (dc.Connection.State != System.Data.ConnectionState.Closed)
                    //{
                    //    dc.Connection.Close();
                    //}
                    dc.Dispose();
                }
            }

            // etc
        }

    Note: I add a number of hints and context information to the DC for auditing purposes. This is essentially why I want one connection per page load.

    Read the article

  • iPhone - Problem with in-app purchases

    - by Satyam svv
    I've created an iPhone app with in-app purchase. Now I'm in the testing phase. I created the provisioning profile com.satyam.testapp. In iTunes Connect I created the application and uploaded the images, screen shots, description etc. I also created two IDs for in-app purchase: one is com.satyam.testapp.book1 and the other is com.satyam.testapp.book5. I also created a test account for verifying my in-app purchases. Using com.satyam.testapp I created a developer test profile and am using it in the developed application. I logged out of the iTunes App Store account on my iPhone. Now I start running the application on my iPhone. It says that there are no items to purchase, but it doesn't even ask me for credentials where I would enter the test account username and password... How do I debug it? Here's my delegate:

        - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response {
            NSArray *myProduct = [[NSArray alloc] initWithArray:response.products];
            for(int i=0;i<[myProduct count];i++) {
                SKProduct *product = [myProduct objectAtIndex:i];
                NSLog(@"Name: %@ - Price: %f",[product localizedTitle],[[product price] doubleValue]);
                NSLog(@"Product identifier: %@", [product productIdentifier]);
            }
            for(NSString *invalidProduct in response.invalidProductIdentifiers)
                NSLog(@"Problem in iTunes connect configuration for product: %@", invalidProduct);
            [request autorelease];
            [myProduct release];
        }

    Read the article

  • Debug formatting code

    - by Arcadian
    I'm trying to debug my code here:

        private void CheckFormatting()
        {
            StringReader objReaderf = new StringReader(txtInput.Text);
            List<String> formatTextList = new List<String>();
            do
            {
                formatTextList.Add(objReaderf.ReadLine());
            } while (objReaderf.Peek() != -1);
            objReaderf.Close();

            for (int i = 0; i < formatTextList.Count; i++)
            {
                if (!Regex.IsMatch(formatTextList[i], "G[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{2} JG[0-9]{2"))
                {
                    MessageBox.Show("Line " + formatTextList[i] + " is not formatted correctly.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                    break;
                }
                else
                {
                    this.WriteToFile();
                    MessageBox.Show("Your entries have been saved.", "Saved",
                        MessageBoxButtons.OK, MessageBoxIcon.Information);
                }
            }
        }

    What it is supposed to do is check each line in the list. If one of them isn't formatted correctly, then break the loop and display a message box; if all the lines are formatted properly then it should call the WriteToFile method. However, when testing it using input that WAS correctly formatted, it displayed the error message and broke the loop. Anyone figure out why? There's some rep points in it for you :) Thanks in advance
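
    Two things stand out, sketched below: the pattern as posted appears to be missing its closing "}" after the final {2, and the success branch runs once per line inside the loop rather than once after every line has passed. A flag keeps the save/notify step out of the loop (this is a sketch of the restructuring, not a drop-in replacement):

        bool allValid = true;
        foreach (string line in formatTextList)
        {
            if (!Regex.IsMatch(line, "G[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{2} JG[0-9]{2}"))
            {
                MessageBox.Show("Line " + line + " is not formatted correctly.", "Error",
                    MessageBoxButtons.OK, MessageBoxIcon.Error);
                allValid = false;
                break;
            }
        }
        if (allValid)
        {
            this.WriteToFile();
            MessageBox.Show("Your entries have been saved.", "Saved",
                MessageBoxButtons.OK, MessageBoxIcon.Information);
        }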

    Read the article

  • MongoDB complex MapReduce of video logs

    - by Justin Hourigan
    I have a dataset from video streaming logs. Each video is identified by a FileGUID. The log entries record the FileGUID, the fragment of the video watched, and the bandwidth it was watched at. I would like to create a MapReduce outputting, for each video, a count for each fragment, both in total and per bandwidth. Ideally it would look like:

        {"FileGUID": "50acb3a5796634df0e073285", {
            "1": {"total": 76, "0832": 34, "1028": 42},
            "2": {"total": 42, "0832": 28, "1028": 14},
            ...
        }}

    Is this possible with one MapReduce, or is it a multi-step process, or should I use a different method? Here is a sample of the data:

        {
            "_id": ObjectId("50acb3a5796634df0e073285"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:57.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(1028),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(237),
            "Status": NumberInt(200),
            "Size": NumberInt(576790),
            "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
        {
            "_id": ObjectId("50acb3a5796634df0e073284"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:52.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(1028),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(236),
            "Status": NumberInt(200),
            "Size": NumberInt(577100),
            "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
        {
            "_id": ObjectId("50acb3a5796634df0e073283"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:47.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(0832),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(234),
            "Status": NumberInt(200),
            "Size": NumberInt(576664),
            "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
        {
            "_id": ObjectId("50acb3a5796634df0e073282"),
            "IP": "46.7.1.88",
            "DateTime": ISODate("2012-10-24T22:59:42.0Z"),
            "FileGUID": "8cdde821fb934a6da7c125a012a26612",
            "Bandwidth": NumberInt(0832),
            "Segment": NumberInt(1),
            "Fragment": NumberInt(233),
            "Status": NumberInt(200),
            "Size": NumberInt(575692),
            "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
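
    If a different method is acceptable, one common alternative is the aggregation framework (available from MongoDB 2.2); a sketch, with the collection name assumed to be logs and the nested output shape assembled afterwards in application code:

        db.logs.aggregate([
            { $group: {
                _id:   { file: "$FileGUID", fragment: "$Fragment", bandwidth: "$Bandwidth" },
                count: { $sum: 1 }
            }},
            { $group: {
                _id:          { file: "$_id.file", fragment: "$_id.fragment" },
                total:        { $sum: "$count" },
                perBandwidth: { $push: { bandwidth: "$_id.bandwidth", count: "$count" } }
            }}
        ])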

    Read the article

  • iPhone: Does it ever make sense for an object to retain its delegate?

    - by randombits
    According to the rules of memory management in a non-garbage-collected world, one is not supposed to retain the calling object in a delegate. The scenario goes like this: I have a class that inherits from UITableViewController and contains a search bar. I run expensive search operations in a secondary thread. This is all done with an NSOperationQueue and subclassed NSOperation instances. I pass the controller as a delegate that adheres to a callback protocol into the NSOperation. There are edge cases when the application crashes: once an item is selected from the UITableViewController, I dismiss it, its retain count goes to 0, and dealloc gets invoked on it. The delegate didn't get to send its message in time, as the results are being passed at about the same time the dealloc happens. Should I design this differently? Should I call retain on my controller from the delegate to ensure it exists until the NSOperation itself is dealloc'd? Will this cause a memory leak? Right now if I put a retain on the controller, the crashes go away. I don't want to leak memory, though, and need to understand whether there are cases where retaining the delegate makes sense. Just to recap: UITableViewController creates an NSOperationQueue and an NSOperation that gets embedded into the queue. The UITableViewController passes itself as a delegate to the NSOperation. The NSOperation calls a method on the UITableViewController when it's ready. If I retain the UITableViewController, I guarantee it's there, but I'm not sure if I'm leaking memory. If I only use an assign property, edge cases occur where the UITableViewController gets dealloc'd, objc_msgSend() gets called on an object that doesn't exist in memory, and a crash is imminent.

    Read the article

  • sybase - values from one table that aren't on another, on opposite ends of a 3-table join

    - by Lazy Bob
    Hypothetical situation: I work for a custom sign-making company, and some of our clients have submitted more sign designs than they're currently using. I want to know which signs have never been used. Three tables are involved.

    Table A - signs for a company:

        sign_pk (unique) | company_pk | sign_description
        1                | 1          | small
        2                | 1          | large
        3                | 2          | medium
        4                | 2          | jumbo
        5                | 3          | banner

    Table B - company locations:

        company_pk | company_location (unique)
        1          | 987
        1          | 876
        2          | 456
        2          | 123

    Table C - signs at locations (it's a bit of a stretch, but each row can have 2 signs, and it's a one-to-many relationship from company location to signs at locations):

        company_location | front_sign | back_sign
        987              | 1          | 2
        987              | 2          | 1
        876              | 2          | 1
        456              | 3          | 4
        123              | 4          | 3

    So, a.company_pk = b.company_pk and b.company_location = c.company_location. What I want to find is how to query and get back that sign_pk 5 isn't at any location. Querying each sign_pk against all of the front_sign and back_sign values is a little impractical, since all the tables have millions of rows. Table A is indexed on sign_pk and company_pk, table B on both fields, and table C only on company locations. The way I'm trying to write it is along the lines of "each sign belongs to a company, so find the signs that are not the front or back sign at any of the locations that belong to the company tied to that sign." My original plan was:

        SELECT a.sign_pk
        FROM a, b, c
        WHERE a.company_pk = b.company_pk
          AND b.company_location = c.company_location
          AND a.sign_pk *= c.front_sign
        GROUP BY a.sign_pk
        HAVING COUNT(c.front_sign) = 0

    just to do the front sign, and then repeat for the back, but that won't run because c is an inner member of an outer join and also in an inner join. This whole thing is fairly convoluted, but if anyone can make sense of it, I'll be your best friend.
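
    A sketch of one way to express it without the *= outer join, using a correlated NOT EXISTS (the table names signs, locations and signs_at_locations are stand-ins for the real A, B and C):

        SELECT a.sign_pk
        FROM   signs a
        WHERE  NOT EXISTS (
                   SELECT 1
                   FROM   locations b
                   JOIN   signs_at_locations c
                          ON c.company_location = b.company_location
                   WHERE  b.company_pk = a.company_pk
                     AND  (c.front_sign = a.sign_pk OR c.back_sign = a.sign_pk)
               )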

    Read the article

  • Using reflection to find all linq2sql tables and ensure they match the database

    - by Jake Stevenson
    I'm trying to use reflection to automatically test that all my LINQ to SQL entities match the test database. I thought I'd do this by getting all the classes that inherit from DataContext in my assembly:

        var contexttypes = Assembly.GetAssembly(typeof (BaseRepository<,>)).GetTypes().Where(
            t => t.IsSubclassOf(typeof(DataContext)));

        foreach (var contexttype in contexttypes)
        {
            var context = Activator.CreateInstance(contexttype);
            var tableProperties = type.GetProperties().Where(t => t.PropertyType.Name == typeof(ITable<>).Name);
            foreach (var propertyInfo in tableProperties)
            {
                var table = (propertyInfo.GetValue(context, null));
            }
        }

    So far so good: this loops through each ITable<> in each DataContext in the project. If I debug the code, "table" is properly instantiated, and if I expand the results view in the debugger I can see actual data. BUT, I can't figure out how to get my code to actually query that table. I'd really like to just be able to do table.FirstOrDefault() to get the top row out of each table and make sure the SQL fetch doesn't fail. But I can't cast that table to anything I can query. Any suggestions on how I can make this queryable? Just the ability to call .Count() would be enough for me to ensure the entities don't have anything that doesn't match the table columns.

    Read the article

  • What is the proper way to use a Logger in a Serializable Java class?

    - by Tim Visher
    I have the following (doctored) class in a system I'm working on, and FindBugs is generating an SE_BAD_FIELD warning. I'm trying to understand why it would say that before I fix it in the way that I thought I would. The reason I'm confused is that the description would seem to indicate that I had used no other non-serializable instance fields in the class, but bar.model.Foo is also not serializable and is used in the exact same way (as far as I can tell), yet FindBugs generates no warning for it.

        import bar.model.Foo;

        import java.io.File;
        import java.io.Serializable;
        import java.util.List;

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class Demo implements Serializable {
            private final Logger logger = LoggerFactory.getLogger(this.getClass());

            private final File file;
            private final List<Foo> originalFoos;

            private Integer count;
            private int primitive = 0;

            public Demo() {
                for (Foo foo : originalFoos) {
                    this.logger.debug(...);
                }
            }

            ...
        }

    My initial blush at a solution is to get a logger reference from the factory right as I use it:

        public DispositionFile() {
            Logger logger = LoggerFactory.getLogger(this.getClass());
            for (Foo foo : originalFoos) {
                this.logger.debug(...);
            }
        }

    That doesn't seem particularly efficient, though. Thoughts?
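
    For comparison, a sketch of the fix most commonly applied here: since an SLF4J logger is effectively per-class anyway, making it static keeps it out of the instance state that serialization has to walk (marking the field transient is the other usual route, at the cost of a null logger after deserialization unless it is re-acquired lazily).

        public class Demo implements Serializable {
            // static: not part of any serialized instance, created once per class
            private static final Logger logger = LoggerFactory.getLogger(Demo.class);
            ...
        }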

    Read the article

  • How to Search and Navigate XML Nodes

    - by edison681
    I have the following XML:

        <LOCALCELL_V18 ID = "0x2d100000">
            <MXPWR ID = "0x3d1003a0">100</MXPWR>
        </LOCALCELL_V18>
        <LOCALCELL_V18 ID = "0x2d140000">
            <MXPWR ID = "0x3d1403a0">200</MXPWR>
        </LOCALCELL_V18>
        <LOCALCELL_V18 ID = "0x2d180000">
            <MXPWR ID = "0x3d1803a0">300</MXPWR>
        </LOCALCELL_V18>

    I want to get the inner text of each <MXPWR>; however, I am not allowed to use the ID to locate the inner text, since it is not always the same. Here is my code:

        XmlNodeList LocalCell = xmlDocument.GetElementsByTagName("LOCALCELL_V18");
        foreach (XmlNode LocalCell_Children in LocalCell)
        {
            XmlElement MXPWR = (XmlElement)LocalCell_Children;
            XmlNodeList MXPWR_List = MXPWR.GetElementsByTagName("MXPWR");
            for (int i = 0; i < MXPWR_List.Count; i++)
            {
                MaxPwr_form_str = MXPWR_List[i].InnerText;
            }
        }

    Any opinion will be appreciated.

    Read the article

  • UITableView: NSMutableArray not appearing in table

    - by Michael Orcutt
    I'm new to Objective-C and iPhone programming. I can't seem to figure out why no cells are filled when I run this code. The XML content displays in the console log and Xcode reports no errors. Can anyone help?

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
           namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
             attributes:(NSDictionary *)attributeDict {
            if(![elementName compare:@"Goal"] ) {
                tempElement = [[xmlGoal alloc] init];
            }
            else if(![elementName compare:@"Title"]) {
                currentAttribute = [NSMutableString string];
            }
            else if(![elementName compare:@"Progress"]) {
                currentAttribute = [NSMutableString string];
            }
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
           namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
            if(![elementName compare:@"Goal"]) {
                [xmlElementObjects addObject:tempElement];
            }
            else if(![elementName compare:@"Title"]) {
                NSLog(@"The Title of this event is %@", currentAttribute);
                [tempElement setTitled:currentAttribute];
            }
            else if(![elementName compare:@"Progress"]) {
                NSLog(@"The Progress of this event is %@", currentAttribute);
                [tempElement setProgressed:currentAttribute];
            }
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if(self.currentAttribute) {
                [self.currentAttribute appendString:string];
            }
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [xmlElementObjects count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [mtableview dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
            }
            // Set up the cell...
            cell.textLabel.text = [xmlElementObjects objectAtIndex:indexPath.row];
            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
        }

    Read the article
