Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi Codegurus, I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format:

        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket = 4;
        _streamFormat.mBytesPerFrame = 4;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100;
        _streamFormat.mReserved = 0;

    to this format:

        _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; //| kAudioFormatFlagIsNonInterleaved
        _streamFormatOutput.mBitsPerChannel = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket = 2;
        _streamFormatOutput.mBytesPerFrame = 2;
        _streamFormatOutput.mFramesPerPacket = 1;
        _streamFormatOutput.mSampleRate = 44100;
        _streamFormatOutput.mReserved = 0;

    What I want to do is extract one audio channel (left or right) from an LPCM buffer in the input format, making it mono in the output format. Some of the conversion logic is as follows. This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);
            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);
            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
            char errchar[10];
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);
            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);
            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But the 'insz' error is still there, so it can't convert. I would appreciate any input. Thanks in advance.
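
    One thing worth checking (an assumption, not a confirmed fix): 'insz' is the four-character code for kAudioConverterErr_InvalidInputSize, and the loop above reuses the stereo input byte count as the output size. Since the stereo-to-mono conversion halves the data, a sketch of the call with a properly sized output would look like this; note that the existing numBytesIO != audioBuffer.mDataByteSize check would then also need to expect half the input size:

        // Sketch: mono 16-bit output is half the size of the stereo input.
        UInt32 outBytes = audioBuffer.mDataByteSize / 2;
        convertedBuf = malloc(outBytes);
        status = AudioConverterConvertBuffer(converter,
                                             audioBuffer.mDataByteSize, audioBuffer.mData,
                                             &outBytes, convertedBuf);
        // outBytes now holds the number of bytes actually produced, which is
        // also the value to pass to AudioFileWriteBytes and to compare against.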

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service, speed is obviously of utmost importance, as responsiveness of the web page is important to the user experience.

    The current implementation simply loads all concepts into memory in a sorted set and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations?

    Edits:

    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation of this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to the large number of object references needed. I'll read up on ternary search trees to see if they could be of use.
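
    For reference, here is a minimal sketch (hypothetical class, not from the question) of a flatter alternative: a single sorted String[] keeps memory close to the raw string data, avoids per-node object overhead entirely, and still gives a log(n) prefix lookup via the binary-search insertion point.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class PrefixIndex {
            private final String[] sorted; // all concept names, sorted once at startup

            public PrefixIndex(String[] names) {
                sorted = names.clone();
                Arrays.sort(sorted);
            }

            /** Returns up to 'limit' concepts starting with the given prefix. */
            public List<String> complete(String prefix, int limit) {
                int pos = Arrays.binarySearch(sorted, prefix);
                int start = (pos >= 0) ? pos : -(pos + 1); // insertion point
                List<String> out = new ArrayList<String>();
                for (int i = start; i < sorted.length && out.size() < limit; i++) {
                    if (!sorted[i].startsWith(prefix)) break; // past the prefix range
                    out.add(sorted[i]);
                }
                return out;
            }
        }

    The same idea extends to the concept-type pairs by keeping a parallel array of types indexed identically to the sorted names.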

  • How to obtain a pointer out of a C++ vtable?

    - by Josh Haberman
    Say you have a C++ class like:

        class Foo {
        public:
            virtual ~Foo() {}
            virtual void DoSomething() = 0;
        };

    The C++ compiler translates a call into a vtable lookup:

        Foo* foo;
        // Translated by C++ to:
        //   foo->vtable->DoSomething(foo);
        foo->DoSomething();

    Suppose I was writing a JIT compiler and I wanted to obtain the address of the DoSomething() function for a particular instance of class Foo, so I can generate code that jumps to it directly instead of doing a table lookup and an indirect branch. My questions are:

    1. Is there any standard C++ way to do this? (I'm almost sure the answer is no, but I wanted to ask for the sake of completeness.)
    2. Is there any remotely compiler-independent way of doing this, like a library someone has implemented that provides an API for accessing a vtable?

    I'm open to complete hacks, if they will work. For example, if I created my own derived class and could determine the address of its DoSomething method, I could assume that the vtable is the first (hidden) member of Foo and search through its vtable until I find my pointer value. However, I don't know a way of getting this address: if I write &DerivedFoo::DoSomething I get a pointer-to-member, which is something totally different.

    Maybe I could turn the pointer-to-member into the vtable offset. When I compile the following:

        class Foo {
        public:
            virtual ~Foo() {}
            virtual void DoSomething() = 0;
        };

        void foo(Foo *f, void (Foo::*member)()) {
            (f->*member)();
        }

    on GCC/x86-64, I get this assembly output:

        Disassembly of section .text:

        0000000000000000 <_Z3fooP3FooMS_FvvE>:
           0:   40 f6 c6 01             test   sil,0x1
           4:   48 89 74 24 e8          mov    QWORD PTR [rsp-0x18],rsi
           9:   48 89 54 24 f0          mov    QWORD PTR [rsp-0x10],rdx
           e:   74 10                   je     20 <_Z3fooP3FooMS_FvvE+0x20>
          10:   48 01 d7                add    rdi,rdx
          13:   48 8b 07                mov    rax,QWORD PTR [rdi]
          16:   48 8b 74 30 ff          mov    rsi,QWORD PTR [rax+rsi*1-0x1]
          1b:   ff e6                   jmp    rsi
          1d:   0f 1f 00                nop    DWORD PTR [rax]
          20:   48 01 d7                add    rdi,rdx
          23:   ff e6                   jmp    rsi

    I don't fully understand what's going on here, but if I could reverse-engineer this or use an ABI spec, I could generate a fragment like the above for each separate platform, as a way of obtaining a pointer out of a vtable.
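
    As a point of reference, here is what the hack looks like under the Itanium C++ ABI that GCC uses on x86-64 (an ABI-specific assumption, not standard C++): the first pointer-sized field of the object is the vptr, and the virtual functions occupy successive slots in the table it points to. A sketch:

        #include <cstdio>

        struct Foo {
            virtual ~Foo() {}
            virtual void DoSomething() = 0;
        };

        struct Bar : Foo {
            void DoSomething() { std::puts("Bar::DoSomething"); }
        };

        typedef void (*Fn)(Foo*); // 'this' passed as the first argument (ABI assumption)

        int main() {
            Bar b;
            Foo* f = &b;
            // The vptr is the first hidden member of the object.
            void** vtable = *reinterpret_cast<void***>(f);
            // Slot layout is ABI-specific: with GCC a virtual destructor
            // occupies two slots (complete + deleting), so DoSomething
            // lands in slot 2 for this particular class.
            Fn fn = reinterpret_cast<Fn>(vtable[2]);
            fn(f); // equivalent to f->DoSomething(), without the indirect lookup
        }

    This is undefined behavior as far as the standard is concerned, so in a JIT it belongs behind a per-platform #ifdef, which is essentially what the question anticipates.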

  • Using PHP to read a web page with fsockopen(), but fgets is not working

    - by asdasd
    I'm using the code from here: http://www.digiways.com/articles/php/httpredirects/

        public function ReadHttpFile($strUrl, $iHttpRedirectMaxRecursiveCalls = 5)
        {
            // parsing the url getting web server name/IP, path and port.
            $url = parse_url($strUrl);
            // setting path to '/' if not present in $strUrl
            if (isset($url['path']) === false)
                $url['path'] = '/';
            // setting port to default HTTP server port 80
            if (isset($url['port']) === false)
                $url['port'] = 80;
            // connecting to the server
            // resetting class data
            $this->success = false;
            unset($this->strFile);
            unset($this->aHeaderLines);
            $this->strLocation = $strUrl;
            $fp = fsockopen($url['host'], $url['port'], $errno, $errstr, 30);
            // Return if the socket was not opened; $this->success stays false.
            if (!$fp)
                return;
            $header = 'GET / HTTP/1.1\r\n';
            $header .= 'Host: '.$url['host'].$url['path'];
            if (isset($url['query']))
                $header .= '?'.$url['query'];
            $header .= '\r\n';
            $header .= 'Connection: Close\r\n\r\n';
            // sending the request to the server
            echo "Header is: ".str_replace('\n', '\n', $header)."";
            $length = strlen($header);
            if ($length != fwrite($fp, $header, $length)) {
                echo 'error writing to header, exiting';
                return;
            }
            // $bHeader is set to true while we receive the HTTP header
            // and after the empty line (end of HTTP header) it's set to false.
            $bHeader = true;
            // continuing until there's no more text to read from the socket
            while (!feof($fp)) {
                echo "in loop";
                // reading a line of text from the socket,
                // not more than 8192 symbols.
                $good = $strLine = fgets($fp, 128);
                if (!$good) {
                    echo 'bad';
                    return;
                }
                // removing trailing \n and \r characters.
                $strLine = ereg_replace('[\r\n]', '', $strLine);
                if ($bHeader == false)
                    $this->strFile .= $strLine.'\n';
                else
                    $this->aHeaderLines[] = trim($strLine);
                if (strlen($strLine) == 0)
                    $bHeader = false;
                echo "read: $strLine";
                return;
            }
            echo "after loop";
            fclose($fp);
        }

    This is all I get:

        Header is: GET / HTTP/1.1\r\n Host: www.google.com/\r\n Connection: Close\r\n\r\n in loopbad

    So it fails at fgets($fp, 128).
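
    The request being sent looks malformed, which would explain the server never answering (a sketch of a cleaner version follows, under the assumption that a plain HTTP/1.1 GET is the goal). Three things stand out: in PHP, escape sequences like \r\n are only interpreted inside double-quoted strings, so the single-quoted headers above send the literal characters backslash-r-backslash-n; the request path belongs on the request line, not in the Host header; and the unconditional return at the bottom of the read loop exits the function after the first line.

        <?php
        // Hypothetical rewrite of just the request-building part.
        $path = $url['path'];
        if (isset($url['query'])) {
            $path .= '?' . $url['query'];       // query string goes on the request line
        }
        $header  = "GET $path HTTP/1.1\r\n";    // double quotes: \r\n become real CRLFs
        $header .= "Host: " . $url['host'] . "\r\n"; // Host is the bare host name only
        $header .= "Connection: Close\r\n\r\n";
        fwrite($fp, $header);

    With literal backslashes in the request, the server sees one long invalid request line and keeps waiting for a terminating blank line that never arrives, so fgets() eventually has nothing to read.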

  • JNI - GetObjectField returns NULL

    - by Daniel
    I'm currently working on Mangler's Android implementation. I have a Java class that looks like so:

        public class VentriloEventData {
            public short type;

            public class _pcm {
                public int length;
                public short send_type;
                public int rate;
                public byte channels;
            };

            _pcm pcm;
        }

    The signature for my pcm object:

        $ javap -s -p VentriloEventData
        ...
        org.mangler.VentriloEventData$_pcm pcm;
          Signature: Lorg/mangler/VentriloEventData$_pcm;

    I am implementing a native JNI function called getevent, which will write to the fields in an instance of the VentriloEventData class. For what it's worth, it's defined and called in Java like so:

        public static native int getevent(VentriloEventData data);

        VentriloEventData data = new VentriloEventData();
        getevent(data);

    And my JNI implementation of getevent:

        JNIEXPORT jint JNICALL Java_org_mangler_VentriloInterface_getevent(JNIEnv* env, jobject obj, jobject eventdata) {
            v3_event *ev = v3_get_event(V3_BLOCK);
            if (ev != NULL) {
                jclass event_class = (*env)->GetObjectClass(env, eventdata);

                // Event type.
                jfieldID type_field = (*env)->GetFieldID(env, event_class, "type", "S");
                (*env)->SetShortField(env, eventdata, type_field, 1234);

                // Get PCM class.
                jfieldID pcm_field = (*env)->GetFieldID(env, event_class, "pcm", "Lorg/mangler/VentriloEventData$_pcm;");
                jobject pcm = (*env)->GetObjectField(env, eventdata, pcm_field);
                jclass pcm_class = (*env)->GetObjectClass(env, pcm);

                // Set PCM fields.
                jfieldID pcm_length_field = (*env)->GetFieldID(env, pcm_class, "length", "I");
                (*env)->SetIntField(env, pcm, pcm_length_field, 1337);

                free(ev);
            }
            return 0;
        }

    The code above works fine for writing into the type field (which is not wrapped by the _pcm class). Once getevent is called, data.type is verified to be 1234 at the Java side :)

    My problem is that the assertion "pcm != NULL" fails. Note that pcm_field != NULL, which probably indicates that the signature of that field is correct... so there must be something wrong with my call to GetObjectField. It looks fine, though, if I compare it to the official JNI docs.

    I've been bashing my head on this problem for the past 2 hours and I'm getting a little desperate... hoping a different perspective will help me out on this one.
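
    One likely culprit on the Java side (an assumption from the posted class, not a verified fix): the pcm field is declared but never assigned, so GetObjectField is faithfully returning the null reference that is actually stored there. Initializing the nested class up front gives the native code an object to write into; making it static also means no enclosing-instance reference is needed:

        public class VentriloEventData {
            public short type;

            // static, so it can be created without an enclosing instance;
            // the binary name (and thus the JNI signature) stays
            // Lorg/mangler/VentriloEventData$_pcm;
            public static class _pcm {
                public int length;
                public short send_type;
                public int rate;
                public byte channels;
            }

            _pcm pcm = new _pcm(); // was: _pcm pcm; -- which leaves the field null
        }

    The alternative is to construct the _pcm object from native code with FindClass/GetMethodID/NewObject and store it with SetObjectField, but initializing it in Java is the smaller change.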

  • NSMutableURLRequest and a Password with special Characters won't work

    - by twickl
    I'm writing a small program (using Cocoa Touch) which communicates with a web service. The code for calling the web service is the following:

        - (IBAction)send:(id)sender {
            if ([number.text length] > 0) {
                [[UIApplication sharedApplication] beginIgnoringInteractionEvents];
                [activityIndicator startAnimating];
                NSString *modded;
                modded = [self computeNumber];
                NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:
                    [NSURL URLWithString:@"https://tester:%=&=-Test2009@samurai.sipgate.net/RPC2"]];
                [theRequest setHTTPMethod:@"POST"];
                [theRequest addValue:@"text/xml" forHTTPHeaderField:@"content-type"];
                [theRequest setCachePolicy:NSURLCacheStorageNotAllowed];
                [theRequest setTimeoutInterval:5.0];
                NSString* pStr = [[NSString alloc] initWithFormat:@"<?xml version=\"1.0\" encoding=\"UTF-8\"?><methodCall><methodName>samurai.SessionInitiate</methodName><params><param><value><struct><member><name>LocalUri</name><value><string></string></value></member><member><name>RemoteUri</name><value><string>sip:%@@sipgate.net</string></value></member><member><name>TOS</name><value><string>text</string></value></member><member><name>Content</name><value><string>%@</string></value></member><member><name>Schedule</name><value><string></string></value></member></struct></value></param></params></methodCall>", modded, TextView.text];
                NSData* pBody = [pStr dataUsingEncoding:NSUTF8StringEncoding];
                [theRequest setHTTPBody:pBody];
                NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];
                if (!theConnection) {
                    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                                    message:@"A Connection could not be established!"
                                                                   delegate:nil
                                                          cancelButtonTitle:@"OK"
                                                          otherButtonTitles:nil];
                    [alert show];
                    [alert release];
                    sendButton.enabled = TRUE;
                    return;
                }
                [pStr release];
                [pBody release];
            }
        }

    The username and password have to be in the URL, and it works in most cases, but when the password consists of special characters like in the example "%=&=-Test2009", the web service does not respond. If it is something like "Test2009", it works fine. Does anyone have an idea why, and maybe a solution for that?
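
    Characters like %, = and & are reserved in URLs (% starts a percent-escape, & and = delimit query parameters), so an unescaped password can make NSURL reject or silently misparse the string. Here is a sketch of percent-escaping the credential first; CFURLCreateStringByAddingPercentEscapes is used because stringByAddingPercentEscapesUsingEncoding: leaves URL-legal characters such as = and & untouched:

        // Hypothetical helper: percent-escape the password before building the URL.
        NSString *rawPassword = @"%=&=-Test2009";
        NSString *escaped = (NSString *)CFURLCreateStringByAddingPercentEscapes(
            kCFAllocatorDefault,
            (CFStringRef)rawPassword,
            NULL,                                 // no characters exempt from escaping
            (CFStringRef)@":/?#[]@!$&'()*+,;=%",  // reserved characters to force-escape
            kCFStringEncodingUTF8);
        NSString *urlString = [NSString stringWithFormat:
            @"https://tester:%@@samurai.sipgate.net/RPC2", escaped];
        NSURL *url = [NSURL URLWithString:urlString];
        [escaped release]; // the CF function returns a +1 reference

    The server then decodes the escapes back to the original password when it parses the userinfo part of the URL.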

  • How to define generic super type for static factory method?

    - by Esko
    If this has already been asked, please link and close this one.

    I'm currently prototyping a design for a simplified API over a certain other API that's a lot more complex (and potentially dangerous) to use. Considering the somewhat complex object creation involved, I decided to use static factory methods to simplify the API, and I currently have the following, which works as expected:

        public class Glue<T> {
            private List<Type<T>> types;

            private Glue() {
                types = new ArrayList<Type<T>>();
            }

            private static class Type<T> {
                private T value;
                /* some other properties, omitted for simplicity */

                public Type(T value) {
                    this.value = value;
                }
            }

            public static <T> Glue<T> glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                Type<T> firstType = new Glue.Type<T>(first);
                Type<T> secondType = new Glue.Type<T>(second);
                g.types.add(firstType);
                g.types.add(secondType);
                /* omitted complex stuff */
                return g;
            }
        }

    As said, this works as intended. When the API user (= another developer) types

        Glue<Horse> strongGlue = Glue.glueFactory("2HP", new Horse(), new Horse());

    he gets exactly what he wanted. What I'm missing is: how do I enforce that Horse - or whatever is put into the factory method - always implements both Serializable and Comparable? Simply adding them to the factory method's signature using <T extends Comparable<T> & Serializable> doesn't necessarily enforce this rule in all cases, only when this simplified API is used. That's why I'd like to add them to the class' definition and then modify the factory method accordingly.

    PS: No horses (and definitely no ponies!) were harmed in the writing of this question.
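
    For the record, the bound can be declared on the class itself, and every use of Glue then has to satisfy it (a minimal sketch, with the internals simplified). One wrinkle: a static method cannot see the class' type parameter, so the bound has to be repeated on the factory:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;

        public class Glue<T extends Comparable<T> & Serializable> {
            private List<T> values = new ArrayList<T>(); // simplified for the sketch

            private Glue() {}

            public static <T extends Comparable<T> & Serializable>
                    Glue<T> glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                g.values.add(first);
                g.values.add(second);
                return g;
            }
        }

    With this in place, Glue<Horse> only compiles if Horse implements both Comparable<Horse> and Serializable, no matter how the instance is created.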

  • Design Technique: How to design a complex system for processing orders, products and units.

    - by Shyam
    Hi,

    Programming is fun: I learned that by trying out simple challenges, reading some books and following some tutorials. I am able to grasp the concepts of writing with OO (I do so in Ruby) and to write a bit of code myself. What bugs me, though, is that I feel I'm re-inventing the wheel: I haven't followed a formal education or found a (free) book that explains the whys instead of the hows, and I've learned from the A-Team that it is the plan that makes it come together.

    So, armed with my nuby Ruby skills, I decided I wanted to program a virtual store. I figured out the following. My virtual store will have:

    - Products and Services
    - Inventories
    - Orders and Shipping
    - Customers

    Now this isn't complex at all. With the help of some cool tools (CmapTools), I drew out some concepts, but quickly enough (thanks to my inferior experience in designing), my design started to bite me. My very first product line was virtual "laptops". So, I created a class (Ruby):

        class Product
          attr_accessor :name, :price

          def initialize(name, price)
            @name = name
            @price = price
          end
        end

    which can be instantiated by doing (IRb):

        x = Product.new("Banana Pro", 250)

    Since I want my virtual customers to be able to purchase more than one product, or various types, I figured out I needed some kind of "Order" mechanism:

        class Order
          def initialize(order_no)
            @order_no = order_no
            @line_items = []
          end

          def add_product(myproduct)
            @line_items << myproduct
          end

          def show_order()
            puts @order_no
            @line_items.each do |x|
              puts x.name.to_s + "\t" + x.price.to_s
            end
          end
        end

    which can be instantiated by doing (IRb):

        z = Order.new(1234)
        z.add_product(x)
        z.show_order

    Splendid; I now have a very simple ordering system that allows me to add products to an order. But here comes my real question. What if I have three models of my product (economy, business, showoff)? Or what if my products are composed of separate units (bigger screen, nicer keyboard, different OS)? Surely I could make them three separate products, or add complexity to my Product class, but what I am looking for are best practices for designing a flexible product object that can be used in the real world, to facilitate a complex system.

    My apologies if my grammar and spelling are in error, as English is not my first language; I took the time to check and translate as far as I could! Thank you for your answers, comments and feedback!
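
    One conventional direction (a sketch, not the only answer) is to keep the catalog Product simple and build configurable variants by composition, so models and options become data rather than new subclasses:

        # Sketch: a product plus selectable options, priced by composition.
        class Option
          attr_reader :name, :surcharge

          def initialize(name, surcharge)
            @name = name
            @surcharge = surcharge
          end
        end

        class ConfiguredProduct
          def initialize(base, options = [])
            @base = base          # a plain Product
            @options = options    # zero or more Options
          end

          def price
            @base.price + @options.map(&:surcharge).reduce(0, :+)
          end

          def name
            ([@base.name] + @options.map(&:name)).join(" / ")
          end
        end

        laptop  = Product.new("Banana Pro", 250)
        showoff = ConfiguredProduct.new(laptop, [Option.new("Bigger screen", 80),
                                                 Option.new("Nicer keyboard", 20)])
        # showoff.price => 350; it responds to name/price just like a Product,
        # so Order#add_product accepts it unchanged.

    Because ConfiguredProduct exposes the same name/price interface as Product (duck typing), the Order class never needs to know which of the two it is holding.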

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree balances to be performed.

    I know very few databases use binary trees, but rather use n-order balanced trees. I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity.

    If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL,
            [Data] [char](300) NULL,
            CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL,
            [Data] [char](300) NULL,
            CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)

        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end

        set @t2 = GETDATE()
        WAITFOR delay '0:0:10'
        set @t3 = GETDATE()

        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end

        select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID]

        drop table T1
        drop table T2

    (Each table carries a char(300) filler column to give the rows realistic width.) Note that I am not subtracting any time for the creation of the GUID nor for the considerably larger size of the row. The results on my machine were as follows:

        Int:  17,340 ms
        GUID:  6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this?

    P.S. I get that this isn't a question. It's an invite to discussion, and that is relevant to learning optimum programming.

  • java concurrency: many writers, one reader

    - by Janning
    I need to gather some statistics in my software, and I am trying to make it fast and correct, which is not easy (for me!).

    First, my code so far, with two classes: a StatsService and a StatsHarvester.

        public class StatsService {
            private Map<String, Long> stats = new HashMap<String, Long>(1000);

            public void notify(String key) {
                Long value = 1l;
                synchronized (stats) {
                    if (stats.containsKey(key)) {
                        value = stats.get(key) + 1;
                    }
                    stats.put(key, value);
                }
            }

            public Map<String, Long> getStats() {
                Map<String, Long> copy;
                synchronized (stats) {
                    copy = new HashMap<String, Long>(stats);
                    stats.clear();
                }
                return copy;
            }
        }

    This is my second class, a harvester which collects the stats from time to time and writes them to a database:

        public class StatsHarvester implements Runnable {
            private StatsService statsService;
            private Thread t;

            public void init() {
                t = new Thread(this);
                t.start();
            }

            public synchronized void run() {
                while (true) {
                    try {
                        wait(5 * 60 * 1000); // 5 minutes
                        collectAndSave();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }

            private void collectAndSave() {
                Map<String, Long> stats = statsService.getStats();
                // do something like:
                // saveRecords(stats);
            }
        }

    At runtime it will have about 30 concurrently running threads, each calling notify(key) about 100 times. Only one StatsHarvester is calling statsService.getStats().

    So I have many writers and only one reader. It would be nice to have accurate stats, but I don't care if some records are lost under high concurrency. The reader should run every 5 minutes, or whatever is reasonable. Writing should be as fast as possible. Reading should be fast, but if it locks for about 300 ms every 5 minutes, that's fine.

    I've read many docs (Java Concurrency in Practice, Effective Java and so on), but I have the strong feeling that I need your advice to get it right. I hope I stated my problem clearly and briefly enough to get valuable help.
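
    As one possible direction (a sketch, using only java.util.concurrent types available since Java 5): a ConcurrentHashMap of AtomicLong counters removes the global lock on the write path, and the harvester swaps the whole map out through an AtomicReference. A writer that fetched the old map just before the swap can still add a count to the discarded map, which fits the stated tolerance for losing a few records under high concurrency.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicLong;
        import java.util.concurrent.atomic.AtomicReference;

        public class StatsService {
            private final AtomicReference<ConcurrentHashMap<String, AtomicLong>> stats =
                    new AtomicReference<ConcurrentHashMap<String, AtomicLong>>(
                            new ConcurrentHashMap<String, AtomicLong>());

            public void notify(String key) {
                ConcurrentHashMap<String, AtomicLong> map = stats.get();
                AtomicLong counter = map.get(key);
                if (counter == null) {
                    AtomicLong fresh = new AtomicLong();
                    counter = map.putIfAbsent(key, fresh); // lock-free insert race
                    if (counter == null) counter = fresh;  // our insert won
                }
                counter.incrementAndGet();
            }

            /** Called only by the single harvester; swaps in an empty map. */
            public Map<String, Long> getStats() {
                ConcurrentHashMap<String, AtomicLong> old =
                        stats.getAndSet(new ConcurrentHashMap<String, AtomicLong>());
                Map<String, Long> copy = new HashMap<String, Long>();
                for (Map.Entry<String, AtomicLong> e : old.entrySet()) {
                    copy.put(e.getKey(), e.getValue().get());
                }
                return copy;
            }
        }

    The write path now contends only on the single counter being incremented, and the reader's cost is one pointer swap plus a copy of the retired map.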

  • Is there a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the .NET graphics world?

    - by Rao
    For the past 5 months or so, I've spent time learning C# using Andrew Troelsen's book and getting familiar with stuff in the .NET 4 stack... bits of ADO.NET, EF4 and a pinch of WCF to taste. I'm really interested in graphics development (not for games though), which is why I chose the .NET route when I decided to choose between Java and .NET to learn... since I heard about WPF and saw some sexy screenshots and all. I'm even almost done with the 4 WPF chapters in Troelsen's book.

    Now, all of a sudden, I saw a post on a forum about how "WPF was dead" in the face of something called Silverlight. I searched more and saw all the confusion going on at present... even stuff like "Silverlight is dead too!" with respect to HTML5. From what I gather, we are in a delicate period of time that will eventually decide which technology will stabilize, right?

    Even so, as someone new moving into UI and graphics development via .NET, I wish I could get some guidance from more experienced people. Maybe I'm reading too much? Maybe I have missed some pieces of information? Maybe a path exists that minimizes tears of blood? In any case, here is a sample vomiting of my thoughts, on which I'd appreciate some clarification or assurance or spanking:

    - My present interest lies in desktop development. But on graduating from college, I wish to market myself as a .NET developer. The industry seems to be drooling for web stuff. Can Silverlight do both equally well? (I see on searches that SL works "out of browser".)

    - I have two fair-sized hobby projects planned that will have hawt UIs with lots of drag and drop, sliding animations, etc. These are intended to be desktop apps that will use reflection, database stuff using EF4, networking over LAN, and reading/writing of files... does this affect which graphics technology can be used?

    - At some laaaater point, if I become interested in doing a bit of 3D stuff in .NET, will that affect which technologies can be used?

    - Or what if I look up to the heavens, stick out my middle finger, and do something crazy like go learn HTML5, even though my knowledge of it can be encapsulated in 2 sentences?

    Sorry I seem so confused; I just want to know if there's a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the graphics world.

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.

    The aim of this solution is to take data input from an external source, churn it, and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second.

    The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached, however this particular one doesn't fulfil all our requirements, which are listed below:

    1. The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    2. The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    3. The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1 GB max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients, when a node goes down.
    4. Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    5. There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    6. Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    7. The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    8. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

  • C++ program Telephone Directory from a file

    - by Stacy Doyle
    I am writing a program for a phone directory. The user inputs a name and the program searches the file and either outputs the number or an error because the person's name is not in the file. The program should also ask the user if they would like to continue using the program and look up another number.

    So far it runs and asks for the name, then prints the error message I put in place saying that the name is not in the database. I am guessing that my program is not really searching through the file, but I am not sure what to do, and I also don't know how to make the program run again if the user chooses to continue.

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <iomanip>

        using namespace std;

        char chr;

        int main()
        {
            string first;
            string last;
            string number;
            string firstfile;
            string lastfile;
            string numberfile;
            int cont;
            ifstream infile;

            infile.open("name and numbers.dat"); // opening the file
            infile >> firstfile >> lastfile >> numberfile;

            cout << "Enter a first and last name." << endl; // asking user for the input
            cin >> first >> last; // input the data

            {
                if (first == firstfile && last == lastfile) // if the entered information matches the information in the file
                    cout << first << " " << last << "'s number is " << numberfile << endl; // this is printed
                else
                    cout << "Sorry that is not in our database." << endl; // if the information doesn't match this is printed
            }

            // user is asked if they would like to continue
            cout << "Would you like to search for another name? Y or N" << endl;
            cin >> cont;

            infile.close(); // close file
            cin >> chr;
            return 0;
        }
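
    Two loops are missing here: the file is read exactly once, so only the first record is ever compared, and nothing repeats the lookup. Also, `cont` is declared as an int, so reading "Y" into it fails. A sketch of the corrected structure (assuming each record in the file is whitespace-separated first name, last name, number):

        #include <fstream>
        #include <iostream>
        #include <string>
        using namespace std;

        int main()
        {
            string first, last, firstfile, lastfile, numberfile;
            char again = 'Y';

            do {
                ifstream infile("name and numbers.dat");
                cout << "Enter a first and last name." << endl;
                cin >> first >> last;

                bool found = false;
                // Keep reading records until one matches or the stream runs out.
                while (infile >> firstfile >> lastfile >> numberfile) {
                    if (first == firstfile && last == lastfile) {
                        cout << first << " " << last << "'s number is "
                             << numberfile << endl;
                        found = true;
                        break;
                    }
                }
                if (!found)
                    cout << "Sorry that is not in our database." << endl;

                cout << "Would you like to search for another name? Y or N" << endl;
                cin >> again; // a char, so 'Y'/'N' are read correctly
            } while (again == 'Y' || again == 'y');

            return 0;
        }

    Reopening the file at the top of the do-while also resets the read position, so each new search starts from the first record again.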

  • JavaCC: How can one exclude a string from a token? (A.k.a. understanding token ambiguity.)

    - by java.is.for.desktop
    Hello, everyone!

    I have already had many problems understanding how ambiguous tokens can be handled elegantly (or somehow at all) in JavaCC. Let's take this example: I want to parse an XML processing instruction. The format is "<?" <target> <data> "?>": target is an XML name, and data can be anything except "?>", because that's the closing tag.

    So, let's define this in JavaCC (I use lexical states, in this case DEFAULT and PROC_INST):

        TOKEN : <#NAME : (very-long-definition-from-xml-1.1-goes-here) >
        TOKEN : <WSS : (" " | "\t")+ > // WSS = whitespaces

        <DEFAULT>   TOKEN : {<PI_START : "<?" > : PROC_INST}
        <PROC_INST> TOKEN : {<PI_TARGET : <NAME> >}
        <PROC_INST> TOKEN : {<PI_DATA : ~[] >} // accept everything
        <PROC_INST> TOKEN : {<PI_END : "?>" > : DEFAULT}

    Now the part which recognizes processing instructions:

        void PROC_INSTR() : {} {
            (
                <PI_START>
                (t=<PI_TARGET>){System.out.println("target: " + t.image);}
                <WSS>
                (t=<PI_DATA>){System.out.println("data: " + t.image);}
                <PI_END>
            ) {}
        }

    Let's test it with <?mytarget here-goes-some-data?>. The target is recognized: "target: mytarget". But then I get my favorite JavaCC parsing error:

        !! procinstparser.ParseException: Encountered "" at line 1, column 15.
        !! Was expecting one of:
        !!

    Encountered nothing? Was expecting nothing? Or what? Thank you, JavaCC!

    I know that I could use the MORE keyword of JavaCC, but this would give me the whole processing instruction as one token, so I'd have to parse/tokenize it further by myself. Why should I do that? Am I writing a parser that does not parse?

    The problem is (I guess): since <PI_DATA> recognizes "everything", my definition is wrong. I should tell JavaCC to recognize "everything except ?>" as processing instruction data. But how can it be done?

    NOTE: I can only exclude single characters using ~["a"|"b"|"c"]; I can't exclude strings such as ~["abc"] or ~["?>"]. Another great anti-feature of JavaCC.

    Thank you.
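
    Two observations, offered as a sketch rather than a definitive fix. First, <PI_DATA> as defined matches exactly one character, while the production expects exactly one <PI_DATA> token, so the parse stops right after the first data character (column 15 is the second character of "here-goes-some-data"); with the single-character token, the production would need (t=<PI_DATA>)+. Second, "everything except ?>" can be expressed directly in the token by allowing any character except "?", or a "?" that is not followed by ">":

        <PROC_INST> TOKEN :
        {
          // Longest-match tokenization stops this token right before "?>".
          // Corner case: data whose final character is a bare "?" (as in
          // "a??>") still trips this up and would need extra handling.
          <PI_DATA : ( ~["?"] | "?" ~[">"] )+ >
        }

    With the original single-character definition, the longest-match rule already guarantees <PI_END> beats <PI_DATA> at the terminator, since "?>" is two characters long; the multi-character token above just delivers the data in one piece.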

  • VBA Excel - Workbook_SheetChange

    - by user2947014
    Hopefully this question hasn't already been asked; I tried searching for an answer and couldn't find anything. This is probably a simple question, but I am writing my first macro in Excel and am having a problem that I can't find a solution to.

    I wrote a couple of macros that basically sum up columns dynamically (so that the number of rows can change and the formula moves down automatically) based on a value in another column of the same row, and I call those macros from the Workbook_SheetChange event.

    The problem I'm having is that I change a cell's value from my macro to display the result of the sum, and this then calls Workbook_SheetChange again, which I do not want. Right now it works, but I can trace it and see that Workbook_SheetChange is being called multiple times. This is preventing me from adding other cell changes to the macros, because then it results in an infinite loop. I want the macros to run every time a change is made to the sheet, but I don't see any way around allowing the macros to change a cell's value, so I don't know what to do. I will paste my code below, in case it is helpful.

        Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
            Dim Row As Long
            Dim Col As Long
            Row = Target.Row
            Col = Target.Column
            If Col <> 7 Then
                Range("G" & Row).Select
                Selection.Formula = "=IF(F" & Row & "=""Win"",E" & Row & ",IF(F" & Row & "=""Loss"",-D" & Row & ",0))"
                Target.Select
            End If
            Call SumRiskColumn
        End Sub

        Private Sub Workbook_SheetCalculate(ByVal Sh As Object)
            Call SumOutcomeColumn
        End Sub

        Sub SumOutcomeColumn()
            Dim N As Long
            N = Cells(Rows.Count, "A").End(xlUp).Row
            Cells(N + 1, "G").Formula = "=SUM(G2:G" & N & ")"
        End Sub

        Sub SumRiskColumn()
            Dim N As Long
            N = Cells(Rows.Count, "A").End(xlUp).Row
            Dim CurrTotalRisk As Long
            CurrTotalRisk = 0
            For i = 2 To N
                If IsEmpty(ActiveSheet.Cells(i, 6)) And Not IsEmpty(ActiveSheet.Cells(i, 1)) And Not IsEmpty(ActiveSheet.Cells(i, 2)) And Not IsEmpty(ActiveSheet.Cells(i, 3)) Then
                    CurrTotalRisk = CurrTotalRisk + ActiveSheet.Cells(i, 4).Value
                End If
            Next i
            Cells(N + 1, "D").Value = CurrTotalRisk
        End Sub

    Thank you for any help you can give me! I really appreciate it.
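
    The usual guard for this (a sketch) is to switch event handling off while the macro writes, so the macro's own cell changes don't re-enter Workbook_SheetChange, and to restore it even if something errors out:

        Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
            On Error GoTo CleanUp
            Application.EnableEvents = False   ' writes below won't re-fire this handler

            If Target.Column <> 7 Then
                Sh.Range("G" & Target.Row).Formula = _
                    "=IF(F" & Target.Row & "=""Win"",E" & Target.Row & _
                    ",IF(F" & Target.Row & "=""Loss"",-D" & Target.Row & ",0))"
            End If
            Call SumRiskColumn

        CleanUp:
            Application.EnableEvents = True    ' always restore, or events stay dead
        End Sub

    Writing the formula through Sh.Range(...) directly also avoids the Select/Selection round trip, which is both slower and a common source of surprises when another sheet is active.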

  • How to measure a canvas that has auto height and width

    - by Wymmeroo
    Hi folks,

    I'm a beginner in Silverlight, so I hope I can get an answer that sheds some more light on the measure process in Silverlight for me.

    I found an interesting flap-out control from the Silverlight slide control, and now I'm trying to use it in my project. For the slide-out to work properly, I have to place the user control on a canvas. The user control then uses the height of its content for itself. I just want to change that behavior so that the height is set to the available space from the parent canvas. You can see the uxBorder where the height is set. How can I measure the actual height and set it on the border?

    I tried it with Height="{Binding ElementName=notificationCanvas, Path=ActualHeight}", but this dependency property has no change callback, so the ActualHeight is never set. What I want to achieve is a user control like the tweetboard, as shown for example on Jesse Liberty's blog.

    Sorry for my English writing; I hope you understand my question.

        <Canvas x:Name="notificationCanvas" Background="Red">
            <SlideEffectEx:SimpleSlideControl GripWidth="20" GripTitle="Task" GripHeight="100">
                <Border x:Name="uxBorder"
                        BorderThickness="2"
                        CornerRadius="5"
                        BorderBrush="DarkGray"
                        Background="DarkGray"
                        Padding="5"
                        Width="300"
                        Height="700">
                    <StackPanel>
                        <TextBlock Text="Tasks"></TextBlock>
                        <Button x:Name="btn1" Margin="5" Content="{Binding ElementName=MainBorder, Path=Height}"></Button>
                        <Button x:Name="btn2" Margin="5" Content="Second Button"></Button>
                        <Button x:Name="btn3" Margin="5" Content="Third Button"></Button>
                        <Button x:Name="btn1_Copy" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy1" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy2" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy3" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy4" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy5" Margin="5" Content="First Button"/>
                        <Button x:Name="btn1_Copy6" Margin="5" Content="First Button"/>
                    </StackPanel>
                </Border>
            </SlideEffectEx:SimpleSlideControl>
        </Canvas>
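
    Since ActualHeight raises no change notification in Silverlight, a binding to it never updates. One workaround (a sketch in code-behind, assuming the element names from the XAML above) is to handle the canvas's SizeChanged event, which fires after a layout pass has produced a valid ActualHeight, and push the value into the border:

        // Code-behind sketch: keep uxBorder as tall as the canvas.
        public MainPage()
        {
            InitializeComponent();
            notificationCanvas.SizeChanged += (s, e) =>
            {
                // ActualHeight is valid here, after layout has run.
                uxBorder.Height = notificationCanvas.ActualHeight;
            };
        }

    A Canvas gives its children no size constraint during measure, so this is effectively the manual version of the stretching a Grid would do for free.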

  • Idiomatic use of auto_ptr to transfer ownership to a container

    - by heycam
    I'm refreshing my C++ knowledge after not having used it in anger for a number of years. In writing some code to implement some data structure for practice, I wanted to make sure that my code was exception safe. So I've tried to use std::auto_ptr in what I think is an appropriate way. Simplifying somewhat, this is what I have:

        class Tree {
        public:
            ~Tree() { /* delete all Node*s in the tree */ }

            void insert(const string& to_insert);
            ...

        private:
            struct Node {
                ...
                vector<Node*> m_children;
            };

            Node* m_root;
        };

        template<typename T>
        void push_back(vector<T*>& v, auto_ptr<T> x) {
            v.push_back(x.get());
            x.release();
        }

        void Tree::insert(const string& to_insert) {
            Node* n = ...; // find where to insert the new node
            ...
            push_back(n->m_children, auto_ptr<Node>(new Node(to_insert)));
            ...
        }

    So I'm wrapping the function that would put the pointer into the container, vector::push_back, and relying on the by-value auto_ptr argument to ensure that the Node* is deleted if the vector resize fails. Is this an idiomatic use of auto_ptr to save a bit of boilerplate in my Tree::insert? Any improvements you can suggest? Otherwise I'd have to have something like:

        Node* n = ...; // find where to insert the new node
        auto_ptr<Node> new_node(new Node(to_insert));
        n->m_children.push_back(new_node.get());
        new_node.release();

    which kind of clutters up what would have been a single line of code if I wasn't worrying about exception safety and a memory leak.

    (Actually I was wondering if I could post my whole code sample (about 300 lines) and ask people to critique it for idiomatic C++ usage in general, but I'm not sure whether that kind of question is appropriate on Stack Overflow.)

  • CoGetClassObject gives many First-chance exceptions in ATL project. Should I worry?

    - by Andrew
    Hello,

    I have written a COM object that in turn uses a third-party ActiveX control. In my FinalConstruct() for my COM object, I instantiate the ActiveX control with the following code:

        HRESULT hRes;
        LPCLASSFACTORY2 pClassFactory;

        hRes = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
        bool bTest = SUCCEEDED(hRes);
        if (!bTest)
            return E_FAIL;

        if (SUCCEEDED(CoGetClassObject(__uuidof(SerialPortSniffer),
                                       CLSCTX_INPROC_SERVER, NULL,
                                       IID_IClassFactory2,
                                       (LPVOID *)(&pClassFactory))))
        {
            ... more set up code

    When I step over the line `if (SUCCEEDED(CoGetClassObject(__uuidof(SerialPortSniffer), ...`, I get 20+ lines in the Output window stating:

        First-chance exception at 0x0523f82e in SillyComDriver.exe: 0xC0000005: Access violation writing location 0x00000000.

    I also get the lines:

        First-chance exception at 0x051e3f3d in SillyComDriver.exe: 0xC0000096: Privileged instruction.
        First-chance exception at 0x100ab9e6 in SillyComDriver.exe: 0xC000001D: Illegal Instruction.

    Notice these are first-chance exceptions. The program runs as expected and I can access the third-party methods/properties. Still, I'm left wondering why they are occurring. Perhaps my way of instantiating the ActiveX control (for which I want use of its methods/properties, not its GUI stuff) is incorrect?

    Besides the code I'm showing, I also put the line

        #import "spsax.dll" no_namespace

    in the stdafx.h. That's all the code necessary for my simple demo project. I noticed this problem because I had (inadvertently) set the "break on exceptions" option in my "real" project and it was breaking on this line. Once I removed it, it also works.

    If you've read this far, thank you, and perhaps I can ask one other minor question. In my demo project, if I right-click on SerialPortSniffer and choose "go to definition", it takes me to the file C:....\AppData\Local\Temp\spsax.tlh. Can someone explain that? Finally, in my "real" project, right-clicking on SerialPortSniffer and going to definition leads to "The symbol 'SerialPortSniffer' is not defined". It doesn't seem to affect the program, though. Is there some setting I've messed up?

    By the way, all my code is written with VS2008.

    Thanks,
    Dave

  • How to generate and encode (for use in GA), random, strict, binary rooted trees with N leaves?

    - by Peter Simon
    First, I am an engineer, not a computer scientist, so I apologize in advance for any misuse of nomenclature and general ignorance of CS background. Here is the motivational background for my question:

    I am contemplating writing a genetic algorithm optimizer to aid in designing a power divider network (also called a beam forming network, or BFN for short). The BFN is intended to distribute power to each of N radiating elements in an array of antennas. The fraction of the total input power to be delivered to each radiating element has been specified.

    Topologically speaking, a BFN is a strictly binary, rooted tree. Each of the (N-1) interior nodes of the tree represents the input port of an unequal, binary power splitter. The N leaves of the tree are the power divider outputs. Given a particular power divider topology, one is still free to map the power divider outputs to the array inputs in an arbitrary order. There are N! such permutations of the outputs. There are several considerations in choosing the desired ordering:

    1. The power ratio for each binary coupler should be within a specified range of values.
    2. The ordering should be chosen to simplify the mechanical routing of the transmission lines connecting the power divider.

    The number of outputs N of the BFN may range from, say, 6 to 22. I have already written a genetic algorithm optimizer that, given a particular BFN topology and desired array input power distribution, will search through the N! permutations of the BFN outputs to generate a design with compliant power ratios and good mechanical routing.

    I would now like to generalize my program to automatically generate and search through the space of possible BFN topologies. As I understand it, for N outputs (leaves of the binary tree), there are $C_{N-1}$ different topologies that can be constructed, where $C_N$ is the Catalan number. I would like to know how to encode an arbitrary tree having N leaves in a way that is consistent with a chromosomal description for use in a genetic algorithm. Also associated with this is the need to generate random instances for filling the initial population, and to implement crossover and mutation operators for this type of chromosome.

    Any suggestions will be welcome. Please minimize the amount of CS lingo in your reply, since I am not likely to be acquainted with it.

    Thanks in advance,
    Peter
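
    As one concrete starting point (a sketch with made-up function names, and not the only possible encoding): a strictly binary tree with N leaves can be written down as the list of "how many leaves go to the left child" decisions, taken top-down. That list is a fixed-length chromosome of N-1 integers, is easy to generate randomly, and mutation can simply perturb one split count (with a repair step to keep each value inside its legal range).

        import random

        def random_splits(n):
            """Chromosome for a strictly binary tree with n leaves:
            a preorder list of left-subtree leaf counts, one per internal node."""
            if n == 1:
                return []
            k = random.randint(1, n - 1)  # leaves routed to the left child
            return [k] + random_splits(k) + random_splits(n - k)

        def build(n, splits):
            """Decode the chromosome back into a nested-tuple tree."""
            it = iter(splits)
            def go(n):
                if n == 1:
                    return "leaf"
                k = next(it)
                return (go(k), go(n - k))
            return go(n)

        # Example: one random 6-output divider topology
        chrom = random_splits(6)   # e.g. [2, 1, 3, 1, 1]
        tree = build(6, chrom)

    One caveat: picking each split uniformly at random does not sample the $C_{N-1}$ topologies uniformly, which may or may not matter for seeding a GA population; crossover also needs the same repair step as mutation, since a split count valid in one parent can be out of range at the same position in the other.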

  • Issues with LINQ (to Entity) [adding records]

    - by Mario
    I am using LINQ to Entities in a project where I pull a bunch of data (from the database), organize it into a bunch of objects, and save those to the database. I have not had problems writing to the DB before using LINQ to Entities, but I have run into a snag with this particular case. Here's the error I get (this is the InnerException; the exception itself is useless!):

        New transaction is not allowed because there are other threads running in the session.

    I have seen that before, when I was trying to save my changes inside a loop. In this case, the loop finishes, and it tries to make that call, only to give me the exception. Here's the current code:

        try
        {
            // finalResult is a list of the keys to match on for the records being pulled
            foreach (int i in finalResult)
            {
                var queryEff = (from eff in dbMRI.MemberEligibility
                                where eff.Member_Key == i && eff.EffDate >= DateTime.Now
                                select eff.EffDate).Min();

                if (queryEff != null)
                {
                    // Add a record to the Process table
                    Process prRecord = new Process();
                    prRecord.GroupData = qa;
                    prRecord.Member_Key = i;
                    prRecord.ProcessDate = DateTime.Now;
                    prRecord.RecordType = "F";
                    prRecord.UsernameMarkedBy = "Autocard";
                    prRecord.GroupsId = qa.GroupsID;
                    prRecord.Quantity = 2;
                    prRecord.EffectiveDate = queryEff;
                    dbMRI.AddObject("Process", prRecord);
                }
            }

            dbMRI.SaveChanges(); // <-- Crashes here

            foreach (int i in finalResult)
            {
                var queryProc = from pro in dbMRI.Process
                                where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                                select pro;

                foreach (var qp in queryProc)
                {
                    Audit aud = new Audit();
                    aud.Member_Key = i;
                    aud.ProcessId = qp.ProcessId;
                    aud.MarkDate = DateTime.Now;
                    aud.MarkedByUsername = "Autocard";
                    aud.GroupData = qa;
                    dbMRI.AddObject("Audit", aud);
                }
            }

            dbMRI.SaveChanges(); // <-- AND here (if the first one is commented out)
        }
        catch (Exception e)
        {
            // Do something here
        }

    Basically, I need it to insert a record, get the identity for that inserted record, and insert a record into another table with the identity from the first record. Given some other constraints, it is not possible to create an FK relationship between the two (I've tried, but some other parts of the app won't allow it, AND my DBA team for whatever reason hates FKs, but that's for a different topic :)). Any ideas what might be causing this? Thanks!
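
    One common cause of that exact message (an assumption here, since the type of finalResult isn't shown): if finalResult is itself a deferred LINQ query against dbMRI, the foreach keeps its DataReader open on the connection while SaveChanges tries to start a transaction on the same connection. Materializing the keys first releases the reader:

        // Hypothetical: run the key query to completion before the loop starts.
        List<int> keys = finalResult.ToList(); // closes the underlying DataReader

        foreach (int i in keys)
        {
            // ... build and AddObject the Process records as before ...
        }
        dbMRI.SaveChanges(); // no reader is open on the connection now

    The same reasoning applies to iterating queryProc in the second loop, so a .ToList() there (or adding MultipleActiveResultSets=True to the connection string) is the usual remedy.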

  • Binary file reading problem

    - by ScReYm0
    OK, I have a problem with my code for reading a binary file... First I will show you my writing code:

        void book_saving(char *file_name, struct BOOK *current)
        {
            FILE *out;
            BOOK buf;

            out = fopen(file_name, "wb");
            if (out != NULL) {
                printf_s("Writing to file...");
                do {
                    if (current != NULL) {
                        strcpy(buf.catalog_number, current->catalog_number);
                        strcpy(buf.author, current->author);
                        buf.price = current->price;
                        strcpy(buf.publisher, current->publisher);
                        strcpy(buf.title, current->title);
                        buf.price = current->year_published;
                        fwrite(&buf, sizeof(BOOK), 1, out);
                    }
                    current = current->next;
                } while (current != NULL);
                printf_s("Done!\n");
                fclose(out);
            }
        }

    and here is my "version" for reading it back:

        int book_open(struct BOOK *current, char *file_name)
        {
            FILE *in;
            BOOK buf;
            BOOK *vnext;
            int count;
            int i;

            in = fopen("west", "rb");
            printf_s("Reading database from %s...", file_name);
            if (!in) {
                printf_s("\nERROR!");
                return 1;
            }

            i = fread(&buf, sizeof(BOOK), 1, in);
            while (!feof(in)) {
                if (current != NULL) {
                    current = malloc(sizeof(BOOK));
                    current->next = NULL;
                }
                strcpy(current->catalog_number, buf.catalog_number);
                strcpy(current->title, buf.title);
                strcpy(current->publisher, buf.publisher);
                current->price = buf.price;
                current->year_published = buf.year_published;
                fread(&buf, 1, sizeof(BOOK), in);
                while (current->next != NULL)
                    current = current->next;
                fclose(in);
            }
            printf_s("Done!");
            return 0;
        }

    I just need to save my linked list in a binary file and be able to read it back... please help me. The program just doesn't read it, or it crashes, a different situation every time.
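
    For comparison, here is a sketch of a read function that actually rebuilds the list. The posted version mallocs over the caller's pointer without ever linking the nodes together, and calls fclose() inside the loop, so the next fread() hits a closed stream. (Note also that book_saving stores year_published into buf.price, so the saved records are partly wrong to begin with.) The sketch assumes BOOK has a next pointer and returns the head of the new list:

        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch: read BOOK records and rebuild the linked list.
           Returns the head of the list, or NULL on error or empty file. */
        BOOK *book_open(const char *file_name)
        {
            FILE *in = fopen(file_name, "rb");
            BOOK buf, *head = NULL, *tail = NULL;

            if (!in) {
                printf("ERROR opening %s!\n", file_name);
                return NULL;
            }

            /* fread returns the number of complete records read: 1 until EOF. */
            while (fread(&buf, sizeof(BOOK), 1, in) == 1) {
                BOOK *node = malloc(sizeof(BOOK));
                if (node == NULL)
                    break;
                *node = buf;            /* copy every field in one shot */
                node->next = NULL;
                if (tail != NULL)
                    tail->next = node;  /* append at the tail, preserving order */
                else
                    head = node;
                tail = node;
            }

            fclose(in);                 /* close once, after the loop */
            return head;
        }

    Testing fread's return value instead of feof() also avoids processing the last record twice, which is the classic while(!feof()) pitfall.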

  • How to use Crtl in a Delphi unit in a C++Builder project? (or link to C++Builder C runtime library)

    - by Craig Peterson
    I have a Delphi unit that is statically linking a C .obj file using the {$L xxx} directive. The C file is compiled with C++Builder's command-line compiler. To satisfy the C file's runtime library dependencies (_assert, memmove, etc.), I'm including the crtl unit Allen Bauer mentioned here.

        unit FooWrapper;

        interface

        implementation

        uses Crtl; // Part of the Delphi RTL

        {$L FooLib.obj} // Compiled with "bcc32 -q -c foolib.c"

        procedure Foo; cdecl; external;

        end.

    If I compile that unit in a Delphi project (.dproj), everything works correctly. If I compile that unit in a C++Builder project (.cbproj), it fails with the error:

        [ILINK32 Error] Fatal: Unable to open file 'CRTL.OBJ'

    And indeed, there isn't a crtl.obj file in the RAD Studio install folder. There is a .dcu, but no .pas. Trying to add crtdbg to the uses clause (the C header where _assert is defined) gives an error that it can't find crtdbg.dcu. If I remove the uses clause, it instead fails with errors that __assert and _memmove aren't found.

    So, in a Delphi unit in a C++Builder project, how can I export functions from the C runtime library so they're available for linking? I'm already aware of Rudy Velthuis's article. I'd like to avoid manually writing Delphi wrappers if possible, since I don't need them in Delphi, and C++Builder must already include the necessary functions.

    Edit: For anyone who wants to play along at home, the code is available in Abbrevia's Subversion repository at https://tpabbrevia.svn.sourceforge.net/svnroot/tpabbrevia/trunk. I've taken David Heffernan's advice and added an "AbCrtl.pas" unit that mimics crtl.dcu when compiled in C++Builder. That got the PPMd support working, but the Lzma and WavPack libraries both fail with link errors:

        [ILINK32 Error] Error: Unresolved external '_beginthreadex' referenced from ABLZMA.OBJ
        [ILINK32 Error] Error: Unresolved external 'sprintf' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external 'strncmp' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external '_ftol' referenced from ABWAVPACK.OBJ

    AFAICT, all of them are declared correctly, and the _beginthreadex one is actually declared in AbLzma.pas, so it's used by the pure Delphi compile as well. To see it yourself, just download the trunk (or just the "source" and "packages" directories), disable the {$IFDEF BCB} block at the bottom of AbDefine.inc, and try to compile the C++Builder "Abbrevia.cbproj" project.

  • protobuf-net NOT faster than binary serialization?

    - by Ashish Gupta
    I wrote a program to serialize a 'Person' class using XmlSerializer, BinaryFormatter and protobuf-net. I thought protobuf-net should be faster than the other two. Protobuf serialization was faster than XML serialization but much slower than binary serialization. Is my understanding incorrect? Please help me understand this. Thank you for the help.

    Following is the output:

        Person got created using protocol buffer in 347 milliseconds
        Person got created using XML in 1462 milliseconds
        Person got created using binary in 2 milliseconds

    Code below:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using ProtoBuf;
        using System.IO;
        using System.Diagnostics;
        using System.Runtime.Serialization.Formatters.Binary;

        namespace ProtocolBuffers
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string XMLSerializedFileName = "PersonXMLSerialized.xml";
                    string ProtocolBufferFileName = "PersonProtocalBuffer.bin";
                    string BinarySerializedFileName = "PersonBinary.bin";

                    var person = new Person
                    {
                        Id = 12345,
                        Name = "Fred",
                        Address = new Address
                        {
                            Line1 = "Flat 1",
                            Line2 = "The Meadows"
                        }
                    };

                    Stopwatch watch = Stopwatch.StartNew();
                    watch.Start();
                    using (var file = File.Create(ProtocolBufferFileName))
                    {
                        Serializer.Serialize(file, person);
                    }
                    watch.Stop();
                    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
                    Console.WriteLine("Person got created using protocol buffer in " + watch.ElapsedMilliseconds.ToString() + " milliseconds");

                    watch.Reset();
                    watch.Start();
                    System.Xml.Serialization.XmlSerializer x = new System.Xml.Serialization.XmlSerializer(person.GetType());
                    using (TextWriter w = new StreamWriter(XMLSerializedFileName))
                    {
                        x.Serialize(w, person);
                    }
                    watch.Stop();
                    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
                    Console.WriteLine("Person got created using XML in " + watch.ElapsedMilliseconds.ToString() + " milliseconds");

                    watch.Reset();
                    watch.Start();
                    using (Stream stream = File.Open(BinarySerializedFileName, FileMode.Create))
                    {
                        BinaryFormatter bformatter = new BinaryFormatter();
                        //Console.WriteLine("Writing Employee Information");
                        bformatter.Serialize(stream, person);
                    }
                    watch.Stop();
                    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
                    Console.WriteLine("Person got created using binary in " + watch.ElapsedMilliseconds.ToString() + " milliseconds");

                    Console.ReadLine();
                }
            }

            [ProtoContract]
            [Serializable]
            public class Person
            {
                [ProtoMember(1)]
                public int Id { get; set; }
                [ProtoMember(2)]
                public string Name { get; set; }
                [ProtoMember(3)]
                public Address Address { get; set; }
            }

            [ProtoContract]
            [Serializable]
            public class Address
            {
                [ProtoMember(1)]
                public string Line1 { get; set; }
                [ProtoMember(2)]
                public string Line2 { get; set; }
            }
        }
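
    Before blaming the library, the methodology is worth a second look (a hedged observation, not protobuf-net documentation): the first serialization of a type pays one-off costs such as reflection over the contract and JIT compilation, so a single-object timing mostly measures warm-up, and the serializer that happens to run first (protobuf-net here) also absorbs assembly-loading time. A sketch of a fairer comparison:

        // Sketch: warm each serializer up once, then time many iterations.
        const int N = 100000;

        Serializer.Serialize(Stream.Null, person); // protobuf-net warm-up pass

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, person);
            }
        }
        sw.Stop();
        Console.WriteLine("protobuf-net: {0} ms for {1} iterations",
                          sw.ElapsedMilliseconds, N);
        // Repeat the identical warm-up + loop for XmlSerializer and BinaryFormatter.

    Writing to a MemoryStream instead of File.Create also takes disk and file-system noise out of the measurement, which matters when the interesting numbers are this small.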

  • Why does Module::Build's testcover give me "use of uninitialized value" warnings?

    - by Kurt W. Leucht
    I'm kinda new to Module::Build, so maybe I did something wrong. Am I the only one who gets warnings when I change my dispatch from "test" to "testcover"? Is there a bug in Devel::Cover? Is there a bug in Module::Build? I probably just did something wrong.

    I'm using ActiveState Perl v5.10.0 with Module::Build version 0.31012 and Devel::Cover 0.64, and Eclipse 3.4.1 with EPIC 0.6.34 for my IDE.

    UPDATE: I upgraded to Module::Build 0.34 and the warnings are still output.

    UPDATE: Looks like a bug in B::Deparse. Hope it gets fixed someday.

    Here's my unit test build file:

        use strict;
        use warnings;
        use Module::Build;

        my $build = Module::Build->resume (
            properties => {
                config_dir => '_build',
            },
        );

        $build->dispatch('test');

    When I run this unit test build file, I get the following output:

        t\MyLib1.......ok
        t\MyLib2.......ok
        t\MyLib3.......ok
        All tests successful.
        Files=3, Tests=24,  0 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 CPU)

    But when I change the dispatch line to 'testcover', I get the following output, which always includes a bunch of "use of uninitialized value in bitwise and" warning messages:

        Deleting database D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db
        t\MyLib1.......ok
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        t\MyLib2.......ok
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        t\MyLib3.......ok
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        All tests successful.
        Files=3, Tests=24,  0 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 CPU)
        Reading database from D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db

        ---------------------------- ------ ------ ------ ------ ------ ------ ------
        File                           stmt   bran   cond    sub    pod   time  total
        ---------------------------- ------ ------ ------ ------ ------ ------ ------
        .../lib/ActivePerl/Config.pm    0.0    0.0    0.0    0.0    0.0    n/a    0.0
        ...l/lib/ActiveState/Path.pm    0.0    0.0    0.0    0.0  100.0    n/a    4.8
        <SNIP>
        blib/lib/<SNIP>/MyLib2.pm     100.0   90.0    n/a  100.0  100.0    0.0   98.5
        blib/lib/<SNIP>/MyLib3.pm     100.0   90.9  100.0  100.0  100.0    0.6   98.0
        Total                          14.4    6.7    3.8   18.3   20.0  100.0   11.6
        ---------------------------- ------ ------ ------ ------ ------ ------ ------

        Writing HTML output to D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db/coverage.html ...
        done.

  • Flatten date range memberships retaining only the highest priority membership (TRANSACT-SQL)

    - by shadowranger
    Problem statement: A table contains an item_id, a category_id and a date range (begin_date and end_date). No item may be in more than one category on any given date (in general; during daily rebuilding it can be in an invalid state temporarily). By default, all items are added (and re-added if removed) to a category (derived from outside data) automatically on a daily basis, and their membership in that category matches the lifespan of the item (items have their own begin and end dates, and usually spend their entire lives in the same category, which is why this matches). For items in category X, it is occasionally desirable to override the default category by adding them to category Y. Membership in category Y could entirely replace membership in category X (that is, the begin and end dates for membership in category Y would match the begin and end dates of the item itself), or it could override it for an arbitrary period of time (at the beginning, middle or end of the item's lifespan, possibly overriding for short periods at multiple times). Membership in category Y is not renewed automatically; additions to that category are done by manual data entry. Every day, when category X is rebuilt, we get an overlap, where any item in category Y will now be in category X as well (which is forbidden, as noted previously).

    Goal: After each repopulation of category X (which is done in a rather complicated and fragile manner, and ideally would be left alone), I'm trying to find an efficient means of writing a stored procedure that:

    1. Identifies the overlaps.
    2. Changes existing entries, adds new ones where necessary (such as in the case where an item starts in category X, switches to category Y, then eventually switches back to category X before ending), or removes entries (when an item is in category Y for its entire life), such that every item remains in category Y (and only Y) where specified, while category X membership is maintained when not overridden by category Y.
    3. Does not affect memberships of categories A, B, C, Z, etc., which do not have override categories and are built separately, by completely different rules.

    Note: It can be assumed that X membership covers the entire lifespan of the item before this procedure is called, so it is unnecessary to query any data outside this table.

    Bonus credit: If for some reason there are two adjacent or overlapping memberships for the same item in category Y, stitching them together into a single entry is appreciated, but not necessary.

    Example:

        item_id  category_id  begin_date  end_date
        1        X            20080101    20090628
        1        Y            20090101    20090131
        1        Y            20090601    20090628
        2        X            20080201    20080731
        2        Y            20080201    20080731

    Should become:

        item_id  category_id  begin_date  end_date
        1        X            20080101    20081231
        1        Y            20090101    20090131
        1        X            20090201    20090531
        1        Y            20090601    20090628
        2        Y            20080201    20080731
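
    One workable shape for this on SQL Server 2005 (a sketch over hypothetical table and column names, using a calendar explode plus the row-number islands trick; not the cheapest possible approach, but simple, and it also merges adjacent Y ranges for the bonus credit):

        -- Assumes a Numbers table of consecutive integers starting at 0, and a
        -- source table Memberships(item_id, category_id, begin_date, end_date).
        WITH days AS (           -- 1. explode X/Y memberships into single days
            SELECT m.item_id, m.category_id,
                   DATEADD(day, n.n, m.begin_date) AS d
            FROM Memberships m
            JOIN Numbers n
              ON n.n <= DATEDIFF(day, m.begin_date, m.end_date)
            WHERE m.category_id IN ('X', 'Y')  -- other categories stay untouched
        ),
        winners AS (             -- 2. per item and day, Y takes priority over X
            SELECT item_id, d,
                   MAX(category_id) AS category_id  -- 'Y' > 'X' alphabetically
            FROM days
            GROUP BY item_id, d
        ),
        isl AS (                 -- 3. islands: runs of consecutive same-category days
            SELECT item_id, category_id, d,
                   DATEDIFF(day, 0, d)
                     - ROW_NUMBER() OVER (PARTITION BY item_id, category_id
                                          ORDER BY d) AS grp
            FROM winners
        )
        SELECT item_id, category_id,
               MIN(d) AS begin_date, MAX(d) AS end_date
        FROM isl
        GROUP BY item_id, category_id, grp
        ORDER BY item_id, begin_date;

    The output is the flattened membership set, which the stored procedure can swap in for the item's existing X/Y rows inside one transaction; since only X and Y rows are ever selected, memberships in the other categories are never affected.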
