Search Results

Search found 5429 results on 218 pages for 'smart pointers'.

Page 188/218 | < Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing.

    Now I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command.

    Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup. The problem is, since it's a private repository, and git doesn't allow checking out of a specific folder (one that I could push to github as a separate project, but have the changes appear in both the master repo and the sub-repos), I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so they're useless for backup).

    Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then do git push backupdrive1, git push mymemorystick, etc.

    So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • Where does a novice begin with error logging in ASP.NET C#?

    - by korben
    I'm a novice teaching myself ASP.NET in C# via trial and error, learning by doing; unfortunately this means lots of errors! I have a custom errors page now that is basically a 404, so that site visitors don't get that ugly application error message .NET throws, but I WOULD like to be able to see what's going wrong myself as people use the site.

    So I'm looking to build, or learn from, a fairly basic error-logging C# class that will take the same information given in a browser when hitting a .NET error, write it to a TXT file, and email me the error at the same time.

    I don't know where to even begin; can someone give me some pointers? An open-source class that does this already, that I could plug in and play with, would work as well. Otherwise, some links or guidance on where to start reading would be great too. I sort of have a mental block on understanding MSDN info-dump pages, though; I'm hoping to find some articles by real people talking about implementing the same thing themselves, or something like that.

    Please note I'm not looking to use some extensive or complicated third-party service for this; I'm hoping to learn from the process of implementing a concise, customized one.

    Read the article

  • Web API Getting HTTP 500 Error: Issue Solved (See Below)

    - by Joe Grasso
    Here is my MVC controller, and everything is fine:

        private UnitOfWork UOW;

        public InventoryController()
        {
            UOW = new UnitOfWork();
        }

        // GET: /Inventory/
        public ActionResult Index()
        {
            var products = UOW.ProductRepository.GetAll().ToList();
            return View(products);
        }

    The same method call in an API controller gives me an HTTP 500 error:

        private UnitOfWork _unitOfWork;

        public TestController()
        {
            _unitOfWork = new UnitOfWork();
        }

        public IEnumerable<Product> Get()
        {
            var products = _unitOfWork.ProductRepository.GetAll().ToList();
            return products;
        }

    Debugging shows that there is indeed data being returned by both controllers' UOW calls. I then added a custom configuration in Global:

        public static void CustomizeConfig(HttpConfiguration config)
        {
            config.Formatters.Remove(config.Formatters.XmlFormatter);
            var json = config.Formatters.JsonFormatter;
            json.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }

    I am still receiving an HTTP 500 in the API controller ONLY, and am at a loss as to why. Any ideas?

    UPDATE: It appears lazy loading caused the problem. When I set the associated properties to non-virtual, the test API produced the necessary JSON string. However, whereas before I had the Vendor class included, I now only have VendorId. I really wanted to include the associated classes. Any ideas? I know there are a lot of smart people out there. Anyone?

    Read the article

  • Looking for Opinions and Suggestions on my Website (General Question)

    - by MrEnder
    I am looking for general opinions and suggestions about the site as a whole. It's OK, I don't mind hearing where I went wrong on anything; I'm still learning. All tips and pointers on any related subject are highly welcome. If you are going to make a suggestion and post a snippet of code, please explain how it works in detail.

    The site was designed as an interface to display the labs I have to do at my college (the labs are all basic things that I'm way beyond), so I decided to take it all to the next level with this. The link is http://opentech.durhamcollege.ca/~intn2201/brittains/labs/

    It is 100% designed by me, with the exception of the icons (I could not think of anything better to draw). There is no specific area of the site I want suggestions or opinions on; this is a general question. You may answer about the site as a whole or about an area of the coding, just please specify. If you have any questions related to my site or code, you may ask them as well.

    Thank you for your time and any comments. Shelby

    Read the article

  • Digitally sign MS Office (Word, Excel, etc.) and PDF files on the server

    - by Sébastien Nussbaumer
    I need to digitally sign MS Office and PDF files that are stored on a server. I really mean a digital signature that is integrated in the document, according to each specific file format. This is the process I had in mind:

    1. Create a hash of the file's content.
    2. Send the hash to a custom-written Java applet in the browser.
    3. The user encrypts the hash with his/her private key (on a USB token via PKCS#11, for example), thus effectively signing the file.
    4. The applet then sends the signature to the server.
    5. On the server, I would then incorporate the signature into the file (MS Office and PDF files can do that without changing the file's content, probably by just setting some metadata field).

    What is cool is that you never have to download and upload the complete file to the server again. What is even cooler, the customer doesn't need Office or a PDF writer to sign the files.

    Parts 2, 3 and 4 are OK for me; my company bought all the Java technology I need for that for a previous project I worked on.

    Problem: I can't seem to find any documentation/examples for doing parts 1 and 5 for Office files. Are my Google skills failing me this time? Do you have any pointers to documentation or examples for doing that for MS Office files? The underlying technology isn't that important to me: I can use Java, .Net, COM; any working technology is OK!

    Note: I'm 95% sure I can nail points 1 and 5 for PDF files using iText. Thanks.

    Edit: If I can't do that with hashes and must download the complete file to the client, that's also possible. But then I still need the documentation to be able to sign Office files... in Java this time (from an applet).

    Read the article

  • XmlSerializer 'forgetting' my namespace

    - by Michel
    Hi, I have to create an XML file with all the elements prefixed, like this:

        <ps:Request num="123" xmlns:ps="www.ladieda.com">
          <ps:ClientId>5566</ps:ClientId>
        </ps:Request>

    When I serialize my object, C# is smart and does this:

        <Request num="123" xmlns="www.ladieda.com">
          <ClientId>5566</ClientId>
        </Request>

    That is good, because the ps: is not necessary. But is there a way to force C# to serialize all the prefixes? My serialize code is this (for an incoming object pObject):

        String XmlizedString = null;
        MemoryStream memoryStream = new MemoryStream();
        XmlSerializer xs = new XmlSerializer(pObject.GetType());
        XmlTextWriter xmlTextWriter = new XmlTextWriter(memoryStream, Encoding.UTF8);
        xs.Serialize(xmlTextWriter, pObject);
        memoryStream = (MemoryStream)xmlTextWriter.BaseStream;
        XmlizedString = UTF8ByteArrayToString(memoryStream.ToArray());
        return XmlizedString;

        private String UTF8ByteArrayToString(Byte[] characters)
        {
            UTF8Encoding encoding = new UTF8Encoding();
            String constructedString = encoding.GetString(characters);
            return (constructedString);
        }

    Read the article

  • how to merge ecommerce transaction data between two databases

    - by yamspog
    We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down while we update the database schema and the data access code. We are using a custom-built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes in the data entity. For example, the data entity for an address would be:

        [tableName="address"]
        public class address : dataEntity
        {
            [column="address1"]
            public string address1;

            [column="city"]
            public string city;
        }

    So, if we add a new column to the database, we must update the schema of the database and also update the data entity. As you can expect, the business people are not too happy about this outage, as it puts a crimp in their cash flow. The operations people are not happy, as they have to deal with a high-pressure time when the database and applications are upgraded. The programmers are upset, as they are constantly getting in trouble over the legacy system they inherited. Do any of you smart people out there have some suggestions?

    Read the article

  • Segmenting and masking all shades of red from an image using OpenCV

    - by vrinda
    I am trying to segment all shades of red from an image using hue/saturation values, and to use the cvInRangeS function to create a mask which should have all red areas whitened and all others blacked out (a new 1-channel image), then inpaint them to somewhat obscure the segmented portions. My code is as given below. However, I am unable to get a correct output image; it doesn't segment the desired color range. Any pointers on my approach and why it isn't working?

        // Includes added for completeness; the original post omitted them.
        #include <cv.h>
        #include <highgui.h>

        using namespace std;

        int main()
        {
            IplImage *img1 = cvLoadImage("/home/techrascal/projects/test1/image2.jpeg");
            //IplImage *img3;
            IplImage *imghsv;
            IplImage *img4;
            CvSize sz = cvGetSize(img1);
            imghsv = cvCreateImage(sz, IPL_DEPTH_8U, 3);
            img4 = cvCreateImage(sz, IPL_DEPTH_8U, 1);
            int width = img1->width;
            int height = img1->height;
            int bpp = img1->nChannels;
            cvNamedWindow("original", 1);
            cvNamedWindow("hsv", 1);
            cvNamedWindow("Blurred", 1);
            int r, g, b;

            // create inpaint mask: img4 will behave as the mask
            cvCvtColor(img1, imghsv, CV_BGR2HSV);
            CvScalar hsv_min = cvScalar(0, 0, 0, 0);
            CvScalar hsv_max = cvScalar(255, 0, 0, 0);
            //cvShowImage("hsv", imghsv);
            cvInRangeS(imghsv, hsv_min, hsv_max, img4);
            cvInpaint(img1, img4, img1, 3, CV_INPAINT_NS);
            cvShowImage("Blurred", img1);

            cvReleaseImage(&img1);
            cvReleaseImage(&imghsv);
            cvReleaseImage(&img4);
            //cvReleaseImage(&img3);
            char d = cvWaitKey(10000);
            cvDestroyAllWindows();
            return 0;
        }
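    A likely reason the mask comes out empty (a guess based on the code, not from the post): the range passed to cvInRangeS runs from (0,0,0) to (255,0,0), so only pixels with saturation and value of exactly 0 can match, and in OpenCV's 8-bit HSV representation the hue channel runs 0..180, with red wrapping around both ends of that range. A minimal sketch of one way to build the red mask (the threshold values are illustrative and usually need tuning):

        // Red occupies both ends of the 0..180 hue range, so take two slices
        // and OR the resulting masks together.
        IplImage *mask1 = cvCreateImage(sz, IPL_DEPTH_8U, 1);
        IplImage *mask2 = cvCreateImage(sz, IPL_DEPTH_8U, 1);
        cvInRangeS(imghsv, cvScalar(0, 70, 50, 0), cvScalar(10, 255, 255, 0), mask1);
        cvInRangeS(imghsv, cvScalar(170, 70, 50, 0), cvScalar(180, 255, 255, 0), mask2);
        cvOr(mask1, mask2, img4, NULL);   // img4 now whitens all red areas
        cvReleaseImage(&mask1);
        cvReleaseImage(&mask2);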

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read(void* data_out, __int64 start_position, __int64 size_bytes) const
        {
            if (!m_open) {
                // return error
            } else {
                seekPosition(start_position);
                DWORD bytes_read;
                BOOL result = ReadFile(m_file, data_out, DWORD(size_bytes), &bytes_read, NULL);
                if (size_bytes != bytes_read || result != TRUE) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition(__int64 position) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG(position);
            SetFilePointerEx(m_file, target, NULL, FILE_BEGIN);
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
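    One avenue worth measuring (a hedged sketch, not the poster's code: the flags are real Win32, but whether they help depends on the access pattern): bypass the system cache for large, scattered 4K reads by opening the file unbuffered, which requires the offset, the byte count, and the buffer address to all be multiples of the volume sector size.

        #include <windows.h>

        // Open for unbuffered reads; with FILE_FLAG_NO_BUFFERING the OS cache
        // is skipped, so each 4K request becomes one device read.
        HANDLE OpenUnbuffered(const wchar_t* path)
        {
            return CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
        }

        // VirtualAlloc returns page-aligned (4096-byte) memory, which satisfies
        // the common 512- and 4096-byte sector sizes.
        void* AllocSectorAligned(SIZE_T bytes)
        {
            return VirtualAlloc(NULL, bytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        }

    For profiling, timing each ReadFile call with QueryPerformanceCounter and bucketing the timings by seek distance is a simple way to see whether the cost is in the seeks or in the transfers themselves.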

    Read the article

  • How to deal with seniors' bad coding style/practices?

    - by KaluSingh Gabbar
    I am new to work, but the company I work for hires a lot of non-comp-science people who are smart enough to get the (complex) work done but lack the style and practices that would help other people read their code. For example, they adopt C++ but still write C-like three-page functions, which drives new folks nuts when they try to read them. Also, we feel it is very risky to change this code, as it's never easy to be sure we are not breaking something. Now, I am involved in the project with these guys, and I can't change the entire code base myself or redesign it so that the code looks good. What can I do in this situation?

    PS: We actually have three-page functions, and because we have no concept of design, all we can do is guess what they might have thought, as there is no way to know why it is designed the way it is. I am not complaining; I am asking for suggestions, and am already reading some books to solve the issues: The Pragmatic Programmer; the design chapters from B. Stroustrup; Programming: Principles and Practice by B. Stroustrup.

    Read the article

  • Polymorphic urls with singular resources

    - by Brendon Muir
    I'm getting strange output when using the following routing setup:

        resources :warranty_types do
          resources :decisions
        end

        resource :warranty_review, :only => [] do
          resources :decisions
        end

    I have many warranty_types but only one warranty_review (thus the singular route declaration). The decisions are polymorphically associated with both. I have just a single decisions controller and a single _form.html.haml partial to render the form for a decision. This is the view code:

        = simple_form_for @decision, :url => [@decision_tree_owner, @decision.becomes(Decision)] do |form|

    The warranty_type url looks like this (for a new decision):

        /warranty_types/2/decisions

    whereas the warranty_review url looks like this:

        /admin/warranty_review/decisions.1

    I think that because the warranty_review id has nowhere to go, it's just getting appended to the end as an extension. Can someone explain what's going on here and how I might be able to fix it? I can work around it by detecting a warranty_review class and substituting @decision_tree_owner with :warranty_review, which generates the correct url, but this is messy. I would have thought the routing would be smart enough to realise that warranty_review is a singular resource and thus discard the id from the URL. This is Rails 3, by the way. :)

    Read the article

  • Need an Overview of Possibilities for multicolumn programming

    - by Sam
    Hi folks, from source1 and source2 I gather that IE9 will NOT support multi-column CSS3! Since it is still the most popular browser (another thing I cannot understand), I am left with no other choice than to use programming power to make multi-columns work.

    Now, I use three divs that float to the left, which are manually filled with text. Please don't laugh, I know it's stupid! But I would prefer not to have to worry about the columns at all: just one piece of uninterrupted text which all goes into only one div, and then a program smart enough to split it up into X equally wide columns.

    Question: before I start reinventing the wheel, what methods of programming power have you known that tackle this elegantly? Please suggest your best working multi-column layout sources so I can evaluate which option is the best (I will update the table below).

    Exploring all possibilities, 2011 and onward, to enable a multi-column text user experience:

        Language    Author         Source code / usage                                 Works on all major browsers?
        ===========================================================================================================
        html        manual labour  put text manually in separate left-floating divs   "Y"
                                   (upside: control! downside: a few changes
                                   necessitate reflowing 3 divs manually)
        CSS3        w3c            css3.info/preview/multi-column-layout/             "N"
                                   ({ -moz-column-count: 3; -webkit-column-count: 3; }
                                   and that's all!)
        javascript  a list apart   will add url soon                                   ?
        php         ?              ?                                                   ?

    Read the article

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to use a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, I then compiled a release build with NDEBUG set, and the code crashed. Not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside boost insert and rotate functions related to the index updates, and they are happening because a node has NULL left and right pointers. My code looks a bit like this:

        struct Implementation {
            typedef std::pair<uint32_t, uint32_t> update_pair_type;

            struct watch {};
            struct update {};

            typedef boost::multi_index_container<
                update_pair_type,
                boost::multi_index::indexed_by<
                    boost::multi_index::ordered_unique<
                        boost::multi_index::tag<watch>,
                        boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::first> >,
                    boost::multi_index::ordered_non_unique<
                        boost::multi_index::tag<update>,
                        boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::second> >
                >
            > update_map_type;

            typedef std::vector<update_pair_type> update_list_type;

            update_map_type update_map;
            update_map_type::iterator update_hint;

            void register_update(uint32_t watch, uint32_t update);
            void do_updates(uint32_t start, uint32_t end);
        };

        void Implementation::register_update(uint32_t watch, uint32_t update)
        {
            update_pair_type new_pair(watch_offset, update_offset);
            update_hint = update_map.insert(update_hint, new_pair);
            if (update_hint->second != update_offset) {
                bool replaced _unused_ = update_map.replace(update_hint, new_pair);
                assert(replaced);
            }
        }
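    Two things stand out as plausible culprits (a guess from the snippet, not a confirmed diagnosis): boost::multi_index_container has no internal locking, so concurrent register_update calls in the multi-threaded tests can corrupt the tree nodes (debug builds often change the timing enough to hide this), and the cached update_hint member starts out uninitialized, so the first hinted insert uses a singular iterator. Note also that the declared parameter names (watch, update) don't match the names used in the body (watch_offset, update_offset); the sketch below assumes the latter were intended.

        #include <boost/thread/mutex.hpp>
        #include <cassert>

        // Shown as a global for brevity; in practice make it a member of
        // Implementation so there is one guard per container.
        boost::mutex update_map_guard;

        void Implementation::register_update(uint32_t watch_offset, uint32_t update_offset)
        {
            boost::mutex::scoped_lock lock(update_map_guard);
            update_pair_type new_pair(watch_offset, update_offset);
            update_hint = update_map.insert(update_hint, new_pair);
            if (update_hint->second != update_offset) {
                bool replaced = update_map.replace(update_hint, new_pair);
                assert(replaced);
            }
        }

    If the single-threaded crashes persist, the uninitialized hint is the first thing to rule out: set update_hint to update_map.end() (or update_map.begin()) in the constructor before the first hinted insert.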

    Read the article

  • How to partition bits in a bit array with less than linear time

    - by SiLent SoNG
    This is an interview question I faced recently. Given an array of 1's and 0's, find a way to partition the bits in place so that the 0's are grouped together and the 1's are grouped together. It does not matter whether the 1's are ahead of the 0's or vice versa. An example input is 101010101, and the output is either 111110000 or 000011111. Solve the problem in less than linear time.

    To make the problem simpler: the input is an integer array, with each element either 1 or 0, and the output is the same integer array with the integers partitioned.

    To me, this is an easy question if it can be solved in O(N). My approach is to use two pointers, starting from both ends of the array, incrementing and decrementing each pointer; if a pointer does not point to the correct integer, swap the two elements.

        int* start = array;
        int* end = array + length - 1;
        while (start < end) {
            // Assume 0 always at the end
            if (*end == 0) {
                --end;
                continue;
            }
            // Assume 1 always at the beginning
            if (*start == 1) {
                ++start;
                continue;
            }
            swap(*start, *end);
        }

    However, the interviewer insists there is a sub-linear solution. This has me thinking hard but still not getting an answer. Can anyone help with this interview question?
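    For what it's worth, a counting rewrite does the same job with one pass and two fills. It is still O(N), and an adversary argument suggests that is unavoidable for an arbitrary bit array: any element left unread could be the one that breaks the partition. A sketch using only the standard library:

        #include <algorithm>

        void partition_bits(int* array, int length)
        {
            // Count the 1's, then overwrite: all 1's first, all 0's after.
            int ones = static_cast<int>(std::count(array, array + length, 1));
            std::fill(array, array + ones, 1);
            std::fill(array + ones, array + length, 0);
        }

    Sub-linear answers typically assume extra structure, such as the bits being packed into machine words so that population counts can process 32 or 64 bits per operation.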

    Read the article

  • How do I create a simple Windows form to access a SQL Server database?

    - by NoCatharsis
    I believe this is a very novice question, and if I'm using the wrong forum to ask, please advise. I have a basic understanding of databasing with MS SQL Server, and programming with C++ and C#. I'm trying to teach myself more by setting up my own database with MS SQL Server Express 2008 R2 and accessing it via Windows forms created in C# Express 2010. At this point, I just want to keep it to free or Express dev tools (not necessarily Microsoft though). Anyway, I created a database using the instructions provided here and I set the data types appropriately for each column (no errors in setup at least). Now I'm designing the GUI in C# Express but I've kind of hit a wall as far as the database connection. Is there a simple way to access the database I created locally using C# Express? Can anyone suggest a guide that has all this spelled out already? I am a self-learner so I look forward to teaching myself how to use these applications, but any pointers to start me off in the right direction would be greatly appreciated.

    Read the article

  • Checkbox alignment in Internet Explorer, Firefox and Chrome

    - by Andrej
    Checkbox alignment with its label (i.e., vertical centering) across different web browsers makes me crazy. Pasted below is standard HTML code:

        <label for="ch"><input id="ch" type="checkbox">My Checkbox</label>

    I tested different CSS tricks (e.g., link 1, link 2); most solutions work fine in FF, but are completely off in Chrome or IE8. I'm looking for any references or pointers to solve this issue. Thanks in advance.

    EDIT: Following Elq's suggestion, I modified the HTML:

        <div class="row">
          <input type="checkbox" id="ch1" />
          <label for="ch1">Test</label>
        </div>

    and the CSS:

        .row {
          display: table-row;
        }
        label {
          display: table-cell;
          vertical-align: middle;
        }

    This now works in Firefox, Internet Explorer 8, and Chrome on Windows, but fails in Firefox and Chrome on Linux. It also works in Firefox and Safari on Mac, but fails in Chrome.

    Read the article

  • [C++][OpenMP] Proper use of "atomic directive" to lock STL container

    - by conradlee
    I have a large number of sets of integers, which I have, in turn, put into a vector of pointers. I need to be able to update these sets of integers in parallel without causing a race condition. More specifically, I am using OpenMP's "parallel for" construct.

    For dealing with shared resources, OpenMP offers a handy "atomic directive," which allows one to avoid a race condition on a specific piece of memory without using locks. It would be convenient if I could use the atomic directive to prevent simultaneous updating of my integer sets; however, I'm not sure whether this is possible. Basically, I want to know whether the following code could lead to a race condition:

        vector< set<int>* > membershipDirectory(numSets, new set<int>);

        #pragma omp for schedule(guided, expandChunksize)
        for (int i = 0; i < 100; i++) {
            set<int>* sp = membershipDirectory[5];
            #pragma omp atomic
            sp->insert(45);
        }

    (Apologies for any syntax errors in the code; I hope you get the point.) I have seen a similar example of this for incrementing an integer, but I'm not sure whether it works when working with a pointer to a container, as in my case.
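    Two observations, offered as a sketch rather than a definitive answer: the atomic directive only applies to simple scalar updates (x++, x += expr and the like), so it cannot protect a member-function call such as insert; and the fill constructor evaluates new set<int> only once, so every element of membershipDirectory points at the same set. A minimal corrected version using a named critical section:

        #include <omp.h>
        #include <set>
        #include <vector>
        using namespace std;

        int main()
        {
            const int numSets = 10;
            vector< set<int>* > membershipDirectory(numSets);
            for (int i = 0; i < numSets; i++)
                membershipDirectory[i] = new set<int>(); // one distinct set per slot

            #pragma omp parallel for
            for (int i = 0; i < 100; i++) {
                set<int>* sp = membershipDirectory[5];
                #pragma omp critical(membership)   // serializes the container mutation
                sp->insert(45);
            }

            for (size_t i = 0; i < membershipDirectory.size(); i++)
                delete membershipDirectory[i];
            return 0;
        }

    If different iterations touched different sets, a lock per set (omp_lock_t, or one named critical section per contention region) would restore most of the parallelism.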

    Read the article

  • design an extendable database model

    - by wishi_
    Hi! Currently I'm doing a project whose specifications are unclear; well, whose aren't? I wonder what the best development strategy is to design a DB that's going to be extended sooner or later with additional tables and relations. I want to include "changeability".

    My main concern is that I want to apply design patterns (it's a university project) and to separate the constant factors from those that change by choosing appropriate design patterns; in my case, MVC and a set of sub-patterns at the model level. When it comes to the DB, however, I may have to redesign the model in my MVC approach, because my domain model at a later stage may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between the DB and the application.

    Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think that, at the DB level, my design patterns end. So there's a breach whose impact I'd like to minimize with a smart strategy.

    Read the article

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would have code such as shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child.

    My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime, because 'this' should not be used in a constructor anyway unless you know what you're doing and don't mind the limitations.

    Google's C++ Style Guide states that constructors should merely set member variables to their initial values, and that any complex initialization should go in an explicit Init() method. This solves the this-in-constructor problem as well as a few others. What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute.

    Are there any idioms out there that solve this problem at any step along the way?
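    One common idiom (a sketch of one answer, shown with std::shared_ptr; the boost version is analogous): make the constructor private and expose a static factory that finishes the two-phase construction, so callers cannot forget Init(), and shared_from_this() is already safe by the time it runs.

        #include <memory>

        class Parent : public std::enable_shared_from_this<Parent> {
        public:
            static std::shared_ptr<Parent> Create()
            {
                std::shared_ptr<Parent> p(new Parent());
                p->Init();  // *p is owned by a shared_ptr here, so
                            // shared_from_this() works inside Init()
                return p;
            }

        private:
            Parent() {}  // trivial, per the style guide

            void Init()
            {
                // create children here and hand them shared_from_this()
            }
        };

    Because Create() is the only way to obtain a Parent, the "did anyone call Init()?" assertion becomes unnecessary.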

    Read the article

  • How to implement a network protocol?

    - by gotch4
    Here is a generic question. I'm not in search of the best answer; I'd just like you to share your favourite practices. I want to implement a network protocol in Java (but this is a rather general question; I faced the same issues in C++), and this is not the first time, as I have done it before. But I think I am missing a good way to implement it. In fact, it's usually all about exchanging text messages and some byte buffers between hosts, storing the status, and waiting until the next message comes.

    The problem is that I usually end up with a bunch of switches and more or less complex if statements that react to different statuses/messages. The whole thing usually gets complicated and hard to maintain. Not to mention that sometimes what comes out has some "blind spots", by which I mean statuses of the protocol that have not been covered and that behave in an unpredictable way.

    I tried to write some state machine classes that take care of checking start and end statuses for each action in more or less smart ways. This makes programming the protocol very complicated, as I have to write lines and lines of code to cover every possible situation.

    What I'd like is something like a good pattern, or a best practice, that is used in programming complex protocols: easy to maintain, easy to extend, and very readable.
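    One pattern that tends to keep the blind spots visible is a table-driven state machine: every (state, event) pair maps to a handler returning the next state, and anything missing from the table fails loudly instead of falling through a switch. A minimal C++ sketch (the states, events, and handlers are invented for illustration):

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>

        enum class State { Idle, AwaitBody, Done };
        enum class Event { Hello, Data, Bye };

        int main()
        {
            using Handler = std::function<State(const std::string&)>;
            std::map<std::pair<State, Event>, Handler> table;

            table[{State::Idle, Event::Hello}] =
                [](const std::string&) { std::cout << "handshake\n"; return State::AwaitBody; };
            table[{State::AwaitBody, Event::Data}] =
                [](const std::string& msg) { std::cout << "payload: " << msg << "\n"; return State::AwaitBody; };
            table[{State::AwaitBody, Event::Bye}] =
                [](const std::string&) { std::cout << "closing\n"; return State::Done; };

            State s = State::Idle;
            std::pair<Event, std::string> incoming[] = {
                {Event::Hello, ""}, {Event::Data, "abc"}, {Event::Bye, ""}};

            for (const auto& in : incoming) {
                auto it = table.find({s, in.first});
                if (it == table.end()) { std::cout << "protocol error\n"; break; } // uncovered pair: fail loudly
                s = it->second(in.second);
            }
            return 0;
        }

    The payoff is that the protocol's legal transitions live in one enumerable table, which can be reviewed (or even generated) independently of the I/O code.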

    Read the article

  • How to re-prompt after a trap return in bash?

    - by verbose
    I have a script that is supposed to trap SIGTERM and SIGTSTP. This is what I have in the main block:

        trap 'killHandling' TERM

    And in the function:

        killHandling () {
            echo received kill signal, ignoring
            return
        }

    ... and similar for SIGINT. The problem is one of user interface. The script prompts the user for some input, and if the SIGTERM or SIGINT occurs when the script is waiting for input, it's confusing. Here is the output in that case:

        Enter something:
        # SIGTERM received
        received kill signal, ignoring
        # shell waits at blank line for user input, user gets confused
        # user hits "return", which then gets read as blank input from the user
        # bad things happen because of the blank input

    I have definitely seen scripts which handle this more elegantly, like so:

        Enter something:
        # SIGTERM received
        received kill signal, ignoring
        Enter something:
        # re-prompts user for user input, user is not confused

    What is the mechanism used to accomplish the latter? Unfortunately I can't simply change my trap code to do the re-prompt, as the script prompts the user for several things and what the prompt says is context-dependent. And there has to be a better way than writing context-dependent trap functions. I'd be very grateful for any pointers. Thanks!

    Read the article

  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked.

        <span id="myspan">Do you like my hat?</span>
        <script type="text/javascript">
          var spanElement = document.getElementById("myspan");
          alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML);
        </script>

    I explained that a property is attached to the window object with the name of the element's id when the browser parses the document, and that it contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") were called behind the scenes as the page is being rendered.

    The ensuing discussion we had raised a few questions:

    1. The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related?
    2. The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice, because it makes too many assumptions about the execution environment?
    3. Are there any browsers in which the above would fail, or is this considered standard behavior across the board?

    Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.

    Read the article

  • Building "isolated" and "automatically updated" caches (java.util.List) in Java.

    - by Aidos
    Hi guys, I am trying to write a framework which contains a lot of short-lived caches created from a long-lived cache. These short-lived caches need to be able to return their entire contents, which is a clone of the original long-lived cache. Effectively, what I am trying to build is a level of transaction isolation for the short-lived caches. The user should be able to modify the contents of the short-lived cache, but changes to the long-lived cache should not be propagated through (there is also a case where the changes should be pushed through, depending on the cache type). I will do my best to try and explain:

        master-cache contains: [A,B,C,D,E,F]
        temporary-cache created with state [A,B,C,D,E,F]

        1) temporary-cache adds item G: [A,B,C,D,E,F,G]
        2) temporary-cache removes item B: [A,C,D,E,F,G]
        master-cache contains: [A,B,C,D,E,F]

        3) master-cache adds items [X,Y,Z]: [A,B,C,D,E,F,X,Y,Z]
        temporary-cache contains: [A,C,D,E,F,G]

    Things get even harder when the values of the items can change and shouldn't always be updated (so I can't even share the underlying object instances; I need to use clones).

    I have implemented the simple approach of just creating a new instance of the List using the standard Collection constructor on ArrayList. However, when you get out to about 200,000 items, the system just runs out of memory. I know 200,000 is an excessive number of items to iterate, but I am trying to stress my code a bit.

    I had thought that I might be able to somehow "proxy" the list, so the temporary-cache uses the master-cache and stores all of its changes (effectively a Memento for each change). However, that quickly becomes a nightmare when you want to iterate the temporary-cache, or retrieve an item at a specific index. Also, given that I want some modifications to the contents of the list to come through (depending on whether the temporary-cache is of the "auto-update" type or not), I quickly get out of my depth.

    Any pointers to techniques or data-structures or just general concepts to try and research will be greatly appreciated. Cheers, Aidos

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that, for the non-file transports, each stream can have multiple readers. I.e., once a server socket is set up, multiple computers can connect and stream the data.

    I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, so some kind of shared memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes.

    I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.
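    For the in-process case, the core of the "one writer, many chasing read pointers" idea can be sketched as a shared history plus a per-reader cursor. This version is deliberately mutex-based for correctness (a lock-free ring needs per-slot sequence stamps and is easy to get subtly wrong), and all names are illustrative:

        #include <cstdint>
        #include <deque>
        #include <mutex>

        template <typename T>
        class BroadcastQueue {
            std::deque<T> buf_;
            uint64_t first_seq_ = 0;        // sequence number of buf_.front()
            std::mutex m_;
        public:
            void push(const T& v) {
                std::lock_guard<std::mutex> lk(m_);
                buf_.push_back(v);
                if (buf_.size() > 4096) {   // bound memory: slow readers lose data
                    buf_.pop_front();
                    ++first_seq_;
                }
            }
            // `cursor` is the only per-reader state; readers never block each other
            // for long, and a slow reader never stalls the writer.
            bool pop(uint64_t& cursor, T& out) {
                std::lock_guard<std::mutex> lk(m_);
                if (cursor < first_seq_) cursor = first_seq_;          // fell behind: resync
                if (cursor >= first_seq_ + buf_.size()) return false;  // up to date
                out = buf_[cursor - first_seq_];
                ++cursor;
                return true;
            }
        };

    Whether this beats per-reader pipes at tens of mega-samples per second is exactly the kind of thing worth benchmarking; batching many samples per push amortizes the locking either way.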

    Read the article

  • How to treat Base* pointer as Derived<T>* pointer?

    - by dehmann
    I would like to store pointers to a Base class in a vector, but then use them as function arguments where they act as a specific class; see here:

        #include <iostream>
        #include <vector>

        class Base {};

        template<class T>
        class Derived : public Base {};

        void Foo(Derived<int>* d) {
            std::cerr << "Processing int" << std::endl;
        }

        void Foo(Derived<double>* d) {
            std::cerr << "Processing double" << std::endl;
        }

        int main() {
            std::vector<Base*> vec;
            vec.push_back(new Derived<int>());
            vec.push_back(new Derived<double>());
            Foo(vec[0]);
            Foo(vec[1]);
            delete vec[0];
            delete vec[1];
            return 0;
        }

    This doesn't compile:

        error: call of overloaded 'Foo(Base*&)' is ambiguous

    Is it possible to make it work? I need to process the elements of the vector differently, according to their int, double, etc. types.
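    One way to make the call sites work (a sketch of one option, not the only one): recover the dynamic type explicitly with dynamic_cast, which requires Base to be polymorphic. Giving it a virtual destructor does that, and is needed anyway for delete through a Base* to destroy the Derived part correctly.

        class Base {
        public:
            virtual ~Base() {}   // makes Base polymorphic; also fixes `delete vec[i]`
        };

        // Declared after the Derived<T> overloads so it can forward to them.
        void Foo(Base* b) {
            if (Derived<int>* di = dynamic_cast<Derived<int>*>(b))
                Foo(di);
            else if (Derived<double>* dd = dynamic_cast<Derived<double>*>(b))
                Foo(dd);
        }

    The cast chain grows with every new T, so if the set of element types is open-ended, a virtual member function on Base (or a visitor) that each Derived<T> overrides usually scales better.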

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >