Search Results


  • Converting C source to C++

    - by Barry Kelly
    How would you go about converting a reasonably large (300K), fairly mature C codebase to C++? The kind of C I have in mind is split into files roughly corresponding to modules (i.e. less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e. module) level tests.

    I have in mind a general strategy:

    1. Compile everything in C++'s C subset and get that working.
    2. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working.
    3. Convert huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working.
    4. Now, approach the project as an ill-factored OO application, and write unit tests where dependencies are tractable, and decompose into separate classes where they are not; the goal here would be to move from one working program to another at each transformation.

    Obviously, this would be quite a bit of work. Are there any case studies / war stories out there on this kind of translation? Alternative strategies? Other useful advice?

    Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option.

    Note 2: the source is nearly 20 years old, and has perhaps 30% code churn ((lines modified + added) / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals would be to increase maintainability.

    [For the sake of the question, assume that translation into C++ is mandatory, and that leaving it in C is not an option. The point of adding this condition is to weed out the "leave it in C" answers.]
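    A minimal sketch of what step 2 looks like in practice (module and function names are invented, not from the actual codebase):

        // Before: a C module "lexer.c" using linkage for visibility:
        //
        //     static int line_no;          /* internal linkage = private */
        //     int lex_next_token(void);    /* external linkage = public  */
        //
        // After step 2 the module becomes a class whose members are all
        // static, so call sites change only from lex_next_token() to
        // Lexer::next_token(); no instances exist yet.
        class Lexer {
        public:
            static int next_token();
        private:
            static int line_no;
        };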

  • Can this code be broken?

    - by user105165
    Consider the below HTML string:

        <p>This is a paragraph tag</p>
        <font>This is a font tag</font>
        <div>This is a div tag</div>
        <span>This is a span tag</span>

    This string is processed to tokenize the text found in it, and we get 2 results as below.

    1) Token array:

        $tokenArray == array(
            'This is a paragraph tag',
            'This is a div tag',
            '<font>This is a font tag</font>',
            '<span>This is a span tag</span>'
        );

    2) Tokenized template:

        $templateString == "<p>{0}</p>{2}<div>{1}</div>{3}";

    If you observe, the sequence of the text segments from the original HTML string is different from the tokenized template. The PHP code below is used to reorder the tokenized template, and accordingly the token array, to match the original HTML string:

        class CreateTemplates
        {
            public static $tokenArray = array();
            public static $tokenArrayNew = array();

            function foo($templateString, $tokenArray)
            {
                CreateTemplates::$tokenArray = $tokenArray;
                $ptn = "/{[0-9]*}*/"; // search pattern from the template string
                $templateString = preg_replace_callback($ptn, array(&$this, 'callbackhandler'), $templateString);
                return $templateString;
            }

            // Function definition
            private static function callbackhandler($matches)
            {
                static $newArr = array();
                static $cnt;
                $tokenArray = CreateTemplates::$tokenArray;
                array_push($newArr, $matches[0]);
                CreateTemplates::$tokenArrayNew[count($newArr)] =
                    $tokenArray[substr($matches[0], 1, (strlen($matches[0]) - 2))];
                $cnt = count($newArr) - 1;
                return '{' . $cnt . '}';
            } // function ends
        } // class ends

    Final output is (ordered template and token array):

        $tokenArray == array(
            'This is a paragraph tag',
            '<font>This is a font tag</font>',
            'This is a div tag',
            '<span>This is a span tag</span>'
        );
        $templateString == "<p>{0}</p>{1}<div>{2}</div>{3}";

    which is the expected result. Now, I am not confident whether this is the right way to achieve this. I want to see whether this code can be broken. Under what conditions will this code break? (important) Is there any other way to achieve this? (less important)

  • Calculate minimum moves to solve a puzzle

    - by Luke
    I'm in the process of creating a game where the user will be presented with 2 sets of colored tiles. In order to ensure that the puzzle is solvable, I start with one set, copy it to a second set, then swap tiles from one set to another. Currently (and this is where my issue lies), the number of swaps is determined by the level the user is playing - 1 swap for level 1, 2 swaps for level 2, etc. This same number of swaps is used as a goal in the game. The user must complete the puzzle by swapping a tile from one set to the other to make the 2 sets match (by color). The order of the tiles in the (user-)solved puzzle doesn't matter as long as the 2 sets match.

    The problem I have is that as the number of swaps I used to generate the puzzle approaches the number of tiles in each set, the puzzle becomes easier to solve. Basically, you can just drag from one set in whatever order you need for the second set and solve the puzzle with plenty of moves left. What I am looking to do is, after I finish building the puzzle, calculate the minimum number of moves required to solve it. Again, this is almost always less than the number of swaps used to create the puzzle, especially as the number of swaps approaches the number of tiles in each set. My goal is to calculate the best-case scenario and then give the user a "fudge factor" (i.e. 1.2 times the minimum number of moves). Solving the puzzle in under this number of moves will result in passing the level.

    A little background as to how I currently have the game configured:

    - Levels 1 to 10: 9 tiles in each set. 5 different color tiles.
    - Levels 11 to 20: 12 tiles in each set. 7 different color tiles.
    - Levels 21 to 25: 15 tiles in each set. 10 different color tiles.

    Swapping within a set is not allowed. For each level, there will be at least 2 tiles of a given color (one for each set in the solved puzzle). Is there any type of algorithm anyone could recommend to calculate the minimum number of moves to solve a given puzzle?
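    If a "move" is a swap of exactly one tile between the two sets (as described above), the minimum can be computed directly from color counts: each swap can fix at most one surplus tile, so the answer is the sum of one set's positive surpluses over its target counts. A sketch (C++ used for illustration; the game's own types will differ):

        // Sketch: assumes a move swaps one tile of set A with one tile of
        // set B, and that a solved puzzle has equal per-color counts in both
        // sets. The minimum number of moves is the total surplus of set A.
        #include <map>

        int minimumMoves(const std::map<int, int>& colorCountA,
                         const std::map<int, int>& colorCountB)
        {
            int moves = 0;
            for (std::map<int, int>::const_iterator it = colorCountA.begin();
                 it != colorCountA.end(); ++it)
            {
                int color = it->first;
                int inA = it->second;
                std::map<int, int>::const_iterator jt = colorCountB.find(color);
                int inB = (jt == colorCountB.end()) ? 0 : jt->second;
                int target = (inA + inB) / 2;   // solved: each set holds half
                if (inA > target)
                    moves += inA - target;      // tiles A must give away
            }
            return moves;
        }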

  • How to download file into string with progress callback?

    - by Kaminari
    I would like to use the WebClient (or is there another, better option?), but there is a problem. I understand that opening up the stream takes some time and this cannot be avoided. However, reading it takes a strangely larger amount of time compared to reading it entirely immediately. Is there a best way to do this? I mean two ways: to string and to file. Progress is my own delegate, and it's working well.

    FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions which made me realize that the problem lay elsewhere. I've tested custom WebResponse and WebRequest objects, the library libCURL.NET, and even Sockets. The difference in time was gzip compression. The compressed stream length was simply half the normal stream length, and thus the download time was less than 3 seconds with the browser. Here is the code, in case someone wants to know how I solved this (some headers are not needed):

        public static string DownloadString(string URL)
        {
            WebClient client = new WebClient();
            client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5";
            client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            client.Headers["Accept-Encoding"] = "gzip,deflate,sdch";
            client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3";

            Stream inputStream = client.OpenRead(new Uri(URL));
            MemoryStream memoryStream = new MemoryStream();
            const int size = 32 * 4096;
            byte[] buffer = new byte[size];

            if (client.ResponseHeaders["Content-Encoding"] == "gzip")
            {
                inputStream = new GZipStream(inputStream, CompressionMode.Decompress);
            }

            int count = 0;
            do
            {
                count = inputStream.Read(buffer, 0, size);
                if (count > 0)
                {
                    memoryStream.Write(buffer, 0, count);
                }
            } while (count > 0);

            string result = Encoding.Default.GetString(memoryStream.ToArray());
            memoryStream.Close();
            inputStream.Close();
            return result;
        }

    I think the async functions will be almost the same, but I will simply use another thread to fire this function. I don't need precise progress indication.

  • How to cache queries in EJB and return results efficiently (performance POV)

    - by Maxym
    I use the JBoss EJB 3.0 implementation (JBoss 4.2.3 server). At the beginning I created a native query every time, using a construction like:

        Query query = entityManager.createNativeQuery("select * from _table_");

    Of course it is not that efficient; I performed some tests and found out that it really takes a lot of time. Then I found a better way to deal with it: use an annotation to define named native queries:

        @NamedNativeQuery(
            name = "fetchData",
            value = "select * from _table_",
            resultClass = Entity.class
        )

    and then just use it:

        Query query = entityManager.createNamedQuery("fetchData");

    The performance of the code line above is two times better than where I started from, but still not as good as I expected. Then I found that I can switch to the Hibernate annotation for NamedNativeQuery (anyway, JBoss's implementation of EJB is based on Hibernate) and add one more thing:

        @NamedNativeQuery(
            name = "fetchData2",
            value = "select * from _table_",
            resultClass = Entity.class,
            readOnly = true
        )

    readOnly marks whether the results are fetched in read-only mode or not. It sounds good, because at least in this case of mine I don't need to update the data; I want to just fetch it for a report. When I started the server to measure performance, I noticed that the query without readOnly=true (by default it is false) returns results better and better with each iteration, while the other one (fetchData2) works at a "stable" speed; the time difference between them gets shorter and shorter, and after 5 iterations the speed of both was almost the same.

    The questions are:

    1) Is there any other way to speed the query up? It seems that named queries should be prepared once, but I can't say for sure. In fact, if I could create the query once and then just reuse it, that would be better from a performance point of view, but it is problematic to cache this object, because after creating a query I can set parameters (when I use ":variable" in the query), and that changes the query object (doesn't it?). Well, is there any way to cache them? Or is a named query the best option I can use?

    2) Any other approaches to make result retrieval faster? For instance, I don't need those Entities to be attached; I won't update them; all I need is just to fetch a collection of data. Maybe readOnly is the only available way, so I can't speed it up, but who knows :)

    P.S. I don't ask about DB performance; all I need now is how not to create the query all the time, to use it efficiently, and to "allow" EJB to do less work for the same returned data.
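    One thing the post doesn't try is Hibernate's query cache. A sketch with loud caveats: it assumes the query cache is enabled in the configuration (hibernate.cache.use_query_cache=true plus a configured cache provider), and whether it applies to a cacheable named native query depends on the Hibernate version JBoss 4.2.3 ships:

        // Sketch only - "org.hibernate.cacheable" is a Hibernate-specific
        // JPA hint, not portable EJB 3.0, and requires
        // hibernate.cache.use_query_cache=true plus a cache provider.
        Query query = entityManager.createNamedQuery("fetchData");
        query.setHint("org.hibernate.cacheable", Boolean.TRUE);
        List results = query.getResultList();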

  • Remove never-run call to templated function, get allocation error at runtime

    - by Narfanator
    First off, I'm a bit at a loss as to how to ask this question, so I'm going to try throwing lots of information at the problem.

    I went to completely redesign my test project for my experimental core library thingy. I use a lot of template shenanigans in the library. When I removed the "user" code, the tests gave me a memory allocation error. After quite a bit of experimenting, I narrowed it down to this bit of code (out of a couple hundred lines):

        void VOODOO(components::switchBoard &board){
            board.addComponent<using_allegro::keyInputs<'w'> >();
        }

    Fundamentally, what's weirding me out is that it appears that the act of compiling this function (and the template function it then uses, and the template functions those then use...) makes this bug not appear. This code is not being run. Similar code (the same, but for different key values) occurs elsewhere, but is within Boost TDD code.

    I realize I certainly haven't given enough information for you to solve it for me; I tried, but it more-or-less spirals into most of the code base. I think I'm most looking for "here's what the problem could be", "here's where to look", etc. There's something that's happening during compilation because of this line, but I don't know enough about that step to begin looking. Sooo, how can a (presumably) compiled, but never actually run, bit of templated code, when removed, cause another part of the code to fail?

    Error:

        Unhandled exception at 0x6fe731ea (msvcr90d.dll) in Switchboard.exe:
        0xC0000005: Access violation reading location 0xcdcdcdc1.

    Callstack:

        operator delete(void * pUserData)
        allocator<(class related to key-input callbacks)>::deallocate
        vector<(same class)>::_Insert_n(...)
        vector<(same class)>::insert(...)
        vector<(same class)>::push_back(...)

    It looks like maybe the vector isn't valid, because _MyFirst and similar data members are showing values of 0xcdcdcdcd in the debugger. But the vector is a member variable...

  • Maps with a nested vector

    - by wawiti
    For some reason the compiler won't let me retrieve the vector of integers from the map that I've created; I want to be able to overwrite this vector with a new vector. The error the compiler gives me is ridiculous. Thanks for your help!!

    The compiler didn't like this part of my code:

        line_num = miss_words[word_1];

    Error:

        [Wawiti@localhost Lab2]$ g++ -g -Wall *.cpp -o lab2
        main.cpp: In function ‘int main(int, char**)’:
        main.cpp:156:49: error: no match for ‘operator=’ in ‘miss_words.std::map<_Key, _Tp, _Compare, _Alloc>::operator[]<std::basic_string<char>, std::vector<int>, std::less<std::basic_string<char> >, std::allocator<std::pair<const std::basic_string<char>, std::vector<int> > > >((*(const key_type*)(& word_1))) = line_num.std::vector<_Tp, _Alloc>::push_back<int, std::allocator<int> >((*(const value_type*)(& line)))’
        main.cpp:156:49: note: candidate is:
        In file included from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/vector:70:0,
                         from header.h:19,
                         from main.cpp:15:
        /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/vector.tcc:161:5: note: std::vector<_Tp, _Alloc>& std::vector<_Tp, _Alloc>::operator=(const std::vector<_Tp, _Alloc>&) [with _Tp = int; _Alloc = std::allocator<int>]
        /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/vector.tcc:161:5: note: no known conversion for argument 1 from ‘void’ to ‘const std::vector<int>&’

    Code:

        map<string, vector<int> > miss_words;  // Creates a map for misspelled words
        string word_1;                         // String for word
        string sentence;                       // To store each line
        vector<int> line_num;                  // To store line numbers
        ifstream file;                         // Opens file to be spell checked
        file.open(argv[2]);                    // file containing numbers in 3 columns
        int line = 1;

        while (getline(file, sentence))        // Reads in file sentence by sentence
        {
            sentence = remove_punct(sentence); // Removes punctuation from sentence
            stringstream pars_sentence;        // Creates stringstream
            pars_sentence << sentence;         // Places sentence in a stringstream
            while (pars_sentence >> word_1)    // Picks apart sentence word by word
            {
                if (dictionary.find(word_1) == dictionary.end())
                {
                    line_num = miss_words[word_1]; // Compiler doesn't like this
                    miss_words[word_1] = line_num.push_back(line);
                }
            }
            line++;                            // Increments line marker
        }
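    For what it's worth, the error is actually pointing at the second statement: std::vector::push_back returns void, so its result cannot be assigned back into the map. A minimal sketch of the fix (not the poster's code):

        // operator[] returns a reference to the vector stored in the map,
        // so the line number can be appended in place; push_back returns
        // void and must not appear on the right of an assignment.
        miss_words[word_1].push_back(line);

        // Equivalent, closer to the original two-step version:
        line_num = miss_words[word_1];   // copy out
        line_num.push_back(line);        // modify the copy
        miss_words[word_1] = line_num;   // copy back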

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks, etc.). Here are some of the performance requirements:

    1. Very fast turn-around time (by this I mean that any new data, such as a new message on a forum, should be available in the search results very soon - less than a minute).
    2. I need to discard old documents on a fairly regular basis to ensure that the search results are not dated.
    3. Last but not least, the search application needs to be responsive (latency on the order of 100 milliseconds, and it should support at least 10 qps).

    All of the requirements I have currently can be met without using Lucene (and that would let me satisfy all of 1, 2 and 3), but I am anticipating other requirements in the future (like search relevance, etc.) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions:

    a. I read that the optimize() method in the IndexWriter class is expensive and should not be used by applications that do frequent updates. What are the alternatives?

    b. In order to do incremental updates, I need to keep committing new data and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem?

    c. I know that Lucene provides a delete method which lets you delete all documents that match a certain query. In my case, I need to delete all documents which are older than a certain age; one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field, since I think the one created by Lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings?

    I know these are very open questions, so I am not looking for a detailed answer; I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.
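    On question (b), one pattern from the Lucene 2.9/3.x era (a sketch; the method was later replaced by IndexReader.openIfChanged) is to reopen the reader periodically rather than after every commit:

        // Sketch, Lucene 2.9/3.x-era API: reopen() returns the same instance
        // when nothing has changed, and a new reader (sharing unchanged
        // segments) when the writer has committed new documents.
        IndexReader newReader = reader.reopen();
        if (newReader != reader) {
            reader.close();                 // release the stale reader
            reader = newReader;
            searcher = new IndexSearcher(reader);
        }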

  • How to do a proper search with NHibernate

    - by Denis Rosca
    Hello everyone, I'm working on a small project that is supposed to allow basic searches of the database. Currently I'm using NHibernate for the database interaction. In the database I have 2 tables: Person and Address. The Person table has a many-to-one relationship with Address. The code I've come up with for doing searches is:

        public IList<T> GetByParameterList(List<QueryParameter> parameterList)
        {
            if (parameterList == null)
            {
                return GetAll();
            }
            using (ISession session = NHibernateHelper.OpenSession())
            {
                ICriteria criteria = session.CreateCriteria<T>();
                foreach (QueryParameter param in parameterList)
                {
                    switch (param.Constraint)
                    {
                        case ConstraintType.Less:
                            criteria.Add(Expression.Lt(param.ParameterName, param.ParameterValue));
                            break;
                        case ConstraintType.More:
                            criteria.Add(Expression.Gt(param.ParameterName, param.ParameterValue));
                            break;
                        case ConstraintType.LessOrEqual:
                            criteria.Add(Expression.Le(param.ParameterName, param.ParameterValue));
                            break;
                        case ConstraintType.EqualOrMore:
                            criteria.Add(Expression.Ge(param.ParameterName, param.ParameterValue));
                            break;
                        case ConstraintType.Equals:
                            criteria.Add(Expression.Eq(param.ParameterName, param.ParameterValue));
                            break;
                        case ConstraintType.Like:
                            criteria.Add(Expression.Like(param.ParameterName, param.ParameterValue));
                            break;
                    }
                }
                try
                {
                    IList<T> result = criteria.List<T>();
                    return result;
                }
                catch
                {
                    // TODO: Implement some exception handling
                    throw;
                }
            }
        }

    The QueryParameter is a helper object that I use to create criteria and send them to the DAL. It looks like this:

        public class QueryParameter
        {
            public QueryParameter(string ParameterName, Object ParameterValue, ConstraintType constraintType)
            {
                this.ParameterName = ParameterName;
                this.ParameterValue = ParameterValue;
                this.Constraint = constraintType;
            }

            public string ParameterName { get; set; }
            public Object ParameterValue { get; set; }
            public ConstraintType Constraint { get; set; }
        }

    Now this works well if I'm doing a search like FirstName = "John", but not when I try to give a parameter like Street = "Some Street". It seems that NHibernate is looking for a Street column in the Person table but not in the Address table. Any idea on how I should change my code so I could do a proper search? Tips? Maybe some alternatives?

    Disclaimer: I'm kind of a noob, so please be gentle ;) Thanks, Denis.
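    The usual Criteria-API answer is to join the association explicitly. A sketch (assuming the Person class exposes the many-to-one as a property named Address; the alias and property names are illustrative):

        // Sketch: restrictions on a nested property need an explicit join.
        // CreateAlias joins the mapped association so the restriction can
        // reach the joined entity as "addr.Street".
        ICriteria criteria = session.CreateCriteria<Person>()
            .CreateAlias("Address", "addr")
            .Add(Expression.Eq("addr.Street", "Some Street"));
        IList<Person> result = criteria.List<Person>();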

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied to elsewhere within the surface:

        add_blt(rect src, point dst);

    There can be any number of operations posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily.

    This gets tricky because of overlaps, of course. A blit could re-blit a previously copied pixel. Ideally blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation.

    It's tricky but not impossible to put this together. I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    1. Implement customized region-handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it will automatically associate the metadata of this rectangle with the resulting sub-rectangles.
    2. When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    3. Iterate through the blt queue, and for every entry:
       - Let srcrect be the source rectangle for the blt being examined.
       - Get the intersection of srcrect and the master region into temp region.
       - Remove temp region from the master region, so the master region no longer covers temp region.
       - Promote srcrect to a region (srcrgn) and subtract temp region from it.
       - Offset temp region and srcrgn with the vector of the current blt: their union will cover the destination area of the current blt.
       - Add to the master region all rects in temp region, retaining the original source metadata (step one of adding the current blt to the master region).
       - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    4. Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.
    5. Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation's destination. The associated metadata is the blt operation's source.

  • How to compare filename of uploaded file and string

    - by user225269
    I use this code to upload image files on a XAMPP server:

        <?php
        if ((($_FILES["file"]["type"] == "image/gif")
            || ($_FILES["file"]["type"] == "image/jpeg")
            || ($_FILES["file"]["type"] == "image/pjpeg"))
            && ($_FILES["file"]["size"] < 100000))
        {
            if ($_FILES["file"]["error"] > 0)
            {
                echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
            }
            else
            {
                echo "Upload: " . $_FILES["file"]["name"] . "<br />";
                echo "Type: " . $_FILES["file"]["type"] . "<br />";
                echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
                echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />";
                if (file_exists("upload/" . $_FILES["file"]["name"]))
                {
                    echo $_FILES["file"]["name"] . " already exists. ";
                }
                else
                {
                    move_uploaded_file($_FILES["file"]["tmp_name"], "upload/" . $_FILES["file"]["name"]);
                    echo "Stored in: " . "upload/" . $_FILES["file"]["name"];
                }
            }
        }
        else
        {
            echo "Invalid file, File must be less than 100Kb in size with .jpg, .jpeg, or .gif file extension";
        }
        ?>

    What do I do to compare the file name of the uploaded files with the text inputted by the user? My goal is to be able to compare the user input (an ID number) and the file name of the image file, which should also be an ID number, so that I will be able to display the image that corresponds to the ID number provided. What do I need to do? Please give me an idea of how I can achieve this. Thanks
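    A sketch of the comparison (assuming the ID arrives in a form field named "idnumber" - adjust to the real form): strip the extension from the file name and compare the rest against the user's input.

        <?php
        // Sketch: "idnumber" is an assumed form field name. pathinfo() with
        // PATHINFO_FILENAME drops the extension, so "12345.jpg" becomes "12345".
        $id = trim($_POST["idnumber"]);
        $basename = pathinfo($_FILES["file"]["name"], PATHINFO_FILENAME);

        if ($basename === $id) {
            // e.g. show the stored image for that ID
            echo '<img src="upload/' . htmlspecialchars($_FILES["file"]["name"]) . '" />';
        } else {
            echo "The file name does not match the ID number entered.";
        }
        ?>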

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization.

    1. How is cloud different from virtualization? I tried to find out the pricing of Rackspace, Amazon and all similar cloud providers, and I found that our current 6 dedicated servers came out cheaper than their pricing. So how can one claim the cloud is cheaper? Is it cheaper only in comparison to normal hosting?

    2. We reorganized our infrastructure in a virtual environment to reduce configuration overhead at time of failure; we did not have to rewrite any piece of code that was already written for the earlier setup. So moving to virtualization does not require any reprogramming. But the cloud is absolutely different, and it will require entire reprogramming, right? Is it really worth it to recode when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability? A new programming architecture means new overheads of training staff, new methods of testing and new deployment schemes; does the cloud's promise of "on demand resource usage" justify that?

    3. We have a current development architecture with simple server-side ASP.NET web services with no local context, and on the client side Flex/Silverlight, which offers a pretty good REST architecture and is highly scalable. How does the cloud differ from a REST model of deployment?

    4. On storage, SQL Server or MySQL offer pretty good replication and high availability, so what is the advantage in the cloud?

    5. Data guarantees: one of our vendors, hosting some other customer's app on a cloud (one of the most used), lost an entire (virtual) hard disk and an entire module in the first 6 months. A second provider said it's your duty to take backups; fine, I agree, but no provider gives an SLA for data guarantees - they give 99% uptime. However, in most business apps, uptime is less important than data integrity. In our 10 years of dedicated hosting experience we have had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data. And I feel it's just a big marketing buzz to sell virtualization in a different form.

    6. Size of data: currently all providers charge very heavily for large data. If you are hosting only below 100GB, the cloud can be a good alternative, but I think virtual servers and dedicated servers from 100GB up to a few TBs are still cheaper. Why would you want to pay so much for the cloud when there is no data guarantee and nothing is said about redundancy?

    (I wish SO had something for spell check for Internet Explorer, sorry for wrong spellings in my post)

  • Small-scale web site - global JavaScript file style/format/pattern - improving maintainability

    - by yaya3
    I frequently create (and inherit) small to medium websites where I have the following sort of code in a single file (normally named global.js or application.js or projectname.js). If functions get big, I normally put them in a separate file and call them at the bottom of the file, in the $(document).ready() section. If I have a few functions that are unique to certain pages, I normally have another switch statement for the body class inside the $(document).ready() section. How could I restructure this code to make it more maintainable?

    Note: I am less interested in the functions' innards, more in the structure and how different types of functions should be dealt with. I've also posted the code here - http://pastie.org/999932 - in case it makes it any easier.

        var ProjectNameEnvironment = {};

        function someFunctionUniqueToTheHomepageNotWorthMakingConfigurable () {
          $('.foo').hide();
          $('.bar').click(function(){
            $('.foo').show();
          });
        }

        function functionThatIsWorthMakingConfigurable(config) {
          var foo = config.foo || 700;
          var bar = 200;
          return foo * bar;
        }

        function globallyRequiredJqueryPluginTrigger (tooltip_string) {
          var tooltipTrigger = $(tooltip_string);
          tooltipTrigger.tooltip({
            showURL: false
            ...
          });
        }

        function minorUtilityOneLiner (selector) {
          $(selector).find('li:even').not('li ul li').addClass('even');
        }

        var Lightbox = {};

        Lightbox.setup = function(){
          $('li#foo a').attr('href','#alpha');
          $('li#bar a').attr('href','#beta');
        }

        Lightbox.init = function (config){
          if (typeof $.fn.fancybox == 'function') {
            Lightbox.setup();
            var fade_in_speed = config.fade_in_speed || 1000;
            var frame_height = config.frame_height || 1700;
            $(config.selector).fancybox({
              frameHeight : frame_height,
              callbackOnShow: function() {
                var content_to_load = config.content_to_load;
                ...
              },
              callbackOnClose : function(){
                $('body').height($('body').height());
              }
            });
          } else {
            if (ProjectNameEnvironment.debug) {
              alert('the fancybox plugin has not been loaded');
            }
          }
        }

        // ---------- order of execution -----------

        $(document).ready(function () {
          urls = urlConfig();

          (function globalFunctions() {
            $('.tooltip-trigger').each(function(){
              globallyRequiredJqueryPluginTrigger(this);
            });
            minorUtilityOneLiner('ul.foo');
            Lightbox.init({
              selector : 'a#a-lightbox-trigger-js',
              ...
            });
            Lightbox.init({
              selector : 'a#another-lightbox-trigger-js',
              ...
            });
          })();

          if ( $('body').attr('id') == 'home-page' ) {
            (function homeFunctions() {
              someFunctionUniqueToTheHomepageNotWorthMakingConfigurable();
            })();
          }
        });

  • C - What is the proper format to allow a function to show an error was encountered?

    - by BrainSteel
    I have a question about what a function should do if the arguments to said function don't line up quite right, through no fault of the function call. Since that sentence doesn't make much sense, I'll offer my current issue. To keep it simple, here is the most relevant and basic function I have:

        float getYValueAt(float x, PHYS_Line line, unsigned short* error)
        {
            *error = 0;
            if (x < line.start.x || x > line.end.x) {
                *error = 1;
                return -1;
            }
            if (line.slope.value != 0) {
                // line's equation: y - line.start.y = line.slope.value * (x - line.start.x)
                return line.slope.value * (x - line.start.x) + line.start.y;
            }
            else if (line.slope.denom == 0) {
                if (line.start.x == x)
                    return line.start.y;
                else {
                    *error = 1;
                    return -1;
                }
            }
            else if (line.slope.num == 0) {
                return line.start.y;
            }
        }

    The function attempts to find the point on a line, given a certain x value. However, under some circumstances this may not be possible. For example, on the line x = 3, if 5 is passed as an x value, we would have a problem. Another problem arises if the chosen x value is not within the interval the line is on. For this, I included the error pointer. Given this format, a function call could work as follows:

        void foo(PHYS_Line some_line)
        {
            unsigned short error = 0;
            float y = getYValueAt(5, some_line, &error);
            if (error)
                fooey();
            else
                do_something_with_y(y);
        }

    My question pertains to the error. Note that the value returned is allowed to be negative, so returning -1 does not ensure that an error has occurred. I know that it is sometimes preferred to use the following method to track an error:

        float* getYValueAt(float x, PHYS_Line line);

    and then return NULL if an error occurs, but I believe this requires dynamic memory allocation, which seems even less sightly than the solution I was using. So what is standard practice for an error occurring?
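    One common alternative (a sketch, reusing the question's PHYS_Line type and logic; not a fix any standard mandates): return a status code and deliver y through an out parameter, so no float value has to double as an error flag:

        /* Sketch: returns 0 on success, nonzero on failure. The computed y
           only lands in *out on success, so a legitimate negative y can
           never be mistaken for an error. */
        int getYValueAt2(float x, PHYS_Line line, float *out)
        {
            if (x < line.start.x || x > line.end.x)
                return 1;                               /* x outside segment */
            if (line.slope.value != 0) {
                *out = line.slope.value * (x - line.start.x) + line.start.y;
                return 0;
            }
            if (line.slope.denom == 0) {                /* vertical line */
                if (line.start.x != x)
                    return 1;
                *out = line.start.y;
                return 0;
            }
            *out = line.start.y;                        /* horizontal line */
            return 0;
        }

        /* Caller: */
        float y;
        if (getYValueAt2(5, some_line, &y) != 0)
            fooey();
        else
            do_something_with_y(y);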

  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do this in a circular way but get the feeling it's a bad idea, from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with:

    - CreditCard
    - Customer
    - Order

    I want it so that:

    - Customers can have credit cards (0-n)
    - Customers have orders (1-n)
    - Orders have one customer (1-1)
    - Orders have one credit card (1-1)
    - Credit cards can have one customer (1-1) (unique IDs, so we can ignore uniqueness of CC numbers; husband/wife may share CC instances, etc.)

    Basically the last part is where the issue shows up. Sometimes credit cards are declined and the customer wishes to use a different one. This needs to update which card is 'current', but this can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables.

    Possible solutions: either create the circular design, giving references:

    - CC ref to order, customer ref to CC, customer ref to order; or
    - customer ref to CC, customer ref to order

    or create a new table that references all three table IDs and put a unique constraint on the order, so that only one CC may be current for that order at any time.

    Essentially both model the same design but translate differently. I am liking the latter option best at this point in time, because it seems less circular and more central. (If that even makes sense.)

    My questions are: What, if any, are the pros and cons of each? What are the pitfalls of circular relationships/dependencies? Is this a valid exception to the rule? Is there any reason I should pick the former over the latter? Thanks, and let me know if there is anything you need clarified/explained.

    Update/Edit: I have noticed an error in the requirements I stated. Basically I dropped the ball when trying to simplify things for SO. There is another table there for Payments, which adds another layer. The catch: orders can have multiple payments, with the possibility of using different credit cards (if you really want to know, even other forms of payment). I'm stating this here because I think the underlying issue is still the same, and this only really adds another layer of complexity.
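    A sketch of the latter option in SQL (table and column names invented for illustration; this models the original requirement, before the Payments update): the unique constraint on order_id is what enforces "one current card per order".

        -- Sketch: a join table referencing all three entities; UNIQUE means
        -- an order has at most one current card, and a declined card is
        -- handled by an UPDATE of that row rather than a schema change.
        CREATE TABLE order_payment_card (
            customer_id INT NOT NULL REFERENCES customer (id),
            order_id    INT NOT NULL UNIQUE REFERENCES customer_order (id),
            card_id     INT NOT NULL REFERENCES credit_card (id)
        );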

  • How to manipulate file paths intelligently in .Net 3.0?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - it copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section:

        // The behavior of Path.Combine is weird. See:
        // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir
        while (strDestFile.StartsWith(@"\"))
        {
            strDestFile = strDestFile.Substring(1); // Remove any leading backslashes
        }
        Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail).");

    in order to take care of a bug (the code is sensitive to the constant @"pending_install\" vs @"pending_install", which I did not like and changed; long story, but there was a good opportunity for constant reuse). Now the whole function:

        // You want to uncompress only the files downloaded, not every file in the dest directory.
        private void UncompressFiles()
        {
            string strSrcDir = _application.Client.TempDir;
            ArrayList arrFiles = new ArrayList();
            GetAllCompressedFiles(ref arrFiles, strSrcDir);
            IEnumerator enumer = arrFiles.GetEnumerator();
            while (enumer.MoveNext())
            {
                string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty);

                // The behavior of Path.Combine is weird. See:
                // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir
                while (strDestFile.StartsWith(@"\"))
                {
                    strDestFile = strDestFile.Substring(1); // Remove any leading backslashes
                }
                Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail).");

                strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile);
                strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty);
                ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile);
                FileUtility.DeleteFile(enumer.Current.ToString());
            }
        }

    Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during the .NET 1.1 era. I will change it.

    What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code? Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So... given my constraints, how can I rework this so that it starts to suck less? Thanks!
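    For the leading-backslash loop specifically, there is a one-line equivalent (a sketch; on Windows, Path.DirectorySeparatorChar is '\\', matching what the loop strips):

        // Sketch: TrimStart removes all leading separators in one call,
        // replacing the while/Substring loop.
        strDestFile = strDestFile.TrimStart(Path.DirectorySeparatorChar);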

  • Error "unidentified identifier 'exams'" in C++, and I don't know why

    - by user320950
        // basic file operations
        #include <iostream>
        #include <fstream>
        using namespace std;

        void read_file_in_array(int exam[100][3]);
        double calculate_total(int exam1[], int exam2[], int exam3[]); // function that calculates grades to see how many 90, 80, 70, 60
        //void display_totals();

        int main()
        {
            int go, go2, go3;
            go = read_file_in_array(exam);
            go2 = calculate_total(exam1, exam2, exam3);
            //go3 = display_totals();
            cout << go, go2, go3;
            return 0;
        }

        /*
        int display_totals()
        {
            int grade_total;
            grade_total = calculate_total(exam1, exam2, exam3);
            return 0;
        }
        */

        double calculate_total(int exam1[], int exam2[], int exam3[])
        {
            int calc_tot, above90 = 0, above80 = 0, above70 = 0, above60 = 0, i, j;
            calc_tot = read_file_in_array(exam);
            for (i = 0; i < 100; i++)
            {
                exam1[i] = exam[100][0];
                exam2[i] = exam[100][1];
                exam3[i] = exam[100][2];
                if (exam1[i] <= 90 && exam1[i] >= 100)
                {
                    above90++;
                    cout << above90;
                }
            }
            return exam3[i];
        }

        void read_file_in_array(double exam[100][3])
        {
            ifstream infile;
            int num, i = 0, j = 0;
            infile.open("grades.txt"); // file containing numbers in 3 columns
            if (infile.fail())         // checks to see if file opened
            {
                cout << "error" << endl;
            }
            while (!infile.eof())      // reads file to end of line
            {
                for (i = 0; i < 100; i++)       // array numbers less than 100
                {
                    for (j = 0; j < 3; j++)     // while reading get 1st array or element
                        infile >> exam[i][j];
                    infile >> exam[i][j];
                    infile >> exam[i][j];
                    cout << exam[i][j] << endl;
                }
                exam[i][j] = exam1[i];
                exam[i][j] = exam2[i];
                exam[i][j] = exam3[i];
            }
            infile.close();
        }
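    The immediate error is that exam (and exam1/exam2/exam3) are used in main and in calculate_total but never declared there, and the prototypes disagree with the definitions (void vs. int return, int vs. double elements). A minimal sketch of the restructuring (signatures adjusted so declaration and definition match; not the graded solution):

        #include <iostream>
        using namespace std;

        void read_file_in_array(double exam[100][3]);
        double calculate_total(double exam[100][3]);

        int main()
        {
            double exam[100][3];            // was never declared in main
            read_file_in_array(exam);       // void: there is nothing to assign
            double total = calculate_total(exam);
            cout << total << endl;
            return 0;
        }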

  • Java exercise - display table with 2D array

    - by TheHacker66
    I'm struggling to finish a Java exercise. It involves using 2D arrays to dynamically create and display a table based on a command line parameter. Example:

        java table 5
        +-+-+-+-+-+
        |1|2|3|4|5|
        +-+-+-+-+-+
        |2|3|4|5|1|
        +-+-+-+-+-+
        |3|4|5|1|2|
        +-+-+-+-+-+
        |4|5|1|2|3|
        +-+-+-+-+-+
        |5|1|2|3|4|
        +-+-+-+-+-+

    What I have done so far:

        public static void main(String[] args) {
            int num = Integer.parseInt(args[0]);
            String[][] table = new String[num * 2 + 1][num];
            int[] numbers = new int[num];
            int temp = 0;

            for (int i = 0; i < numbers.length; i++)
                numbers[i] = i + 1;

            // wrong
            for (int i = 0; i < table.length; i++) {
                for (int j = 0; j < num; j++) {
                    if (i % 2 != 0) {
                        temp = numbers[0];
                        for (int k = 1; k < numbers.length; k++) {
                            numbers[k - 1] = numbers[k];
                        }
                        numbers[numbers.length - 1] = temp;
                        for (int l = 0; l < numbers.length; l++) {
                            table[i][j] = "|" + numbers[l];
                        }
                    } else
                        table[i][j] = "+-";
                }
            }

            for (int i = 0; i < table.length; i++) {
                for (int j = 0; j < num; j++)
                    System.out.print(table[i][j]);
                if (i % 2 == 0)
                    System.out.print("+");
                else
                    System.out.print("|");
                System.out.println();
            }
        }

    This doesn't work, since it prints 1|2|3|4 in every row, which isn't what I need. I found the issue: the first for loop changes the array order more times than needed, and basically it ends up back where it started. I know that there's probably a way to achieve this by writing more code, but I always tend to nest as much as possible to "optimize" the code while I write it, so I tried solving this exercise using as few variables and loops as possible. Thanks in advance for your help!
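    For what it's worth, the rotated rows can be computed directly instead of by mutating the numbers array: row r, column c holds (r + c) % num + 1. A sketch (single-digit values assumed, as in the exercise):

        // Sketch: each table cell is (r + c) % num + 1, so no array
        // rotation is needed at all.
        public class Table {
            public static void main(String[] args) {
                int num = Integer.parseInt(args[0]);
                StringBuilder border = new StringBuilder();
                for (int c = 0; c < num; c++) border.append("+-");
                border.append("+");

                for (int r = 0; r < num; r++) {
                    System.out.println(border);
                    StringBuilder row = new StringBuilder();
                    for (int c = 0; c < num; c++) {
                        row.append("|").append((r + c) % num + 1);
                    }
                    row.append("|");
                    System.out.println(row);
                }
                System.out.println(border);
            }
        }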

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.]

        // Service
        CoolLibrary
        WcfApp.Core
        WcfApp.Contracts
        WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts

        // Clients
        CustomerX.App : WcfApp.Contracts
        CustomerY.App : WcfApp.Contracts
        CustomerZ.App : WcfApp.Contracts

    (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Otherwise CustomerX.App would also depend on, and thus be exposed to, the service domain model?)

    (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.)

    All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.]

        1. CoolLibrary.git : CoolLibrary
        2. WcfApp.Contracts.git : WcfApp.Contracts
        3. WcfApp.git : WcfApp.Core, WcfApp.Services
        4. CustomerX.App.git : CustomerX.App
        5. CustomerY.App.git : CustomerY.App
        6. CustomerZ.App.git : CustomerZ.App

    How should I manage my project dependencies? I see three options:

    1. I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers.
    2. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle.
    3. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree?

    I know I asked a lot of questions in this post, but the most important two questions I have are:

    1. Am I using the right number and layout of repositories? Should I use fewer or more?
    2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?
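    For option 2, the mechanics are short. A sketch with invented repository URLs and paths:

        # Sketch: wire the shared repositories into a client app as
        # submodules; URLs and paths are illustrative.
        cd CustomerX.App
        git submodule add git@yourserver:WcfApp.Contracts.git libs/WcfApp.Contracts
        git commit -m "Pin WcfApp.Contracts as a submodule"

        # A fresh clone must fetch submodule contents explicitly:
        git clone git@yourserver:CustomerX.App.git
        cd CustomerX.App
        git submodule update --init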

  • Coping with feelings of technical mediocrity

    - by Karim
    As I've progressed as a programmer, I have noticed more nuance and more areas I could study in depth. In part, I've come to think of myself as going from, at one point, a "guru" to now much less - even mediocre or inadequate. Is this normal, or is it a sign of destructive, excessive ambition?

    Background: I started to program when I was still a kid - I was about 10 or 11 years old. I really enjoy my work and never get bored with it. It's amazing how somebody could be paid for what he really likes to do and would be doing anyway, even for free. When I first started to program, I felt proud of what I was doing; each application I built was a success for me, and after 2-3 years I had the feeling that I was a coding guru. It was a nice feeling. ;-)

    But the longer I was in the field, and the more types of software I started to develop, the more I began to feel that I was completely wrong in thinking I was a guru. I felt that I wasn't even a mediocre developer. Each new field I start to work on gives me this feeling. Like when I once developed a device driver for a client: I saw how much I needed to learn about device drivers. When I developed a video filter for an application, I saw how much I still needed to learn about DirectShow, color spaces, and all the theory behind that.

    The worst was when I started to learn algorithms, several years ago. I knew the basic structures and algorithms, like sorting, some types of trees, some hash tables, strings, etc., and when I really wanted to learn a group of structures, I learned about 5-6 new types and saw that in fact even this small group has several hundred subtypes of structures. It's depressing how little time people have in their lives to learn all this stuff.

    I'm now a software developer with about 10 years of experience, and I still feel that I'm not a proficient developer when I think about the things that others do in the industry.

  • Having trouble doing an update with a LINQ to SQL object

    - by Pure.Krome
    Hi folks, I've got a simple LINQ to SQL object. I grab it from the database and change a field, then save. No rows have been updated. :( When I check the full SQL that is sent over the wire, I notice that it does an update to the row not via the primary key, but on all the fields via the where clause. Is this normal? I would have thought that it would be easy to update the field(s) with the where clause linking on the primary key, instead of where'ing (is that a word :P) on each field. Here's the code:

        using (MyDatabase db = new MyDatabase())
        {
            var boardPost = (from bp in db.BoardPosts
                             where bp.BoardPostId == boardPostId
                             select bp).SingleOrDefault();

            if (boardPost != null && boardPost.BoardPostId > 0)
            {
                boardPost.ListId = listId; // This changes the value from 0 to 'x'
                db.SubmitChanges();
            }
        }

    and here's some sample SQL:

        exec sp_executesql N'UPDATE [dbo].[BoardPost]
        SET [ListId] = @p6
        WHERE ([BoardPostId] = @p0) AND .... <snip the other fields>',
        N'@p0 int,@p1 int,@p2 nvarchar(9),@p3 nvarchar(10),@p4 int,@p5 datetime,@p6 int',
        @p0=1276,@p1=212787,@p2=N'ttreterte',@p3=N'ttreterte3',@p4=1,
        @p5='2009-09-25 12:32:12.7200000',@p6=72

    Now, I know there's a datetime field in this update, and when I checked the DB its value was/is '2009-09-25 12:32:12.720' (fewer zeros than above), so I'm not sure if that is messing up the where clause condition... but still! Should it do a where clause on the PK, if anything, for speed? Yes/no?

    UPDATE: After reading nitzmahone's reply, I tried playing around with the optimistic concurrency on some values, and it still didn't work. :( So then I started on some new stuff... with optimistic concurrency happening, L2S includes a where clause on the field it's trying to update. When that happens, it doesn't work. So, in the above SQL, the where clause looks like this:

        WHERE ([BoardPostId] = @p0) AND ([ListId] IS NULL) AND ... <rest snipped>

    This doesn't sound right! The value in the DB is null before I do the update, but when I add the ListId value to the where clause (or, more to the point, when L2S adds it because of the optimistic concurrency), it fails to find/match the row. wtf?
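    The knob under discussion is the per-column UpdateCheck setting in the LINQ to SQL mapping. A sketch (attribute mapping, storage details elided; column name taken from the post):

        // Sketch: with UpdateCheck.Never, the column is left out of the
        // optimistic-concurrency WHERE clause, so the row is located by the
        // PK and any columns still marked Always/WhenChanged.
        [Column(DbType = "Int", UpdateCheck = UpdateCheck.Never)]
        public int? ListId { get; set; }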

  • jQuery in Ajax-loaded content

    - by Kim Gysen
    My application is supposed to be a single page application, and I have the following code that works fine.

    home.php:

        <div id="container">
        </div>

    accordion.php:

        // Click functions: load content
        $('#parents').click(function(){
            // Load parent in container
            $('#container').load('http://www.blabla.com/entities/parents/parents.php');
        });

    parents.php:

        <div class="entity_wrapper">
            Some divs and selectors
        </div>
        <script type="text/javascript">
            $(document).ready(function(){
                // Some jQuery / javascript
            });
        </script>

    So the content loads fine, and the dynamically loaded scripts execute fine as well. I apply this system repetitively and it continues to work smoothly. I've seen that there are a lot of frameworks available for SPAs (such as Backbone.js), but I don't understand why I need them if this works fine. From the Backbone.js website:

        When working on a web application that involves a lot of JavaScript, one of the first
        things you learn is to stop tying your data to the DOM. It's all too easy to create
        JavaScript applications that end up as tangled piles of jQuery selectors and callbacks,
        all trying frantically to keep data in sync between the HTML UI, your JavaScript logic,
        and the database on your server. For rich client-side applications, a more structured
        approach is often helpful.

    Well, I totally don't have the feeling that I'm going through the stuff they mention. Adding the JavaScript per page works really well for me. The HTML containers have clear scope, and the JavaScript is related just to that part. Moreover, the front end doesn't do that much; most of the logic is managed by Ajax calls to external PHP scripts. Sometimes the JS can be a bit more extended for some functionality, but everything loads smoothly in less than a second. If you think that this is bad coding, please tell me why I cannot do this and, more importantly, what alternative I should apply. At the moment, I really don't see a reason why I would change this approach, as it just works too well. I'm kind of stuck on this question because it worries me sick: it just seems too easy to be true. Why would people go through hard times if it were as easy as this...

  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it will be useful to store a set of arbitrary data in a table using a per-row key/value model rather than a rigid column/field model. The problem is, I want to store the values with their correct data type rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into a separate table, referencing them via a foreign key from the value table(s), or if it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: This solution makes adding additional values a pain in a fluid environment, because the table needs to be modified regularly.

        MyTable
        ==================================
        ID    Key1      Key2      Key3
        int   int       string    date
        ----------------------------------
        1     Value1    Value2    Value3
        2     Value4    Value5    Value6

    Single-table solution: This solution allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        ===========================================================
        ID    RecordID   Key      IntValue  StringValue  DateValue
        int   int        string   int       string       date
        -----------------------------------------------------------
        1     1          Key1     Value1    NULL         NULL
        2     1          Key2     NULL      Value2       NULL
        3     1          Key3     NULL      NULL         Value3
        4     2          Key1     Value4    NULL         NULL
        5     2          Key2     NULL      Value5       NULL
        6     2          Key3     NULL      NULL         Value6

    Multiple-table solution: This solution allows for more concise purposing of each table, though the code needs to know the data type in advance, as it needs to query a different table for each data type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ================================
        ID    RecordID   Key      Value
        int   int        string   int
        --------------------------------
        1     1          Key1     Value1
        2     2          Key1     Value4

        StringValues
        ================================
        ID    RecordID   Key      Value
        int   int        string   string
        --------------------------------
        1     1          Key2     Value2
        2     2          Key2     Value5

        DateValues
        ================================
        ID    RecordID   Key      Value
        int   int        string   date
        --------------------------------
        1     1          Key3     Value3
        2     2          Key3     Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into a separate table and referenced via a foreign key, or should it be kept in the value table and bulk-updated if for some reason the key name changes?
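    A sketch of the single-table variant as DDL (types, sizes and names are illustrative), including the check constraint mentioned above:

        -- Sketch: exactly one typed value column may be non-NULL per row.
        CREATE TABLE DataValues (
            ID          INT PRIMARY KEY,
            RecordID    INT NOT NULL,
            [Key]       VARCHAR(64) NOT NULL,
            IntValue    INT NULL,
            StringValue VARCHAR(255) NULL,
            DateValue   DATETIME NULL,
            CONSTRAINT CK_OneValueType CHECK (
                (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DateValue   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );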

  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague, form of the question is: how can I use the traits pattern to instantiate non-virtual member functions?

    The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line search, and conceivably the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line search should do so. If not, it should use function evaluations only. Some methods require more accurate line searches than others. Those are just some examples.

    The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way.

    If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
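    One conventional answer here is policy-based design rather than traits in the type_traits sense. A sketch (class and method names invented, not the poster's): each policy contributes its own state and non-virtual member functions, and the optimizer inherits the chosen combination, so everything binds statically:

        // Sketch: each policy is a small class carrying its own state; the
        // optimizer inherits the chosen combination, so calls like update()
        // resolve at compile time - no virtual dispatch, no mode bits.
        #include <vector>

        struct LMQNUpdate {
            std::vector<double> state;                     // vector-valued state
            void update() { /* LMQN formula */ }
        };

        struct BFGSUpdate {
            std::vector<std::vector<double> > state;       // matrix-valued state
            void update() { /* BFGS formula */ }
        };

        struct GradientLineSearch {
            void lineSearch() { /* uses gradients */ }
        };

        struct ValueOnlyLineSearch {
            void lineSearch() { /* function evaluations only */ }
        };

        template <class UpdatePolicy, class LineSearchPolicy>
        class Optimizer : private UpdatePolicy, private LineSearchPolicy {
        public:
            void iterate() {
                this->update();       // statically bound to UpdatePolicy
                this->lineSearch();   // statically bound to LineSearchPolicy
            }
        };

        // Mix and match at compile time:
        Optimizer<BFGSUpdate, GradientLineSearch> opt;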

  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer - suggestions of all kinds are welcome.)

    My container is a std::multiset that holds Event structs:

        typedef std::multiset< Event, std::less< Event > > EventPQ;

    with the Event structs sorted by their double time members:

        struct Event {
        public:
            explicit Event(double t) : time(t), eventID(), hostID(), s() {}
            Event(double t, int eid, int hid, int stype)
                : time(t), eventID(eid), hostID(hid), s(stype) {}
            bool operator < (const Event & rhs) const {
                return (time < rhs.time);
            }
            double time;
            ...
        };

    The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time. I'd love suggestions on how to work through this.

    The code for executing and adding events is below for the curious:

        double t = 0.0;
        double nextTimeStep = t + EPID_DELTA_T;
        EventPQ::iterator eventIter = currentEvents.begin();

        while (t < EPID_SIM_LENGTH) {
            // Add some events to currentEvents
            while ((*eventIter).time < nextTimeStep) {
                Event thisEvent = *eventIter;
                t = thisEvent.time;
                executeEvent(thisEvent);
                eventCtr++;
                currentEvents.erase(eventIter);
                eventIter = currentEvents.begin();
            }
            t = nextTimeStep;
            nextTimeStep += EPID_DELTA_T;
        }

        void Simulation::addEvent(double et, int eid, int hid, int s) {
            assert(currentEvents.find(Event(et)) == currentEvents.end());
            Event thisEvent(et, eid, hid, s);
            currentEvents.insert(thisEvent);
        }
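    Not a diagnosis, but one classic way a std::multiset appears to "stop sorting" is a key that violates strict weak ordering - and for doubles, a single NaN time does exactly that, since operator< becomes inconsistent. A cheap guard to rule it out (a sketch added to the existing insert path):

        // Sketch: NaN is the only value for which x != x, so this catches a
        // NaN event time without needing <cmath>.
        #include <cassert>

        void Simulation::addEvent(double et, int eid, int hid, int s) {
            assert(et == et);   // fails only if et is NaN
            assert(currentEvents.find(Event(et)) == currentEvents.end());
            Event thisEvent(et, eid, hid, s);
            currentEvents.insert(thisEvent);
        }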
