Search Results

Search found 4882 results on 196 pages for 'odd behaviour'.

Page 169/196

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ. There's a Firefox behaviour (and I guess some other browsers do the same) that I'd like to understand: if I take any web page and try to insert the same image multiple times via JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article). I know there are workarounds (like adding query strings to the end of the URLs), but why does Firefox act like that? If I say that an image must not be cached, why is the image still taken from the cache when I try to re-insert it? Also, which cache is used for this? (I guess it's the memory cache.) Is this behaviour the same for dynamic script inclusion, for example? THE ANSWER IS NO :) I just tested it, and with the same headers a JS script will be re-downloaded each time you append it to the DOM. PS: I know you're wondering WHY I need to do this (appending the same image multiple times and forcing a re-download), but this is the way our app works. Thank you. The good answer is: Firefox will store images for the current page load in the memory cache even if you specify that it must not cache them. You can't change this behaviour, which is odd because it's not the same for JavaScript files, for example. Could someone explain, or link to a document describing, how the Firefox cache works?

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be queryable fast if they are to be used for customizing web sites, consumer communications, etc. If you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database at which you aim a query, and a physical disk then starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these somewhere?

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Given an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and the following operation:

        BEGIN WORK;
        SELECT * FROM users WHERE userID=1;
        UPDATE users SET credits=100 WHERE userID=1;
        COMMIT;

    I would like to make sure that as soon as the SELECT inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to finish if it is in progress, but SELECTs will simply read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished before returning the values. The reason I'm looking for this is that at some point, with enough concurrent users, it could happen that while the previous transaction is in progress someone else reads the "credits" value to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish and use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I start a transaction, but I don't really feel that is the most elegant solution here, unless there is absolutely no other way to handle this through MySQL directly.
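
    For what it's worth, here is a minimal sketch (not from the original question) of the usual InnoDB answer to this: read the row with SELECT ... FOR UPDATE so the transaction takes an exclusive row lock. Other sessions that read the row with FOR UPDATE or LOCK IN SHARE MODE (or with plain SELECTs under SERIALIZABLE when autocommit is disabled) will then block until the commit, while ordinary non-locking SELECTs still see the old snapshot. The JDBC wiring, connection URL and credentials below are assumptions for illustration only.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class CreditsUpdate {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details, not taken from the question.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/mydb", "user", "password")) {
                    conn.setAutoCommit(false);
                    try {
                        // FOR UPDATE takes an exclusive lock on the matching row; competing
                        // locking reads of the same row now wait for this transaction.
                        try (PreparedStatement select = conn.prepareStatement(
                                "SELECT credits FROM users WHERE userID = ? FOR UPDATE")) {
                            select.setInt(1, 1);
                            try (ResultSet rs = select.executeQuery()) {
                                rs.next();
                            }
                        }
                        try (PreparedStatement update = conn.prepareStatement(
                                "UPDATE users SET credits = 100 WHERE userID = ?")) {
                            update.setInt(1, 1);
                            update.executeUpdate();
                        }
                        conn.commit();   // the row lock is released here
                    } catch (Exception e) {
                        conn.rollback();
                        throw e;
                    }
                }
            }
        }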

  • Detail text in TableView from Core Data (iPhone)

    - by user554867
    Hey, this is my first question here at Stack Overflow, so let me try to be as specific as I can. The project is as follows: I am parsing an XML file in an iPhone app, which already works, and I am saving the parsed data into Core Data. So far so good. I have two elements of the XML file which I want to show in the table view as the text and the detail text of each cell. Now the strange behaviour occurs: if I take the data from Core Data and try to visualize it in the table view, both elements are displayed in separate cells rather than one being the detail text of the other. If I just use a constant string for the detail text, it works, but if I try to take a specific element from Core Data for every cell, it is shown separately in the table view. I googled a lot, reading here and there. I don't know which code I should show, because I don't have a clue where exactly the mistake could be. Maybe someone can answer this immediately because it is a common, stupid mistake somewhere. Thanks for your help.

  • Visual C++ function suddenly 170x slower

    - by Mikael
    For the past few months I've been working on a Visual C++ project to take images from cameras and process them. Up until today this has taken about 65 ms to update the data but now it has suddenly increased significantly. What happens is: I launch my program and for the first 30 or so iterations it performs as expected, then suddenly the loop time increases from 65 ms to 250 ms. The odd thing is, after timing each function I found out that the part of the code which is causing the slowdown is fairly basic and has not been modified in over a month. The data which goes into it is unchanged and identical every iteration but the execution time which is initially less than 1 ms suddenly increases to 170 ms while the rest of the code is still performing as expected (time-wise). Basically, I am calling the same function over and over, for the first 30 calls it performs as it should, after that it slows down for no apparent reason. It might also be worth noting that it is a sudden change in execution time, not a gradual increase. What could be causing this? The code is leaking some memory (~50 kb/s) but not nearly enough to warrant a sudden 4x slowdown. If anyone has any ideas I'd love to hear them!

  • Mapping many-to-many association table with extra column(s)

    - by user635524
    My database contains three tables: the User and Service entities have a many-to-many relationship and are joined by the SERVICE_USER table, as follows: USERS - SERVICE_USER - SERVICES. The SERVICE_USER table contains an additional BLOCKED column. What is the best way to perform such a mapping? These are my entity classes:

        @Entity
        @Table(name = "USERS")
        public class User implements java.io.Serializable {
            private String userid;
            private String email;

            @Id
            @Column(name = "USERID", unique = true, nullable = false)
            public String getUserid() {
                return this.userid;
            }
            // ... some get/set methods
        }

        @Entity
        @Table(name = "SERVICES")
        public class CmsService implements java.io.Serializable {
            private String serviceCode;

            @Id
            @Column(name = "SERVICE_CODE", unique = true, nullable = false, length = 100)
            public String getServiceCode() {
                return this.serviceCode;
            }
            // ... some additional fields and get/set methods
        }

    I followed this example: http://giannigar.wordpress.com/2009/09/04/m ... using-jpa/ Here is some test code:

        User user = new User();
        user.setEmail("e2");
        user.setUserid("ui2");
        user.setPassword("p2");
        CmsService service = new CmsService("cd2", "name2");

        List<UserService> userServiceList = new ArrayList<UserService>();
        UserService userService = new UserService();
        userService.setService(service);
        userService.setUser(user);
        userService.setBlocked(true);
        service.getUserServices().add(userService);

        userDAO.save(user);

    The problem is that Hibernate persists the User object and the UserService one, but has no success with the CmsService object. I tried using EAGER fetch, with no progress. Is it possible to achieve the behaviour I'm expecting with the mapping provided above? Maybe there is a more elegant way of mapping a many-to-many join table with an additional column?
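
    For completeness, here is a sketch (assuming JPA 2.0 annotations as supported by recent Hibernate versions, with class and column names guessed from the question) of how the SERVICE_USER join table is commonly mapped as its own entity with a composite key, so the extra BLOCKED column has somewhere to live:

        import java.io.Serializable;
        import javax.persistence.*;

        @Embeddable
        class UserServiceId implements Serializable {
            @Column(name = "USERID")
            private String userid;

            @Column(name = "SERVICE_CODE")
            private String serviceCode;

            // equals() and hashCode() over both fields are required for a composite key
        }

        @Entity
        @Table(name = "SERVICE_USER")
        public class UserService implements Serializable {
            @EmbeddedId
            private UserServiceId id = new UserServiceId();

            @ManyToOne(cascade = CascadeType.PERSIST)
            @MapsId("userid")
            @JoinColumn(name = "USERID")
            private User user;

            @ManyToOne(cascade = CascadeType.PERSIST)
            @MapsId("serviceCode")
            @JoinColumn(name = "SERVICE_CODE")
            private CmsService service;

            // the extra column that a plain @ManyToMany mapping cannot carry
            @Column(name = "BLOCKED")
            private boolean blocked;

            // getters and setters omitted for brevity
        }

    With a mapping like this, User and CmsService would each expose a @OneToMany(mappedBy = ...) collection of UserService, and the CmsService either has to be saved explicitly before the user or be reached through a cascading association; the lack of such a cascade is the usual reason only the User and the join row end up persisted.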

  • Resource allocation and automatic deallocation

    - by nabulke
    In my application I have many instances of the class CDbaOciNotifier. They all share a pointer to a single instance of the class OCIEnv. What I would like to achieve is that allocation and deallocation of the resource class OCIEnv are handled automatically inside CDbaOciNotifier. The desired behaviour is: with the first instance of CDbaOciNotifier the environment is created, and after that all following notifiers use that same environment. With the destruction of the last notifier, the environment is destroyed too (via a call to a custom deleter). What I've got so far (using a static factory method to create notifiers):

        #pragma once

        #include <string>
        #include <memory>
        #include <oci.h>
        #include "boost/noncopyable.hpp"

        class CDbaOciNotifier : private boost::noncopyable
        {
        public:
            virtual ~CDbaOciNotifier(void);

            static std::auto_ptr<CDbaOciNotifier> createNotifier(const std::string &tnsName,
                                                                 const std::string &user,
                                                                 const std::string &password);

        private:
            CDbaOciNotifier(OCIEnv* envhp);

            // All notifiers share one environment
            static OCIEnv* m_ENVHP;

            // Custom deleter
            static void freeEnvironment(OCIEnv *env);

            OCIEnv* m_envhp;
        };

    CPP:

        #include "DbaOciNotifier.h"

        using namespace std;

        OCIEnv* CDbaOciNotifier::m_ENVHP = 0;

        CDbaOciNotifier::~CDbaOciNotifier(void)
        {
        }

        CDbaOciNotifier::CDbaOciNotifier(OCIEnv* envhp)
            : m_envhp(envhp)
        {
        }

        void CDbaOciNotifier::freeEnvironment(OCIEnv *env)
        {
            OCIHandleFree((dvoid *) env, (ub4) OCI_HTYPE_ENV);
            env = NULL;
        }

        auto_ptr<CDbaOciNotifier> CDbaOciNotifier::createNotifier(const string &tnsName,
                                                                  const string &user,
                                                                  const string &password)
        {
            if (!m_ENVHP)
            {
                OCIEnvCreate((OCIEnv **) &m_ENVHP,
                             OCI_EVENTS | OCI_OBJECT,
                             (dvoid *) 0,
                             (dvoid * (*)(dvoid *, size_t)) 0,
                             (dvoid * (*)(dvoid *, dvoid *, size_t)) 0,
                             (void (*)(dvoid *, dvoid *)) 0,
                             (size_t) 0,
                             (dvoid **) 0);
            }
            //shared_ptr<OCIEnv> spEnvhp(m_ENVHP, freeEnvironment); ...got so far...
            return auto_ptr<CDbaOciNotifier>(new CDbaOciNotifier(m_ENVHP));
        }

    I'd like to avoid counting references (notifiers) myself, and use something like shared_ptr. Do you see an easy solution to my problem?

  • CakePHP returns the time consumed by a data lookup in a jQuery alert box

    - by kwokwai
    Hi all, while doing some self-learning on jQuery Ajax in CakePHP, I noticed some strange behaviour in the jQuery alert box. Here are the few lines of jQuery Ajax code I used:

        $(document).ready(function(){
            $(document).change(function(){
                var usr = $("#data\\[User\\]\\[name\\]").val();
                $.post("http://www.mywebsite.com/controllers/action/", usr,
                       function(msg){ alert(msg); });
            });
        });

    The alert box shows me a message returned from the action:

        Helloworld <!--0.656s-->

    I am not sure why the time consumed was displayed in the alert box, since it is not in my code, which is as follows:

        function action($data=null){
            $this->autoRender = false;
            $result2 = $this->__avail($data);
            if($result2 == 1)
                {return "OK";}
            else
                {return "NOT";}
        }

    CakePHP returned some extra information in the alert box. Later I altered a single line of code and tried this instead, and the time consumption was no longer displayed on screen:

        $(document).ready(function(){
            $(document).change(function(){
                var usr = $("#data\\[User\\]\\[name\\]").val();
                $.post("http://www.mywebsite.com/controllers/action/", usr,
                       function(msg){ $("#username").append('<span>' + msg + '</span>'); });
            });
        });

  • Name lookup for names not dependent on template parameter in VC++2008 Express. Is it a bug?

    - by Maciej H
    While experimenting a bit with C++ templates, I managed to produce this simple code, for which the output is different from what I expected according to my understanding of the C++ rules.

        #include <iostream>

        void bar(double d)
        {
            std::cout << "bar(double) function called" << std::endl;
        }

        template <typename T>
        void foo(T t)
        {
            bar(3);
        }

        void bar(int i)
        {
            std::cout << "bar(int) function called" << std::endl;
        }

        int main()
        {
            foo(3);
            return 0;
        }

    When I compile this code in VC++2008 Express, the function bar(int) gets called. That would be the behaviour I would expect if bar(3); in the template body were dependent on the template parameter, but it is not. The rule I found here says: "The C++ standard prescribes that all names that are not dependent on template parameters are bound to their present definitions when parsing a template function or class." Am I wrong that the "present definition" of bar when parsing the template function foo is the definition of void bar(double d);? If I am wrong, why is that not the case? There are no forward declarations of bar in this compilation unit.

  • LINQ - 'Could not translate expression' with previously used and proven query condition

    - by tomfumb
    I am fairly new to LINQ and can't get my head around some inconsistency in its behaviour. Any knowledgeable input would be much appreciated; I see similar issues on SO and elsewhere, but they don't seem to help. I have a very simple setup: a company table and an addresses table. Each company can have 0 or more addresses, and if it has any, one must be specified as the main address. I'm trying to handle the cases where there are 0 addresses by using an outer join and altering the select statement accordingly. Please note I'm currently binding the output straight to a GridView, so I would like to keep all processing within the query. The following DOES work:

        IQueryable query = from comp in context.Companies
                           join addr in context.Addresses on comp.CompanyID equals addr.CompanyID into outer
                           // outer join companies to addresses table to include companies with no address
                           from addr in outer.DefaultIfEmpty()
                           // if a company has no address ensure it is not ruled out by the IsMain
                           // condition - default to true if null
                           where (addr.IsMain == null ? true : addr.IsMain) == true
                           select new
                           {
                               comp.CompanyID,
                               comp.Name,
                               // use -1 to represent a company that has no addresses
                               AddressID = (addr.AddressID == null ? -1 : addr.AddressID),
                               MainAddress = String.Format("{0}, {1}, {2} {3} ({4})",
                                   addr.Address1, addr.City, addr.Region, addr.PostalCode, addr.Country)
                           };

    but this displays an empty address in the GridView as ", , ()". So I updated the MainAddress field to be:

        MainAddress = (addr.AddressID == null
            ? ""
            : String.Format("{0}, {1}, {2} {3} ({4})",
                  addr.Address1, addr.City, addr.Region, addr.PostalCode, addr.Country))

    and now I'm getting the "Could not translate expression" error and a bunch of spewey auto-generated code in the error message, which means very little to me. The condition I added to MainAddress is no different from the working condition on AddressID, so can anybody tell me what's going on here? Any help greatly appreciated.

  • Why does my binding break down on Silverlight ProgressBars?

    - by Bill Jeeves
    I asked a similar question about charts, but I have given up on that and am using progress bars instead. Essentially, I have ten progress bars in a Silverlight control. Each is showing a different value and updating every couple of seconds (it's a process monitor). Each progress bar has the same minimum and maximum value so the bars can be compared. Trying to follow the M-V-VM model, I have bound the value of each bar to a property in my ViewModel. All of the maximum values for the bars are bound to a single property. When the model updates, the values and the maximum can all update; this allows the bars to re-scale as the sizes grow. I'm finding that the binding will sometimes stop working on one or more bars. I suspect it is because a bar's value occasionally becomes higher than the maximum: if I update the maximums first and they are going down, the values will temporarily be too high, and if I update the values first when the maximum needs increasing, the values are too high again. Is there a way to stop this behaviour? Some way, perhaps, to tell the progress bars that it's OK to temporarily go too high? Or some way to tell the bindings that they shouldn't be disabled when this happens? Or maybe I've got this completely wrong and there's some other issue with ProgressBar binding I don't know about?

  • C++: Switch statement within while loop?

    - by Jason
    I just started C++ but have some prior knowledge of other languages (VB a while back, unfortunately), and I have an odd predicament. I disliked using so many IF statements and wanted to use switch/case instead, as it seemed cleaner and I wanted to get the practice. But... let's say I have the following scenario (theoretical code):

        while(1)
        {
            // Loop can be conditional or 1; I use it a lot, for example in my game
            char something;

            std::cout << "Enter something\n -->";
            std::cin >> something;

            // Switch to read "something"
            switch(something)
            {
                case 'a':
                    cout << "You entered A, which is correct";
                    break;
                case 'b':
                    cout << "...";
                    break;
            }
        }

    And that's my problem. Let's say I wanted to exit the WHILE loop: would it require two break statements? This obviously looks wrong:

        case 'a':
            cout << "You entered A, which is correct";
            break;
            break;

    So can I only use an IF statement on the 'a' case to break? Am I missing something really simple? This would solve a lot of the problems I have right now.

  • Picture.writeToStream() not writing out all bitmaps

    - by quickdraw mcgraw
    I'm using webview.capturePicture() to create a Picture object that contains all the drawing objects for a web page. I can successfully render this Picture object to a bitmap using canvas.drawPicture(picture, dst) with no problems. However, when I use picture.writeToStream(fos) to serialize the picture object out to file, and then Picture.createFromStream(fis) to read the data back in and create a new Picture object, the resultant bitmap, when rendered as above, is missing any larger images (anything over around 20 KB, by observation). This occurs on all the Android OS platforms that I have tested: 1.5, 1.6 and 2.1. Looking at the native code for Skia, which is the underlying Android graphics library, and at the output file produced by picture.writeToStream(), I can see how the file format is constructed. I can see that some of the images in this Skia spool file (the larger ones) are not being written out; the code that appears to be the problem is in SkBitmap.cpp, in the method void SkBitmap::flatten(SkFlattenableWriteBuffer& buffer) const;. It writes out the bitmap fWidth, fHeight, fRowBytes, fConfig and isOpaque values, but then just writes out SERIALIZE_PIXELTYPE_NONE (0). This means that the spool file does not contain any pixel information about the actual image and therefore cannot restore the Picture object correctly. Effectively this renders the writeToStream and createFromStream() APIs useless, as they do not reliably store and recreate the picture data. Has anybody else seen this behaviour? If so, am I using the API incorrectly, can it be worked around, is there an explanation (i.e. an incomplete API or a bug), and are there any plans for a fix in a future release of Android? Thanks in advance.
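
    If the goal is simply to persist what capturePicture() returned, one workaround (a sketch, not something the post confirms works for its case) is to skip Picture's own serialization entirely, rasterize the Picture into a Bitmap, and save that as a PNG. The file path handling below is deliberately minimal.

        import java.io.FileOutputStream;
        import java.io.IOException;

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Picture;

        public final class PictureSnapshot {

            // Render the Picture into an ARGB bitmap and write it out as a PNG file.
            public static void savePictureAsPng(Picture picture, String path) throws IOException {
                Bitmap bitmap = Bitmap.createBitmap(picture.getWidth(), picture.getHeight(),
                        Bitmap.Config.ARGB_8888);
                Canvas canvas = new Canvas(bitmap);
                canvas.drawPicture(picture);

                FileOutputStream out = new FileOutputStream(path);
                try {
                    // PNG is lossless; the quality argument is ignored for this format.
                    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                } finally {
                    out.close();
                }
            }
        }

    The obvious trade-off is that the saved file is a fixed-resolution bitmap rather than a replayable drawing list, so it cannot be re-rendered at a different scale without quality loss.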

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

    The problem is that my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (as above) but pass to it a database connection that I create in the Java class that runs the test, instead of hardcoding it within the JUnit class. Basically something like:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.addParameters(java.sql.Connection);
        core.run(classToRun);

    Then within the classToRun:

        @Test
        public void Test1(Connection dbConnection) {
            Statement st = dbConnection.createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            String myTotal = rs.getString("TOTAL");
            // btw my tests are Selenium test cases :)
            selenium.isTextPresent(myTotal);
        }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also passed in through the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
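
    Stock JUnit 4 will not invoke @Test methods that take parameters, so one common workaround (a sketch; every name below is invented for illustration) is a static holder that the programmatic runner populates from its configuration before calling core.run(), with the tests pulling the connection from it:

        import java.sql.Connection;
        import java.sql.DriverManager;

        import org.junit.Test;
        import org.junit.runner.JUnitCore;

        // Hypothetical holder: the runner sets the connection once, the tests read it.
        final class TestDb {
            private static volatile Connection connection;

            static void set(Connection c) { connection = c; }
            static Connection get()       { return connection; }
        }

        class Runner {
            public static void main(String[] args) throws Exception {
                // In practice the URL and credentials would come from the configuration file.
                TestDb.set(DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:sid", "user", "password"));
                new JUnitCore().run(MyTests.class);
            }
        }

        class MyTests {
            @Test
            public void total() throws Exception {
                Connection db = TestDb.get();   // no parameter on the @Test method needed
                // ... run the query and the Selenium checks against db here
            }
        }

    A system property (e.g. -Ddb.url=...) read in a @BeforeClass method is an equally common way to get the same effect without sharing a static field.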

  • [MVC] logic before dispatcher + controller?

    - by Spoonface
    I believe that in a typical MVC web application the router/dispatcher routine is used to decide which controller is loaded, based primarily on the area requested in the URL by the user. However, in addition to checking the URL query string, I also like to use the dispatcher to check whether the user is currently logged in when deciding which controller to load. For example, if they are logged in and request the login page, the dispatcher loads their account instead. But is this a fairly non-standard design? Would it violate MVC in any way? I only ask because the examples I've read through this weekend have had no major calculations performed before the dispatcher routine, commonly check whether the user is logged in or not per controller, and then redirect where necessary. To me it seems odd to redirect a logged-in user from the login area to the account area if you could just load the account controller in the first place. I hope I've explained my consternation well enough, but could anyone offer some details on how they handle logged-in users and similar session data?
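
    As a sketch of the approach being described, choosing the controller from both the requested area and the login state instead of loading the login controller and redirecting afterwards (all type and route names here are invented and not tied to any framework):

        import java.util.Map;

        interface Controller {
            String handle(Map<String, String> params);
        }

        class LoginController implements Controller {
            public String handle(Map<String, String> params) { return "login form"; }
        }

        class AccountController implements Controller {
            public String handle(Map<String, String> params) { return "account page"; }
        }

        class Dispatcher {
            // Stand-in for however the session tracks logged-in state.
            private final boolean loggedIn;

            Dispatcher(boolean loggedIn) { this.loggedIn = loggedIn; }

            Controller resolve(String area) {
                if ("login".equals(area) && loggedIn) {
                    // A logged-in user asking for the login area gets the account
                    // controller directly, with no intermediate redirect.
                    return new AccountController();
                }
                if ("account".equals(area) && !loggedIn) {
                    return new LoginController();
                }
                return "account".equals(area) ? new AccountController() : new LoginController();
            }
        }

    One way to look at it: as long as the dispatcher only selects a controller and the controller still does the work, the separation of concerns arguably stays intact.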

  • GTK+ and GdkPixbuf

    - by Daniel
    Hi all, I think I have an understanding problem with GTK. My simple application receives a stream of images, and I'd like to display them within my GTK window. Up to now it looks like this:

        GdkPixbuf *pb = gdk_pixbuf_new_from_data(img2, GDK_COLORSPACE_RGB, FALSE,
                                                 24/3, 320, 240, 320*3, NULL, NULL);
        if (pb == NULL)
            fprintf(stderr, "Pixbuf is null!\n");

        if (image != NULL)
            gtk_container_remove(GTK_CONTAINER(window), image);

        image = gtk_image_new_from_pixbuf(pb);
        gtk_container_add(GTK_CONTAINER(window), image);
        printf("Updated!\n");

    img2 is my (RGB) buffer that gets updated from the stream each time. I guess gtk_container_remove and gtk_container_add might be a stupid choice for this? Here's what I've got in addition:

        GtkWidget *window;
        GtkWidget *image;

        gtk_init(&argc, &argv);
        window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_signal_connect(GTK_OBJECT(window), "destroy", GTK_SIGNAL_FUNC(destroy), NULL);

        /* ... */
        start_routine_for_stream_that_calls_the_above(...);
        /* ... */

        gtk_widget_show_all(window);
        gtk_main();

    My problem is that it's not working this way: either I see only the last GdkPixbuf image or I see none, which is the correct behaviour... But how do I manage to show a (stream of) updated GdkPixbufs? Thanks for any help.

  • Serial Data Not Transmitted in C# Application

    - by Jim Fell
    Hello. I have a C# application in which serial (COM1) data appears to sometimes not get transmitted. Following is a simplified snippet of my code (calls to textBox writes have been removed):

        try
        {
            serialPort1.Write("D");
            serialPort1.Write(msg, 0, 512);
            serialPort1.Write("d");
            serialPort1.Write(pCsum, 0, 2);
        }
        catch (SystemException ex)
        {
            /* ... */
        }

    What is odd is that this same code works just fine when the port is configured for 115.2 kbps. However, when running at 9600 bps, data that should be transmitted by this code seems not to get transmitted. I have verified this by monitoring the receive flag on the remote device. No exceptions are thrown from within the try statement. Is there something else (Flush, etc.) that I should be doing to make sure the data is transmitted? Any thoughts or suggestions you may have would be appreciated. I'm using Microsoft Visual C# 2008 Express Edition. Thanks.

  • Integrating jQuery autocomplete with Google site search

    - by user1715700
    I have a bit of an odd situation. I have to implement search on a public-facing website, but that search must be able to search web pages and also have autocomplete/suggestion functionality that comes from a list of terms in a DB table. So, I'm wondering a couple of things: 1) should I be looking at Google search and jQuery autocomplete? 2) is there something else I should be looking at instead? 3) if this is the right path to be heading down, are there enough pointers on implementation? The crux of my problem is that the terms I need to use for the autocomplete/suggest functionality reside in a database and not on the web pages. So I thought Google would be appropriate for searching the web pages, and that I could sort of fill in the blanks, so to speak, with these terms from the DB. I'm going to say that there are roughly 20,000-40,000 terms or so that need autocomplete, but that is really just a very rough guess; it could be less. I'm open to ideas and not really married to any particular solution. However, I will admit to liking the idea of offloading the search to Google. I hear they have a good algorithm ;) Any ideas, thoughts, or leads are greatly appreciated!

  • HTTPS causes jQuery to ignore request

    - by Josh
    I have an odd bug: this jQuery code executes correctly when calling the page via HTTP, but once I connect to the page via HTTPS it doesn't execute. The code basically tracks when a link is clicked.

        <html>
        <head>
            <title>Test Page</title>
            <script type="text/javascript" src="/scripts/jquery-1.4.2.min.js"></script>
            <script type="text/javascript">
                $(document).ready(function() {
                    $('.fbspb').click(function() {
                        $.get("/services/lt.ashx?ac=fbspb");
                        return true;
                    });
                });
            </script>
        </head>
        <body>
            <a href="http://www.facebook.com" class="fbspb" target="_blank">Facebook</a>
        </body>
        </html>

    I've tried updating the URL in the get to use a full HTTPS path, with no success. No error is raised when I try it over HTTPS.

  • Window manipulation and instance control

    - by touki
    In my application there are only two windows, win_a and win_b. On each of these windows there is a button that calls the other window, e.g. a click on btn1 of win_a will call win_b, and a click on btn2 of win_b will show win_a. Desired behaviour:

    1. Only one instance of each window is permitted at a time; e.g. a situation where two instances of win_a are running at the same time is not allowed.
    2. When you click a button that calls a window that already exists, the action only changes focus to the needed window.
    3. If you call a window that had previously been created but has since been closed, the action creates a new instance of that window. E.g. there are two running windows; you close one of them and afterwards try to call that window back, so the related button will create it.

    How do I write this in WPF (XAML + C#)? For the moment I wrote a version that can create many instances of the same window (no control over the number of instances is implemented), but I want to see only one instance of each window, as in many other applications. Example of my code:

        Window win = new Window();
        win.Show();

    Thanks.

  • Need for J2ME source code

    - by tikamchandrakar
    For J2me It strikes me as odd that you need an extra "api key" and so on. But actually, what I really want is NOT create an extra facebook application that needs to be registered on Facebook. I don't want to create any extra configuration effords necessary for the user of my application to undergo. All my user should need is his well-known login data for facebook. Everything else should be completely transparent to him. So, I thought maybe would u can do the login process, creating a request to the REST server via http. I know this would provide me with an XML. I hope that the this API will somehow automatically transform that XML into an intuitive object model that represents the facebook user data of the respective user. So, I would expect something like userData = new FacebookData(new FacebookConnection("user_name", "password")). Done. If you get, what I mean. No api key. No secret key. Just the well-known login data. Practically, the equivalent to thunderbird webmail, which allows you to access your MSN hotmail account via Thunderbird. Thunderbird webmail will automatically converts the htmls obtained from a hotmail browser login into the data structure usually passed on to a mail client. Hope you get what I mean. I was expecting the equilalent for the your API.

  • Fastest method to define whether a number is a triangular number

    - by psihodelia
    A triangular number is the sum of the n natural numbers from 1 to n. What is the fastest method to find whether a given positive integer number is a triangular one? I suppose, there must be a hidden pattern in a binary representation of such numbers (like if you need to find whether a number is even/odd you check its least significant bit). Here is a cut of the first 1200th up to 1300th triangular numbers, you can easily see a bit-pattern here (if not, try to zoom out): (720600, '10101111111011011000') (721801, '10110000001110001001') (723003, '10110000100000111011') (724206, '10110000110011101110') (725410, '10110001000110100010') (726615, '10110001011001010111') (727821, '10110001101100001101') (729028, '10110001111111000100') (730236, '10110010010001111100') (731445, '10110010100100110101') (732655, '10110010110111101111') (733866, '10110011001010101010') (735078, '10110011011101100110') (736291, '10110011110000100011') (737505, '10110100000011100001') (738720, '10110100010110100000') (739936, '10110100101001100000') (741153, '10110100111100100001') (742371, '10110101001111100011') (743590, '10110101100010100110') (744810, '10110101110101101010') (746031, '10110110001000101111') (747253, '10110110011011110101') (748476, '10110110101110111100') (749700, '10110111000010000100') (750925, '10110111010101001101') (752151, '10110111101000010111') (753378, '10110111111011100010') (754606, '10111000001110101110') (755835, '10111000100001111011') (757065, '10111000110101001001') (758296, '10111001001000011000') (759528, '10111001011011101000') (760761, '10111001101110111001') (761995, '10111010000010001011') (763230, '10111010010101011110') (764466, '10111010101000110010') (765703, '10111010111100000111') (766941, '10111011001111011101') (768180, '10111011100010110100') (769420, '10111011110110001100') (770661, '10111100001001100101') (771903, '10111100011100111111') (773146, '10111100110000011010') (774390, '10111101000011110110') (775635, '10111101010111010011') (776881, '10111101101010110001') (778128, '10111101111110010000') (779376, '10111110010001110000') (780625, '10111110100101010001') (781875, '10111110111000110011') (783126, '10111111001100010110') (784378, '10111111011111111010') (785631, '10111111110011011111') (786885, '11000000000111000101') (788140, '11000000011010101100') (789396, '11000000101110010100') (790653, '11000001000001111101') (791911, '11000001010101100111') (793170, '11000001101001010010') (794430, '11000001111100111110') (795691, '11000010010000101011') (796953, '11000010100100011001') (798216, '11000010111000001000') (799480, '11000011001011111000') (800745, '11000011011111101001') (802011, '11000011110011011011') (803278, '11000100000111001110') (804546, '11000100011011000010') (805815, '11000100101110110111') (807085, '11000101000010101101') (808356, '11000101010110100100') (809628, '11000101101010011100') (810901, '11000101111110010101') (812175, '11000110010010001111') (813450, '11000110100110001010') (814726, '11000110111010000110') (816003, '11000111001110000011') (817281, '11000111100010000001') (818560, '11000111110110000000') (819840, '11001000001010000000') (821121, '11001000011110000001') (822403, '11001000110010000011') (823686, '11001001000110000110') (824970, '11001001011010001010') (826255, '11001001101110001111') (827541, '11001010000010010101') (828828, '11001010010110011100') (830116, '11001010101010100100') (831405, '11001010111110101101') (832695, '11001011010010110111') (833986, '11001011100111000010') (835278, '11001011111011001110') 
(836571, '11001100001111011011') (837865, '11001100100011101001') (839160, '11001100110111111000') (840456, '11001101001100001000') (841753, '11001101100000011001') (843051, '11001101110100101011') (844350, '11001110001000111110') For example, can you also see a rotated normal distribution curve, represented by zeros between 807085 and 831405?
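
    One standard constant-time test (not from the post itself, but it avoids scanning bit patterns): n is triangular exactly when 8n + 1 is a perfect square, because n = k(k+1)/2 implies 8n + 1 = (2k + 1)^2. A small sketch in Java:

        public final class Triangular {

            // n is triangular iff 8n + 1 is a perfect square:
            // n = k(k+1)/2  <=>  8n + 1 = (2k + 1)^2.
            public static boolean isTriangular(long n) {
                if (n < 0) {
                    return false;
                }
                long x = 8 * n + 1;
                long r = (long) Math.sqrt((double) x);
                // Math.sqrt can be off by one for large inputs; nudge r onto the exact root.
                while (r * r > x) r--;
                while ((r + 1) * (r + 1) <= x) r++;
                return r * r == x;
            }

            public static void main(String[] args) {
                System.out.println(isTriangular(720600)); // true: the 1200th triangular number
                System.out.println(isTriangular(720601)); // false
            }
        }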

  • Manipulating data in sql / asp.net / c# - how?

    - by SLC
    Not sure how to word the question... Basically, so far all my SQL work has been stored procedures whose results get dumped into a GridView. The odd cases where I had to perform an action based on a value (such as highlighting a row green if a certain value was true) were handled as the GridView was rendering, in one of the overrides. Now, however, I have to do something far more complicated: pull three sets of data down, run a series of checks on all three plus some date-related checks and so on, then populate a GridView with some of the items. In logic terms, I want to run three queries, store the lists of results (presumably in Lists?), then run some logic, then populate the GridView. Specifically, what I don't know how to do is: the best way of pulling the data and putting it into a List or other data structure that lets me easily run through it and retrieve data based on column (myList.age, or more likely, myList["Age"]). Once I have compared the data, I assume I create a new list that will be put into the GridView... how do I put the contents of a list INTO a GridView? How would I add other things such as buttons or checkboxes at the same time? Any nudge in the right direction would be appreciated! Particularly doing cool stuff with lists and SQL (if there is anything cool you can do with them).

  • Is re-throwing an exception legal in a nested 'try'?

    - by Alexander Gessler
    Is the following well-defined in C++, or not? I am forced to 'convert' exceptions to return codes (the API in question is used by many C users, so I need to make sure all C++ exceptions are caught and handled before control is returned to the caller).

        enum ErrorCode {…};

        ErrorCode dispatcher()
        {
            try {
                throw;
            }
            catch (std::bad_alloc&) {
                return ErrorCode_OutOfMemory;
            }
            catch (std::logic_error&) {
                return ErrorCode_LogicError;
            }
            catch (myownstdexcderivedclass&) {
                return ErrorCode_42;
            }
            catch (...) {
                return ErrorCode_UnknownWeWillAllDie;
            }
        }

        ErrorCode apifunc()
        {
            try {
                // foo() might throw anything
                foo();
            }
            catch (...) {
                // dispatcher rethrows the exception and does fine-grained handling
                return dispatcher();
            }
            return ErrorCode_Fine;
        }

        ErrorCode apifunc2()
        {
            try {
                // bar() might throw anything
                bar();
            }
            catch (...) {
                return dispatcher();
            }
            return ErrorCode_Fine;
        }

    I hope the sample shows my intention. My guess is that this is undefined behaviour, but I'm not sure. Please provide quotes from the standard, if applicable. Alternative approaches are appreciated as well. Thanks!

  • What is the best way to create a running integer id on the AppEngine data storage?

    - by Freed
    For various reasons, I need a unique running integer ID for my entities stored on Google App Engine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to implement this efficiently on App Engine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest ID right now and save an entity with the next ID in a transaction. Might memcache be the way to go..? Edit: I haven't yet implemented this, but to clarify the memcache idea: I know memcache is unreliable, but in practice it probably won't lose data "too often" to hurt performance. Basically, I would have a memcache entry for the last used ID, update it (somehow atomically) whenever I create a new entity, and use that ID. If memcache has no value for this entry, I'd get the highest ID so far by doing a query over my entities sorted by ID, and update memcache (unless someone else had already done so). The only problem I can see with this right now would be the atomicity of the operation as a whole if the save of my new entity was also part of a transaction. Thoughts..?
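
    For reference, a minimal sketch (not from the question) of the "counter entity in its own entity group" approach using the App Engine Java low-level datastore API. The counter is incremented in its own transaction, separate from whatever transaction saves the entity that receives the number, which sidesteps the cross-entity-group problem at the cost of write throughput (roughly one increment per second per counter):

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.EntityNotFoundException;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;
        import com.google.appengine.api.datastore.Transaction;

        public final class SequenceAllocator {

            // Returns the next value of a named counter stored as a single entity.
            public static long next(String counterName) {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
                Key key = KeyFactory.createKey("Counter", counterName);
                Transaction txn = ds.beginTransaction();
                try {
                    Entity counter;
                    try {
                        counter = ds.get(txn, key);
                    } catch (EntityNotFoundException e) {
                        counter = new Entity(key);       // first use: start from zero
                        counter.setProperty("value", 0L);
                    }
                    long next = (Long) counter.getProperty("value") + 1;
                    counter.setProperty("value", next);
                    ds.put(txn, counter);
                    txn.commit();
                    return next;
                } finally {
                    if (txn.isActive()) {
                        txn.rollback();
                    }
                }
            }
        }

    Under contention the commit can fail (typically with a ConcurrentModificationException), so callers would retry, and the sequence stays gap-free only if every allocated value actually ends up on a saved entity; both caveats are part of why the memcache shortcut is tempting.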
